OpenShift Foundations
https://access.redhat.com/downloads/content/290/ver=3.10/rhel---7/3.10.14/x86_64/product-software
oc login -u <Your OPENTLC UserID> -p <Your OPENTLC Password> https://master.na39.openshift.opentlc.com
oc login -u alexander.soul-computacenter.com -p Pass1word! https://master.na39.openshift.opentlc.com
oc new-project opentlc-ocp-project1
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
OpenShift Architecture and Concepts
https://blog.openshift.com/an-open-source-load-balancer-for-openshift/
- OpenShift 3.7 is a PaaS (Platform as a Service)
- Secure and scalable multi-tenant operating system for today's enterprise-class applications, while providing integrated application runtime and libraries
- OCP runs on RHEL or RHEL Atomic Host
Nodes
- Two types of hosts: masters and nodes
- Nodes are orchestrated by masters
- Application instances and components run in containers
- An OpenShift node can run many containers
Pod
- One or more containers deployed together on one host (albeit fewer use cases for multi-container pod)
- Consists of co-located group of containers with shared resources such as volumes and IP addresses
- Smallest compute unit that can be defined, deployed, and managed
- May contain one or more co-located applications that are relatively tightly coupled and run with a shared context
- Example: web server and file puller/syncer
- Orchestrated unit in OpenShift
- Complex applications made up of many pods
- Each with own container
- OpenShift runs Docker images in containers, wrapped by a meta-object called a pod
- Different application components such as application server and database generally not placed in a single pod
- Most applications benefit from flexibility of single-container pod
- Allows for individual application components to be easily scaled horizontally
- Services are how application components are 'wired' together
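A minimal single-container pod sketch ties these ideas together; the names and the sample image are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello                        # labels let services select this pod
spec:
  containers:
  - name: hello
    image: openshift/hello-openshift  # example image; any application image works
    ports:
    - containerPort: 8080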
Service
- Defines logical set of pods and policy for accessing them
- As pods are created and destroyed by scaling up and down, permanent IP or hostname must be available for other applications to connect to
- Service represents group of pods and provides permanent internal IP and hostname for other applications to use
- Service layer connects application components together
- Front-end web service connects to database instance by communicating with database service
- Services allow simple internal load balancing across application components
- OpenShift automatically injects service information into running containers for ease of discovery
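A service sketch, assuming backing pods labelled app: hello as in the pod example above, shows how the selector wires the service to its pods:
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello          # proxies to all pods carrying this label
  ports:
  - port: 80            # stable port other components connect to
    targetPort: 8080    # container port on the backing pods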
Labels
- Used to organize, group or select API objects
- Example: Pods tagged with labels, services use label selectors to identify pods they proxy to
- Makes it possible for services to reference groups of pods
- Can even treat pods with potentially different Docker containers as related entities
- Labels are simple key-value pairs:
labels:
  key1: value1
  key2: value2
Master Host
- Primary Functions:
- Orchestrate all activities on nodes
- Know and maintain state within OpenShift Environment
- Use multiple masters for High Availability
- Master provides single API that all tooling and systems interact with
- Any request goes through this API
- All API requests SSL-encrypted and authenticated
- Authorisation handled via fine-grained role-based access control (RBAC)
- Master can be tied into external identity management systems
- Examples: LDAP, Active Directory, OAuth providers like GitHub & Google
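As a rough sketch (the provider name and file path are assumptions), an htpasswd identity provider is declared in the master configuration, /etc/origin/master/master-config.yaml on OCP 3.x:
oauthConfig:
  identityProviders:
  - name: htpasswd_auth       # example provider name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/htpasswd   # assumed file location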
Access - Web UI, CLI, IDE, API
- All users access OpenShift through the same standard interfaces
- Web UI, CLI, and IDEs all go through authenticated and RBAC-controlled API
- Users do not need system-level access to OpenShift hosts
- Even for complicated debugging and troubleshooting
- Continuous Integration (CI) and continuous deployment (CD) systems integrate with OpenShift through these interfaces
Desired and Current State
- Held in data store that uses etcd as distributed key-value store
- Also holds things like RBAC rules, application environment information, and non-application user data
Health and Scaling
- Master monitors health of pods and automatically scales them
- Users configure pod probes for liveness and readiness
- Pods can be automatically scaled based on CPU utilization
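The probes above are declared per container in the pod template; a sketch, with assumed endpoints and timings:
containers:
- name: myapp                  # hypothetical container
  image: myapp:latest          # hypothetical image
  livenessProbe:               # failing pods are restarted
    httpGet:
      path: /healthz           # assumed endpoint
      port: 8080
    initialDelaySeconds: 15
  readinessProbe:              # failing pods receive no service traffic
    httpGet:
      path: /ready             # assumed endpoint
      port: 8080
    initialDelaySeconds: 5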
Remediating Pod Failures
- Master automatically restarts pods that fail probes or exit due to container crash
- Pods that fail too often are marked as bad and temporarily not restarted
- Service layer sends traffic only to healthy pods
- Master automatically orchestrates to maintain component availability
Scheduler
- Responsible for determining pod placement
- Takes current memory, CPU, and other environment utilisation into account when placing pods on nodes
- For application high availability, spread pod replicas between nodes
- Can use real-world topology of OpenShift deployment (regions, zones)
- Uses JSON file in combination with node labels to carve up OpenShift environment, for example, to look like the real-world topology
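For example, with hypothetical node names and label values, the real-world topology can be recorded as node labels for the scheduler to use:
oc label node node1.example.com region=primary zone=east
oc label node node2.example.com region=primary zone=west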
Integrated Docker Registry
- OpenShift includes integrated Docker registry used by OpenShift to store and manage Docker images
- Whenever new image pushed to registry, registry notifies OpenShift and passes along image information such as namespace, name and image metadata
- Various parts of OpenShift can react to new image by creating builds and deployments
Service Broker and Service Catalog
- Many ways to provision different resources and share their coordinates, credentials, and configuration, depending on the service provider and the platform
- To give developers a more seamless experience, OpenShift Container Platform includes a service catalog, an implementation of the Open Service Broker API (OSB API) for Kubernetes
- Allows users to connect any of their applications deployed in OpenShift Container Platform to a wide variety of service brokers
- Service catalog allows cluster administrators to integrate multiple platforms using a single API specification
- Web console displays the cluster service classes offered by service brokers in the service catalog, allowing users to discover and instantiate those services for use with their applications
- Service users benefit from ease and consistency of use across different types of services from different providers; service providers benefit from one integration point that gives them access to multiple platforms
Application Data
- Containers are natively ephemeral
- Data is not saved when containers are restarted or re-created
- OpenShift provides persistent storage subsystem that automatically connects real-world storage to correct pods
- Allows use of stateful applications
- Wide array of persistent storage types
- Raw devices: iSCSI, FC
- Enterprise storage: NFS
- Cloud-type options: Gluster/Ceph, AWS EBS, pDisk
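Applications request storage through a persistent volume claim; a minimal sketch with an assumed name and size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data              # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # assumed size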
Routing Layer
- External clients need to access applications running inside OpenShift
- Routing layer is close partner to service layer
- Runs in pods inside OpenShift
- Provides automated load balancing to pods for external clients
- Provides load balancing and auto-routing around unhealthy pods like service layer does
- Routing layer pluggable and extensible if hardware or non-OpenShift software router desired
Replication Controller
- Ensures that specified number of pod replicas running at all times
- If pods exit or deleted, replication controller instantiates more
- If more pods running than desired, replication controller deletes as many as necessary
Replication controller’s definition includes:
- Number of replicas desired (adjustable at any time)
- Pod definition for creating replicated pod
- Selector for identifying managed pods
Selector is set of labels assigned to all pods managed by replication controller
- Included in pod definition that replication controller instantiates
- Used by replication controller to determine how many pod instances are running, to adjust as needed
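A replication controller sketch putting the three parts together (names and image are assumptions):
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
spec:
  replicas: 3                   # desired number of replicas
  selector:
    app: myapp                  # labels identifying managed pods
  template:                     # pod definition used to create replicas
    metadata:
      labels:
        app: myapp              # must match the selector above
    spec:
      containers:
      - name: myapp
        image: myapp:latest     # hypothetical image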
Networking Workflow
OpenShift Networking
- Container networking based on integrated Open vSwitch
- Platform-wide routing tier
- Ability to plug in third-party software-defined network (SDN) solutions
- Integrated with DNS and existing routing and load balancing
Route
- Exposes service by giving it externally reachable hostname
- Consists of route name, service selector, and (optional) security configuration
- Router can consume defined route and endpoints identified by service
- Provides named connectivity
- Lets external clients reach OpenShift-hosted applications
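A route sketch exposing the earlier hello-svc service under an assumed hostname:
apiVersion: v1
kind: Route
metadata:
  name: hello-route
spec:
  host: myapp.cloudapps.example.com   # externally reachable hostname (assumed)
  to:
    kind: Service
    name: hello-svc                   # the service whose endpoints receive traffic
  tls:
    termination: edge                 # optional security configuration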
Router
- Routing layer required to reach applications, even an easy-to-deploy multi-tier application, from outside OpenShift environment
- Router container can run on any node host in environment
- Administrator creates wildcard DNS entry (CNAME) on DNS server
- CNAME resolves to node host hosting router container
- Router is ingress point for traffic destined for OpenShift-hosted pods
- Router container resolves external requests (https://myapp.cloudapps-guid.ose.opentlc.com) and proxies requests to right pods
Scenario: External client points browser to myApp.cloudapps.ml.opentlc.com:80
- DNS resolves to host running router container
- Using openshift-sdn overlay network:
- Router checks if route exists for request
- Proxies request to internal pod IP:port (10.1.2.3:8080)
Pod Connectivity
- Pods use network of OpenShift node host to connect to other pods and external networks
Scenario: Pod transmits packet to pod in another node host in OpenShift environment
- Container sends packet to target pod using IP 10.1.2.3:8080
- OpenShift node uses Open vSwitch to route packet to OpenShift node hosting target container
- Receiving node routes packet to target container
Services and Pods
- Services often used to provide permanent IP to group of similar pods
- Internally, when accessed, services load-balance and proxy to an appropriate backing pod
- Backing pods can be added to or removed from service arbitrarily while service remains consistently available
- Enables anything that depends on service to refer to it at consistent internal address
Scenario: Pod transmits packet to service representing one or more pods
- Container sends packet to target service using IP 172.30.0.99:9999
- When service requested, OpenShift node proxies packet to one of the pods represented by service (10.1.2.3:8080)
Container Deployment Workflow
Scenario: New application requested via CLI, web console, or API
OpenShift API/authentication layer:
- Approves request, considering user’s permissions, resource quotas, and other information
- Creates supporting resources: deployment configuration, replication controllers, services, routes, and persistent storage claims
OpenShift scheduling layer:
- Designates node host for each pod, considering resource availability, load, and application spread between nodes for high availability
OpenShift node:
- Pulls down image to be used from external or integrated registry
- Starts container (pod) on node
OpenShift User Experience
Users and Projects
Quotas
oc get quota -n opentlc-ocp-project1
oc describe quota core-object-counts -n opentlc-ocp-project1
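The core-object-counts quota above is a ResourceQuota object; a sketch of what such a definition might look like (the counts are assumptions):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    pods: "10"                        # counts here are assumptions
    services: "5"
    persistentvolumeclaims: "4"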
Templates
oc export all --as-template=<template_name>
Exploring OpenShift Resources
- Create a Project
[username-domain.com@bastion ~]$ export GUID=5205
[username-domain.com@bastion ~]$ oc new-project ${GUID}-exploring-openshift --description="This is the Project for exploring OpenShift UI" --display-name="Exploring OpenShift UI"
Now using project "5205-exploring-openshift" on server "https://master.na39.openshift.opentlc.com".
Add a label when creating a new app
oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world
Update a label on a pod
- Switch to the project
oc project 5205-exploring-openshift
- Show current labels
oc describe pods | grep -wE 'Name:|Labels:'
- Update the label for a selected pod
oc label pods cakephp-mysql-example-1-9mhrg status=healthy
- Check the updated label
oc describe pods cakephp-mysql-example-1-9mhrg | grep -A3 -wE 'Labels:'
Dump out Console Logs
oc logs node-led-app-4-vftf8
Exec into a container
Get Pods in Project
oc get pods
as@ubuntu:~$ oc get po
NAME                   READY     STATUS      RESTARTS   AGE
node-led-app-1-build   0/1       Completed   0          3h
node-led-app-4-qkwvz   1/1       Running     0          3h
node-led-app-4-vftf8   1/1       Running     0          3h
Get container you want to exec into from a certain pod
as@ubuntu:~$ oc describe pod node-led-app-4-vftf8 | grep -A5 -i containers
Containers:
node-led-app:
Container ID: docker://c40f1c17c
...
Exec into the container
as@ubuntu:~$ oc exec node-led-app-4-vftf8 -c node-led-app -i -t -- bash -il
bash-4.2$ pwd
/opt/app-root/src
CI/CD and Pipelines with OpenShift
- OpenShift deployments provide fine-grained management over applications based on user-defined template called a deployment configuration.
- The deployment system, in response to a deployment configuration, creates a replication controller to run an application.
Features Provided by Deployment System
- Deployment configuration, a template for running applications
- Contains version number that is incremented each time new replication controller is created from that configuration
- Records cause of last deployed replication controller in the deployment configuration
- Triggers that drive automated deployments in response to events
- Strategies to transition from previous version to new version
- Rollbacks to previous version, either manually or automatically in case of deployment failure
- Manual replication scaling and autoscaling
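A deployment configuration sketch tying these features together; the names, replica count, and image stream tag are assumptions:
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    app: myapp
  strategy:
    type: Rolling                 # deployment strategy (detailed below)
  triggers:
  - type: ConfigChange            # redeploy when the configuration changes
  - type: ImageChange             # redeploy when the image stream tag updates
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest
  template:                       # pod template, as in a replication controller
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest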
Rollbacks
- Deployments allow rollback to previous versions of application
- Rollbacks revert application back to previous revision
- Can be performed using REST API, CLI, or web console
- Deployment configurations also support automatically rolling back to last successful revision of configuration in case latest template fails to deploy
Deployment Strategy
- Defined by deployment configuration
- Determines deployment process
- During deployments each application has different requirements for availability and other considerations
- OpenShift provides strategies to support variety of deployment scenarios
- Readiness checks determine if new pod is ready for use
- If readiness check fails, deployment configuration retries until it times out
Rolling Deployment Strategy
- Performs rolling update
- Supports life-cycle hooks for injecting code into deployment process
- Waits for pods to pass readiness check before scaling down old components
- Does not allow pods that do not pass readiness check within timeout
- Used by default if no strategy specified in deployment configuration
Rolling Deployment Strategy Process
- Steps in rolling strategy process:
- Execute pre life-cycle hook
- Scale up new deployment by one or more pods (based on maxSurge value)
- Wait for readiness checks to complete
- Scale down old deployment by one or more pods (based on maxUnavailable value)
- Repeat scaling until:
- New deployment reaches desired replica count
- Old deployment has scaled to zero
- Execute any post life-cycle hook
- When scaling down, strategy waits for pods to become ready
- Lets it decide whether further scaling would affect availability
- If scaled-up pods never become ready, deployment times out
- Results in deployment failure
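The surge and unavailability windows come from rollingParams in the deployment configuration; the values below are assumptions:
strategy:
  type: Rolling
  rollingParams:
    maxSurge: "25%"           # extra pods allowed above the replica count
    maxUnavailable: "25%"     # old pods allowed to be unavailable during rollout
    timeoutSeconds: 600       # fail the deployment if pods never become ready
    # pre and post life-cycle hooks can also be declared here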
Recreate Deployment Strategy
- Has basic rollout behavior
- Supports life-cycle hooks for injecting code into deployment process
- Steps in recreate strategy deployment:
- Execute pre life-cycle hook
- Scale down previous deployment to zero
- Scale up new deployment
- Execute post life-cycle hook
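A recreate strategy sketch with an assumed pre life-cycle hook:
strategy:
  type: Recreate
  recreateParams:
    pre:                                  # runs before old pods are scaled down
      failurePolicy: Abort                # abort the rollout if the hook fails
      execNewPod:
        containerName: myapp              # hypothetical container name
        command: ["/bin/sh", "-c", "echo running pre-deployment checks"]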
Custom Deployment Strategy
- You determine deployment behavior
- Example:
"strategy": {
"type": "Custom",
"customParams": {
"image": "organization/strategy",
"command": ["command", "arg1"],
"environment": [
{
"name": "ENV_1",
"value": "VALUE_1"
}
]
}
}
- organization/strategy Docker image provides deployment behavior
- Optional command array overrides CMD directive specified in image Dockerfile
- Optional environment variables added to strategy process’s execution environment
Demonstrate Deployments and Deployment Strategies
Deploying an application using S2I
oc new-project cotd --display-name="City of the day" --description='City of the day'
oc new-app openshift/php:5.6~https://github.com/<repo>/cotd.git
oc expose svc cotd
Rolling Deployment
OpenShift deploys a new pod replica and removes an old deployment pod replica repeatedly until the new deployment is at the required replica count and the old deployment is at zero.
- A new pod is created for the new deployment, and after a health-check test, an old deployment pod is destroyed.
- OpenShift continues to increase the size of the new deployment and decrease the old deployment one pod at a time.
- This deployment strategy is good for minimizing application downtime when the new and old deployments can live side by side for a short while.
OpenShift Pipeline Integration
Create Projects for the Demo
export GUID=432f
oc new-project pipeline-${GUID}-dev --description="Cat of the Day Development Environment" --display-name="Cat Of The Day - Dev"
oc new-project pipeline-${GUID}-test --description="Cat of the Day Testing Environment" --display-name="Cat Of The Day - Test"
oc new-project pipeline-${GUID}-prod --description="Cat of the Day Production Environment" --display-name="Cat Of The Day - Prod"
Deploy CI/CD Environment
oc new-app jenkins-persistent -p ENABLE_OAUTH=false -e JENKINS_PASSWORD=openshiftpipelines -n pipeline-${GUID}-dev
Enable the Jenkins service account to manage resources in the pipeline-${GUID}-test and pipeline-${GUID}-prod projects:
oc policy add-role-to-user edit system:serviceaccount:pipeline-${GUID}-dev:jenkins -n pipeline-${GUID}-test
oc policy add-role-to-user edit system:serviceaccount:pipeline-${GUID}-dev:jenkins -n pipeline-${GUID}-prod
Enable the pulling of images from the pipeline-${GUID}-dev project to the pipeline-${GUID}-test and pipeline-${GUID}-prod projects:
oc policy add-role-to-group system:image-puller system:serviceaccounts:pipeline-${GUID}-test -n pipeline-${GUID}-dev
oc policy add-role-to-group system:image-puller system:serviceaccounts:pipeline-${GUID}-prod -n pipeline-${GUID}-dev
Deploy Mock Applications
Deploy the "Cat of The Day" (cotd) application in the dev project (note that it’s a "tilde" ~ sign — not a dash — between php and http)
oc new-app php~https://github.com/StefanoPicozzi/cotd2 -n pipeline-${GUID}-dev
Follow the logs
oc logs -f build/cotd2-1 -n pipeline-${GUID}-dev
Tag the images
oc tag cotd2:latest cotd2:testready -n pipeline-${GUID}-dev
oc tag cotd2:testready cotd2:prodready -n pipeline-${GUID}-dev
Check the image stream to see that the tags were created
oc describe is cotd2 -n pipeline-${GUID}-dev
Deploy the cotd2 application in the test and prod projects:
oc new-app pipeline-${GUID}-dev/cotd2:testready --name=cotd2 -n pipeline-${GUID}-test
oc new-app pipeline-${GUID}-dev/cotd2:prodready --name=cotd2 -n pipeline-${GUID}-prod
Create routes for all three applications:
oc expose service cotd2 -n pipeline-${GUID}-dev
oc expose service cotd2 -n pipeline-${GUID}-test
oc expose service cotd2 -n pipeline-${GUID}-prod
Disable automatic deployment for all deployment configurations in your demonstration:
The sed pipeline below sets automatic: false; since false is the default value, it is omitted when the object is serialized again, so the line effectively disappears from the YAML. Inspect the trigger section before and after with:
oc get dc cotd2 -o yaml -n pipeline-${GUID}-dev
Before:
triggers:
- type: ConfigChange
- imageChangeParams:
    automatic: true
    containerNames:
    - cotd2
After:
triggers:
- type: ConfigChange
- imageChangeParams:
    containerNames:
    - cotd2
oc get dc cotd2 -o yaml -n pipeline-${GUID}-dev | sed 's/automatic: true/automatic: false/g' | oc replace -f -
oc get dc cotd2 -o yaml -n pipeline-${GUID}-test | sed 's/automatic: true/automatic: false/g' | oc replace -f -
oc get dc cotd2 -o yaml -n pipeline-${GUID}-prod | sed 's/automatic: true/automatic: false/g' | oc replace -f -
Create Initial Build Config Pipeline