Deploying OpenShift OKD 3.11 on CentOS 7
Introduction
OpenShift OKD is a container management platform built by the Red Hat community, based on Kubernetes. OpenShift takes care of many of the CI/CD processes that developers typically undertake when deploying containers to platforms such as Kubernetes. In this brief article, I’ll describe the process of setting up a basic OpenShift cluster, logging into the cluster CLI, configuring the initial user roles and deploying your first OpenShift project.
Reference Architecture
To get started, you will need a number of servers configured, with key-based SSH access to all of them, so please make sure your public SSH key is shared amongst them.
Openshift Masters
FQDN | IP Address |
---|---|
osm1.local.lan | 192.168.1.10 |
osm2.local.lan | 192.168.1.11 |
osm3.local.lan | 192.168.1.12 |
OpenShift Nodes
FQDN | IP Address |
---|---|
osn1.local.lan | 192.168.1.20 |
osn2.local.lan | 192.168.1.21 |
OpenShift Infrastructure Nodes
FQDN | IP Address |
---|---|
osi1.local.lan | 192.168.1.22 |
OpenShift HAProxy
FQDN | IP Address |
---|---|
haproxy.local.lan | 192.168.1.30 |
Getting Started
Firstly, you need to ensure each host can be resolved through DNS, so it is highly desirable to have a DNS environment in which you can add A records for each host. In addition to the A records for the hosts, an A record / CNAME is required for the OpenShift cluster address, which ultimately points to the HAProxy IP address. Finally, a wildcard DNS entry is required for the OpenShift router (in our example, hosted on the infrastructure node), such as:
*.openshift.local.lan
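By way of illustration, assuming a BIND-style zone file for local.lan (adjust to suit your own DNS server), the relevant records might look like this:

osm1        IN A      192.168.1.10
osm2        IN A      192.168.1.11
osm3        IN A      192.168.1.12
osn1        IN A      192.168.1.20
osn2        IN A      192.168.1.21
osi1        IN A      192.168.1.22
haproxy     IN A      192.168.1.30
openshift   IN CNAME  haproxy
*.openshift IN CNAME  osi1

Here openshift.local.lan is the cluster address pointing at HAProxy, and the wildcard sends application traffic to the router on the infrastructure node.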
Throughout this guide, I’ll be using the HAProxy server to run the deployment from - feel free to use an appropriate jumpbox / management server instead if you have one available.
On the HAProxy server, we’ll start by doing the basics, ensuring our environment is up to date and has a few useful tools installed:
$ yum update -y
$ yum install vim -y
Next we’ll generate an ssh key pair, and copy the public key to each of the servers:
$ ssh-keygen
$ ssh-copy-id root@<server>
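To save some typing, you can loop over every host - the hostnames here assume the reference architecture tables above:

$ for host in osm1 osm2 osm3 osn1 osn2 osi1 haproxy; do ssh-copy-id root@${host}.local.lan; done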
With that done, we’ll move on to the deployment.
Deployment
On the HAProxy / management server:
$ yum update -y
$ yum install -y git epel-release
And now we’ll download the OpenShift Ansible playbooks from GitHub and check out the correct release branch:
$ git clone https://github.com/openshift/openshift-ansible.git
$ cd openshift-ansible/
$ git checkout release-3.11
Next we will make a backup of the example inventory file and amend it to our needs:
$ cp inventory/hosts.example inventory/hosts.custom
$ vim inventory/hosts.custom
We need to add the following line to the [OSEv3:vars] section, which disables a couple of pre-flight checks that commonly fail on small lab machines:
openshift_disable_check=docker_image_availability,memory_availability
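For reference, a minimal inventory for this topology might look something like the following. This is a sketch based on the bundled example file - the hostnames and cluster addresses are assumptions taken from the tables above, so adapt them to your environment:

[OSEv3:children]
masters
nodes
etcd
lb

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift.local.lan
openshift_master_cluster_public_hostname=openshift.local.lan
openshift_master_default_subdomain=openshift.local.lan
openshift_disable_check=docker_image_availability,memory_availability

[masters]
osm1.local.lan
osm2.local.lan
osm3.local.lan

[etcd]
osm1.local.lan
osm2.local.lan
osm3.local.lan

[lb]
haproxy.local.lan

[nodes]
osm1.local.lan openshift_node_group_name='node-config-master'
osm2.local.lan openshift_node_group_name='node-config-master'
osm3.local.lan openshift_node_group_name='node-config-master'
osn1.local.lan openshift_node_group_name='node-config-compute'
osn2.local.lan openshift_node_group_name='node-config-compute'
osi1.local.lan openshift_node_group_name='node-config-infra'

With haproxy.local.lan in the [lb] group, the playbooks should install and configure HAProxy as the API load balancer for us.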
Ansible itself is also required in order to run the playbooks:

$ yum -y install ansible-2.6.5
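The Ansible version is pinned here because the release-3.11 playbooks are known to be picky about the Ansible release in use; you can confirm what ended up installed with:

$ ansible --version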
At this point, if you are using a hypervisor such as VMware or Hyper-V, I would recommend taking a snapshot or checkpoint. Similarly, if you are using a public cloud provider such as AWS, take an EBS snapshot of each instance.
With a restore point in place, we can run the prerequisites playbook, followed by the main cluster deployment (both from within the openshift-ansible directory):

$ ansible-playbook -i inventory/hosts.custom playbooks/prerequisites.yml
$ ansible-playbook -i inventory/hosts.custom playbooks/deploy_cluster.yml
Once the playbook has completed without error, it’s time to switch over to one of the master nodes:
$ ssh root@osm1.local.lan
Once logged in, we’ll check that OpenShift is running:
$ oc get pods --all-namespaces
NAMESPACE        NAME                                READY     STATUS    RESTARTS   AGE
kube-system      master-api-osm1.local.lan           1/1       Running   0          22m
kube-system      master-api-osm2.local.lan           1/1       Running   0          22m
kube-system      master-api-osm3.local.lan           1/1       Running   1          22m
kube-system      master-controllers-osm1.local.lan   1/1       Running   0          22m
kube-system      master-controllers-osm2.local.lan   1/1       Running   0          22m
kube-system      master-controllers-osm3.local.lan   1/1       Running   0          22m
kube-system      master-etcd-osm1.local.lan          1/1       Running   1          29m
kube-system      master-etcd-osm2.local.lan          1/1       Running   1          28m
kube-system      master-etcd-osm3.local.lan          1/1       Running   1          28m
openshift-node   sync-79hhm                          1/1       Running   0          17m
openshift-node   sync-8b6d4                          1/1       Running   0          17m
openshift-node   sync-h6rw4                          1/1       Running   0          17m
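As a further sanity check, it’s worth confirming that all six hosts (masters, compute and infra nodes) have registered and report a Ready status:

$ oc get nodes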