Deploying an OKD 4.4 Cluster on AWS
Introduction
OpenShift 4 was formally released last year and substantially changed the architecture that powers OpenShift, as well as dramatically changing the way clusters are deployed.
Taken straight from the OKD GitHub repository:
OKD is the Origin community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is also referred to as Origin in github and in the documentation. OKD makes launching Kubernetes on any cloud or bare metal a snap, simplifies running and updating clusters, and provides all of the tools to make your containerized-applications succeed.
With the scene set, we’ll get started with deploying an OKD4 cluster on AWS using Fedora 31.
Deployment
To start, you will need to sign up for an AWS Account. This can be done by following AWS’ guide found here:
https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/
Be sure to note down your AWS access key and secret access key, which will be used in a later step below.
Once set up, we need to create an AWS IAM user with administrative privileges and programmatic access to AWS which, amongst other things, needs to be able to create VPCs, EC2 instances and EBS volumes. We will do this using the AWS CLI, initially using your root AWS credentials to log in and create the new IAM user.
In addition, you will need to create a DNS hosted zone in Route 53, which can be achieved either by transferring an existing DNS zone or by registering a new DNS domain and creating a hosted zone for it. Please refer here for more information regarding Route 53:
https://aws.amazon.com/route53/
So let’s get started and ensure we have the latest versions of Python and pip installed.
$ sudo dnf install python3
$ sudo dnf install python3-pip
With Python 3 installed, we will install virtualenv, create two virtual environments (one for the root credentials and one for the new cluster admin user), and activate the first of them so that we can install the awscli using pip:
$ sudo pip install virtualenv
$ mkdir -p ~/.virtualenvs/aws-root
$ virtualenv ~/.virtualenvs/aws-root
$ mkdir -p ~/.virtualenvs/aws-okd4
$ virtualenv ~/.virtualenvs/aws-okd4
$ source ~/.virtualenvs/aws-root/bin/activate
$ pip3 install awscli
With the AWS cli now installed, we can run the following command to make sure all is well:
$ aws --version
That should produce output similar to the following:
aws-cli/1.16.284 Python/3.7.5 Linux/5.3.11-300.fc31.x86_64 botocore/1.13.20
Now we’ll configure the AWS CLI to use the access key and secret access key you created earlier when setting up your AWS account. Enter the following command and follow the on-screen prompts. In this tutorial I use the AWS region eu-west-2, as that is my local region; please feel free to use something more suitable if your requirements differ.
$ aws configure
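The prompts will look something like this - the access key and secret key values here are redacted placeholders, so supply your own root credentials and preferred region:
AWS Access Key ID [None]: ---
AWS Secret Access Key [None]: ---
Default region name [None]: eu-west-2
Default output format [None]: json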
We can test connectivity by running the following:
$ aws ec2 describe-instances
Which, at this stage, should produce the following:
{
    "Reservations": []
}
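If you have already created your Route 53 hosted zone, this is also a convenient point to confirm that the CLI can see it (the zone listed will of course be your own domain):
$ aws route53 list-hosted-zones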
Now that we have the CLI configured, we need to create an administrator user and assign an appropriate policy, which we will use to deploy our OKD cluster. First up, we’ll create the IAM user for the cluster deployment:
$ aws iam create-user --user-name openshift-admin
This has created our user, with information similar to the following:
{
    "User": {
        "Path": "/",
        "UserName": "openshift-admin",
        "UserId": "AIDAWTKVP3BKWQGSKNB5F",
        "Arn": "arn:aws:iam::453835216981:user/openshift-admin",
        "CreateDate": "2020-02-01T15:40:56Z"
    }
}
Great. Next we’ll create a user policy that gives our new user the administrative rights needed to build the cluster - this particular policy allows all actions on all resources. Paste the following into a file called “okd4-iam-cluster-admin-policy.json” (or create it from the shell, as shown just after the policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": "*",
            "Effect": "Allow",
            "Action": "*"
        }
    ]
}
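If you would rather create the file directly from the shell than paste it into an editor, a heredoc does the job just as well:
$ cat > okd4-iam-cluster-admin-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Resource": "*",
            "Effect": "Allow",
            "Action": "*"
        }
    ]
}
EOF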
Next, we will use the JSON file to attach a user policy to the user we created earlier, and then list the user’s policies to confirm it is in place:
$ aws iam put-user-policy --user-name openshift-admin --policy-name okd-cluster-admin --policy-document file://okd4-iam-cluster-admin-policy.json
$ aws iam list-user-policies --user-name openshift-admin
{
    "PolicyNames": [
        "okd-cluster-admin"
    ]
}
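Should you ever want to double-check exactly what was attached, the policy document can also be read back:
$ aws iam get-user-policy --user-name openshift-admin --policy-name okd-cluster-admin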
So far we have created the user and policy necessary to build out the various AWS components that the installer will create (VPCs, subnets, EBS volumes, security groups, EC2 instances, load balancers, etc.). We now need to create the programmatic access keys for the installer to use:
$ aws iam create-access-key --user-name openshift-admin
This will output something similar to the following. This information is sensitive, so please don’t share it with anyone.
{
    "AccessKey": {
        "UserName": "openshift-admin",
        "AccessKeyId": "---",
        "Status": "Active",
        "SecretAccessKey": "---",
        "CreateDate": "2020-02-01T15:57:26Z"
    }
}
Take a note of the access key and secret access key, which we will use in a few steps coming up. First, we’ll activate the aws-okd4 virtual environment we created earlier:
$ source ~/.virtualenvs/aws-okd4/bin/activate
$ pip3 install awscli
We’ll now create our AWS profile (which will be used by the OKD4 deployment binary) using the access key and secret access key from the steps above:
$ aws configure
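Before moving on, it’s worth confirming that the CLI is now acting as the new user rather than your root account - the Arn in the output should end in user/openshift-admin:
$ aws sts get-caller-identity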
We’re almost ready to deploy OKD4, with the majority of the AWS-related tasks now complete. During the installation process, the installer will ask for an SSH public key to assign to the servers it builds and will, by default, offer the keys found in ~/.ssh/ - however, you can supply an alternative. If you don’t have an SSH key pair created, now is the time to do so:
$ ssh-keygen
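If you’d prefer a dedicated key pair for the cluster rather than reusing your default one, something along these lines works (the file name is just an example):
$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/okd4_rsa -N ''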
Now that we’re at the end of the AWS configuration, it’s time to get an OKD4 cluster up. The first stage is to grab the OKD installer binary and extract it:
$ wget https://github.com/openshift/okd/releases/download/4.4.0-0.okd-2020-01-28-022517/openshift-install-linux-4.4.0-0.okd-2020-01-28-022517.tar.gz
$ tar -xvf openshift-install-linux-4.4.0-0.okd-2020-01-28-022517.tar.gz
This will unpack “openshift-install”. As with anything new, it’s always worth checking the relevant help documentation:
$ ./openshift-install --help
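It’s also worth confirming the exact build you’ve just unpacked, which should report the version along with the release image the installer will deploy:
$ ./openshift-install version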
Now that you’re fully clued up, we’ll use the installer to get a quick and dirty cluster spun up:
$ ./openshift-install create cluster
You will be presented with a text-based menu, which will ask the following questions:
Cloud provider - Choose AWS
Region - Leave this default, or change it if you are unhappy with what you selected in “aws configure”
Route53 domain - Choose the domain name you created
Cluster name - Choose a cluster name such as “okd4”
Pull secret - This is obtained from cloud.redhat.com
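As an aside, if you would rather review your answers before anything is actually built, the installer can also write them out to an install-config.yaml first and build the cluster from that in a second step (the okd4 directory name here is just an example):
$ ./openshift-install create install-config --dir okd4
$ ./openshift-install create cluster --dir okd4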
Once the prompts have been answered, the deployment will start and take around 30 minutes to complete.
Whilst we wait, we can take a look at what is happening using the AWS CLI. Run the following command to list all EC2 instances by their Name tag and availability zone:
$ aws ec2 describe-instances \
--filter Name=tag-key,Values=Name \
--query 'Reservations[*].Instances[*].{Instance:InstanceId,AZ:Placement.AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}' \
--output table
-----------------------------------------------------------------------------
| DescribeInstances |
+------------+-----------------------+--------------------------------------+
| AZ | Instance | Name |
+------------+-----------------------+--------------------------------------+
| eu-west-2a| i-0f56ab75bc1b2e182 | okd4-cvd7n-worker-eu-west-2a-md4kg |
| eu-west-2c| i-071b9b3812fb3f981 | okd4-cvd7n-worker-eu-west-2c-kzmtc |
| eu-west-2b| i-028210a432acd2dbc | okd4-cvd7n-master-1 |
| eu-west-2b| i-0fcf6615b798f5201 | okd4-cvd7n-worker-eu-west-2b-tf68m |
| eu-west-2a| i-0e99aedadb5626230 | okd4-cvd7n-master-0 |
| eu-west-2c| i-018f288c340d2f337 | okd4-cvd7n-master-2 |
| eu-west-2a| i-0a9d02676f3e76713 | okd4-cvd7n-bootstrap |
+------------+-----------------------+--------------------------------------+
This shows us that the installer does a few things. It creates a bootstrap EC2 instance which acts as a “jumpbox” from which the installer configures the other EC2 instances it creates; the bootstrap instance is removed at the end of the deployment. In addition to that, it creates three master servers which are evenly distributed across availability zones in AWS. In this case, I have deployed to eu-west-2, which has three availability zones. Alongside the three masters, we also see three worker nodes, again deployed in a highly available manner.
One more thing to note is that none of the masters or worker nodes gains a public IP address - only the bootstrap instance does:
$ aws ec2 describe-instances --filter "Name=tag:Name, Values=*bootstrap*" --query "Reservations[*].Instances[*].PublicIpAddress"
Which outputs a single IP address, belonging to the bootstrap server:
[
    [
        "18.130.174.69"
    ]
]
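You can point the same query at the masters or workers to satisfy yourself that they really are private - the result should contain no addresses at all:
$ aws ec2 describe-instances --filter "Name=tag:Name,Values=*master*" --query "Reservations[*].Instances[*].PublicIpAddress"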
After a while, your cluster deployment will complete and you will be provided with a URL and credentials with which you can log into the CLI and web console to administer your new cluster.
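The installer also writes a kubeconfig and the kubeadmin password into an auth/ directory alongside wherever it was run. Assuming you ran it from the current directory and have the OpenShift oc client installed, logging in from the CLI looks something like this:
$ export KUBECONFIG=$(pwd)/auth/kubeconfig
$ oc get nodes
And when you’re finished experimenting, the same binary can tear everything down again:
$ ./openshift-install destroy cluster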