Getting started with Skupper - connecting OpenShift and Kubernetes (AWS EKS)
Introduction
Skupper is a layer 7 interconnect. Its primary use case is to connect separate Kubernetes clusters to one another, exposing the service objects that run on each cluster to the others. This gives developers the ability to build distributed microservices applications with less need to be concerned about the underlying architecture and network configuration.
Skupper achieves this by creating a Skupper network across the two (or more) clusters, placing a Skupper router on each as an endpoint. Communication between the Skupper routers (and therefore the Kubernetes clusters) is secured through mutual TLS.
Once deployed, services are annotated using “kubectl annotate” to mark them as routable through the Skupper network, giving fine-grained control over what is and is not exposed.
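For example, marking a service named “ratings” as HTTP-routable looks like this (we’ll run this exact command later in the walkthrough):

$ kubectl annotate service ratings skupper.io/proxy=http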
In this demo, we will use the Istio project’s “Bookinfo” application and deploy it across two Kubernetes clusters - one AWS EKS and the other OpenShift. Because OpenShift is based on Kubernetes, and Kubernetes can run anywhere, Skupper offers a stepping stone for deploying applications across a hybrid / multi-cloud topology.
This blog post is based on the official documentation for the Skupper project (https://github.com/skupperproject/skupper-example-bookinfo), but focuses on a specific use case: Skupper deployed across an OpenShift and a Kubernetes environment, in a hybrid manner similar to real-world scenarios.
Getting set up
On your local client, we’ll download and install the Skupper command-line client first:
$ curl https://skupper.io/install.sh | sh
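If you are cautious about piping scripts straight into a shell, you can download and inspect the installer first, then confirm the client is on your PATH (the reported version will vary with your install):

$ curl -fsSL https://skupper.io/install.sh -o install.sh
$ less install.sh
$ sh install.sh
$ skupper version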
Now we will clone the “Skupper Bookinfo” project from GitHub:
$ git clone https://github.com/skupperproject/skupper-example-bookinfo.git
Deploying it to our Hybrid Cloud Environment
Against your OpenShift Environment:
I will assume you have an OpenShift environment ready to deploy to. If not, OpenShift is available as an AWS managed service (https://aws.amazon.com/rosa/) or an Azure managed service (https://azure.microsoft.com/en-us/services/openshift/), which are probably the quickest routes to a deployed cluster.
Ensure you are logged into your OCP cluster via the “oc login” command.
$ oc login -u <user> https://api.<server>:6443
First, we will deploy the “Public Cloud” components of the “Skupper Bookinfo” project:
$ oc new-project bookinfo-public
$ cd skupper-example-bookinfo/
$ oc apply -f public-cloud.yaml
After a few moments, we should see that the pods are running, and necessary services are present:
$ oc get pods
NAME READY STATUS RESTARTS AGE
productpage-v1-66d64c95ff-c2rtw 0/1 Pending 0 80s
ratings-v1-5c7bc8db49-4t9vl 0/1 Pending 0 80s
And after a while (I grabbed a quick coffee):
$ oc get pods
NAME READY STATUS RESTARTS AGE
productpage-v1-66d64c95ff-c2rtw 1/1 Running 0 7m6s
ratings-v1-5c7bc8db49-4t9vl 1/1 Running 0 7m6s
At this point, we have the front-end part of the “Bookinfo” project running on our OpenShift cluster.
Looking now at the services deployed within our OCP project, we can see only “productpage” and “ratings”:
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
productpage LoadBalancer 172.30.206.172 acc71238daaae424c99f758ee729c583-1522379229.us-east-2.elb.amazonaws.com 9080:31630/TCP 33s
ratings ClusterIP 172.30.54.78 <none> 9080/TCP 32s
We need to expose the productpage service and create an OpenShift route (similar to a Kubernetes Ingress, if you are new to OpenShift):
$ oc expose deployment/productpage-v1 --port 9080 --type LoadBalancer
service/productpage-v1 exposed
$ oc create route edge productpage --service=productpage
route.route.openshift.io/productpage created
Next, we can obtain the route unique to your cluster:
$ echo $(oc get route/productpage -o jsonpath='https://{.status.ingress[0].host}')
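Since we’ll need this URL again at the end of the walkthrough, one option is to stash it in a shell variable (PRODUCTPAGE_URL is just a name chosen here):

$ PRODUCTPAGE_URL=$(oc get route/productpage -o jsonpath='https://{.status.ingress[0].host}')
$ echo ${PRODUCTPAGE_URL}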
If you visit that link at this stage you will see the Bookinfo page, but notice it is not functioning, as the back-end components of the project are missing.
Next, we will deploy a Skupper router to this cluster and create a connection token.
$ skupper init
Skupper is now installed in namespace 'bookinfo-public'. Use 'skupper status' to get more information.
And as suggested, we can get further status information:
$ skupper status
Skupper is enabled for namespace "bookinfo-public" in interior mode. It is not connected to any other sites. It has no exposed services.
The site console url is: https://skupper-bookinfo-public.apps.cluster-8ttp5.8ttp5.sandbox430.opentlc.com
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
Next, we shall create a Skupper connection token, for use in linking our OCP environment with our EKS environment:
$ skupper token create ${HOME}/PVT-to-PUB-connection-token.yaml
Token written to /home/matt/PVT-to-PUB-connection-token.yaml
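Connection tokens are sensitive, so it is worth knowing that, depending on your Skupper version, token creation accepts flags to limit how long and how many times a claim can be redeemed, for example:

$ skupper token create ${HOME}/PVT-to-PUB-connection-token.yaml --expiry 30m --uses 1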
Because we’re all curious, we can quickly see what a connection token looks like:
$ cat ~/PVT-to-PUB-connection-token.yaml
Which will produce yaml similar to the following:
apiVersion: v1
data:
  ca.crt: <removed>
  password: <removed>
kind: Secret
metadata:
  annotations:
    skupper.io/generated-by: 68d9d44d-f1bc-40ce-b6bd-a023efa20bbc
    skupper.io/site-version: 1.0.2
    skupper.io/url: https://claims-bookinfo-public.apps.cluster-8ttp5.8ttp5.sandbox430.opentlc.com:443/0790d7a8-3050-11ed-9db4-b060881c91ca
  creationTimestamp: null
  labels:
    skupper.io/type: token-claim
  name: 0790d7a8-3050-11ed-9db4-b060881c91ca
Essentially, a connection token is a password and a CA certificate stored as a secret within our cluster.
$ oc get secret
NAME TYPE DATA AGE
0790d7a8-3050-11ed-9db4-b060881c91ca Opaque 1 3m14s
builder-dockercfg-4nwhg kubernetes.io/dockercfg 1 5m29s
builder-token-hlwkp kubernetes.io/service-account-token 4 5m31s
builder-token-qm64z kubernetes.io/service-account-token 4 5m33s
default-dockercfg-dpchh kubernetes.io/dockercfg 1 5m24s
default-token-8rv4g kubernetes.io/service-account-token 4 5m30s
default-token-rstbt kubernetes.io/service-account-token 4 5m30s
deployer-dockercfg-gpwzj kubernetes.io/dockercfg 1 5m28s
deployer-token-9lqq9 kubernetes.io/service-account-token 4 5m33s
deployer-token-j69pz kubernetes.io/service-account-token 4 5m31s
skupper-claims-server kubernetes.io/tls 3 4m7s
skupper-console-certs kubernetes.io/tls 2 4m8s
skupper-console-users Opaque 1 4m11s
skupper-local-ca kubernetes.io/tls 2 4m13s
skupper-local-client kubernetes.io/tls 4 4m11s
skupper-local-server kubernetes.io/tls 3 4m12s
skupper-router-dockercfg-gh4zk kubernetes.io/dockercfg 1 4m6s
skupper-router-token-l8xjb kubernetes.io/service-account-token 4 4m14s
skupper-router-token-pb2ht kubernetes.io/service-account-token 4 4m6s
skupper-service-ca kubernetes.io/tls 2 4m12s
skupper-service-client kubernetes.io/tls 3 4m11s
skupper-service-controller-dockercfg-h75rv kubernetes.io/dockercfg 1 4m6s
skupper-service-controller-token-4wtpg kubernetes.io/service-account-token 4 4m9s
skupper-service-controller-token-rv7c5 kubernetes.io/service-account-token 4 4m6s
skupper-site-ca kubernetes.io/tls 2 4m13s
skupper-site-server kubernetes.io/tls 3 4m9s
As mentioned towards the start of this post, Skupper uses annotations on services to enable routing. So next, we’ll annotate the ratings service with the skupper.io/proxy=http annotation.
$ oc annotate service ratings skupper.io/proxy=http
service/ratings annotated
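To double-check the annotation landed, we can read it back from the service metadata (note the escaped dots in the jsonpath expression):

$ oc get service ratings -o jsonpath='{.metadata.annotations.skupper\.io/proxy}'
http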
Against your Kubernetes Environment:
For our Kubernetes environment, we will use an AWS EKS cluster, which we’ll quickly deploy using eksctl.
As per the AWS documentation, we’ll download and install the eksctl command-line utility (https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html):
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
0.110.0
We need to ensure that our local AWS configuration (~/.aws/config and ~/.aws/credentials) is set up for CLI / API access to our environment. For the purposes of this demonstration, we’ll be using an administrative user with comprehensive rights to our AWS account. Use the following command, and follow the prompts as you are asked for your AWS Access Key ID and AWS Secret Access Key.
$ aws configure
Once configured, let’s ensure everything is working correctly:
$ aws sts get-caller-identity
{
    "UserId": "-",
    "Account": "-",
    "Arn": "arn:aws:iam::-:user/-"
}
(Details removed for security purposes)
We can now progress to deploying an AWS EKS Kubernetes cluster, which will take several minutes to deploy and initialise (time for coffee number two). Feel free to choose a more suitable region for your location and needs:
$ eksctl create cluster --name skupper --region eu-west-2
Eventually, you should see a final line of terminal output similar to the following:
2022-09-09 16:31:00 [✔] EKS cluster "skupper" in "eu-west-2" region is ready
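As an aside, if you prefer keeping infrastructure declarative, eksctl also accepts a cluster definition via --config-file. A minimal sketch roughly equivalent to the command above (the node group name, instance type, and capacity here are illustrative defaults) might look like:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: skupper
  region: eu-west-2
nodeGroups:
  - name: skupper-ng
    instanceType: m5.large
    desiredCapacity: 2

$ eksctl create cluster --config-file cluster.yaml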
For reference, with this method eksctl creates a managed AWS EKS control plane and two worker nodes on EC2. We can explore the instances created in a little more detail:
$ aws ec2 describe-instances \
--filter Name=tag-key,Values=Name \
--query 'Reservations[*].Instances[*].{Instance:InstanceId,AZ:Placement.AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}' \
--output table
-------------------------------------------------------------------
| DescribeInstances |
+------------+-----------------------+----------------------------+
| AZ | Instance | Name |
+------------+-----------------------+----------------------------+
| eu-west-2b| i-00751bddd83e29b99 | skupper-ng-b16a282c-Node |
| eu-west-2a| i-0bbc75b3f778606fa | skupper-ng-b16a282c-Node |
+------------+-----------------------+----------------------------+
From the above we can see that we have a highly available pair of worker nodes, each deployed in a separate AWS availability zone.
eksctl also conveniently sets our kubectl context to the newly created cluster, so at this stage we can explore our environment a little bit more:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-5-159.eu-west-2.compute.internal Ready <none> 9m41s v1.22.12-eks-ba74326 192.168.5.159 18.170.74.97 Amazon Linux 2 5.4.209-116.363.amzn2.x86_64 docker://20.10.17
ip-192-168-86-156.eu-west-2.compute.internal Ready <none> 9m37s v1.22.12-eks-ba74326 192.168.86.156 13.42.21.33 Amazon Linux 2 5.4.209-116.363.amzn2.x86_64 docker://20.10.17
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-cf62m 1/1 Running 0 10m
kube-system aws-node-tsndg 1/1 Running 0 10m
kube-system coredns-679c9bd45b-rjgt6 1/1 Running 0 17m
kube-system coredns-679c9bd45b-sjt9l 1/1 Running 0 17m
kube-system kube-proxy-d6kxd 1/1 Running 0 10m
kube-system kube-proxy-jb2vm 1/1 Running 0 10m
We can see the cluster is running the default pods for the AWS managed service offering: the AWS CNI, CoreDNS, and kube-proxy.
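Since we’re now working across two clusters, remember that eksctl wrote this context into your kubeconfig. You can list your contexts, and regenerate the EKS entry later if needed, with:

$ kubectl config get-contexts
$ aws eks update-kubeconfig --name skupper --region eu-west-2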
Excellent. We can now deploy the back-end components of the Bookinfo project onto our Kubernetes cluster.
We’ll start by creating a namespace for the Bookinfo project and initialising Skupper in that namespace.
$ kubectl create namespace bookinfo-private
namespace/bookinfo-private created
$ skupper init -n bookinfo-private
Waiting for LoadBalancer IP or hostname...
Waiting for LoadBalancer IP or hostname...
Skupper is now installed in namespace 'bookinfo-private'. Use 'skupper status' to get more information.
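The “Waiting for LoadBalancer” messages above are Skupper provisioning an AWS load balancer for its ingress. If your cluster can’t allocate one, recent Skupper versions let you pick an alternative ingress type at init time, for example:

$ skupper init -n bookinfo-private --ingress nodeport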
As with our OCP cluster, we can get more information:
$ skupper status -n bookinfo-private
Skupper is enabled for namespace "bookinfo-private" in interior mode. It is not connected to any other sites. It has no exposed services.
The site console url is: https://aa12c2790dba846baaaa762f6605298f-1572819193.eu-west-2.elb.amazonaws.com:8080
The credentials for internal console-auth mode are held in secret: 'skupper-console-users'
As before, let’s ensure we are in the skupper-example-bookinfo GitHub project folder:
$ cd skupper-example-bookinfo/
And apply the backend components of the project:
$ kubectl apply -f private-cloud.yaml -n bookinfo-private
service/details created
deployment.apps/details-v1 created
service/reviews created
deployment.apps/reviews-v3 created
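Before linking the sites together, it’s worth confirming the details-v1 and reviews-v3 pods have reached a Running state:

$ kubectl get pods -n bookinfo-private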
Let’s apply the connection token we created earlier, linking the two clusters:
$ skupper link create ${HOME}/PVT-to-PUB-connection-token.yaml -n bookinfo-private
Site configured to link to https://claims-bookinfo-public.apps.cluster-8ttp5.8ttp5.sandbox430.opentlc.com:443/0790d7a8-3050-11ed-9db4-b060881c91ca (name=link1)
Check the status of the link using 'skupper link status'.
$ skupper link status -n bookinfo-private
Link link1 is active
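Running skupper status again at this point should report that the site is now connected to one other site (the exact wording varies between versions):

$ skupper status -n bookinfo-private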
At this stage, we have bridged the two Kubernetes clusters using Skupper and mutual TLS. For our application to work, however, we need to annotate the services as we did on our OpenShift cluster:
$ kubectl annotate service details skupper.io/proxy=http -n bookinfo-private
$ kubectl annotate service reviews skupper.io/proxy=http -n bookinfo-private
And as before, we’ll expose the necessary services:
$ skupper expose service details --address details --protocol http -n bookinfo-private
service details exposed as details
$ skupper expose service reviews --address reviews --protocol http -n bookinfo-private
service reviews exposed as reviews
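To see what Skupper is now exposing across the network, the CLI provides a service status view (available in recent Skupper versions):

$ skupper service status -n bookinfo-private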
Great: all being well, the two Kubernetes clusters are now connected, with Skupper routing the front-end and back-end traffic between them via the Skupper routers. We can finally check the status of the application.
Getting the results
Back on the OpenShift environment, if you no longer have the output from earlier, you’ll need to grab your route to the product page again:
$ echo $(oc get route/productpage -o jsonpath='https://{.status.ingress[0].host}')
Visiting that in a web browser should display the Bookinfo application, running in a hybrid / multi-cloud environment.
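If you’d rather verify from the command line, a quick smoke test works too. This assumes the PRODUCTPAGE_URL variable we set earlier, and uses -k because the edge route in a lab cluster typically presents a self-signed certificate; the Bookinfo page title is “Simple Bookstore App”:

$ curl -sk ${PRODUCTPAGE_URL}/productpage | grep -o '<title>.*</title>'
<title>Simple Bookstore App</title>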