Kubernetes on AWS

Intro

So, creating a new K8s cluster on AWS is easier than ever with Kops. It can save you a lot of time: it sets up your autoscaling groups and HA config for you, and maintains a central, versioned cluster config. We'll go through setting up Kops and using it to create an HA, autoscaling K8s cluster.

Prerequisites

You will need the following:

  • An AWS account
  • A Linux host or VM (sorry Windows!)
  • A domain or subdomain with resolvable DNS (this is important!)

Install Kops

First things first, let's install the Kops tool. It's a Go project, so you can build it from source via the following:

go get -d k8s.io/kops
cd ${GOPATH}/src/k8s.io/kops/
git checkout release
make

Caveats:

  • Make sure you have set your GOPATH in your env. Kops expects the source to be checked out in $GOPATH/src/k8s.io/kops. You will get errors if this is not the case.
  • You will require Go 1.6+
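
Those two caveats can be sanity-checked before running make. A minimal sketch, assuming the conventional default of $HOME/go when GOPATH is unset:

```shell
# Check the GOPATH caveat before building: default to $HOME/go if unset,
# and print where kops expects the source to live.
export GOPATH="${GOPATH:-$HOME/go}"
kops_src="${GOPATH}/src/k8s.io/kops"
echo "GOPATH is ${GOPATH}"
echo "kops source should be checked out at ${kops_src}"
```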

Alternatively if you are using a Mac and have Homebrew you can simply do:

brew update && brew install --HEAD kops

Once that's done, you should see something like the below when running Kops in your terminal:

kops version
Version 1.6.0-alpha.2 (git-3fe6e04)

Install Kubectl

Kubectl is the command-line interface between you and your cluster. You will use it to talk to the API server and issue commands to your cluster.

For Mac users:

brew install kubernetes-cli

Linux users:

  1. curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

  2. Make the binary executable: chmod +x ./kubectl

  3. Move the binary into your PATH: sudo mv ./kubectl /usr/local/bin/kubectl

Install AWSCLI

This will be used to create AWS resources quickly as part of the initial setup. Simply install the AWS CLI via:

pip install awscli

We will use the AWS CLI to create some initial resources in your AWS account. Make sure you have placed an API key and secret pair for your AWS account in ~/.aws/credentials. The file should look something like this:

# AWS creds
[default]
aws_access_key_id = XXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
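
If you prefer to script this, the file above can be written like so. This is just a sketch; the XXXX values are placeholders for your real key and secret:

```shell
# Write the AWS credentials file (replace the placeholder values with your
# real access key and secret before using this for real).
mkdir -p "$HOME/.aws"
cat > "$HOME/.aws/credentials" <<'EOF'
# AWS creds
[default]
aws_access_key_id = XXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
EOF
chmod 600 "$HOME/.aws/credentials"
```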

Your awscli config can be found in ~/.aws/config. Set up as desired, example below:

[default]

# If web proxy is needed
#proxy=http://www-cache.my.web.proxy:80

# AWS regions
#London
#region = eu-west-2

# Frankfurt
region = eu-central-1

# Ireland
#region = eu-west-1

Prepare AWS Account

Now you need to prep your account. We will create the following initial resources before using Kops to set up the cluster.

  • Create a dedicated group for Kops in your AWS account.
  • Create a dedicated user for Kops in your AWS account.
  • Assign the necessary IAM policies

Firstly create the group with:

aws iam create-group --group-name kops-group

Next assign the following group policies:

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops-group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops-group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops-group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops-group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops-group
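
Since the five attach-group-policy calls differ only in the policy name, you can loop over them. The sketch below just prints each command as a dry run; remove the echo to actually execute them:

```shell
# Attach each required policy to the kops group. The echo makes this a dry
# run that prints the commands; remove it to run them for real.
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  echo aws iam attach-group-policy \
    --policy-arn "arn:aws:iam::aws:policy/${policy}" \
    --group-name kops-group
done
```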

Then create the Kops user:

aws iam create-user --user-name kops

Assign that user to the Kops group:

aws iam add-user-to-group --user-name kops --group-name kops-group

Lastly create an API key and secret for that user. These credentials will be used by the Kops tool.

aws iam create-access-key --user-name kops

This returns the key pair to your terminal. Use it to update ~/.aws/credentials. Also export the following in your terminal with your new key and secret:

export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret key>
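
create-access-key prints JSON, so you can pull the two values out programmatically rather than copying by hand. A sketch using python3, with a hard-coded sample standing in for the real command output (the AccessKey/AccessKeyId/SecretAccessKey field names match what the AWS CLI returns; the key values here are made up):

```shell
# Sample of the JSON shape `aws iam create-access-key` prints; in practice
# capture the real output with: json=$(aws iam create-access-key --user-name kops)
json='{"AccessKey": {"UserName": "kops", "AccessKeyId": "AKIAEXAMPLE", "SecretAccessKey": "wJalrEXAMPLEKEY"}}'

# Extract the pair (jq works just as well if you have it installed)
access_key=$(echo "$json" | python3 -c 'import sys, json; print(json.load(sys.stdin)["AccessKey"]["AccessKeyId"])')
secret_key=$(echo "$json" | python3 -c 'import sys, json; print(json.load(sys.stdin)["AccessKey"]["SecretAccessKey"])')

export AWS_ACCESS_KEY_ID="$access_key"
export AWS_SECRET_ACCESS_KEY="$secret_key"
```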

DNS configuration

This is the important part; without it the rest won't work. By default, Kops relies on communicating with your cluster through public DNS.
As there are many providers out there, I will cover at a high level what you need to do.

For a dedicated hosted DNS, all that is required is purchasing a domain through AWS. Simply go to the Route 53 console. Once this is done you will have a hosted zone; no other action is needed.

For testing purposes you will probably want to use a subdomain instead of purchasing an entire domain. Chances are this is hosted by a different DNS provider. Not an issue.

To do this, you will first need to create a hosted zone in Route 53 for the subdomain you want to use, for example k8s.mydomain.com. That hosted zone comes with an NS record set for your subdomain, listing four name servers. Make a note of these name servers. Then log into your DNS provider's portal and create an NS record with them: place the subdomain as your host and the name servers you noted as the values. You should have something that looks like the below for our example of k8s.mydomain.com.

Host: k8s
Value: aws.example.dns1.com, aws.example.dns2.org, aws.example.dns3.net, aws.example.dns4.co.uk

Refer to your domain provider's documentation on creating a subdomain.

You can test that your subdomain is set up correctly by doing a dig and checking that the four noted NS servers are returned:

dig ns k8s.mydomain.com
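
You're looking for exactly four NS records in the answer. A sketch of that check, with the sample name servers from above hard-coded in place of real dig output (in practice, pipe dig +short ns of your subdomain instead):

```shell
# Stand-in for `dig +short ns k8s.mydomain.com` output; substitute the real
# command when checking your own subdomain.
ns_records="aws.example.dns1.com.
aws.example.dns2.org.
aws.example.dns3.net.
aws.example.dns4.co.uk."

count=$(printf '%s\n' "$ns_records" | grep -c .)
echo "found $count NS records"
[ "$count" -eq 4 ] && echo "delegation looks correct"
```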

Cluster state

OK, with DNS out of the way, we next need to establish a place for our cluster state to live. The best place to put this is an S3 bucket: it is a highly reliable source of truth for our cluster, and the versioning capability S3 provides lets us capture state changes to the cluster for auditing.

Create your S3 bucket in the region desired:

aws s3api create-bucket --bucket k8s-mydomain-com-state-store --region eu-central-1

Enable versioning on the bucket (optional):

aws s3api put-bucket-versioning --bucket k8s-mydomain-com-state-store --versioning-configuration Status=Enabled

Let's do this!

Still with us? Great, because now we are ready to kick this thing off. Export the following variables to your env:

# Simply a name you want to use for your cluster. For my example:
export NAME=myfirstcluster.k8s.mydomain.com

# The location of the S3 bucket that holds the cluster state; we created this earlier
export KOPS_STATE_STORE=s3://k8s-mydomain-com-state-store

Now, let's create our cluster config:

kops create cluster --master-zones=eu-central-1a,eu-central-1b,eu-central-1c \
--master-size=m4.large \
--zones=eu-central-1a,eu-central-1b,eu-central-1c \
--node-count=4 --node-size=m4.large ${NAME}

So what is going on here? We are telling Kops to:

  • --master-zones create our master nodes across 3 different availability zones for an HA deployment.
  • --master-size use this EC2 instance size for the master nodes.
  • --zones deploy our worker nodes between 3 different availability zones, again for HA.
  • --node-count how many nodes we want in the cluster.
  • --node-size the size of the EC2 instances for the nodes.

Once this has successfully completed, take a look in your S3 bucket. You will see the cluster config Kops has created for you. Nothing has been deployed yet; running kops update cluster ${NAME} without --yes shows a preview of the changes. When you're ready, deploy your cluster with:

kops update cluster ${NAME} --yes

This will take a little while to complete. Go take 20 mins, you've earned it at this point ^_^

Once done, check out your cluster via kops get clusters, or check out the instance groups for your cluster with kops get ig. You should see 4 groups, split between 3 separate masters and one node group.

Now you can start using kubectl to interact with your cluster. Luckily Kops took care of the kubectl config for us when we spun up the cluster. Check out ~/.kube/config.

You can check your cluster is happy with kops validate cluster, and get your cluster nodes via kubectl get nodes.

Congratulations!

You've stuck with it and now you have a fully HA K8s cluster on AWS!