In my last post, we explored reasons why we would want to use the container orchestration tool Kubernetes to manage deployment of our applications. Of the numerous tools available to deploy Kubernetes, we’ve chosen kops because it works really well with Amazon Web Services (AWS).
Now that we have chosen kops and AWS, let’s take a closer look at how to use them.
What does it take to create a cluster with kops?
If you are in need of some guidance, I highly suggest the Kops: Getting Started document, as kops needs at least one environment variable set before it will do anything.
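The variable in question is `KOPS_STATE_STORE`, which tells kops where to keep its cluster state. A minimal sketch, assuming you have already created an S3 bucket for this purpose (the bucket name below is a placeholder, not a real bucket):

```shell
# Point kops at its state store; the bucket name here is hypothetical.
export KOPS_STATE_STORE=s3://example-com-kops-state
```

You will want this in your shell profile, since every kops command reads it.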
$ kops create cluster \
    --authorization=RBAC \
    --bastion \
    --cloud=aws \
    --master-count=1 \
    --networking=weave \
    --node-count=3 \
    --topology=private \
    --zones=us-east-1d \
    --name=k8s.example.com
Although you may be able to get by with fewer options, I recommend these as a good set for a fairly secure setup.
Here are the options we’ve specified:
- `--authorization=RBAC` enables Role-Based Access Control, a personal must-have for a production environment. It can be a bit daunting at first, but the documentation at https://kubernetes.io/docs/admin/authorization/rbac/ is great. While technically optional, it is definitely something to learn to use in your cluster
- `--bastion` works with ‘--topology=private’ to make sure that no one can access your nodes directly from the internet. It stands up a bastion host in an AutoScaling Group so that you can still access your nodes via SSH
- `--cloud=aws` tells kops which cloud provider to use. Kops can also work with GCE
- `--master-count` tells kops how many redundant master API servers to create. If you use more than one, you have to use at least 3. This option is per zone
- `--networking=weave` selects the pod network. We have primarily been using Weave; it is mostly transparent, has some neat abilities, and can tie into Weaveworks’ cloud monitoring and management
- `--node-count` sets the initial number of nodes to bring up. We will go into this a little further in a later section
- `--topology=private` increases security by minimizing the public network footprint of your nodes. You will not be able to access your nodes directly from the public internet, which also makes NodePort mostly useless (but that’s OK). You will need to set up an ingress controller or Elastic Load Balancers to access your services from the public internet
- `--zones` specifies which AWS zones you want your cluster in. You can specify multiple zones, or keep it all in a single zone. Keep in mind that if you span multiple zones, `--master-count` will give you that many redundant masters per zone. Due to the way that etcd works, if you use more than one, you will need to make sure that you have at least 3 masters total across all zones.
- `--name` is the name of our cluster, and also its FQDN, so make sure that you have the ability to create a DNS entry for it. If you are using Route53 for managing DNS, and your FQDN is in one of your hosted zones, the entries will be made automatically on cluster creation
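The “at least 3 masters” rule comes straight from etcd’s quorum arithmetic: a cluster of n members needs a majority (n/2 + 1, integer division) to stay available, so two masters tolerate zero failures, exactly like one. A quick sketch of the arithmetic:

```shell
# etcd stays available only while a majority (quorum) of members is up.
# quorum(n) = n/2 + 1 (integer division); failures tolerated = n - quorum(n).
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n masters -> quorum $quorum, tolerates $(( n - quorum )) failure(s)"
done
```

This is why odd member counts are the norm: going from 3 to 4 masters adds cost without adding fault tolerance.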
Exploring all of the available options
By running `kops create cluster --help`, you will be able to see all of the available options. Look through this list to make sure there aren’t any other options you want, like specifying SSH keys, setting a whitelist of IP addresses that the Kubernetes API server will respond to, or setting the sizes of your nodes and masters.
After this point, kops will use the ‘current-context’ from your kubeconfig file to determine which cluster you are working with.
The command above only creates the configuration for the cluster; nothing is provisioned yet. When you have your options ready and run the command, you will see something like this:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster k8s.example.com
* edit your node instance group: kops edit ig --name=k8s.example.com nodes
* edit your master instance group: kops edit ig --name=k8s.example.com master-us-east-1d
By editing the cluster and instance groups, you can actually have some great fun. You can create extra node instance groups with ‘kops create ig’ if, for example, you want some GPU instances for a certain type of service. You can also add cloudLabels for accounting in AWS, nodeLabels for managing groups inside Kubernetes, and taints and tolerations for advanced pod scheduling.
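As a sketch of where those knobs live, here is a hypothetical GPU instance group spec of the kind you would see via ‘kops edit ig’. The field names follow the kops InstanceGroup spec; the group name, labels, taint, and machine type are illustrative, not kops defaults:

```yaml
# Hypothetical instance group for GPU workloads; values are examples only.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: gpu-nodes
spec:
  role: Node
  machineType: p2.xlarge        # GPU instance type (example)
  minSize: 2
  maxSize: 2
  subnets:
  - us-east-1d
  cloudLabels:
    team: research              # becomes an AWS tag, useful for accounting
  nodeLabels:
    workload: gpu               # visible to Kubernetes node selectors
  taints:
  - dedicated=gpu:NoSchedule    # only pods tolerating this taint land here
```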
And when you are ready to create your cluster:
$ kops update cluster k8s.example.com
To run any command in kops that makes changes to the cluster, you have to specify ‘--yes’. After you run this command with ‘--yes’, it will take a few minutes for AWS to provision your cluster.
$ kubectl cluster-info
Kubernetes master is running at https://api.k8s.example.com
KubeDNS is running at https://api.k8s.example.com/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Now, when the next command returns data, your cluster is up and ready for you to deploy your code.
$ kubectl get nodes
Keeping your cluster up to date
Making changes to your cluster, and keeping up with the latest version of Kubernetes, is very painless with kops. Since kops applies a rolling update to each instance group that needs changes, there should be no downtime. In our testing, we noticed only a small latency spike during this part of the update.
$ kops update cluster --yes
$ kops rolling-update cluster --yes
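To move to a newer Kubernetes release specifically, kops also provides an ‘upgrade’ subcommand that bumps the version in the cluster spec before you apply and roll it out. A sketch of the full flow with our example cluster name (run each command without ‘--yes’ first to preview what it will do, since these mutate real AWS resources):

```shell
# Bump the Kubernetes version in the cluster spec (state store only).
kops upgrade cluster k8s.example.com --yes
# Apply the changed spec to AWS.
kops update cluster k8s.example.com --yes
# Replace instances one group at a time so the cluster stays up.
kops rolling-update cluster k8s.example.com --yes
```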
Cluster clean up
If you are creating extra infrastructure with Terraform, or manually (as mentioned in the last post), be sure to use that same method to delete anything that shares a subnet or VPC with your cluster first.
$ kops delete cluster --name k8s.example.com
We chose kops and AWS for flexibility and reliability in creating clusters. We’ve also chosen to use Terraform to build out any external infrastructure we need inside AWS for our clusters. Using kops to create, maintain, and destroy clusters is very straightforward and predictable.
We are regularly creating and deleting clusters for testing using the method outlined above. Kops has been indispensable, with its feature set and stellar documentation.