
Kops on AWS

Damian Igbe, PhD
Feb. 26, 2022, 7:36 p.m.


kops (Kubernetes Operations) is a toolkit for deploying and managing Kubernetes on public clouds. Unlike kubeadm, kops can provision the underlying cloud resources before deploying Kubernetes, and it is designed to manage the overall lifecycle of a cluster. This tutorial uses kops on AWS to deploy a production Kubernetes cluster, and it assumes that you are familiar with AWS and with Kubernetes architecture.

The installation procedure is divided into two main sections:

  • Section 1 builds the kops infrastructure.
  • Section 2 deploys a Kubernetes cluster with kops.


Section 1: Create kops infrastructure

The first step is to create the server where kops will be installed. This server, or virtual instance, can be your laptop or an instance on AWS. Once kops is installed, it is used to deploy the Kubernetes cluster. Here, the deployment instance is created on AWS.

Step 1: Go to the AWS management console, launch an instance, then SSH into it and perform the following installations.

Step 2: Install kubectl

[ec2-user@ip-172-31-51-120 ~]$curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
[ec2-user@ip-172-31-51-120 ~]$chmod +x ./kubectl
[ec2-user@ip-172-31-51-120 ~]$sudo mv ./kubectl /usr/local/bin/kubectl

Step 3: Install kops

[ec2-user@ip-172-31-51-120 ~]$sudo wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-linux-amd64 
[ec2-user@ip-172-31-51-120 ~]$sudo chmod +x kops-linux-amd64 
[ec2-user@ip-172-31-51-120 ~]$sudo mv kops-linux-amd64 /usr/local/bin/kops

Step 4: Amazon Linux is used here, and it comes with the AWS CLI preinstalled. You will have to install the AWS tools yourself if you use a different image.

Step 5: Create a subdomain for the cluster in Route53, leaving the domain at another registrar

Kubernetes uses DNS for discovery within the cluster and so that clients can reach the Kubernetes API server. A real registered domain is needed, and from that domain you can create a subdomain. You can either host your domain on AWS or host it with a registrar outside of AWS. In this tutorial, the domain (cloudtechexperts.com) is hosted with a registrar outside of AWS, while the subdomain (cte.cloudtechexperts.com) is created and hosted on AWS Route53.

Step 6: In AWS Route53, create a subdomain and note your name servers

When a domain is hosted by an outside registrar and only the subdomain is hosted on AWS Route53, you must modify your registrar's NS (name server) records. Create a hosted zone in Route53, then add the subdomain's NS records at your registrar. The instructions for doing this vary between registrars, so check the documentation for your own registrar.

You will need to install jq to get the command below to work. Create the subdomain and note your name servers.

[ec2-user@ip-172-31-51-120 ~]$aws configure

[ec2-user@ip-172-31-51-120 ~]$sudo yum install -y jq

[ec2-user@ip-172-31-51-120 ~]$ID=$(uuidgen) && aws route53 create-hosted-zone --name cte.cloudtechexperts.com --caller-reference $ID | jq .DelegationSet.NameServers
[
 "ns-650.awsdns-17.net.",
 "ns-1300.awsdns-34.org.",
 "ns-1883.awsdns-43.co.uk.",
 "ns-10.awsdns-01.com."
]

Note that these name servers are assigned by Route53; if you run the command several times, the values will be different.

Step 7: Make modifications to your domain with your registrar

Now log in to your registrar's site, create a new subdomain, and use the four NS records returned by the command above for the new subdomain. This must be done in order to use your cluster. Be careful not to change your top-level NS records, or you might take your site offline.

Step 8: Test that the subdomain is resolving

[ec2-user@ip-172-31-51-120 ~]$dig NS cte.cloudtechexperts.com

; <<>> DiG 9.9.4-RedHat-9.9.4-51.amzn2 <<>> NS cte.cloudtechexperts.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53049
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;cte.cloudtechexperts.com.	IN	NS

;; ANSWER SECTION:
cte.cloudtechexperts.com.	60	IN	NS	ns-650.awsdns-17.net.
cte.cloudtechexperts.com.	60	IN	NS	ns-1300.awsdns-34.org.
cte.cloudtechexperts.com.	60	IN	NS	ns-1883.awsdns-43.co.uk.
cte.cloudtechexperts.com.	60	IN	NS	ns-10.awsdns-01.com.

;; Query time: 40 msec
;; SERVER: 172.31.0.2#53(172.31.0.2)
;; WHEN: Fri Jan 19 05:34:11 UTC 2018
;; MSG SIZE  rcvd: 189
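The delegation check above can also be scripted. The sketch below compares an expected name-server list against a resolved one; the server names are this tutorial's sample values, and the live `dig` lookup is shown only in a comment so the sketch runs without network access.

```shell
# Sketch: verify NS delegation by comparing the name servers Route53
# returned with the ones public DNS resolves for the subdomain.
# These are the sample values from this tutorial; substitute your own.
expected="ns-10.awsdns-01.com.
ns-1300.awsdns-34.org.
ns-1883.awsdns-43.co.uk.
ns-650.awsdns-17.net."

# With live DNS you would instead use:
#   actual=$(dig +short NS cte.cloudtechexperts.com)
actual="$expected"   # stand-in so the sketch runs without network access

if [ "$(printf '%s\n' "$actual" | sort)" = "$expected" ]; then
  result="delegation OK"
else
  result="delegation MISMATCH"
fi
echo "$result"
```

If the lists differ, re-check the NS records you entered at your registrar and allow time for DNS propagation.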

Step 9: Create an S3 bucket and export it as the kops state store

[ec2-user@ip-172-31-51-120 ~]$export KOPS_STATE_STORE=s3://clusters.cte.cloudtechexperts.com

Note that kops depends heavily on the KOPS_STATE_STORE value: the cluster's configuration and state are kept in this bucket.
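The state-store bucket itself must exist before kops can use it. A minimal sketch of creating it with versioning enabled (versioning lets kops keep earlier versions of the cluster state) might look like the following; the `run` helper and `DRY_RUN` flag are illustrative additions that echo the aws commands instead of executing them until you set DRY_RUN=0.

```shell
# Sketch: create the kops state-store bucket and enable versioning.
# DRY_RUN and run() are illustrative: they echo the aws commands rather
# than executing them until you set DRY_RUN=0.
BUCKET=clusters.cte.cloudtechexperts.com
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

run aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
run aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

export KOPS_STATE_STORE="s3://$BUCKET"
echo "KOPS_STATE_STORE=$KOPS_STATE_STORE"
```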

Section 2: Create Kubernetes Cluster

Step 1: Generate the SSH key pair required for the kops installation

[ec2-user@ip-172-31-51-120 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ec2-user/.ssh/id_rsa.
Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:HvbhrP9HjgrUQes5AhDNwTDsAzUQc8rto1l4hH9OBzs ec2-user@ip-172-31-51-120
The key's randomart image is:
+---[RSA 2048]----+
| +=O*.. . |
| ..*.++ . . |
| +oo o o |
| =o + o o |
| . *.E S * |
| = = * * o . |
| o . o + + |
| o . o |
| ..oo.. |
+----[SHA256]-----+
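The same key pair can be generated non-interactively, which is useful in scripts. This sketch writes to a temporary directory so it does not overwrite an existing ~/.ssh/id_rsa; point -f at ~/.ssh/id_rsa to reproduce the step above exactly.

```shell
# Sketch: generate an RSA key pair non-interactively (equivalent to
# accepting the ssh-keygen defaults above). Writes to a temp dir so an
# existing ~/.ssh/id_rsa is not clobbered.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$keydir/id_rsa" -N "" -q
ls "$keydir"
```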

Step 2: Create Kubernetes cluster

[ec2-user@ip-172-31-51-120 ~]$ kops create cluster --cloud=aws --zones=us-east-1d --name=cte.cloudtechexperts.com --dns-zone=cte.cloudtechexperts.com --dns public
I0123 00:35:39.615698    2989 create_cluster.go:971] Using SSH public key: /home/ec2-user/.ssh/id_rsa.pub
I0123 00:35:39.732854    2989 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet us-east-1d
Previewing changes that will be made:

I0123 00:35:40.752302    2989 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I0123 00:35:40.897996    2989 executor.go:91] Tasks: 31 done / 73 total; 24 can run
I0123 00:35:42.895658    2989 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I0123 00:35:43.014471    2989 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I0123 00:35:43.053721    2989 executor.go:91] Tasks: 73 done / 73 total; 0 can run
Will create resources:
  AutoscalingGroup/master-us-east-1d.masters.cte.cloudtechexperts.com
  	MinSize             	1
  	MaxSize             	1
  	Subnets             	[name:us-east-1d.cte.cloudtechexperts.com]
  	Tags                	{k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: master-us-east-1d, k8s.io/role/master: 1, Name: master-us-east-1d.masters.cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com}
  	LaunchConfiguration 	name:master-us-east-1d.masters.cte.cloudtechexperts.com

  AutoscalingGroup/nodes.cte.cloudtechexperts.com
  	MinSize             	2
  	MaxSize             	2
  	Subnets             	[name:us-east-1d.cte.cloudtechexperts.com]
  	Tags                	{k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: nodes, k8s.io/role/node: 1, Name: nodes.cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com}
  	LaunchConfiguration 	name:nodes.cte.cloudtechexperts.com

  DHCPOptions/cte.cloudtechexperts.com
  	DomainName          	ec2.internal
  	DomainNameServers   	AmazonProvidedDNS

  EBSVolume/d.etcd-events.cte.cloudtechexperts.com
  	AvailabilityZone    	us-east-1d
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{k8s.io/etcd/events: d/d, k8s.io/role/master: 1, Name: d.etcd-events.cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com}

  EBSVolume/d.etcd-main.cte.cloudtechexperts.com
  	AvailabilityZone    	us-east-1d
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{Name: d.etcd-main.cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com, k8s.io/etcd/main: d/d, k8s.io/role/master: 1}

  IAMInstanceProfile/masters.cte.cloudtechexperts.com

  IAMInstanceProfile/nodes.cte.cloudtechexperts.com

  IAMInstanceProfileRole/masters.cte.cloudtechexperts.com
  	InstanceProfile     	name:masters.cte.cloudtechexperts.com id:masters.cte.cloudtechexperts.com
  	Role                	name:masters.cte.cloudtechexperts.com

  IAMInstanceProfileRole/nodes.cte.cloudtechexperts.com
  	InstanceProfile     	name:nodes.cte.cloudtechexperts.com id:nodes.cte.cloudtechexperts.com
  	Role                	name:nodes.cte.cloudtechexperts.com

  IAMRole/masters.cte.cloudtechexperts.com
  	ExportWithID        	masters

  IAMRole/nodes.cte.cloudtechexperts.com
  	ExportWithID        	nodes

  IAMRolePolicy/masters.cte.cloudtechexperts.com
  	Role                	name:masters.cte.cloudtechexperts.com

  IAMRolePolicy/nodes.cte.cloudtechexperts.com
  	Role                	name:nodes.cte.cloudtechexperts.com

  InternetGateway/cte.cloudtechexperts.com
  	VPC                 	name:cte.cloudtechexperts.com
  	Shared              	false

  Keypair/apiserver-aggregator
  	Subject             	cn=aggregator
  	Type                	client
  	Signer              	name:apiserver-aggregator-ca id:cn=apiserver-aggregator-ca

  Keypair/apiserver-aggregator-ca
  	Subject             	cn=apiserver-aggregator-ca
  	Type                	ca

  Keypair/apiserver-proxy-client
  	Subject             	cn=apiserver-proxy-client
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/ca
  	Subject             	cn=kubernetes
  	Type                	ca

  Keypair/kops
  	Subject             	o=system:masters,cn=kops
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kube-controller-manager
  	Subject             	cn=system:kube-controller-manager
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kube-proxy
  	Subject             	cn=system:kube-proxy
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kube-scheduler
  	Subject             	cn=system:kube-scheduler
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kubecfg
  	Subject             	o=system:masters,cn=kubecfg
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kubelet
  	Subject             	o=system:nodes,cn=kubelet
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/kubelet-api
  	Subject             	cn=kubelet-api
  	Type                	client
  	Signer              	name:ca id:cn=kubernetes

  Keypair/master
  	Subject             	cn=kubernetes-master
  	Type                	server
  	AlternateNames      	[100.64.0.1, 127.0.0.1, api.cte.cloudtechexperts.com, api.internal.cte.cloudtechexperts.com, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]
  	Signer              	name:ca id:cn=kubernetes

  LaunchConfiguration/master-us-east-1d.masters.cte.cloudtechexperts.com
  	ImageID             	kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-05
  	InstanceType        	m3.medium
  	SSHKey              	name:kubernetes.cte.cloudtechexperts.com-1d:72:9d:30:82:f5:ce:29:65:41:52:20:03:36:b9:54 id:kubernetes.cte.cloudtechexperts.com-1d:72:9d:30:82:f5:ce:29:65:41:52:20:03:36:b9:54
  	SecurityGroups      	[name:masters.cte.cloudtechexperts.com]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:masters.cte.cloudtechexperts.com id:masters.cte.cloudtechexperts.com
  	RootVolumeSize      	64
  	RootVolumeType      	gp2
  	SpotPrice           	

  LaunchConfiguration/nodes.cte.cloudtechexperts.com
  	ImageID             	kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-05
  	InstanceType        	t2.medium
  	SSHKey              	name:kubernetes.cte.cloudtechexperts.com-1d:72:9d:30:82:f5:ce:29:65:41:52:20:03:36:b9:54 id:kubernetes.cte.cloudtechexperts.com-1d:72:9d:30:82:f5:ce:29:65:41:52:20:03:36:b9:54
  	SecurityGroups      	[name:nodes.cte.cloudtechexperts.com]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:nodes.cte.cloudtechexperts.com id:nodes.cte.cloudtechexperts.com
  	RootVolumeSize      	128
  	RootVolumeType      	gp2
  	SpotPrice           	

  ManagedFile/cte.cloudtechexperts.com-addons-bootstrap
  	Location            	addons/bootstrap-channel.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-core.addons.k8s.io
  	Location            	addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-dns-controller.addons.k8s.io-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-kube-dns.addons.k8s.io-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-limit-range.addons.k8s.io
  	Location            	addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-rbac.addons.k8s.io-k8s-1.8
  	Location            	addons/rbac.addons.k8s.io/k8s-1.8.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-storage-aws.addons.k8s.io-v1.6.0
  	Location            	addons/storage-aws.addons.k8s.io/v1.6.0.yaml

  ManagedFile/cte.cloudtechexperts.com-addons-storage-aws.addons.k8s.io-v1.7.0
  	Location            	addons/storage-aws.addons.k8s.io/v1.7.0.yaml

  Route/0.0.0.0/0
  	RouteTable          	name:cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	InternetGateway     	name:cte.cloudtechexperts.com

  RouteTable/cte.cloudtechexperts.com
  	VPC                 	name:cte.cloudtechexperts.com

  RouteTableAssociation/us-east-1d.cte.cloudtechexperts.com
  	RouteTable          	name:cte.cloudtechexperts.com
  	Subnet              	name:us-east-1d.cte.cloudtechexperts.com

  SSHKey/kubernetes.cte.cloudtechexperts.com-1d:72:9d:30:82:f5:ce:29:65:41:52:20:03:36:b9:54
  	KeyFingerprint      	2b:e8:ab:91:a5:c5:32:a4:42:a9:42:b7:ca:15:05:f7

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system:controller_manager

  Secret/system:dns

  Secret/system:logging

  Secret/system:monitoring

  Secret/system:scheduler

  SecurityGroup/masters.cte.cloudtechexperts.com
  	Description         	Security group for masters
  	VPC                 	name:cte.cloudtechexperts.com
  	RemoveExtraRules    	[port=22, port=443, port=2380, port=2381, port=4001, port=4002, port=4789, port=179]

  SecurityGroup/nodes.cte.cloudtechexperts.com
  	Description         	Security group for nodes
  	VPC                 	name:cte.cloudtechexperts.com
  	RemoveExtraRules    	[port=22]

  SecurityGroupRule/all-master-to-master
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	SourceGroup         	name:masters.cte.cloudtechexperts.com

  SecurityGroupRule/all-master-to-node
  	SecurityGroup       	name:nodes.cte.cloudtechexperts.com
  	SourceGroup         	name:masters.cte.cloudtechexperts.com

  SecurityGroupRule/all-node-to-node
  	SecurityGroup       	name:nodes.cte.cloudtechexperts.com
  	SourceGroup         	name:nodes.cte.cloudtechexperts.com

  SecurityGroupRule/https-external-to-master-0.0.0.0/0
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443

  SecurityGroupRule/master-egress
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-egress
  	SecurityGroup       	name:nodes.cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-to-master-tcp-1-2379
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	Protocol            	tcp
  	FromPort            	1
  	ToPort              	2379
  	SourceGroup         	name:nodes.cte.cloudtechexperts.com

  SecurityGroupRule/node-to-master-tcp-2382-4000
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	Protocol            	tcp
  	FromPort            	2382
  	ToPort              	4000
  	SourceGroup         	name:nodes.cte.cloudtechexperts.com

  SecurityGroupRule/node-to-master-tcp-4003-65535
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	Protocol            	tcp
  	FromPort            	4003
  	ToPort              	65535
  	SourceGroup         	name:nodes.cte.cloudtechexperts.com

  SecurityGroupRule/node-to-master-udp-1-65535
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	Protocol            	udp
  	FromPort            	1
  	ToPort              	65535
  	SourceGroup         	name:nodes.cte.cloudtechexperts.com

  SecurityGroupRule/ssh-external-to-master-0.0.0.0/0
  	SecurityGroup       	name:masters.cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  SecurityGroupRule/ssh-external-to-node-0.0.0.0/0
  	SecurityGroup       	name:nodes.cte.cloudtechexperts.com
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  Subnet/us-east-1d.cte.cloudtechexperts.com
  	VPC                 	name:cte.cloudtechexperts.com
  	AvailabilityZone    	us-east-1d
  	CIDR                	172.20.32.0/19
  	Shared              	false
  	Tags                	{Name: us-east-1d.cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com, kubernetes.io/cluster/cte.cloudtechexperts.com: owned, kubernetes.io/role/elb: 1}

  VPC/cte.cloudtechexperts.com
  	CIDR                	172.20.0.0/16
  	EnableDNSHostnames  	true
  	EnableDNSSupport    	true
  	Shared              	false
  	Tags                	{Name: cte.cloudtechexperts.com, KubernetesCluster: cte.cloudtechexperts.com, kubernetes.io/cluster/cte.cloudtechexperts.com: owned}

  VPCDHCPOptionsAssociation/cte.cloudtechexperts.com
  	VPC                 	name:cte.cloudtechexperts.com
  	DHCPOptions         	name:cte.cloudtechexperts.com

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster cte.cloudtechexperts.com
 * edit your node instance group: kops edit ig --name=cte.cloudtechexperts.com nodes
 * edit your master instance group: kops edit ig --name=cte.cloudtechexperts.com master-us-east-1d

Finally configure your cluster with: kops update cluster cte.cloudtechexperts.com --yes

[ec2-user@ip-172-31-51-120 ~]$ 

Step 3: Apply the changes to create the actual cluster

[ec2-user@ip-172-31-51-120 ~]$ kops update cluster cte.cloudtechexperts.com --yes
I0123 00:38:14.858883 2995 executor.go:91] Tasks: 0 done / 73 total; 31 can run
I0123 00:38:15.502235 2995 vfs_castore.go:430] Issuing new certificate: "ca"
I0123 00:38:15.556668 2995 vfs_castore.go:430] Issuing new certificate: "apiserver-aggregator-ca"
I0123 00:38:15.725336 2995 executor.go:91] Tasks: 30 done / 73 total; 19 can run
I0123 00:38:16.889692 2995 vfs_castore.go:430] Issuing new certificate: "apiserver-proxy-client"
I0123 00:38:17.067337 2995 vfs_castore.go:430] Issuing new certificate: "kubelet"
I0123 00:38:17.261994 2995 vfs_castore.go:430] Issuing new certificate: "apiserver-aggregator"
I0123 00:38:18.069315 2995 vfs_castore.go:430] Issuing new certificate: "kube-controller-manager"
I0123 00:38:18.114266 2995 vfs_castore.go:430] Issuing new certificate: "kubelet-api"
I0123 00:38:18.266735 2995 vfs_castore.go:430] Issuing new certificate: "kube-proxy"
I0123 00:38:18.342228 2995 vfs_castore.go:430] Issuing new certificate: "kubecfg"
I0123 00:38:18.455437 2995 vfs_castore.go:430] Issuing new certificate: "master"
I0123 00:38:18.525959 2995 vfs_castore.go:430] Issuing new certificate: "kops"
I0123 00:38:18.831827 2995 vfs_castore.go:430] Issuing new certificate: "kube-scheduler"
I0123 00:41:00.753649 2995 executor.go:91] Tasks: 48 done / 73 total; 1 can run
I0123 00:41:02.037505 2995 executor.go:91] Tasks: 49 done / 73 total; 6 can run
I0123 00:41:04.816587 2995 executor.go:91] Tasks: 55 done / 73 total; 16 can run
I0123 00:41:05.464700 2995 executor.go:91] Tasks: 71 done / 73 total; 2 can run
I0123 00:41:05.926524 2995 executor.go:91] Tasks: 73 done / 73 total; 0 can run
I0123 00:41:05.926663 2995 dns.go:153] Pre-creating DNS records
I0123 00:41:06.381707 2995 update_cluster.go:248] Exporting kubecfg for cluster
kops has set your kubectl context to cte.cloudtechexperts.com

Cluster is starting. It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.cte.cloudtechexperts.com
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md

Note that kops requires an IAM user with the following permissions:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess

Note: You can delete the cluster with:
kops delete cluster --name=cte.cloudtechexperts.com --yes

Step 4: Test the cluster nodes to see if all are ready

It will take about five minutes for the Kubernetes cluster to become fully ready, so be patient.

[ec2-user@ip-172-31-51-120 ~]$ kubectl get nodes
NAME                            STATUS    ROLES     AGE       VERSION
ip-172-20-39-174.ec2.internal   Ready     node      20s       v1.8.6
ip-172-20-45-236.ec2.internal   Ready     node      39s       v1.8.6
ip-172-20-51-231.ec2.internal   Ready     master    2m        v1.8.6
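Readiness can also be checked mechanically. The sketch below counts non-Ready nodes from `kubectl get nodes` output; the sample text is this tutorial's output, and against a live cluster you would capture the real command instead.

```shell
# Sketch: count nodes whose STATUS column is not "Ready". The sample text
# is this tutorial's output; with a live cluster, replace it with
#   nodes_output=$(kubectl get nodes)
nodes_output='NAME                            STATUS    ROLES     AGE       VERSION
ip-172-20-39-174.ec2.internal   Ready     node      20s       v1.8.6
ip-172-20-45-236.ec2.internal   Ready     node      39s       v1.8.6
ip-172-20-51-231.ec2.internal   Ready     master    2m        v1.8.6'

# Skip the header row, keep rows where column 2 is not "Ready", count them.
not_ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 != "Ready"' | wc -l)
echo "nodes not ready: $not_ready"
```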

Step 5: List the pods to ensure that they are all running

[ec2-user@ip-172-31-51-120 ~]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                    READY     STATUS    RESTARTS   AGE
kube-system   dns-controller-5667d8d9f6-t8xmb                         1/1       Running   0          2m
kube-system   etcd-server-events-ip-172-20-51-231.ec2.internal        1/1       Running   0          1m
kube-system   etcd-server-ip-172-20-51-231.ec2.internal               1/1       Running   0          2m
kube-system   kube-apiserver-ip-172-20-51-231.ec2.internal            1/1       Running   0          1m
kube-system   kube-controller-manager-ip-172-20-51-231.ec2.internal   1/1       Running   0          2m
kube-system   kube-dns-7f56f9f8c7-7x2f8                               3/3       Running   0          15s
kube-system   kube-dns-7f56f9f8c7-bcnlq                               3/3       Running   0          2m
kube-system   kube-dns-autoscaler-f4c47db64-hdrxj                     1/1       Running   0          2m
kube-system   kube-proxy-ip-172-20-39-174.ec2.internal                1/1       Running   0          27s
kube-system   kube-proxy-ip-172-20-51-231.ec2.internal                1/1       Running   0          2m
kube-system   kube-scheduler-ip-172-20-51-231.ec2.internal            1/1       Running   0          1m

Step 6: Create a pod to test that the cluster is working properly

[ec2-user@ip-172-31-51-120 ~]$ 
[ec2-user@ip-172-31-51-120 ~]$ kubectl run pod1 --image=nginx
deployment "pod1" created

[ec2-user@ip-172-31-51-120 ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
pod1-7c9dd54f98-vh5xv   1/1       Running   0          8s
[ec2-user@ip-172-31-51-120 ~]$ 

Install the dashboard

To install the dashboard, see the "install dashboard" section of this blog's post on deploying Kubernetes using kubeadm.

Summary of Commands

Step 1: Install kubectl
 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
 chmod +x ./kubectl
 sudo mv ./kubectl /usr/local/bin/kubectl

Step 2: Install kops
 sudo wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-linux-amd64
 sudo chmod +x kops-linux-amd64
 sudo mv kops-linux-amd64 /usr/local/bin/kops

Step 3: DNS
 aws configure
 sudo yum install -y jq

ID=$(uuidgen) && aws route53 create-hosted-zone --name cte.cloudtechexperts.com --caller-reference $ID | jq .DelegationSet.NameServers

dig NS cte.cloudtechexperts.com

Step 4: S3 state store and SSH key
 export KOPS_STATE_STORE=s3://clusters.cte.cloudtechexperts.com
 ssh-keygen

Step 5: Create cluster
 kops create cluster --cloud=aws --zones=us-east-1d --name=cte.cloudtechexperts.com --dns-zone=cte.cloudtechexperts.com --dns public
 kops update cluster cte.cloudtechexperts.com --yes
 kubectl get nodes
 kubectl run pod1 --image=nginx

Conclusion

Here, I have presented how to deploy Kubernetes on AWS using kops. If you follow the steps above, you will be able to stand up a Kubernetes cluster in no time. I hope you find this useful; if you do, please share and like the blog below.
