Using kubectl through SSH
This method comes in very handy when accessing a Kubernetes API server on a master/control plane node that was bootstrapped with kubeadm in a public cloud, because otherwise port 6443 of that node has to be open to public access. Instead, the same node can act as an SSH server, and the kubectl-to-apiserver communication can be tunneled through the SSH channel. The same thing can be done in a more structured way by using bastion hosts, which is explained in this article.
1. Things to configure in kubeconfig:
- Repoint the API server to localhost's port 6443:
kubectl config set clusters.kubernetes.server https://127.0.0.1:6443
- Repoint the TLS server name to the clusterIP of the kubernetes service. Otherwise a certificate error is generated, because the only IPs included in the apiserver certificate are the actual apiserver IP and the internal clusterIP of the kubernetes service (which is 10.96.0.1). The resulting cluster entry is sketched after this list:
kubectl config set clusters.kubernetes.tls-server-name 10.96.0.1
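After both commands, the cluster entry in the kubeconfig should look roughly like the sketch below (kubernetes is the default cluster name kubeadm writes; the certificate-authority-data value is omitted here):
clusters:
- cluster:
    certificate-authority-data: <omitted>
    server: https://127.0.0.1:6443
    tls-server-name: 10.96.0.1
  name: kubernetes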
2. Configuring the access on the client:
- Copy the kubectl config file from the kubeadm master/control plane node to the client (${cp} below holds the public IP or hostname of that node):
scp -i .ssh/DefaultKeyPAir.pem ubuntu@${cp}:.kube/config mykubeconfig
- Set the KUBECONFIG environment variable:
export KUBECONFIG=$(pwd)/mykubeconfig
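As a sanity check before opening the tunnel, kubectl config view prints the configuration the client will actually use, so the repointed server and TLS server name from step 1 can be verified:
kubectl config view --minify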
3. Instantiate the SSH tunnel:
export masterpublicip=1.2.3.4
export username=ubuntu
ssh -N -L 6443:localhost:6443 $username@$masterpublicip -i .ssh/DefaultKeyPAir.pem &
With the configuration above, the master/control plane node only needs to be open to SSH requests, and the only connection established from the client to the master is the SSH one. Yet kubectl commands on the client reach the apiserver on the master/control plane node, because the local port 6443 is forwarded through the tunnel.
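As a quick end-to-end test (assuming the tunnel from step 3 is still running in the background), any kubectl command on the client should now be answered through the tunnel, for example:
kubectl get nodes
When finished, the backgrounded tunnel can be stopped with kill %1, assuming it is the only background job in that shell.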