Kubectl cleanup plugin: https://github.com/b23llc/kubectl-config-cleanup
At B23, much of our work involves spinning up new Kubernetes clusters on edge computing and storage devices to test our applied machine learning (“AML”) workloads. Using our B23 Data Platform (“BDP”), we can quickly configure and connect to a new Kubernetes cluster on an edge device, on premises, or in any number of public vendor clouds. This lets us iterate and develop our data engineering and AML workloads quickly and at low cost for our customers.
For most people who manage and operate multiple Kubernetes clusters, the number of configurations required to connect to those clusters can get overwhelming. In my case, I have 37 different context entries in my ~/.kube/config. Only 3 of those entries are persistent clusters: dev, staging, and prod. The rest were for ephemeral clusters that barely lasted a single work day. This workflow has become commonplace with the increased availability of managed Kubernetes services from cloud vendors like GCP, AWS, Azure, and DigitalOcean, to name a few.
Commands like gcloud container clusters get-credentials and az aks get-credentials are really convenient for obtaining credentials for a newly launched cluster and connecting right away, but a cluster a day quickly turns into this:
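Something like the following, for instance. (This is an illustrative sketch; the cluster names, zones, resource groups, and account ID are all made up.) Each credentials command appends a context, cluster, and user entry to ~/.kube/config:
$ gcloud container clusters get-credentials dev-cluster --zone us-central1-a
$ az aks get-credentials --resource-group my-rg --name staging-cluster
$ aws eks update-kubeconfig --name prod-cluster --region us-east-1
$ kubectl config get-contexts -o name
arn:aws:eks:us-east-1:123456789012:cluster/prod-cluster
gke_my-project_us-central1-a_dev-cluster
staging-cluster
...
Repeat that for a few weeks of ephemeral clusters and the list stops fitting on one screen.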

Cleaning up an entry created by one of the above commands is easy enough… just run:
$ kubectl config delete-context arn:aws:eks:us-east1::cluster/awsss-5c6ef73dedd4ee5d4a540da1
$ kubectl config delete-cluster arn:aws:eks:us-east1::cluster/awsss-5c6ef73dedd4ee5d4a540da1
$ kubectl config unset users.arn:aws:eks:us-east1::cluster/awsss-5c6ef73dedd4ee5d4a540da1
And then do that 33 more times for all the other clusters. Unsurprisingly, I got tired of doing this every day. So instead I use:
$ kubectl config-cleanup --raw > ./kubeconfig
The plugin attempts to connect to each cluster defined by a context. If the connection fails, the corresponding context, user, and cluster entries are removed from the result.
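Because the command above just writes the cleaned config to stdout, nothing touches ~/.kube/config until you copy the result back. A cautious workflow might look like this (the file paths are just examples):
$ kubectl config-cleanup --raw > ./kubeconfig
$ KUBECONFIG=./kubeconfig kubectl config get-contexts   # inspect what survived the cleanup
$ cp ~/.kube/config ~/.kube/config.bak                  # keep a backup, just in case
$ cp ./kubeconfig ~/.kube/config                        # adopt the cleaned config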
If you also have 37 context entries in your kubeconfig, then try out the plugin for yourself: https://github.com/b23llc/kubectl-config-cleanup/releases/latest
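Installation is the usual kubectl plugin drill: kubectl picks up any executable named kubectl-* on your PATH, and underscores in the binary name map to dashes in the command, so a binary named kubectl-config_cleanup becomes kubectl config-cleanup. A rough sketch (grab the right binary URL for your platform from the releases page):
$ curl -Lo kubectl-config_cleanup <release binary URL for your platform>
$ chmod +x kubectl-config_cleanup
$ mv kubectl-config_cleanup /usr/local/bin/
$ kubectl config-cleanup --help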
If you’re interested in learning more about our Edge capabilities or data engineering services, head over to our website at https://b23.io or email us at info@b23.io.
#Kubectl #EdgeComputing #AppliedMachineLearning #dataengineering #Kubernetes