All Containership Kubernetes Engine clusters are provisioned with open firewall rules on their respective cloud providers. If you would like to go into your cloud provider and close off traffic to the cluster, you must still allow ingress and egress to and from our API servers so that our coordinator can sync registries, upgrades, plugins, and SSH keys, and so that our proxy service can facilitate calls from the cloud dashboard and kubectl (see the example after the IP list below).

The following are the IPs of our API servers:

35.243.206.101/32
35.185.77.206/32
35.237.130.185/32
35.227.90.174/32
35.229.76.52/32
35.231.164.206/32
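
The exact firewall configuration depends on your cloud provider. As an in-cluster complement, a NetworkPolicy along these lines would keep egress to the Containership API servers open if you also restrict traffic inside the cluster. This is a minimal sketch, not our exact configuration: the policy name, namespace, and port 443 are assumptions you should adjust to your environment.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-containership-api-egress   # illustrative name
  namespace: containership-core          # assumption: namespace running Containership components
spec:
  podSelector: {}                        # applies to all pods in this namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 35.243.206.101/32
        - ipBlock:
            cidr: 35.185.77.206/32
        - ipBlock:
            cidr: 35.237.130.185/32
        - ipBlock:
            cidr: 35.227.90.174/32
        - ipBlock:
            cidr: 35.229.76.52/32
        - ipBlock:
            cidr: 35.231.164.206/32
      ports:
        - protocol: TCP
          port: 443                      # assumption: API traffic over HTTPS
```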

Cluster Security

Your pods are not exposed to the internet unless you expose them with a NodePort or LoadBalancer service. Take care when creating these services, and make sure your application can handle internet traffic appropriately (see the example below).
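
For reference, a Service like the sketch below asks the cloud provider for a public load balancer and exposes the selected pods to the internet. The names, labels, and ports are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # hypothetical service name
spec:
  type: LoadBalancer          # provisions a cloud load balancer with a public IP
  selector:
    app: web-frontend         # hypothetical label on the pods to expose
  ports:
    - port: 80                # port exposed on the load balancer
      targetPort: 8080        # port the pods listen on
```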

Internal cluster communication is possible through any service type. You can control which pods can communicate with your workloads, and vice versa, using Kubernetes Network Policies (see the example below).
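
For instance, a policy along these lines would only allow pods labeled app: api to reach pods labeled app: database, and only on port 5432. All names, labels, and ports here are hypothetical; adapt them to your own workloads.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only     # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: database           # hypothetical label on the protected pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api        # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 5432          # hypothetical database port
```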

All requests to the Containership Coordinator and Kubernetes API are authenticated through our proxy, and all communication is done over TLS. All etcd communication stays local to the cluster and uses strong credentials and mutual certificate authentication with the API server. You can read more about securing Kubernetes clusters here.
