By default, workloads deployed to Kubernetes are only accessible from containers in the same cluster. You can add services and endpoints to connect workloads to each other, or to access your workloads from outside of the cluster.

To create a new service, select the workload that you would like to expose:

Select the Load Balancers tab and click the Create Load Balancer button:

Select the cluster where the service will be created:

Enter YAML to define your service, or drag and drop an existing YAML file:

Service Types

The service's type determines where the service is exposed. Services can be accessible only from within the cluster, or they can be configured to allow remote access from clients outside of the cluster. Each subsequent service type builds upon the previous one. For example, a NodePort service uses ClusterIP features to route traffic to the appropriate target container port.


If not specified, the service type defaults to ClusterIP. This allows access to the workload only from within the cluster, on a virtual IP address. This is useful for workloads that should only receive traffic from within the cluster, such as a database server:
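The original screenshot is not reproduced here, but a manifest matching the Postgres example described below might look like the following sketch. The selector label is an assumption; the service name matches the kubectl commands used later in this article:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres          # assumed name; used in the kubectl commands below
spec:
  type: ClusterIP         # this is also the default when type is omitted
  selector:
    app: postgres         # assumed label on the Postgres pods
  ports:
    - protocol: TCP
      port: 5432          # port the service listens on
      targetPort: 5432    # container port on the pod
```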

In the example above, a load balancer was created that listens on port 5432 and proxies TCP traffic to the pod's target port 5432. A virtual cluster IP is automatically allocated for the service and serves as the workload's endpoint. After the service has been created, you can get the clusterIP by running the command:

kubectl get services postgres -o=yaml  

In this case, the command output includes the allocated clusterIP. Any pod within the cluster should now be able to access Postgres at that IP address, on port 5432.


When the service's type is set to NodePort, a new static port is exposed on all nodes, which proxies traffic arriving on that nodePort on each node's public IP address to the service. The service then uses the cluster network to proxy traffic from that port to the appropriate pod's clusterIP and targetPort.
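A NodePort manifest for the same workload might look like this sketch; as above, the selector label is an assumption, and the nodePort itself can be left out so that Kubernetes allocates one automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres        # assumed label on the Postgres pods
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
      # nodePort is allocated automatically (from the cluster's
      # node-port range) unless set explicitly here
```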

This is useful for allowing incoming traffic from outside of the cluster. For example, let's assume that your cluster has two nodes, each with a public IP address. Running the following command will reveal that 32220 was automatically allocated as the nodePort:

kubectl get services postgres -o=yaml 

The pod should now be externally accessible on both nodes' public IP addresses, at the allocated nodePort. If you created a DNS record and pointed it at those public IPs, you could also access your service remotely at that hostname, on the same port.


Some cloud providers, such as AWS, Azure, and GCE/GKE, support external load balancers; an example of an external load balancer is an ELB on AWS. Creating a LoadBalancer type service automatically provisions a new load balancer in your provider and adds an ingress to allow access from outside of the cluster. Here is an example of a LoadBalancer service on GKE:
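The GKE example from the original screenshot is not shown here; a manifest along these lines would match the description (selector label assumed, and the nodePort is still allocated automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: LoadBalancer     # the only change from the NodePort example
  selector:
    app: postgres        # assumed label on the Postgres pods
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
```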

Except for the type field, the configuration is almost identical to that of the NodePort service above. After the service has been created, you should see a new load balancer appear in your cloud provider's console:

You can run the following command to reveal the nodePort and see that the service is mapped to this load balancer's IP address:

kubectl get services postgres -o=yaml 

In this case, the nodePort was set to 31731. The load balancer proxies incoming traffic to each node's public IP address on that nodePort. The service then proxies traffic over the cluster network to the appropriate pod's targetPort.

You can view details about your service and endpoint on each workload's detail page. After your service is configured, you can add additional L7 routing logic using ingress rules.
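As a rough sketch of such L7 routing, an ingress rule that routes HTTP traffic for a hostname to a backing service might look like the following. The hostname, service name, and port are all illustrative (ingress handles HTTP/HTTPS, so a web service is used here rather than the Postgres example above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative name
spec:
  rules:
    - host: app.example.com      # assumed hostname, pointed at your ingress controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # assumed HTTP service name
                port:
                  number: 80
```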
