Today’s post is a follow-up to my Use Kubernetes In WSL blog post, where I outlined how to install Kubernetes on WSL. As noted at the end of that post, I was having issues connecting from the host, a Windows machine, to Kubernetes in WSL.

Connection Issue

The main issue I was facing was that I could not connect to a pod running on Kubernetes using Windows’ localhost. Take the following Nginx deployment, obtained from the official Kubernetes documentation.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

I can use the YAML content above to create a deployment by executing the following kubectl command.

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml

I can confirm the deployment was successful by verifying that all 3 pods are up and running using the following kubectl command.

kubectl get pods

Once the pods are ready to receive traffic we can expose our deployment using the following kubectl command.

kubectl expose deployment nginx-deployment --port=31080 --target-port=80 --type=NodePort
  • target-port is the port the Nginx container listens on inside the pod, in this case, port 80.
  • port is the port the service exposes inside the cluster, in this case, port 31080.
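
For reference, here is roughly the Service manifest that the expose command generates behind the scenes (you can see the real one with kubectl get svc nginx-deployment -o yaml). The commented nodePort line is my own addition: uncommenting it would pin the node port to a fixed value in the default 30000-32767 range instead of letting Kubernetes pick a random one.

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 31080          # service port inside the cluster
    targetPort: 80       # port the Nginx container listens on
    # nodePort: 30080    # optional: pin the node port (must be 30000-32767)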

Once the service has been created you can use the following kubectl command to see the cluster IP, the service port assigned to the Nginx deployment service, and the randomly generated node port used for connections from outside the cluster.

kubectl get svc -o wide

Here is my output of the command above.

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE     SELECTOR
kubernetes         ClusterIP   10.152.183.1     <none>        443/TCP           9m27s   <none>
nginx-deployment   NodePort    10.152.183.122   <none>        31080:30454/TCP   83s     app=nginx
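
If you want just the randomly generated node port (30454 above) without eyeballing the table, a jsonpath query such as the following should print it on its own.

kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'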

This is where I incorrectly assumed that I could reach the Nginx service running on Kubernetes from the Windows host using localhost. I made this assumption because WSL’s localhost now binds to Windows’ localhost by default. I assumed that by exposing the deployment, the randomly generated port, 30454, which is used to connect to the service through localhost on WSL, would also be bound to port 30454 on the Windows host and allow me to access the Nginx service.

These assumptions were wrong. The following screenshots confirm it.

Failed to connect to port 30454 from Windows' localhost.

Failed to connect on port 30454
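
If you prefer a command-line check over the browser, the same failure can be reproduced from PowerShell; something like the following should report TcpTestSucceeded as False.

Test-NetConnection -ComputerName localhost -Port 30454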

I really needed to figure out a way to connect to the Nginx app running in k8s on WSL while using localhost from Windows. I ended up chasing four possible solutions that I now want to share with you.

Using the Node IP

The first approach I took to connect to Nginx from Windows was to use the node IP. If you followed the commands under Connection Issue and have all three Nginx pods up and running, replace localhost with the node IP in your browser on Windows.

In my case, the node IP is 172.23.207.235. I obtained that value by running the following command.

kubectl get node -o wide

Here is the output of the command above.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                      CONTAINER-RUNTIME
gohan   Ready    <none>   30m   v1.25.4   172.23.207.235   <none>        Ubuntu 22.04.1 LTS   5.15.79.1-microsoft-standard-WSL2   containerd://1.6.9
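
If you only want the INTERNAL-IP value, for example for scripting, a jsonpath query along these lines should pull it out directly.

kubectl get node -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'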

To connect to Nginx, I opened up my browser to 172.23.207.235:30454.

The following screenshot shows that I can connect to a service running in Kubernetes on WSL from Windows using the node IP address.

Connect to the service using node ip

Bingo, I can reach the service from Windows. I wanted to connect to the service using localhost, so I thought, why not map 172.23.207.235 to localhost in the Windows hosts file? As it turns out, that is a big fat NO. I learned that you should never attempt it; leave localhost alone. Instead, map the IP to something similar, perhaps local.host or www.localhost.com.
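
For reference, the mapping is a single line in C:\Windows\System32\drivers\etc\hosts (edit the file as administrator); local.host is just the example name I am assuming here, use whatever you like.

172.23.207.235 local.host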

The following screenshot shows that you can create your own version of localhost.

Connected using a custom local host

If you still cannot reach the service after having modified the hosts file, flush the DNS cache in Windows using the following command.

ipconfig /flushdns

I then realized that using the node IP approach had two flaws.

  1. The port is randomly generated when the service is created. I needed the port to be predetermined.
  2. The Node IP can change if you restart or shut down WSL. As of December 2022, there is no easy way to set the IP of WSL to be static.

These two flaws made this approach not a viable solution so I moved on to the next approach, port forwarding.

Using Port-Forward

This approach involves using kubectl’s built-in port forwarding.

After following the commands under Connection Issue and verifying that all 3 pods are up and running, you can port forward from WSL’s localhost, which, remember, is now one and the same as Windows’ localhost.

To start port forwarding traffic to the Nginx pods, run the following command.

kubectl port-forward --address 0.0.0.0 service/nginx-deployment 31080

Kubernetes will forward traffic from 31080 to port 80 on the container, and since WSL’s localhost:31080 is now the same as Windows port 31080, I can open up the browser to localhost:31080 to connect to the service.
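
With the port-forward running, you can also verify the connection from a Windows terminal; recent Windows builds ship curl.exe, so something like the following should return the Nginx welcome page.

curl.exe http://localhost:31080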

The following screenshot shows that you can connect using localhost so long as you are port-forwarding the traffic in WSL.

Connected to the service using port-forward

Another approach would be to forward traffic from the pod instead of the service using the following command; the result is the same.

kubectl port-forward nginx-deployment-7fb96c846b-4c4r4 32196:80
  • nginx-deployment-7fb96c846b-4c4r4 is the name of one of the three pods running.
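
Your pod name will differ, since the suffix is generated. Rather than copying it from kubectl get pods, a label-based jsonpath query along these lines should fetch one of the pod names for you.

kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}'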

This is great, I love being able to connect using localhost. However, this solution is temporary; as soon as you stop port-forwarding traffic, the connection will stop working on Windows.

On to the next approach, using MetalLB.

Using MetalLB

This approach involves using a microk8s addon, MetalLB, to allow load balancing. After going through it, I realized that this approach ends up being essentially the same as Using the Node IP. If you didn’t like that solution, you can skip this part, or stick around and learn how to use MetalLB. Fun!

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. It can be enabled in microk8s using the following command.

microk8s enable metallb

Note that when you execute the command, MetalLB is going to expect you to provide an IP. You can specify it as a range like 10.64.140.43-10.64.140.49,192.168.0.105-192.168.0.111 or using CIDR notation. I prefer CIDR notation.

First, I need an IP address. If I run the following command I will get the IP of the node.

kubectl get node -o wide

This gives me the following output.

NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                      CONTAINER-RUNTIME
gohan   Ready    <none>   19h   v1.25.4   172.23.200.34   <none>        Ubuntu 22.04.1 LTS   5.15.79.1-microsoft-standard-WSL2   containerd://1.6.9

For my load balancer IP, I’m going to change the last octet from 34 to 49, so the IP for MetalLB is going to be 172.23.200.49.

microk8s enable metallb:172.23.200.49/32

Note that I’m using /32; this restricts the pool to a single address, which keeps the load balancer IP static. If you don’t understand why /32 makes the IP static, may I recommend watching Understanding CIDR Ranges and dividing networks.

Delete the Nginx service if you created one.

kubectl delete svc nginx-deployment

Now expose the deployment again, but this time the type will be LoadBalancer, not NodePort.

kubectl expose deployment nginx-deployment --port=31080 --target-port=80 --type=LoadBalancer
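
Before heading back to Windows, you can confirm that MetalLB actually handed out the address; the EXTERNAL-IP column for the service should now show 172.23.200.49 instead of <pending>.

kubectl get svc nginx-deployment -o wide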

Back on Windows, I’ll open a web browser and navigate to http://172.23.200.49:31080/ to confirm I can reach the Nginx service.

As expected, I can reach it.

Nginx load balancer

Just like with the Using the Node IP approach, if you don’t enjoy using an IP address to access the service, modify the Windows hosts file and map the IP of the load balancer to a custom domain.

You could also port proxy the traffic from Windows to WSL using netsh interface portproxy.

For example,

netsh interface portproxy add v4tov4 listenport=31080 listenaddress=0.0.0.0 connectport=31080 connectaddress=172.23.200.49

Remember, if you restart WSL, a new IP will be assigned to the node, which means your load balancer IP will no longer route traffic. You will need to re-enable MetalLB using the new node IP to get traffic flowing into the Kubernetes service, and also remap the netsh interface portproxy.
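
Here is a rough sketch of what that re-do looks like after a WSL restart; the microk8s commands run inside WSL, the netsh commands in an elevated PowerShell on Windows, and the angle-bracket values are placeholders you fill in based on the new node IP.

# inside WSL: re-enable MetalLB with an address from the new subnet
microk8s disable metallb
microk8s enable metallb:<new-load-balancer-ip>/32

# on Windows (admin PowerShell): point the port proxy at the new address
netsh interface portproxy delete v4tov4 listenport=31080 listenaddress=0.0.0.0
netsh interface portproxy add v4tov4 listenport=31080 listenaddress=0.0.0.0 connectport=31080 connectaddress=<new-load-balancer-ip>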

Using HostPort

This is what ultimately ended up being my preferred solution. HostPort keeps everything simple: no hosts files, no IPs, and no fuss. I consider this approach to be the same as port forwarding, but unlike Port-Forward, this approach is a more permanent solution, well, so long as WSL is not restarted or shut down.

HostPort applies to a Kubernetes container; the port is exposed on the WSL network host. This approach is often not recommended because the host IP can change, and in WSL that happens whenever WSL is restarted or shut down.

To see it in action, start from a clean slate and delete any nginx deployments and services you have running. We are going to deploy Nginx again, but this time we are going to modify the deployment by adding an additional configuration, hostPort, as seen in the YAML below, which is the content of my deploy.yaml file.

The number of replicas was reduced from 3 to 1 because, as mentioned above, the host port applies to a container running in a pod, and you cannot map one host port to multiple containers. You could leave the number of replicas at 3, but note that when the pods are created, Kubernetes will only set one of the three as ready, and that one ready pod will be the only one that can serve traffic.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
          hostPort: 5700

As you can tell from the YAML, I decided to use port 5700. I can now apply this deployment using the following command.

kubectl apply -f deploy.yaml

You can use describe to see that the deployment is now bound to port 5700 in WSL.

kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Thu, 15 Dec 2022 23:59:27 -0500
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.14.2
    Port:         80/TCP
    Host Port:    5700/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-66cdbdf488 (1/1 replicas created)
Events:          <none>

Great, we need one additional configuration. This time on the host, open up PowerShell as an administrator and execute the following command.

netsh interface portproxy add v4tov4 listenport=5700 listenaddress=0.0.0.0 connectport=5700 connectaddress=172.23.207.235

Here, listenport is the port on our Windows host, listenaddress is the address the Windows host listens on (0.0.0.0 means all addresses), connectport is the host port defined in our deploy.yaml file, and connectaddress is the node IP address, obtained using the following command. This command forwards traffic from http://localhost:5700 to http://172.23.207.235:5700.

kubectl get node -o wide
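
If you would rather stay in PowerShell, the WSL address can usually be retrieved with wsl.exe as well; the first address printed by the command below is typically the one you want, though treat that as an assumption about your setup.

wsl hostname -I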

Once the netsh interface portproxy command was executed, I was ready to connect. I opened a browser to localhost:5700 on the Windows machine.

The following screenshot shows that I can connect to the Nginx service from Windows' localhost.

Connected to service using Host Port

Perfect, I can now connect to Nginx as if it were running natively on Windows. Quick tip: before deciding which port to use with the netsh interface portproxy command, run the following command.

netsh interface portproxy show all

It will output any mappings you may already have created, or that were created by another service. Any port listed is unavailable and therefore cannot be remapped unless you delete the mapping using the following command.

netsh interface portproxy delete v4tov4 listenport=5700 listenaddress=0.0.0.0

Using the host port solves my original issue: I can now connect to services running on Kubernetes in WSL from Windows using localhost.

Conclusion

For now, I feel like HostPort is the best solution I could come up with, even if the IP changes whenever I restart WSL. If the day ever comes when I can set a static IP in WSL, then MetalLB would probably be my preferred choice, since HostPort limits the number of pods to one.

Thanks for reading.