Introducing Load Balancer support for ngrok Kubernetes Operator

ngrok is happy to announce that we’ve added support for Kubernetes Load Balancer services to our Kubernetes Operator to streamline connectivity to your applications. This addition unifies the process of getting TCP and TLS connectivity to services running in your Kubernetes clusters. 

Prior to this feature, you could connect to services in Kubernetes using ngrok’s TCPEdge and TLSEdge custom resources (CRs). However, these custom resources were cumbersome to consume, prone to misconfiguration, and tedious to template out. Support for Kubernetes Load Balancer services provides additional benefits over ngrok’s custom resources, as discussed below.

The example app

This post uses an example telnet server, ngrok-ascii, built from the Docker image ngroksamples/ngrok-ascii, to demonstrate how ngrok supports TCP and TLS connections. The server listens on port 9090 and, whenever a client connects, returns a sequence of bytes, some of which are ASCII control characters and ANSI terminal escape codes, that spells out ngrok.
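The original post shows the server's output as a screenshot. Conceptually, the server behaves like this minimal Python stand-in (not the actual ngrok-ascii code), which sends a fixed banner with ANSI escape codes to each client that connects, then hangs up:

```python
import socket
import threading

# Not the real ngrok-ascii server: a minimal stand-in for the behavior
# described above. The banner wraps the word "ngrok" in ANSI color codes.
BANNER = b"\x1b[32mngrok\x1b[0m\r\n"

def start_banner_server() -> int:
    """Listen on an ephemeral localhost port; serve one client, then stop."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def handle() -> None:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(BANNER)  # send the banner, then close the connection
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return srv.getsockname()[1]

port = start_banner_server()
with socket.create_connection(("127.0.0.1", port)) as client:
    banner = client.recv(1024)
assert b"ngrok" in banner
```

The real server listens on a fixed port 9090; the sketch uses an ephemeral port so it can run anywhere.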

To run the server in Kubernetes, you can apply a manifest like the one below that defines Service and Deployment resources.

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ngrok-ascii
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ngrok-ascii
    spec:
      containers:
      - name: ngrok-ascii
        image: jonstacks/ngrok-ascii:latest
        imagePullPolicy: Always
        args: ["serve", "9090"]
        env:
        # Filter out health checks that come from the kubelet
        - name: FILTER_LOGS_ON_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        ports:
        - containerPort: 9090
          name: telnet
        livenessProbe: 
          tcpSocket:
            port: 9090
          initialDelaySeconds: 2 # Default 0
          periodSeconds: 60 # Default 10
          timeoutSeconds: 2 # Default 1
          successThreshold: 1 # Default 1
          failureThreshold: 5 # Default 3
        resources:
          limits:
            cpu: 100m
            memory: 64Mi
          requests:
            cpu: 100m
            memory: 64Mi
---
kind: Service
apiVersion: v1
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  selector:
    app.kubernetes.io/name: ngrok-ascii
  ports:
  - name: telnet
    port: 23
    targetPort: telnet

The server is now running in the cluster and other applications can access it via the ngrok-ascii service, but how do you expose it outside the cluster and to the internet?

TCPEdge and TLSEdge Custom Resources (CRs)

You can add ingress to your Kubernetes services using the ingress controller feature of ngrok’s Kubernetes Operator. Previously, you would have to create the following two resources to connect the ngrok-ascii service to the internet with ngrok:

kind: TCPEdge
apiVersion: ingress.k8s.ngrok.com/v1alpha1
metadata:
  name: ngrok-ascii-edge
  namespace: default
spec:
  ipRestriction:
    policies:
    - ipp_2KZtV8hrTPdf0Q0lS4KCDGosGXl
  backend:
    labels:
      k8s.ngrok.com/namespace: default
      k8s.ngrok.com/service: ngrok-ascii
      k8s.ngrok.com/port: "23"
---
kind: Tunnel
apiVersion: ingress.k8s.ngrok.com/v1alpha1
metadata:
  name: ngrok-ascii-tunnel
  namespace: default
spec:
  forwardsTo: ngrok-ascii.default.svc.cluster.local:23
  labels:
    k8s.ngrok.com/namespace: default
    k8s.ngrok.com/service: ngrok-ascii
    k8s.ngrok.com/port: "23"

The first thing to note is that the Tunnel resource’s labels must match the TCPEdge resource’s backend labels so the TCPEdge can select the matching tunnel. Second, the forwardsTo value must point at the ngrok-ascii service on the correct port, or the tunnel forwards traffic to the wrong place.
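To make that failure mode concrete, here is a small Python sketch of the kind of pre-apply check you might write yourself (`edge_selects_tunnel` is a hypothetical helper, not part of the operator):

```python
# Hypothetical lint: verify that a Tunnel's labels match a TCPEdge's backend
# selector, since a single typo silently breaks the pairing.
def edge_selects_tunnel(edge: dict, tunnel: dict) -> bool:
    """True when every backend label on the TCPEdge appears on the Tunnel."""
    backend_labels = edge["spec"]["backend"]["labels"]
    tunnel_labels = tunnel["spec"]["labels"]
    return all(tunnel_labels.get(k) == v for k, v in backend_labels.items())

edge = {
    "spec": {
        "backend": {
            "labels": {
                "k8s.ngrok.com/namespace": "default",
                "k8s.ngrok.com/service": "ngrok-ascii",
                "k8s.ngrok.com/port": "23",
            }
        }
    }
}
tunnel = {
    "spec": {
        "forwardsTo": "ngrok-ascii.default.svc.cluster.local:23",
        "labels": dict(edge["spec"]["backend"]["labels"]),
    }
}

assert edge_selects_tunnel(edge, tunnel)

# One transposed digit in one label and the edge no longer finds the tunnel:
tunnel["spec"]["labels"]["k8s.ngrok.com/port"] = "32"
assert not edge_selects_tunnel(edge, tunnel)
```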

In addition to the core Kubernetes Service, you had to create two more complex custom resource objects, each with plenty of room for mistakes. The following section shows how the new LoadBalancer service support simplifies this process.

Load balancer services

ngrok now offers a Kubernetes-native method for getting L4 traffic into your cluster with our new load balancer class. LoadBalancer services are implementation-specific: they provision an external load balancer (an AWS NLB, a Google Cloud load balancer, etc.) and report the resulting external IP or hostname back in the Service's status. To use ngrok’s Kubernetes load balancer controller, create a Service resource with type=LoadBalancer and loadBalancerClass=ngrok, using a manifest like the one below.

---
apiVersion: v1
kind: Service
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
spec:
  allocateLoadBalancerNodePorts: false
  loadBalancerClass: ngrok
  ports:
  - name: telnet
    port: 23
    protocol: TCP
    targetPort: telnet
  selector:
    app.kubernetes.io/name: ngrok-ascii
  type: LoadBalancer

To switch from using ngrok’s <code>TCPEdge</code> and <code>TLSEdge</code> custom resources to using the new load balancer service, change the service's manifest as follows:

  1. Add <code>type: LoadBalancer</code> to the <code>Service</code> spec to designate it as a load balancer service.
  2. Add <code>loadBalancerClass: ngrok</code> to the <code>Service</code> spec. The ngrok service controller watches for services with <code>loadBalancerClass=ngrok</code>, automatically creates the necessary <code>TCPEdge</code>/<code>TLSEdge</code>, <code>Domain</code>, and <code>Tunnel</code> resources for you, and manages their lifecycle.
  3. [Optional] Add <code>allocateLoadBalancerNodePorts: false</code> to the <code>Service</code> spec. As discussed in detail in a later section, the ngrok load balancer class doesn't require you to allocate node ports.
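The three edits above can be sketched programmatically. This illustrative Python helper (not part of any ngrok tooling) applies them to a Service manifest represented as a dict, for example one loaded from YAML:

```python
# Illustrative only: apply the three edits above to a plain Service manifest
# represented as a Python dict.
def to_ngrok_load_balancer(service: dict) -> dict:
    spec = service.setdefault("spec", {})
    spec["type"] = "LoadBalancer"                  # 1. designate a load balancer service
    spec["loadBalancerClass"] = "ngrok"            # 2. hand it to the ngrok controller
    spec["allocateLoadBalancerNodePorts"] = False  # 3. optional: skip node ports
    return service

svc = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "ngrok-ascii", "namespace": "default"},
    "spec": {"ports": [{"name": "telnet", "port": 23, "targetPort": "telnet"}]},
}
svc = to_ngrok_load_balancer(svc)
assert svc["spec"]["type"] == "LoadBalancer"
assert svc["spec"]["loadBalancerClass"] == "ngrok"
```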

Now, running <code>kubectl get services -o yaml ngrok-ascii</code> displays something like the following in the status field:

status:
  loadBalancer:
    ingress:
    - hostname: 5.tcp.ngrok.io
      ports:
      - port: 24114
        protocol: TCP

And that's it! The ngrok service controller will automatically create your resources and manage their lifecycle. You can now access the ngrok-ascii service at <code>5.tcp.ngrok.io:24114</code>!
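If you want to script against that status, a small helper (hypothetical, not part of the operator) can extract the public address the same way a client would read it from the Kubernetes API:

```python
# Hypothetical helper: pull the public TCP address out of a Service's
# load balancer status, as returned by the Kubernetes API.
def public_address(status: dict) -> str:
    ingress = status["loadBalancer"]["ingress"][0]
    port = ingress["ports"][0]["port"]
    return f'{ingress["hostname"]}:{port}'

# The status field shown above, as a dict:
status = {
    "loadBalancer": {
        "ingress": [
            {"hostname": "5.tcp.ngrok.io",
             "ports": [{"port": 24114, "protocol": "TCP"}]}
        ]
    }
}
assert public_address(status) == "5.tcp.ngrok.io:24114"
```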

In this example, you would run <code>telnet 5.tcp.ngrok.io 24114</code>:

And just like that, the TCP service is available on the internet.

TLS

If you don’t want to use a random port for your services and would like something easier to remember, you can specify a domain with an annotation such as <code>k8s.ngrok.com/domain: ascii.ngrok.io</code>.

ngrok will provision a valid certificate for the service and encrypt traffic between the client and ngrok while serving the application on port 443. To use TLS, modify the ngrok-ascii service as indicated below:

apiVersion: v1
kind: Service
metadata:
  name: ngrok-ascii
  namespace: default
  labels:
    app.kubernetes.io/name: ngrok-ascii
  annotations:
    k8s.ngrok.com/domain: ascii.ngrok.io # <--- Use a TLS Edge
spec:
  allocateLoadBalancerNodePorts: false
  loadBalancerClass: ngrok
  ports:
  - name: telnet
    port: 23
    protocol: TCP
    targetPort: telnet
  selector:
    app.kubernetes.io/name: ngrok-ascii
  type: LoadBalancer

This service is now accessible by running the following command: <code>openssl s_client -connect ascii.ngrok.io:443</code>
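A client can also open the TLS connection programmatically. This Python sketch assumes network access to the example domain, so the actual call is left commented out:

```python
import socket
import ssl

# Sketch of a TLS client for the endpoint above. "ascii.ngrok.io" is the
# example domain from the k8s.ngrok.com/domain annotation; running this for
# real requires that domain to be live and reachable from your machine.
def read_tls_banner(host: str, port: int = 443, timeout: float = 5.0) -> bytes:
    ctx = ssl.create_default_context()  # verifies the certificate ngrok provisioned
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.recv(4096)

# read_tls_banner("ascii.ngrok.io")  # would return the ANSI-art banner bytes
```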

Use with ExternalDNS

Let's say you have external-dns running in your cluster and configured to manage DNS for mydomain.com. Prior to this release of ngrok’s Kubernetes Operator, you could achieve connectivity to your TCP and TLS services running in Kubernetes, but external-dns didn’t know how to communicate with the resulting custom resource objects.

Providing this connectivity through the standard Kubernetes LoadBalancer Service resource allows integration with external-dns. You can quickly provide access to the myapp service at myapp.mydomain.com by adding the following annotations to the Service definition for myapp:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
  labels:
    app.kubernetes.io/name: myapp
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.mydomain.com
    k8s.ngrok.com/domain: myapp.mydomain.com
spec:
  # ...

Now, if you check the status of the myapp service you'll see the following in the status field:

status:
  loadBalancer:
    ingress:
    - hostname: 2r93fef65h7ku1vtu.4raw8yu7nq6zsudp4.ngrok-cname.com
      ports:
      - port: 443
        protocol: TCP

Within a few minutes, external-dns will create a CNAME record for myapp.mydomain.com pointing to 2r93fef65h7ku1vtu.4raw8yu7nq6zsudp4.ngrok-cname.com, and you can access your myapp service at myapp.mydomain.com on port 443.
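Roughly, external-dns pairs the desired hostname from its annotation with the target that ngrok wrote into the Service status. A simplified Python sketch of that pairing (not external-dns's actual code):

```python
# Simplified model of what external-dns derives from this Service: the DNS
# record name comes from the annotation, the CNAME target from the status.
def desired_cname(service: dict) -> tuple[str, str]:
    name = service["metadata"]["annotations"][
        "external-dns.alpha.kubernetes.io/hostname"]
    target = service["status"]["loadBalancer"]["ingress"][0]["hostname"]
    return name, target

svc = {
    "metadata": {"annotations": {
        "external-dns.alpha.kubernetes.io/hostname": "myapp.mydomain.com"}},
    "status": {"loadBalancer": {"ingress": [
        {"hostname": "2r93fef65h7ku1vtu.4raw8yu7nq6zsudp4.ngrok-cname.com"}]}},
}
record, target = desired_cname(svc)
assert record == "myapp.mydomain.com"
assert target.endswith("ngrok-cname.com")
```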

Benefits of using ngrok load balancer services

No need to expose node ports

Usually, when you create a LoadBalancer service, Kubernetes allocates a port on each node, and kube-proxy forwards traffic arriving on that port to healthy endpoints. This is needed because the provisioned load balancer sits outside the cluster and sends traffic to the node port. Only if pod IPs are routable from outside the cluster can you set allocateLoadBalancerNodePorts to false.

ngrok works differently: the operator creates an outbound tunnel and receives traffic back over that same tunnel. The component that forwards traffic into your cluster runs inside the cluster, so it is environment-agnostic, working in the cloud and on-prem alike. Because the forwarding happens inside the cluster, you can apply more restrictive firewall and security group rules on your Kubernetes nodes: you get connectivity to your applications while allowing only outbound traffic. Thus, there is no need to allocate node ports.

Works anywhere, even on-prem and private clusters

Beyond the ingress-as-a-service that ngrok’s Kubernetes Operator already provides, support for LoadBalancer services means your services work the same locally as they do in production, across cloud providers and on-prem clusters. To learn more about ngrok’s Kubernetes Operator, check out these other posts on our blog.

Sign up, try ngrok for free today, and chat with us in our Community Repo if you have questions or feedback.

Jonathan Stacks
Jonathan Stacks is a Staff Infrastructure Engineer with experience in Kubernetes, application development & architecture, automation, & data engineering.