This guide walks you through setting up a Consul service mesh on Kubernetes and using the ngrok Kubernetes Operator to provide ingress to your services. The ngrok Kubernetes Operator is the official open-source controller for adding public and secure ingress traffic to your k8s services. Consul is a secure and resilient service mesh that provides service discovery, configuration, and segmentation; Consul Connect provides service-to-service authorization and encryption with mutual TLS. Together, Consul secures communication between services in a cluster while ngrok provides public ingress to those services.

What you’ll need

  • A remote or local Kubernetes cluster with Consul installed, or minikube to set up a demo cluster locally.
  • An ngrok account.
  • kubectl and Helm 3.0.0+ installed on your local workstation.
  • A reserved domain from the ngrok dashboard or API; this guide refers to it as <NGROK_DOMAIN>.

Set up a local Consul service mesh on Kubernetes

This guide requires access to a remote or local Kubernetes cluster with Consul installed. If you have an existing cluster with Consul set up, you can skip this step and proceed to Configure the ngrok Kubernetes Operator. If you don't, create a local minikube cluster and install Consul now.
  • Create a local cluster with minikube.
    minikube start --profile dc1 --memory 4096 --kubernetes-version=v1.22.0
    
  • Create a file called values.yaml with the following contents:
# Contains values that affect multiple components of the chart.
global:
  # The main enabled/disabled setting.
  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
  enabled: true
  # The prefix used for all resources created in the Helm chart.
  name: consul
  # The name of the data center that the agents should register as.
  datacenter: dc1
  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
  tls:
    enabled: true
  # Enables ACLs across the cluster to secure access to data and APIs.
  acls:
    # If true, automatically manage ACL tokens and policies for all Consul components.
    manageSystemACLs: true
# Contains values that configure the Consul server cluster.
server:
  enabled: true
  # The number of server agents to run. This determines the fault tolerance of the cluster.
  replicas: 1
# Contains values that configure the Consul UI.
ui:
  enabled: true
  # Registers a Kubernetes Service for the Consul UI as a NodePort.
  service:
    type: NodePort
# Configures and installs the automatic Consul Connect sidecar injector.
connectInject:
  enabled: true
  • Install the Consul Helm chart.
    helm repo add hashicorp https://helm.releases.hashicorp.com
    
    helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.0.0"
    
    Depending on your computer, it can take some time for the pods to become healthy. You can watch the status of the pods with kubectl get pods --namespace consul -w.
  • Verify Consul is installed and all its pods are healthy.
    kubectl get pods --namespace consul
    NAME                                           READY   STATUS    RESTARTS        AGE
    consul-connect-injector-6f67d58f8d-2lw6l       1/1     Running   0               17m
    consul-server-0                                1/1     Running   0               17m
    consul-webhook-cert-manager-5bb6f965c8-pjqms   1/1     Running   0               17m
    
You now have a Kubernetes cluster with a Consul service mesh installed.
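Optionally, take a quick look at the Consul UI you enabled in values.yaml. A minimal sketch using a port-forward, assuming the chart's default consul-ui service name and HTTPS service port (adjust if your installation differs):

    # Forward a local port to the Consul UI service created by the Helm chart
    kubectl port-forward service/consul-ui --namespace consul 8501:443
    # TLS is enabled in values.yaml, so browse to https://localhost:8501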

Configure the ngrok Kubernetes Operator

Consul requires a bit of extra configuration to work with ngrok’s Operator for Kubernetes ingress. You’ll need to use a pod annotation to enable the Consul Connect sidecar injector. This allows using Consul Connect to secure the traffic between the ngrok Kubernetes Operator and your services.
  • First, create a Kubernetes Service for the ngrok Kubernetes Operator in the consul namespace; you'll apply it along with the Helm changes below. Consul uses this Service's name when declaring Service Intention source and destination values.
    apiVersion: v1
    kind: Service
    metadata:
      name: ngrok-operator
      namespace: consul
    spec:
      ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      selector:
        app.kubernetes.io/name: ngrok-operator
    
  • Next, install the ngrok Kubernetes Operator into your cluster. The controller pods should be in the Consul service mesh to proxy traffic to your other services. Use pod annotations to enable the Consul Connect sidecar injector and allow outbound traffic to use the Consul mesh. Consul documents how to set these two annotations in the Configure Operators for Consul on Kubernetes doc.
    # This annotation is required to enable the Consul Connect sidecar injector
    consul.hashicorp.com/connect-inject: "true"
    # This is the ClusterIP of your Kubernetes API server as a /32 CIDR. Find it with: `kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'`
    consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "10.96.0.1/32"
    
    Check out the Operator installation doc for details on how to use Helm to install with your ngrok credentials. Once you’ve done that, you can run the command below to set the appropriate annotations.
    helm upgrade ngrok-operator ngrok/ngrok-operator \
      --reuse-values \
      --set-string podAnnotations."consul\.hashicorp\.com/connect-inject"=true \
      --set podAnnotations."consul\.hashicorp\.com/transparent-proxy-exclude-outbound-cidrs"="YOUR KUBERNETES API CIDR"
    
Consul annotation: HashiCorp's docs also mention the annotation consul.hashicorp.com/transparent-proxy-exclude-inbound-ports. It doesn't apply to the ngrok Kubernetes Operator, which creates an outbound connection for ingress rather than exposing inbound ports.

Helm: The escaped \. sequences keep Helm from treating the dots in the annotation name as nested keys, and --set-string ensures the value true is passed as a string rather than a boolean, since Kubernetes annotation values must be strings.

Production: In a production environment, or anywhere you wish to use Infrastructure as Code and source control your Helm configurations, you can set up your credentials following this guide.
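Putting these pieces together, a minimal sketch of the order of operations, assuming you saved the Service manifest above as ngrok-operator-service.yaml (an example filename chosen for this guide):

    # Create the Service that Consul uses to identify the ngrok Kubernetes Operator
    kubectl apply -f ngrok-operator-service.yaml
    # Look up the ClusterIP of the Kubernetes API server and pass it to the Helm
    # command above as a /32 CIDR, for example 10.96.0.1/32
    kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'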

Install a sample application

Install a sample application to try out the service mesh and Operator combination. This guide uses the HashiCups Demo Application provided by HashiCorp, a simple e-commerce app that lets users order coffee. It has frontend and public API services backed by a private API and a database, all communicating through the Consul service mesh. It ships with nginx as a proxy for the frontend and public API services; you'll replace nginx with ngrok to provide public access and other features.
For this demo, everything is installed in the consul namespace. The ngrok Kubernetes Operator can send traffic to services across different namespaces, but Consul Service Intentions across namespaces require an enterprise account, so keep everything in the same namespace for now.
  • Clone the HashiCorp Learning Consul repo. This has multiple great example applications for learning about Consul and Kubernetes.
    git clone https://github.com/hashicorp/learn-consul-kubernetes
    
  • Install the HashiCups sample app in the consul namespace. This app consists of multiple Services and Deployments that make up a tiered application. Install all of them from this folder and modify things from there.
    kubectl apply -f learn-consul-kubernetes/service-mesh/deploy/hashicups -n consul
    
  • Remove the existing Service Intentions for the public-api and frontend services and add new ones. Consul has the concept of Service Intentions. In short, they are a programmatic way to configure the Consul service mesh to allow or deny traffic between services. HashiCups comes with nginx installed with intentions to the frontend and public-api services. Remove these and add new intentions to allow traffic from the ngrok Kubernetes Operator to the frontend and public-api services.
    kubectl delete serviceintentions public-api -n consul
    kubectl delete serviceintentions frontend -n consul
    
  • Create Service Intentions to allow traffic from the ngrok Kubernetes Operator to the HashiCups frontend and public-api services (you'll apply both manifests below).
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: ngrok-consul-frontend
  namespace: consul
spec:
  destination:
    name: frontend
  sources:
    - action: allow
      name: ngrok-operator

apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: ngrok-consul-api
  namespace: consul
spec:
  sources:
    - name: frontend
      action: allow
    - name: ngrok-operator
      action: allow
  destination:
    name: public-api
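
Assuming you saved the two ServiceIntentions manifests above to a file such as intentions.yaml (an example name), apply them and confirm the custom resources exist:

    kubectl apply -f intentions.yaml
    kubectl get serviceintentions --namespace consul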

Configure Public Ingress for the sample application

Now that the ngrok Kubernetes Operator can communicate with the frontend and public-api services through the Consul service mesh via Service Intentions, create an Ingress object to route public traffic to the app: one route for the frontend service and one for the public-api service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-consul
  namespace: consul
spec:
  ingressClassName: ngrok
  rules:
    - host: <NGROK_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 3000
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: public-api
                port:
                  number: 8080
This Ingress object:
  • Uses the ngrok ingress class
  • Sets the host to the static ngrok domain you reserved
  • Routes / to the frontend service on port 3000
  • Routes /api to the public-api service on port 8080
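Assuming you save this Ingress manifest as hashicups-ingress.yaml (an example filename), apply it and confirm the Ingress was created:

    kubectl apply -f hashicups-ingress.yaml
    kubectl get ingress ingress-consul --namespace consul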
Open your <NGROK_DOMAIN> domain in your browser to see the HashiCups application.

Add OAuth protection to the app

To take your ingress a little further, assume you want to add edge security, in the form of Google OAuth, to the endpoint where your HashiCups application is humming along. With the Traffic Policy system and the oauth action, ngrok manages OAuth protection entirely in the ngrok cloud service. This means you don't need to add any additional services to your cluster, or alter routes, for ngrok's edge to authenticate and authorize all requests before allowing ingress to your endpoint. To enable the oauth action, you'll create a new NgrokTrafficPolicy custom resource and apply it to your entire Ingress with an annotation. You can also apply the policy to just a specific backend or as the default backend for an Ingress; see the documentation on using the Operator with Ingresses.
  • Edit your existing Ingress configuration with the following. Note the new annotations field and the NgrokTrafficPolicy CR.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-consul
      namespace: consul
      annotations:
        k8s.ngrok.com/traffic-policy: oauth
    spec:
      ingressClassName: ngrok
      rules:
        - host: <NGROK_DOMAIN>
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend
                    port:
                      number: 3000
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: public-api
                    port:
                      number: 8080
    ---
    # Traffic Policy configuration for OAuth
    apiVersion: ngrok.k8s.ngrok.com/v1alpha1
    kind: NgrokTrafficPolicy
    metadata:
      name: oauth
      namespace: consul
    spec:
      policy:
        on_http_request:
          - actions:
              - type: oauth
                config:
                  provider: google

  • Re-apply your configuration.
  • When you open your demo app again, you’ll be asked to log in via Google. That’s a start, but what if you want to authenticate only yourself or colleagues?
  • You can use expressions and CEL interpolation to reject OAuth logins whose email address doesn't end with @example.com. Update the NgrokTrafficPolicy portion of your manifest after changing example.com to your domain.
    # Traffic Policy configuration for OAuth
    apiVersion: ngrok.k8s.ngrok.com/v1alpha1
    kind: NgrokTrafficPolicy
    metadata:
      name: oauth
      namespace: consul
    spec:
      policy:
        on_http_request:
          - actions:
              - type: oauth
                config:
                  provider: google
          - expressions:
              - "!actions.ngrok.oauth.identity.email.endsWith('@example.com')"
            actions:
              - type: custom-response
                config:
                  body: Hey, no auth for you ${actions.ngrok.oauth.identity.name}!
                  status_code: 400

  • Check out your deployed HashiCups app once again. If you log in with an email that doesn’t match your domain, ngrok rejects your request.
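As a quick command-line spot check (assuming you re-applied the manifest, for example with kubectl apply -f hashicups-ingress.yaml, using whatever filename you chose), an unauthenticated request to your domain should now be redirected to Google's sign-in flow rather than served the HashiCups frontend:

    curl -sI https://<NGROK_DOMAIN>/ | head -n 5
    # Expect a 3xx response with a Location header pointing at Google's OAuth endpoint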
Now only you can order from HashiCups—from anywhere.