Introduction

I am going to describe how to implement an Nginx ingress controller and cert-manager (plus a ClusterIssuer) in a (Hetznercloud) Kubernetes cluster. Briefly, what both terms mean and do:

Ingress controller

  • An Ingress controller is a specialized load balancer for Kubernetes (and other containerized) environments. For many enterprises, moving production workloads into Kubernetes brings additional challenges around application traffic management.
  • An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external clients.

Cert-manager

  • Cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.
  • It can issue certificates from a variety of supported sources, including Let’s Encrypt, HashiCorp Vault, and Venafi as well as private PKI.
  • It will ensure certificates are valid and up to date, and attempt to renew certificates at a configured time before expiry.

In short, quite handy and perhaps important to implement in the cluster.
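To give an idea of what that looks like in practice, here is a minimal sketch of a cert-manager Certificate resource (all names here are illustrative; later in this article we will let cert-manager create these automatically from Ingress annotations):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls                # hypothetical name
  namespace: default
spec:
  secretName: example-tls          # Secret in which the signed certificate is stored
  dnsNames:
  - example.yourdomain.com         # hypothetical hostname
  issuerRef:
    name: letsencrypt-prod         # must match an existing (Cluster)Issuer
    kind: ClusterIssuer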

Requirements

• I assume you already have a Kubernetes cluster running (at Hetznercloud). Hetznercloud is not strictly required, but a number of steps in this article are specific to it.

• You have installed Helm (package manager for Kubernetes).

• You have already registered your own domain and you can access the DNS records to make changes. In our case we use Cloudflare, and that is what the cert-manager instructions are based on.

Ready?


Instructions for Nginx ingress controller

Add the ingress-nginx repo by executing:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Update the repo list:
helm repo update

Create a file named ingress-nginx.yaml and save it somewhere local on your computer, in our case D:\kluster. Adapt it to your own situation by filling in the bracketed values:

controller:
  kind: DaemonSet
  service:
    annotations:
      load-balancer.hetzner.cloud/hostname: [your-domain]
      load-balancer.hetzner.cloud/http-redirect-https: "false"
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: [name-of-the-loadbalancer]
      load-balancer.hetzner.cloud/use-private-ip: "true"
      load-balancer.hetzner.cloud/uses-proxyprotocol: "true"

A little explanation of these values:

  • your-domain: enter the domain name you want to use.
  • http-redirect-https: whether the load balancer itself redirects HTTP traffic to HTTPS; here it is disabled.
  • location: the location of the load balancer (nbg1 is Nuremberg). Hetznercloud offers several locations; you can also choose another one and enter it here.
  • name-of-the-loadbalancer: choose any name you like; it only identifies the load balancer in the Hetznercloud console.
  • use-private-ip: ensures that the communication between the load balancer and the nodes happens through the private network, so we don’t have to open any ports on the nodes (other than port 6443 for the Kubernetes API server).
  • uses-proxyprotocol: ensures that the ingress controller and applications can “see” the real IP address of the client.

With this command we’re going to install it:
helm upgrade --install --namespace ingress-nginx --create-namespace -f D:\kluster\ingress-nginx.yaml ingress-nginx ingress-nginx/ingress-nginx
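To verify that the controller pods have started (a sketch; the label is the chart’s default app.kubernetes.io/name label):

kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx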

What happens next?

A load balancer is created for the Nginx ingress controller by the cloud controller manager (a Hetznercloud service/component). When you log in to Hetznercloud you will see that a load balancer has been created with the status Pending; after a few minutes it gets a public IP address.

You can also run the following command to find out the IP address:
kubectl -n ingress-nginx get svc
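If you only want the bare IP address, for example for scripting, you can query the service’s status directly (assuming the default service name ingress-nginx-controller created by this Helm release):

kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'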

All traffic goes through this load balancer to your workload, meaning the applications/containers you roll out in your cluster. You only need one load balancer, which keeps costs down and the setup simple. You could of course set up one load balancer per application/workload, but that is not necessary.

And now?
Go to your domain provider and create a new A record that points to the public IP address of the load balancer. From that moment on you can use this domain for your applications.
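To check that the record has propagated before moving on (hello.yourdomain.com is a placeholder for your own host):

nslookup hello.yourdomain.com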

More information about uses-proxyprotocol

There’s one more thing (specifically if you’re using Hetznercloud) that we should check and correct if necessary: if you want to use the proxy protocol, make sure uses-proxyprotocol is set to true on the load balancer AND use-proxy-protocol is set in the ConfigMap. To check this, run:
kubectl -n ingress-nginx get cm ingress-nginx-controller -oyaml

The output will be:

apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  creationTimestamp: "2021-11-21T19:59:16Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    helm.sh/chart: ingress-nginx-4.0.12
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "2294487"
  uid: 144f0834-f352-47f1-b167-74b644e9d2be

Under the ‘data’ section another line needs to be added. To do this, execute the following command:
kubectl -n ingress-nginx edit cm ingress-nginx-controller

Your default editor (Notepad on Windows) will open where you can make adjustments. Add this line: use-proxy-protocol: "true"

so that it eventually looks like this:

data:
  allow-snippet-annotations: "true"
  use-proxy-protocol: "true"
kind: ConfigMap

Save and close the editor; the change is applied to the cluster.
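If you prefer not to edit the ConfigMap interactively, the same change can be made with a one-liner (a sketch; it merges the key into the data section):

kubectl -n ingress-nginx patch cm ingress-nginx-controller --type merge -p '{"data":{"use-proxy-protocol":"true"}}'

Alternatively, and more durable across Helm upgrades, add the setting to your ingress-nginx.yaml values file under controller.config (the chart key that feeds this ConfigMap) and re-run the helm upgrade command from earlier:

controller:
  config:
    use-proxy-protocol: "true"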

When you log in to Hetznercloud, click on Load Balancers, then Services, and click Edit on the TCP lines. Proxy protocol should be set to Enabled.

[Screenshot: Hetznercloud proxy protocol setting]

Final note:
If you have it set to Enabled in Hetznercloud but NOT in the ConfigMap and you deploy an application to the cluster, you will get an error:
400 Bad Request (nginx)

So make sure you have both set to true/Enabled.

Instructions for Cert-manager

We use Cert-manager to arrange certificates for our applications in the cluster. Helm will help us install it.

Add the repo:
helm repo add jetstack https://charts.jetstack.io

Update the repo list:
helm repo update

Now we are ready to install the CRDs:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml

The output will be something like this:

PS D:\kluster> kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created

To install cert-manager:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.6.1

The output will be like:

PS D:\kluster> helm install   cert-manager jetstack/cert-manager   --namespace cert-manager   --create-namespace   --version v1.6.1
NAME: cert-manager
LAST DEPLOYED: Fri Dec  3 10:21:45 2021
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.6.1 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/

Execute this command to check if the pods are running; this can take a couple of minutes:

kubectl get pods --namespace cert-manager

NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-57d89b9548-rc5wz              1/1     Running   0          8d
cert-manager-cainjector-5bcf77b697-ljdmw   1/1     Running   0          8d
cert-manager-webhook-9cb88bd6d-qvwtk       1/1     Running   0          8d
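Before creating issuers in the next step, it can help to wait until the cert-manager webhook is fully available; otherwise the first apply of a ClusterIssuer may be rejected by the webhook. A sketch, assuming the default deployment name from the chart:

kubectl -n cert-manager wait --for=condition=Available deployment/cert-manager-webhook --timeout=120s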

ClusterIssuer

We implement a ClusterIssuer for cert-manager that you can use across all namespaces. We are going to add custom values so that this will work on your own domain.

Before we go any further we need the Global API Key from the Cloudflare portal. Then create a file secretcloudflare.yaml with the following content:

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-key: [yourapikey]

To apply this in the cluster, execute:
kubectl apply -f secretcloudflare.yaml
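As an alternative to the YAML file, you can also create the secret directly from the command line and verify that it exists; a sketch (same names as above):

kubectl -n cert-manager create secret generic cloudflare-api-key-secret --from-literal=api-key=[yourapikey]
kubectl -n cert-manager get secret cloudflare-api-key-secret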

Now create a second file, ClusterIssuer.yaml, for the cluster issuer. Enter your own email address. You will see the staging environment of Let’s Encrypt here, a test environment you use to test the provisioning of certificates. Later we’ll change this to the production URL, but only once we’re sure everything works as intended.

You can see here that the secret we implemented earlier is used by the cluster issuer.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: [your-email]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - dns01:
        cloudflare:
          email: [your-cloudflare-email]
          apiKeySecretRef:
            name: cloudflare-api-key-secret
            key: api-key

Thereafter, to deploy it to the cluster, execute:
kubectl apply -f ClusterIssuer.yaml
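You can verify that the ClusterIssuer has registered with the ACME server; the READY column should show True after a short while (if not, the Events in the describe output usually explain why):

kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod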

Deploy a test application

Let’s deploy a test application. Create a new file and name it hello.yaml. Adjust the hosts (2×) with your own domain name. At the bottom is secretName; name this after the application with a -tls suffix, so that it is clear it is a certificate meant for HTTPS. So for example hello-tls.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: rancher/hello-world
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
spec:
  rules:
  - host: hello.yourdomain.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello
            port:
              number: 80
  ingressClassName: nginx
  tls:
  - hosts:
    - hello.yourdomain.com
    secretName: hello-tls

You will see a number of annotations; these tell cert-manager to request and use TLS certificates for your application.

Then apply it to your cluster:
kubectl apply -f hello.yaml
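Behind the scenes cert-manager creates a Certificate, an Order and a DNS-01 Challenge for this host. You can follow the process while you wait (a sketch; the generated resource names will differ in your cluster):

kubectl get certificate,certificaterequest,order,challenge
kubectl describe certificate hello-tls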

Give it a few minutes, then go to your browser and browse to the URL/host you defined earlier. You may still get a certificate error, but this is expected: you have requested a fake/staging certificate.

[Screenshot: staging certificate warning in the browser]

Production ready?

When you are ready to request real certificates from the production environment of Let’s Encrypt, edit ClusterIssuer.yaml.
Change the server line from:
https://acme-staging-v02.api.letsencrypt.org/directory to:
https://acme-v02.api.letsencrypt.org/directory

Re-apply ClusterIssuer.yaml so that the new value is picked up and used:
kubectl apply -f ClusterIssuer.yaml

Additional notes

If you have tested your applications with staging certificates, change the secretName in the YAML file to something else. Otherwise the staging certificates will still be used, and we have to make sure that cert-manager obtains NEW production certificates.

So in the case of hello.yaml, change the secretName, save, and apply it again:
kubectl apply -f hello.yaml

Give it a few minutes, then check if the new certificates were fetched successfully by running:

kubectl get certificate

NAME             READY   SECRET           AGE
hello-prod-tls   True    hello-prod-tls   8m24s

Check the URL / host again in your browser. You will see a valid certificate without error:

[Screenshot: valid production certificate in the browser]
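If you want to inspect the served certificate from the command line instead of the browser (a sketch for a Unix-like shell, assuming openssl is installed; replace the host with your own):

openssl s_client -connect hello.yourdomain.com:443 -servername hello.yourdomain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates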