Now that we know how to benchmark a CloudNativePG deployment, it is time to look at how we can connect external applications to the PostgreSQL cluster. Usually, applications run in the same Kubernetes cluster and can talk directly to our PostgreSQL deployment, but sometimes external applications or services need to connect as well. By default, this does not work, as nothing is exposed externally.
You can easily check this by looking at the services we currently have:
k8s@k8s1:~$ kubectl get services -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32d
my-pg-cluster-r ClusterIP 10.107.190.52 <none> 5432/TCP 8d
my-pg-cluster-ro ClusterIP 10.109.169.21 <none> 5432/TCP 8d
my-pg-cluster-rw ClusterIP 10.103.171.191 <none> 5432/TCP 8d
There are services and cluster IP addresses for all the instances of the cluster, but those addresses are only reachable from inside the Kubernetes cluster: the EXTERNAL-IP column shows “<none>” for all of them.
Before we make those services available externally, let's quickly check what they mean:
- my-pg-cluster-r: connects to any of the nodes for read-only operations
- my-pg-cluster-ro: always connects to a read-only replica (hot standby)
- my-pg-cluster-rw: always connects to the primary node
Whatever connects to the cluster should use one of those services and never connect to a PostgreSQL instance directly. The reason is that those services are managed by the operator, and you should rely on the internal Kubernetes DNS for connecting to the cluster services.
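For example, an application pod running inside the same Kubernetes cluster would connect through the service name, which resolves via the internal DNS. A quick sketch (the client image and the “app” user are just examples here; the DNS name follows the standard <service>.<namespace>.svc pattern):
k8s@k8s1:~$ kubectl run pg-client --rm -it --image=postgres:16 -- \
    psql -h my-pg-cluster-rw.default.svc.cluster.local -U app app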
What we need to expose the PostgreSQL cluster services is an Ingress Controller in combination with a load balancer. One of the most popular Ingress Controllers is the Ingress-Nginx Controller, and this is the one we're going to use here as well. Installing it can again easily be done with Helm, in pretty much the same way as we did it with OpenEBS in the storage post. But before that, we're going to deploy the MetalLB load balancer.
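If the MetalLB chart repository is not yet known to Helm, it has to be added first (the URL below is the official MetalLB chart location):
k8s@k8s1:~$ helm repo add metallb https://metallb.github.io/metallb
k8s@k8s1:~$ helm repo update
Once that is done, the chart can be installed: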
k8s@k8s1:~$ helm install metallb metallb/metallb --namespace metallb-system --create-namespace
NAME: metallb
LAST DEPLOYED: Fri Aug 9 09:43:03 2024
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
This creates a new namespace called “metallb-system” and a few pods:
k8s@k8s1:~$ kubectl get pods -A | grep metal
metallb-system metallb-controller-77cb7f5d88-hxndw 1/1 Running 0 26s
metallb-system metallb-speaker-5phx6 4/4 Running 0 26s
metallb-system metallb-speaker-bjdxj 4/4 Running 0 26s
metallb-system metallb-speaker-c54z6 4/4 Running 0 26s
metallb-system metallb-speaker-xzphl 4/4 Running 0 26s
The next step is to install the Ingress-Nginx Controller:
k8s@k8s1:~$ helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
Release "ingress-nginx" does not exist. Installing it now.
NAME: ingress-nginx
LAST DEPLOYED: Fri Aug 9 09:49:43 2024
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-nginx ingress-nginx-controller --output wide --watch'
An example Ingress that makes use of the controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: foo
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: exampleService
                port:
                  number: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
      - www.example.com
      secretName: example-tls
If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
Same story here: we get a new namespace and the controller pod running in it:
k8s@k8s1:~$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-69bd47995d-krt7h 1/1 Running 0 2m33s
At this stage, you’ll notice that we still do not have any services exposed externally (we still see “<pending>” for the EXTERNAL-IP):
k8s@k8s1:~$ kubectl get svc -A | grep nginx
ingress-nginx ingress-nginx-controller LoadBalancer 10.109.240.37 <pending> 80:31719/TCP,443:32412/TCP 103s
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.103.255.169 <none> 443/TCP 103s
This is not a big surprise, as we did not yet tell MetalLB which IP addresses it is allowed to assign. This is easily done with an IPAddressPool and a matching L2Advertisement:
k8s@k8s1:~$ cat lb.yaml
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.210-192.168.122.215
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
k8s@k8s1:~$ kubectl apply -f lb.yaml
ipaddresspool.metallb.io/default created
l2advertisement.metallb.io/default created
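If you want to double check that both resources are really there, you can ask for them explicitly (using the fully qualified resource names):
k8s@k8s1:~$ kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system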
k8s@k8s1:~$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.109.240.37 192.168.122.210 80:31719/TCP,443:32412/TCP 3m32s
ingress-nginx-controller-admission ClusterIP 10.103.255.169 <none> 443/TCP 3m32s
The LoadBalancer service now automatically got an IP address assigned from the pool we defined above. The next steps are covered in the CloudNativePG documentation: first, we need a ConfigMap for the service we want to expose. The key in the “data” section is the external port, and the value follows the format <namespace>/<service name>:<service port>:
k8s@k8s1:~$ cat tcp-services-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5432: default/my-pg-cluster-rw:5432
k8s@k8s1:~$ kubectl apply -f tcp-services-configmap.yaml
configmap/tcp-services created
k8s@k8s1:~$ kubectl get cm -n ingress-nginx
NAME DATA AGE
ingress-nginx-controller 1 6m4s
kube-root-ca.crt 1 6m8s
tcp-services 1 12s
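As a side note: the same mechanism could be used to expose the read-only service as well, e.g. on a hypothetical external port 5433 (that port would then also need to be added to the controller service in the next step):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  5432: default/my-pg-cluster-rw:5432
  5433: default/my-pg-cluster-ro:5432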
Now we need to modify the ingress-nginx service to include the new port:
k8s@k8s1:~$ kubectl get svc ingress-nginx-controller -n ingress-nginx -o yaml > service.yaml
k8s@k8s1:~$ vi service.yaml
...
  ports:
  - appProtocol: http
    name: http
    nodePort: 31719
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 32412
    port: 443
    protocol: TCP
    targetPort: https
  - appProtocol: tcp
    name: postgres
    port: 5432
    targetPort: 5432
...
k8s@k8s1:~$ kubectl apply -f service.yaml
Warning: resource services/ingress-nginx-controller is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/ingress-nginx-controller configured
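Instead of exporting and editing the whole service definition, the same change could also be applied with a single JSON patch, sketched here:
k8s@k8s1:~$ kubectl patch service ingress-nginx-controller -n ingress-nginx --type=json \
    -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"postgres","appProtocol":"tcp","port":5432,"protocol":"TCP","targetPort":5432}}]'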
The last step is to reference our ConfigMap in the “ingress-nginx-controller” deployment by adding the “--tcp-services-configmap” argument:
k8s@k8s1:~$ kubectl edit deploy ingress-nginx-controller -n ingress-nginx
...
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --tcp-services-configmap=ingress-nginx/tcp-services
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --enable-metrics=false
...
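Changing the arguments triggers a new rollout of the controller pod, and you can wait for that to complete before testing the connection:
k8s@k8s1:~$ kubectl rollout status deployment ingress-nginx-controller -n ingress-nginx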
From now on the PostgreSQL cluster can be reached from outside the Kubernetes cluster:
k8s@k8s1:~$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.109.240.37 192.168.122.210 80:31719/TCP,443:32412/TCP,5432:32043/TCP 6d23h
ingress-nginx-controller-admission ClusterIP 10.103.255.169 <none> 443/TCP 6d23h
k8s@k8s1:~$ psql -h 192.168.122.210
Password for user k8s:
psql: error: connection to server at "192.168.122.210", port 5432 failed: FATAL: password authentication failed for user "k8s"
connection to server at "192.168.122.210", port 5432 failed: FATAL: password authentication failed for user "k8s"
Even though the login failed, this is actually good news: the request made it all the way to PostgreSQL, which rejected the credentials simply because there is no “k8s” user in the cluster. External connectivity is working.
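To really log in, you can use the credentials the operator generated for the application user. A minimal sketch, assuming the defaults: CloudNativePG stores the password in a secret called “<cluster name>-app”, and both the default database and user are named “app”:
k8s@k8s1:~$ kubectl get secret my-pg-cluster-app -n default -o jsonpath='{.data.password}' | base64 -d
k8s@k8s1:~$ psql -h 192.168.122.210 -U app app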