
# 🐘 PostgreSQL on Kubernetes with ArgoCD and Zalando Operator (and SSL & S3 Backups, of course)
I needed a PostgreSQL database for this website (yep, the one you're browsing), because I'm using Strapi as a headless CMS. I wanted something:
- Easy to deploy ✅
- Robust enough for production ✅
- Properly secured (SSL!) ✅
- With automatic daily backups to S3 ✅
As usual, "easy" is a relative term. So here's exactly how I did it, with code snippets, kubectl tips, and a few caveats I ran into. You're welcome 😉
## ⚙️ Architecture overview
- Kubernetes cluster hosted on DigitalOcean (using their managed K8s service).
- ArgoCD for GitOps-based deployment.
- Zalando Postgres Operator for managing PostgreSQL clusters.
- Strapi as the consumer of the database.
- TLS encryption between app and DB, using a self-signed cert.
- Backups to DigitalOcean Spaces (S3-compatible).
## 📦 Deploy the Zalando PostgreSQL Operator
The operator lives in the `main` namespace. Here's the ArgoCD Application definition:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: postgres-operator
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: 'git@github.com:bv86/k8s-infra.git'
    targetRevision: HEAD
    path: kustomize/postgres-operator/base
  destination:
    server: "https://kubernetes.default.svc"
    namespace: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
And here's the `kustomization.yaml` file from `kustomize/postgres-operator/base`, which makes sure everything runs in the correct namespace:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - github.com/zalando/postgres-operator/manifests?ref=v1.14.0

patches:
  # by default, the service account is created in the default namespace,
  # so move it to the proper one
  - target:
      version: v1
      kind: ServiceAccount
      name: postgres-operator
    patch: |-
      - op: replace
        path: /metadata/namespace
        value: main
```
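Before handing this over to ArgoCD, you can render the kustomization locally and check that the namespace patch did its job. A small sketch (run from the repo root; it fetches the remote manifests, so it needs network access):

```shell
# Render the base and show the ServiceAccount with its (patched) namespace
kubectl kustomize kustomize/postgres-operator/base | grep -A4 "kind: ServiceAccount"
```

If the patch applied, the output should include `namespace: main` instead of `default`.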
## 🔐 Generate a self-signed SSL certificate
Node.js is very picky about SSL. Here's how I generated a certificate signed by my own CA:
1. Generate the CA key and certificate
```shell
openssl req -x509 -newkey rsa:4096 -days 9000 \
  -nodes -keyout ca-key.pem -out ca-cert.pem \
  -subj "/CN=MyPGSQL_CA"
```
2. Generate a server key and certificate signing request
```shell
openssl req -newkey rsa:4096 -nodes \
  -keyout server-key.pem -out server-req.pem \
  -subj "/CN=acid-strapi.main.svc.cluster.local"
```
⚠️ Important: the `CN` must match the internal service name exactly.
3. Sign the server certificate
```shell
openssl x509 -req -in server-req.pem -CA ca-cert.pem \
  -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -days 3650
```
4. Store it in Kubernetes
```shell
kubectl create secret generic pg-tls \
  --from-file=tls.crt=server-cert.pem \
  --from-file=tls.key=server-key.pem \
  --from-file=ca.crt=ca-cert.pem \
  -n main
```
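Before trusting this chain in-cluster, I like to sanity-check it locally. The sketch below replays steps 1–3 in a throwaway directory (smaller RSA key for speed) and then verifies both the chain and the CN:

```shell
set -e
cd "$(mktemp -d)"

# Same flow as above: CA, server CSR, signed server cert
openssl req -x509 -newkey rsa:2048 -days 9000 -nodes \
  -keyout ca-key.pem -out ca-cert.pem -subj "/CN=MyPGSQL_CA"
openssl req -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server-req.pem \
  -subj "/CN=acid-strapi.main.svc.cluster.local"
openssl x509 -req -in server-req.pem -CA ca-cert.pem \
  -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -days 3650

# The cert must chain to the CA...
openssl verify -CAfile ca-cert.pem server-cert.pem   # prints "server-cert.pem: OK"
# ...and the subject CN must be the in-cluster service name
openssl x509 -in server-cert.pem -noout -subject
```

If the CN is wrong, `sslmode=verify-full` (and Node.js's hostname check) will refuse to connect even though the chain itself verifies.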
## 🚀 Deploy the PostgreSQL cluster
### 🔑 Secret for Spaces credentials
First, let's create a secret containing the DigitalOcean access and secret keys for my S3-compatible storage:
```shell
kubectl create secret generic app-secrets \
  --from-literal=DO_SPACE_ACCESS_KEY="your-access-key" \
  --from-literal=DO_SPACE_SECRET_KEY="your-secret-key" \
  -n main
```
### The PostgreSQL instance

This is part of my `my-strapi` ArgoCD app. Here's the CRD (`postgresql.acid.zalan.do`) definition:
```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-strapi
  namespace: main
spec:
  teamId: "acid"
  spiloFSGroup: 103
  volume:
    size: 1Gi
  numberOfInstances: 2
  tls:
    secretName: "pg-tls"
    caFile: "ca.crt"
  users:
    benoit:
      - superuser
      - createdb
  preparedDatabases:
    strapi:
      defaultUsers: true
  postgresql:
    version: "17"
  env:
    - name: WAL_S3_BUCKET
      value: strapi-bucket-ben
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: DO_SPACE_ACCESS_KEY
    - name: AWS_SECRET_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: app-secrets
          key: DO_SPACE_SECRET_KEY
    - name: AWS_ENDPOINT
      value: "https://lon1.digitaloceanspaces.com"
    - name: BACKUP_SCHEDULE
      value: "0 1 * * *" # Daily at 1 AM
```
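Once ArgoCD syncs this manifest, the operator does the heavy lifting: StatefulSet, services, and one credentials secret per user. A few commands I find handy to confirm everything came up (resource and label names follow from the manifest above; they need a live cluster, of course):

```shell
# Cluster status as seen by the operator; should eventually report "Running"
kubectl get postgresql acid-strapi -n main

# The database pods themselves (two, per numberOfInstances)
kubectl get pods -n main -l cluster-name=acid-strapi

# Credential secrets generated by the operator
kubectl get secrets -n main | grep credentials.postgresql
```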
## 📦 Strapi Configuration
Strapi connects to the DB with SSL enabled and custom certificates. Here's what I set up:
### Environment config

I use a ConfigMap to store non-sensitive DB information.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: strapi-database-conf
data:
  DATABASE_HOST: acid-strapi.main.svc.cluster.local
  DATABASE_PORT: "5432"
  DATABASE_NAME: strapi
  DATABASE_USERNAME: strapi_owner_user
  DATABASE_SSL: "true"
  DATABASE_CLIENT: postgres
```
### My Strapi Deployment definition

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: strapi-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: strapi-app
  template:
    metadata:
      labels:
        app: strapi-app
    spec:
      imagePullSecrets:
        - name: ghcr-credentials
      volumes:
        - name: certs
          secret:
            secretName: pg-tls
      containers:
        - name: strapi
          image: ghcr.io/bv86/my-strapi:main
          ports:
            - containerPort: 1337
              name: http
          envFrom:
            - configMapRef:
                name: strapi-app-conf # general app configuration
            - configMapRef:
                name: strapi-database-conf # the DB ConfigMap defined above
            - configMapRef:
                name: strapi-s3-conf # S3 (Spaces) configuration
            - secretRef:
                name: app-secrets # the Secret created earlier
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: strapi-owner-user.acid-strapi.credentials.postgresql.acid.zalan.do
                  key: password
            - name: NODE_EXTRA_CA_CERTS
              value: /etc/certs/ca.crt
          volumeMounts:
            - name: certs
              mountPath: /etc/certs
              readOnly: true
```
As you can see, I load the values from my ConfigMaps as environment variables. The `strapi-owner-user.acid-strapi.credentials.postgresql.acid.zalan.do` secret is managed by the PostgreSQL operator and contains the user's password.

Another important part is mounting `ca.crt` from `pg-tls` as a file in `/etc/certs`: this is how Node.js can be told to trust that certificate, via `NODE_EXTRA_CA_CERTS`.
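A quick sanity check that the container actually sees the CA file and the variable (a sketch; it assumes the Deployment above is running):

```shell
# The mounted CA and the env var Node.js reads at startup
kubectl exec -n main deploy/strapi-app -- ls -l /etc/certs
kubectl exec -n main deploy/strapi-app -- printenv NODE_EXTRA_CA_CERTS
```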
Finally, in Strapi, the database configuration looks like this:
```javascript
postgres: {
  connection: {
    connectionString: env('DATABASE_URL'),
    host: env('DATABASE_HOST', 'localhost'),
    port: env.int('DATABASE_PORT', 5432),
    database: env('DATABASE_NAME', 'strapi'),
    user: env('DATABASE_USERNAME', 'strapi'),
    password: env('DATABASE_PASSWORD', 'strapi'),
    ssl: env.bool('DATABASE_SSL', false) && {
      key: env('DATABASE_SSL_KEY', undefined),
      cert: env('DATABASE_SSL_CERT', undefined),
      ca: env('DATABASE_SSL_CA', undefined),
      capath: env('DATABASE_SSL_CAPATH', undefined),
      cipher: env('DATABASE_SSL_CIPHER', undefined),
      rejectUnauthorized: env.bool(
        'DATABASE_SSL_REJECT_UNAUTHORIZED',
        true
      ),
    },
    schema: env('DATABASE_SCHEMA', 'public'),
  },
  pool: {
    min: env.int('DATABASE_POOL_MIN', 2),
    max: env.int('DATABASE_POOL_MAX', 10),
  },
},
```
## 🧪 Debugging tips
- Use `psql` from another pod to test SSL:

```shell
psql "sslmode=verify-full host=acid-strapi.main.svc.cluster.local dbname=strapi user=strapi_owner_user sslrootcert=/etc/certs/ca.crt"
```
- Check the logs of the Zalando operator:

```shell
kubectl logs -l name=postgres-operator -n main
```
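- To confirm the S3 backups are actually landing in the bucket, you can ask WAL-G from inside a database pod. Spilo exposes the backup credentials through an envdir; the paths below match my setup but may vary between operator versions, so treat this as a sketch:

```shell
# List base backups pushed to the Spaces bucket (run against the primary pod)
kubectl exec -n main acid-strapi-0 -- su postgres -c \
  "envdir /run/etc/wal-e.d/env wal-g backup-list"
```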
## ✅ Conclusion
With this setup, I now have:
- A production-ready PostgreSQL setup
- SSL/TLS encryption between my app and the DB
- Automated daily backups
- GitOps magic with ArgoCD ✨
Next steps? Probably automating cert creation with Let's Encrypt.