#Snippets

March 29, 2024

#Quick, dirty and effective commands for hacking your way into Kubernetes

Sometimes, we want to avoid going through the trouble of creating YAML files for resources, checking their syntax, and versioning everything. Sometimes, we only want something up and running for a POC, a quick test, or a debugging session. For those moments, these commands can help.

Spin up debug pods:

# Debug application for testing k8s features
## See docs here: https://github.com/stefanprodan/podinfo
kubectl create deployment podinfo --image=stefanprodan/podinfo

# Debug Ubuntu version with several networking stack tools pre-installed
kubectl create deployment nettools --image=hacklabr/docker-nettools

Create Services for applications:

# Basic Service (ClusterIP)
## Service will expose 'port' / target application listens on 'target-port'
kubectl expose deployment podinfo --port=80 --target-port=9898 --name=podinfo-service

Run commands in containers:

# Latest Ubuntu container that spins up and gives you a shell 
# Container is deleted when you close the connection
kubectl run ubuntu --rm -it --image=ubuntu -- bash

# Similar to the previous one, but keeps the debug container active
kubectl create deployment ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it deploy/ubuntu -- bash
kubectl delete deploy/ubuntu

Get a bird’s eye view of your cluster state:

# See all running pods on your cluster (regardless of namespace)
kubectl get pods -A

# Combine the previous command with "watch" to get an evergreen view
watch kubectl get pods -A

# Read CPU and RAM consumption of your nodes (requires metrics-server)
kubectl top nodes

# Read CPU and RAM consumption of all your pods (requires metrics-server) 
kubectl top pods -A

# Get cluster state including logs from control plane applications
kubectl cluster-info dump

Force all Pods of a Deployment to be recreated:

kubectl rollout restart deployment <deployment_name>
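
To follow the restart and confirm the new Pods became ready, the rollout status subcommand helps:

kubectl rollout status deployment <deployment_name>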

Remove finalizers from an existing object (make sure you really know what you’re doing, though):

kubectl patch <object_type> <object_name> -p '{"metadata":{"finalizers":[]}}' --type=merge

Some useful Helm commands to manage chart configurations:

# See all configurable options with detailed comments (equivalent to reading the values.yaml)
# Example with the Coroot Helm Chart
helm show values coroot/coroot

# Get the values used when installing a Helm release
helm get values argocd -n argocd > argocd-values.yaml
helm upgrade argocd argo/argo-cd --version 5.9.1 --namespace argocd -f argocd-values.yaml

# Upgrade an existing Helm release in-place while keeping values provided during installation
helm upgrade argocd argo/argo-cd --reuse-values --force --version 5.9.1 --namespace argocd

Notable domain names available in-cluster:

https://kubernetes.default (Kubernetes API Server)
http://<service_name>.<namespace>
http://<service_name>.<namespace>.svc.cluster.local
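
For a quick sanity check, you can call these names from one of the debug pods above (this assumes the podinfo-service created earlier lives in the default namespace and that the nettools image ships curl):

kubectl exec -it deploy/nettools -- curl http://podinfo-service.default
kubectl exec -it deploy/nettools -- curl -k https://kubernetes.default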

Section break
October 12, 2022

#Troubleshoot CORS related issues on API requests

Cross Origin Resource Sharing (a.k.a. CORS) is a powerful, yet misunderstood, web standard for protecting web APIs from abuse. If you’re anything like me, however, you’ve had your fair share of wasted work hours trying to deal with it from time to time. The concept seems deceptively simple, but the devil is in the details.

First of all, it’s important to understand that CORS is an optional standard and is mostly enforced by browsers, like Chrome, Firefox and Safari. Therefore, it’s unlikely that you will face issues with it while working with libraries like Python’s Requests or CLI tools like curl.

However, if you have a frontend application running on your browser with a domain like, let’s say, http://localhost:3000, API calls directed to different domains, like https://laury.dev, will be preceded by a preflight request. This determines if the browser will execute the main request or not.

When having problems with this process, you may see an error similar to this on your Chrome console:

Access to XMLHttpRequest at 'https://laury.dev' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

To reproduce the problem in a more isolated manner, you can use curl to make a preflight request manually. The important parts here are:

  • the HTTP method has to be OPTIONS
  • three headers must be present: Access-Control-Request-Method, Access-Control-Request-Headers and Origin. Note, however, that the domain stated in Origin must include the protocol, the FQDN, and the port (if any is being used), with no trailing slash.

Here’s an example:

curl -IX OPTIONS https://google.com \
    -H "Access-Control-Request-Method: POST" \
    -H "Access-Control-Request-Headers: origin, x-requested-with" \
    -H "Origin: http://mywebsite.com:3000"

For a preflight request to be considered successful, all of the following must be true:

  • the HTTP status on the response is within 200-299
  • the response contains these headers: Access-Control-Allow-Methods, Access-Control-Max-Age and Access-Control-Allow-Origin. One extra catch here is that this last header cannot present the value * when the main request contains authentication information (like cookies). If it does, the preflight will be considered a failure by the browser. In this case, the header has to mention the origin domain explicitly.

Kubernetes users running Ingress Nginx

To manage CORS headers on Ingress Nginx, you may use the controller’s CORS annotations on Ingress objects. Even objects that only manage paths inside a domain will work with them, so there’s no need to enable CORS on the entire domain if you don’t feel like it.
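
As a minimal sketch, the most common annotations can be set directly with kubectl (the origin value here is just an example; further CORS-related annotations, such as allowed methods and headers, are listed in the controller docs):

kubectl annotate ingress <ingress_name> \
    nginx.ingress.kubernetes.io/enable-cors="true" \
    nginx.ingress.kubernetes.io/cors-allow-origin="http://localhost:3000"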

Further reading

Light overview of preflight requests

Official specification of the Fetch standard for preflight requests

Section break
June 26, 2022

#Protect public URLs with ModSecurity for Nginx on a Kubernetes cluster

If you’re using Ingress Nginx to manage public endpoints, ModSecurity is already installed but disabled by default. To enable it, add the following to the ConfigMap ingress-nginx-controller:

allow-snippet-annotations: "true"
enable-modsecurity: "true"
enable-owasp-modsecurity-crs: "true"
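
If you prefer the command line, a merge patch along these lines should work (the ConfigMap name and namespace may differ depending on how the controller was installed):

kubectl -n ingress-nginx patch configmap ingress-nginx-controller --type merge \
    -p '{"data":{"allow-snippet-annotations":"true","enable-modsecurity":"true","enable-owasp-modsecurity-crs":"true"}}'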

To customize the Paranoia Level and the Anomaly Score thresholds, you must change rules 900000 and 900110. Remember that higher Paranoia Levels lead to stricter rules, and that a lower anomaly score threshold also makes the setup stricter, since fewer rule matches are needed to block a request.

modsecurity-snippet: |-
    SecRuleEngine On
    SecAuditEngine Off #  Use RelevantOnly for debugging
    SecAuditLog /dev/stdout
    SecAuditLogFormat JSON
    SecAuditLogType Serial
    SecAuditLogParts ABIJDEFHZ
    SecDebugLog /dev/stdout
    SecDebugLogLevel 3
    SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=5"
    SecAction "id:900110,phase:1,nolog,pass,t:none,setvar:tx.inbound_anomaly_score_threshold=4,setvar:tx.outbound_anomaly_score_threshold=3"
    Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
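
To quickly check that the WAF is actually engaged, you can send an obviously malicious request to one of your public hosts; with blocking enabled, it should typically be rejected with a 403 (the exact behavior depends on your rules and thresholds):

curl -i "https://<public_URL>/?test=<script>alert(1)</script>"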

Customize ModSecurity rules on a per-host basis

It’s possible to adjust the ModSecurity configuration for each Ingress object on the cluster using annotations.

annotations:
  nginx.ingress.kubernetes.io/enable-modsecurity: "true"
  nginx.ingress.kubernetes.io/enable-owasp-core-rules: "false"
  nginx.ingress.kubernetes.io/modsecurity-snippet: |
    SecRuleRemoveById <rule_id>

How to completely customize modsecurity.conf

With the modsecurity-snippet option, adding custom configuration to ModSecurity is possible. However, you can override the modsecurity.conf entirely if you want.

To do so, first copy the file out of the pod.

kubectl -n ingress-nginx cp <ingress-controller-pod-name>:/etc/nginx/modsecurity/modsecurity.conf ./modsecurity.conf

Modify it as required and save the file in a ConfigMap.

kubectl -n ingress-nginx create configmap modsecurityconf --from-file=modsecurity.conf

If you need to change the file again locally, you can update the ConfigMap by doing the following.

kubectl -n ingress-nginx create configmap modsecurityconf \
  --from-file=modsecurity.conf -o yaml \
  --dry-run=client | kubectl apply -f -

To mount the ConfigMap to the controller deployment, create a patch file.

spec:
  template:
    spec:
      volumes:
      - name: modsecurityconf
        configMap:
          name: modsecurityconf
      containers:
      - name: controller
        volumeMounts:
        - name: modsecurityconf
          mountPath: "/etc/nginx/modsecurity/modsecurity.conf"
          subPath: modsecurity.conf
          readOnly: true

And then apply the patch file.

kubectl -n ingress-nginx patch deployment/ingress-nginx-controller --patch-file deployment-patch.yml
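
To confirm that the rollout finished and that the custom file is in place, something like this can help (the deployment name may differ in your setup):

kubectl -n ingress-nginx rollout status deployment/ingress-nginx-controller
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- cat /etc/nginx/modsecurity/modsecurity.conf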

Further reading

ConfigMap options for Ingress Nginx

Annotations options for Ingress objects

Overview of Anomaly Scoring

Working with Paranoia Levels

Handling False Positives with the OWASP ModSecurity Core Rule Set

Section break
June 24, 2022

#Host publicly accessible web URLs from a home server using a remote proxy

Ideally, you would be able to open ports 80 and 443 of your public IP address and update DNS entries with solutions like ddclient or DNS-O-Matic.

However, if your ISP blocks direct traffic to those ports, using a remote server and SSH tunneling can bypass such limitations. With this setup, you can use Kubernetes ingresses as usual. From the ingress perspective, HTTP traffic reaches your home server directly, so tooling like Nginx and Cert Manager works without any special tweaks.

Any Linux box with a public IP can be your remote proxy, but Oracle Cloud has a generous free tier and can be a good starting option. Remember that the smallest available machine is probably enough for your needs since it will only forward traffic and not act on it.

After setting up the remote server with your provider, ensure it has an SSH server running. We’re going to use it to forward network packets for us.

On your remote instance, edit the /etc/ssh/sshd_config file and add GatewayPorts yes to it. Apply the new configuration with service sshd restart.

If you opted for an Oracle Cloud server, go to the Ingress Rules section of the Virtual Cloud Network panel and open ports 80 and 443. Also, adjust iptables in the instance to allow public traffic to ports 80 and 443.

iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
netfilter-persistent save

From your home server, you can run a quick test, as long as you already have an HTTP server running locally. Don’t forget to generate a new SSH key for your local server and authorize it on the remote machine. If everything has gone smoothly so far, you can access a local web endpoint using the remote server IP as the DNS entry for your domain.

ssh -N -R 80:<local_ip>:80 root@<remote_ip>
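
If you still need to create and authorize the key mentioned above, here is a minimal sketch (the key type, path, and remote user are assumptions):

ssh-keygen -t ed25519 -f ~/.ssh/proxy_tunnel -N ""
ssh-copy-id -i ~/.ssh/proxy_tunnel.pub root@<remote_ip>
ssh -i ~/.ssh/proxy_tunnel -N -R 80:<local_ip>:80 root@<remote_ip>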

If the port forwarding worked, you can now create deployment files so that Kubernetes can keep the tunnel running. Customize and apply the following to your local cluster.

apiVersion: v1
kind: Namespace
metadata:
  name: proxy-router

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autossh-80
  namespace: proxy-router
  labels:
    app: autossh-80
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: autossh-80
  template:
    metadata:
      labels:
        app: autossh-80
    spec:
      containers:
      - name: autossh-80
        image: jnovack/autossh:2.0.1
        env:
          - name: SSH_REMOTE_USER
            value: "root"
          - name: SSH_REMOTE_HOST
            value: "<remote_server_ip>"
          - name: SSH_REMOTE_PORT
            value: "22"
          - name: SSH_TUNNEL_PORT
            value: "80"
          - name: SSH_BIND_IP
            value: "0.0.0.0"
          - name: SSH_TARGET_HOST
            value: "<local_server_ip>"
          - name: SSH_TARGET_PORT
            value: "80"
          - name: SSH_MODE
            value: "-R"
        volumeMounts:
          - name: keys
            mountPath: /id_rsa
      nodeName: <node_with_ingress_enabled>
      hostNetwork: true
      volumes:
        - name: keys
          hostPath:
            path: <node_path_for_ssh_keys>
            type: File

To redirect port 443, create another deployment using the previous one as a reference.
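
A quick way to derive it, assuming you saved the manifest above as autossh-80.yml (just a sketch; review the output before applying):

sed 's/autossh-80/autossh-443/g; s/"80"/"443"/g' autossh-80.yml | kubectl apply -f -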

Any public URLs should have their DNS entries pointing to the remote server’s IP.

Section break
September 09, 2020

#Install Kubernetes on Raspberry Pi OS

We will use k3s, a lightweight Kubernetes distribution, to get the most out of our hardware. This tutorial uses a Raspberry Pi 4 and the latest 32-bit Raspberry Pi OS (formerly known as Raspbian). The 64-bit version works pretty much the same for this step-by-step.

Flash the OS image on your SD Card and, if necessary, add Wi-Fi credentials so you can access it.

Enable cgroups support and disable IPv6 by appending the following on /boot/cmdline.txt (remember that /boot refers to the boot partition on your SD Card).

cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory ipv6.disable=1

I also recommend disabling swap.

dphys-swapfile swapoff && systemctl disable dphys-swapfile.service

If your workloads don’t require GPU, you may want to change the Memory Split to 16 using raspi-config. You’ll have a little extra RAM this way.

You can force the OS to use legacy iptables to ensure compatibility with older Kubernetes versions, but this step is optional.

iptables -F
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy

Install and test k3s.

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -

# After a minute, you should be able to test it
kubectl get nodes

The kubeconfig yaml will be available at /etc/rancher/k3s/k3s.yaml.
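
To use the cluster from your workstation, copy that file over and point it at the Pi’s IP (the file is only readable by root, so you may need to copy it as root or adjust permissions first; paths here are just examples):

scp root@<server_ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml
sed -i 's/127.0.0.1/<server_ip>/' ~/.kube/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes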

If you have worker nodes to add, use the following command. Find the cluster token at /var/lib/rancher/k3s/server/node-token on the server node.

curl -sfL https://get.k3s.io | K3S_URL=https://<server_ip>:6443 K3S_TOKEN="<cluster_token>" sh -

Maintenance

Things should run smoothly from here, but if they don’t, the next few commands might help:

# See current status of k3s
systemctl status k3s

# See system logs (including k3s)
journalctl -xe

# Uninstall k3s
/usr/local/bin/k3s-uninstall.sh

Kubernetes automatically performs cleanup of unused containers and images on nodes. Still, if you’re under the impression that your filesystem is getting bloated by these, you can force a safe deletion with:

k3s crictl rmi --prune

Further reading

Configuration options for k3s

Section break
August 26, 2020

#Combine kubectl and JSONPath to read data from a Kubernetes cluster

To customize how data about cluster objects is presented on the terminal, it’s possible to use JSONPath syntax. Keep in mind that, under the hood, the Kubernetes API already sends and receives data in JSON format (the YAMLs we’re used to seeing are an abstraction to ease reading and editing).

Since the field hierarchy shown in the YAML representation may not map exactly to the JSONPath you need, start by getting a pure JSON version of the data you want to filter.

In the following examples, let’s use a specific deployment.

kubectl -n <my_namespace> get deploy/<my_deployment> -o json

Get only the deployment name.

kubectl -n <my_namespace> get deploy/<my_deployment> -o jsonpath='{.metadata.name}'

range is one of the additions kubectl brings to standard JSONPath. It iterates over JSON lists and can be paired with @ to reference each item.

Using both, you can, for instance, print image names for all containers created by a deployment (with line breaks).

kubectl -n <my_namespace> get deploy/<my_deployment> -o jsonpath='{range .spec.template.spec.containers[*]}{@.image}{"\n"}{end}'

A good use case for JSONPath is to list annotations for every deployment in a cluster. Tools like Velero depend on the proper configuration of annotations to do volume backups, so it’s convenient to have a consolidated output.

The following command prints namespace, name and template annotations for every deployment. Additionally, it pipes the list to grep so we can have color highlights.

kubectl get -A deployments -o jsonpath='{range .items[*]}{@.metadata.namespace}{" / "}{@.metadata.name}{"\n"}{@.spec.template.metadata.annotations}{"\n"}{"\n"}{end}' | grep --color "backup.velero.io/backup-volumes\|$"

Note: grep is also matching line endings to ensure all lines are printed.
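
JSONPath expressions also work with kubectl’s --sort-by flag, which is handy when scanning long listings. For example, to list all pods sorted by creation time:

kubectl get pods -A --sort-by='{.metadata.creationTimestamp}'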

Further reading

Kubernetes docs on JSONPath

JSONPath evaluator

JSONPath syntax details

Section break
August 20, 2020

#Issue Let's Encrypt certificates for domains in a Kubernetes cluster

We will use cert-manager to issue HTTPS certificates for domains served publicly by a Kubernetes cluster.

The application can be installed via Helm, but it’s recommended to install CRDs (Custom Resource Definitions) separately. That may come in handy if you need to delete cert-manager without losing already issued certificates. Don’t forget to change the “version” accordingly on all commands.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.1/cert-manager.crds.yaml

Add JetStack charts to your Helm client.

helm repo add jetstack https://charts.jetstack.io --force-update
helm repo update

Install the application.

helm upgrade --install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.1 \
  --set ingressShim.defaultIssuerName=letsencrypt-prod \
  --set ingressShim.defaultIssuerKind=ClusterIssuer \
  --set ingressShim.defaultIssuerGroup=cert-manager.io
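
Before moving on, it’s worth checking that the cert-manager pods (controller, cainjector and webhook) are running:

kubectl -n cert-manager get pods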

To issue actual certificates, you will need a production Issuer or ClusterIssuer. The following yaml defines a ClusterIssuer and assumes that you have an NGINX ingress controller. If it suits your needs, apply it to your cluster.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your_email>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx

Verify that your setup is working.

$ kubectl get clusterissuers -A
NAME               READY   AGE
letsencrypt-prod   True    30m

The cert-manager stack will act upon ingresses that comply with a couple of conditions to generate HTTPS certificates. The following example illustrates that. Take special note of the annotations and the “secretName” property under “tls”.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
  name: nginx-ingress
  namespace: ingress-testing
spec:
  rules:
  - host: <public_URL>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: <service_name>
            port:
              number: 80
  tls:
  - hosts:
    - <public_URL>
    secretName: <secret_name_of_your_choosing>
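
After applying the Ingress, cert-manager should create a Certificate (typically named after the secretName) and, once the ACME challenge succeeds, the TLS Secret itself. These commands help follow the process:

kubectl -n ingress-testing get certificates
kubectl -n ingress-testing describe certificate <secret_name_of_your_choosing>
kubectl -n ingress-testing get secret <secret_name_of_your_choosing>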

Further reading

Official documentation on the installation process

Information about Cluster Issuers

Supported Ingress annotations

Section break
August 02, 2020

#Read Kubernetes API data using Golang

Get the API client.

import (
  "fmt"
  "context"
  "k8s.io/client-go/kubernetes"
  "k8s.io/client-go/tools/clientcmd"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", "<path_to_KUBECONFIG>")
if err != nil {
  panic(err.Error())
}

// create the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
  panic(err.Error())
}

Usage examples:

  • read data about servers

nodeList, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
  panic(err.Error())
}

  • read data about pods

namespace := "example"  // leave empty to get data from all namespaces
podList, err := clientset.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
if err != nil {
  panic(err.Error())
}

In the above examples, ListOptions can take two important strings, a field selector and a label selector, which can be used to filter results.

metav1.ListOptions{
  LabelSelector: "labelKey=labelValue",
  FieldSelector: "spec.nodeName=<node_name>",  // Example for filtering by node name
}

Field selectors and label selectors in this case operate the same way as kubectl’s CLI options do, so the same rules apply here.

Further reading

Official docs on Field Selectors and Labels.

Section break
August 02, 2020

#Read Kubernetes metrics-server data using Golang

First, get the metrics API client.

import (
  "fmt"
  "context"
  "k8s.io/client-go/tools/clientcmd"
  metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
)

// Get the config
config, err := clientcmd.BuildConfigFromFlags("", "<path_to_KUBECONFIG>")
if err != nil {
  panic(err.Error())
}

// Get the metrics client
metricsClientset, err := metricsv.NewForConfig(config)
if err != nil {
  panic(err.Error())
}

Get the data and store it in podMetricsList array.

namespace := "example"  // leave empty to get data from all namespaces
podMetricsList, err := metricsClientset.MetricsV1beta1().PodMetricses(namespace).List(context.TODO(), metav1.ListOptions{})
if err != nil {
  panic(err.Error())
}

Iterate over the results found.

for _, v := range podMetricsList.Items {
  fmt.Printf("%s\n", v.GetName())
  fmt.Printf("%s\n", v.GetNamespace())
  fmt.Printf("%vm\n", v.Containers[0].Usage.Cpu().MilliValue())
  fmt.Printf("%vMi\n", v.Containers[0].Usage.Memory().Value()/(1024*1024))
}

Note: if you’re inspecting pods that may have more than one container, you’ll need to iterate over v.Containers as well.

Section break
August 01, 2020

#Receive data from Kubernetes API using curl

Start by running kubectl’s proxy. With its help, we avoid dealing with authentication headers.

kubectl proxy

Test the connection with a basic call.

curl http://localhost:8001/api/

To check the API groups available, access the root.

curl http://localhost:8001/apis/

Based on the group listing, you can infer URLs for resources.

For cluster-scoped resources, use:

/apis/<GROUP>/<VERSION>/<RESOURCETYPE>
/apis/<GROUP>/<VERSION>/<RESOURCETYPE>/<NAME>

And, for namespace-scoped resources:

/apis/<GROUP>/<VERSION>/<RESOURCETYPE>
/apis/<GROUP>/<VERSION>/namespaces/<NAMESPACE>/<RESOURCETYPE>
/apis/<GROUP>/<VERSION>/namespaces/<NAMESPACE>/<RESOURCETYPE>/<NAME>

To get deployment data, for example, the API Group is “apps”, Version is “v1” and Resourcetype is “deployments”. The final request looks like this:

curl http://localhost:8001/apis/apps/v1/deployments
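
Keep in mind that core resources (pods, services, configmaps, and so on) belong to the legacy core group and live under /api/v1 instead of /apis/<GROUP>/<VERSION>. For example, to list pods in the default namespace:

curl http://localhost:8001/api/v1/namespaces/default/pods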

If your Kubernetes cluster has a metrics-server, the following request will get node resource usage data:

curl http://localhost:8001/apis/metrics.k8s.io/v1beta1/nodes

Rancher users

If your cluster was created using Rancher, you’ll need the “Authorized Cluster Endpoint” enabled so your requests can reach the API Server directly; otherwise, your URL paths will differ slightly from the examples above.

Using your command line, you’ll also need to select the correct context for kubectl before activating the proxy.

# List available contexts
kubectl config get-contexts

# Select the direct API context
# Its name has the format <cluster_name>-<pilot_name>
kubectl config use-context <direct_context>

Further reading

Overview of the Kubernetes API

More info about API concepts

Accessing the API without kubectl proxy

Full list of Groups, Versions and Resourcetypes

Section break
March 13, 2020

#How to dump and restore MySQL/MariaDB databases

Options for dumping data:

# Directly
mysqldump --databases $MYSQL_DATABASE -u$MYSQL_USER -p$MYSQL_PASSWORD > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

# Docker (use -i, not -t: a pseudo-TTY would mangle line endings in the dump)
docker exec -i <container_name> sh -c 'mysqldump --databases $MYSQL_DATABASE -u$MYSQL_USER -p$MYSQL_PASSWORD' > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

# Kubernetes (vulnerable to networking failures)
kubectl -n <namespace> exec deploy/<deploy_name> -- bash -c 'mysqldump --databases $MYSQL_DATABASE -u$MYSQL_USER -p$MYSQL_PASSWORD' > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

Additionally, if your database runs on a remote server (like Amazon RDS), you may still use a local Docker container to dump the data.

# Create a local container with the desired version (in this example, MariaDB 10.4 is used)
docker run -it --rm -v ${PWD}:/dump -w /dump mariadb:10.4 bash

# Get the data
mysqldump -h <hostname> --databases <database_name> -u <database_user> --password='<password>' > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

For restoring data, you can use:

mysql -u$MYSQL_USER -p$MYSQL_PASSWORD < dump.sql
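
If the database runs in Docker, the same restore can be piped into the container (the container name is a placeholder; the credentials env vars are the ones set by the official images):

docker exec -i <container_name> sh -c 'mysql -u$MYSQL_USER -p$MYSQL_PASSWORD' < dump.sql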

If you’re using Docker, you can also place dump files in /docker-entrypoint-initdb.d and those will be imported on the first run. Accepted file types are *.sql, *.sql.gz, and *.sh.

Section break
March 10, 2020

#How to dump and restore PostgreSQL databases

Options for dumping data:

# Binary mode
pg_dump -Fc -U $POSTGRES_USER $POSTGRES_DB > dump-`date '+%Y_%m_%d__%H_%M_%S'`.psqlc

# SQL mode
pg_dump -U $POSTGRES_USER $POSTGRES_DB > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

# Docker and SQL mode (use -i, not -t: a pseudo-TTY would mangle line endings in the dump)
docker exec -i -u postgres <container_name> sh -c 'pg_dump -U $POSTGRES_USER $POSTGRES_DB' > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

# Kubernetes and SQL mode (vulnerable to networking failures)
kubectl -n <namespace> exec deploy/postgres -- sh -c 'pg_dump -U $POSTGRES_USER $POSTGRES_DB' > dump-`date '+%Y_%m_%d__%H_%M_%S'`.sql

Options for restoring data:

# Binary mode
pg_restore -O -U $POSTGRES_USER -c -x -n public -d $POSTGRES_DB dump.psqlc

# SQL mode
psql -d $POSTGRES_DB -a -f /backups/dump.sql
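
The same idea works when PostgreSQL runs in Docker (the container name is a placeholder; the env vars come from the official image):

# SQL mode via Docker
docker exec -i -u postgres <container_name> sh -c 'psql -d $POSTGRES_DB' < dump.sql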

If you’re using Docker, you can also place dump files in /docker-entrypoint-initdb.d and those will be imported on the first run. Accepted file types are *.sql, *.sql.gz, and *.sh.

Section break