# Host publicly accessible web URLs from a home server using a remote proxy
Ideally, you would be able to open ports 80 and 443 on your public IP address and keep your DNS entries updated with solutions like ddclient or DNS-O-Matic.
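For that ideal scenario, a minimal ddclient setup might look like the sketch below. The protocol, server, credentials, and hostname are placeholders; your DNS provider's documentation will have the exact values to use.

```sh
# Write a minimal /etc/ddclient.conf (placeholder values, assuming a
# dyndns2-compatible provider; adjust protocol/server/credentials as needed).
sudo tee /etc/ddclient.conf > /dev/null <<'EOF'
daemon=300
use=web
protocol=dyndns2
server=members.dyndns.org
login=<username>
password=<password>
home.example.com
EOF
sudo systemctl restart ddclient
```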
However, if your ISP blocks direct traffic to those ports, a remote server and SSH tunneling can bypass that limitation. With this setup, you can use Kubernetes ingresses as usual: from the ingress's perspective, HTTP traffic reaches your home server directly, so tooling like Nginx and Cert Manager works with no special tweaks.
Any Linux box with a public IP can be your remote proxy, but Oracle Cloud has a generous free tier and can be a good starting option. Remember that the smallest available machine is probably enough for your needs since it will only forward traffic and not act on it.
After setting up the remote server with your provider, ensure it has an SSH server running. We’re going to use it to forward network packets for us.
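You can quickly confirm the SSH daemon is up and listening before moving on (the service is named `ssh` on Debian/Ubuntu and `sshd` on most other distributions):

```sh
# On the remote instance: check the SSH daemon and its listening port
sudo systemctl status sshd   # or "ssh", depending on the distribution
sudo ss -tlnp | grep ':22'
```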
On your remote instance, edit the `/etc/ssh/sshd_config` file and add `GatewayPorts yes` to it. Apply the new configuration with `service sshd restart`.
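Condensed into commands, the change looks roughly like this (assuming you have root access on the remote box):

```sh
# Allow remote forwards to bind on the public interface, then reload sshd
echo "GatewayPorts yes" >> /etc/ssh/sshd_config
service sshd restart   # or: systemctl restart sshd
```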
If you opted for an Oracle Cloud server, go to the Ingress Rules section of the Virtual Cloud Network panel and open ports 80 and 443. Also, adjust `iptables` on the instance itself to allow public traffic to ports 80 and 443:
```sh
iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
netfilter-persistent save
```
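To confirm the new rules landed above the image's default REJECT rule, you can list the INPUT chain with line numbers:

```sh
# The ACCEPT rules for dports 80 and 443 should appear before any REJECT rule
sudo iptables -L INPUT -n --line-numbers
```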
From your home server, you can run a quick test, as long as you already have an HTTP server running locally. Don't forget to generate a new SSH key for your local server and authorize it on the remote machine. If everything has gone smoothly so far, you can access a local web endpoint using the remote server's IP as the DNS entry for your domain.
```sh
ssh -N -R 80:<local_ip>:80 root@<remote_ip>
```
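While that tunnel is open, a quick check from any outside machine might look like this (`example.com` is a placeholder for whatever hostname your local HTTP server expects):

```sh
# Send a request to the remote server's public IP with the expected Host header
curl -v -H "Host: example.com" http://<remote_ip>/
```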
If the port forwarding worked, you can now create deployment files so that Kubernetes can keep the tunnel running. Customize and apply the following to your local cluster.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: proxy-router
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autossh-80
  namespace: proxy-router
  labels:
    app: autossh-80
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: autossh-80
  template:
    metadata:
      labels:
        app: autossh-80
    spec:
      containers:
        - name: autossh-80
          image: jnovack/autossh:2.0.1
          env:
            - name: SSH_REMOTE_USER
              value: "root"
            - name: SSH_REMOTE_HOST
              value: "<remote_server_ip>"
            - name: SSH_REMOTE_PORT
              value: "22"
            - name: SSH_TUNNEL_PORT
              value: "80"
            - name: SSH_BIND_IP
              value: "0.0.0.0"
            - name: SSH_TARGET_HOST
              value: "<local_server_ip>"
            - name: SSH_TARGET_PORT
              value: "80"
            - name: SSH_MODE
              value: "-R"
          volumeMounts:
            - name: keys
              mountPath: /id_rsa
      nodeName: <node_with_ingress_enabled>
      hostNetwork: true
      volumes:
        - name: keys
          hostPath:
            path: <node_path_for_ssh_keys>
            type: File
```
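Assuming you saved the manifest above as `autossh-80.yaml` (the file name is arbitrary), you can apply it and confirm the tunnel pod is healthy:

```sh
kubectl apply -f autossh-80.yaml
kubectl -n proxy-router get pods
kubectl -n proxy-router logs deployment/autossh-80
```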
To redirect port 443, create another deployment using the previous one as a reference.
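One way to derive it, assuming the port-80 manifest lives in `autossh-80.yaml` (a sketch; review the result before applying):

```sh
# Rename the resources and swap the tunnel/target ports from 80 to 443
sed 's/autossh-80/autossh-443/g; s/"80"/"443"/g' autossh-80.yaml > autossh-443.yaml
kubectl apply -f autossh-443.yaml
```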
Any public URLs should have their DNS entries pointing to the remote server’s IP.
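You can verify the records with a quick lookup (the hostname below is a placeholder):

```sh
# Should print the remote proxy's public IP
dig +short home.example.com
```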