# Host publicly accessible web URLs from a home server using a remote proxy
Ideally, you would be able to open ports 80 and 443 on your public IP address and keep your DNS entries updated with solutions like ddclient or DNS-O-Matic.
However, if your ISP blocks direct traffic to those ports, it's possible to use a remote server and SSH tunneling to bypass that limitation. With this setup, you can use Kubernetes ingresses normally. From the ingress's perspective, HTTP traffic will reach your home server directly, so tooling like Nginx and Cert Manager will work with no special tweaks.
Any Linux box with a public IP can be your remote proxy, but Oracle Cloud has a generous free tier and is a good starting option. The smallest available machine is probably enough for your needs, since it will only forward traffic rather than act on it.
After setting up the remote server with your provider of choice, ensure that it has an SSH server running. We're going to use it to forward network packets for us.
On your remote instance, edit the `/etc/ssh/sshd_config` file and add `GatewayPorts yes` to it. Apply the new configuration with `service sshd restart`.
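If you prefer to do it from a shell, a minimal sketch (assuming root privileges on the remote instance and that the option isn't already set elsewhere in the file) would be:

```sh
# Allow remote-forwarded ports to bind on all interfaces, not just localhost
echo "GatewayPorts yes" >> /etc/ssh/sshd_config
service sshd restart
```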
If you opted for an Oracle Cloud server, go to the Ingress Rules section of the Virtual Cloud Network panel and open ports 80 and 443. Also, adjust `iptables` on the instance to allow public traffic to ports 80 and 443:
```sh
iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
netfilter-persistent save
```
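To confirm the rules landed where you expect, you can list the INPUT chain with line numbers:

```sh
# The two new ACCEPT rules for ports 80 and 443 should show up near position 6
iptables -L INPUT -n --line-numbers
```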
From your home server, you can run a quick test, as long as you already have an HTTP server running locally. Don't forget to generate a new SSH key for your local server and authorize it on the remote machine. If everything so far went smoothly, you will be able to access a local web endpoint using the remote server's IP as the DNS entry for your domain.
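If the key isn't in place yet, one way to set it up (the key path below is just an example) is:

```sh
# Generate a dedicated key for the tunnel and authorize it on the remote server
ssh-keygen -t ed25519 -f ~/.ssh/proxy_tunnel -N ""
ssh-copy-id -i ~/.ssh/proxy_tunnel.pub root@<remote_ip>
```

With the key authorized, the test itself is a single remote port forward: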
```sh
ssh -N -R 80:<local_ip>:80 root@<remote_ip>
```
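To check that requests are really flowing through the tunnel, you can hit the remote IP directly; the domain in the Host header is only a placeholder:

```sh
# Should return the same response as the local HTTP server
curl -v -H "Host: example.com" http://<remote_ip>/
```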
If the port forwarding worked, you can create deployment files for it so Kubernetes itself can keep the connection up and running for you. Customize and apply the following to your local cluster.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: proxy-router
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autossh-80
  namespace: proxy-router
  labels:
    app: autossh-80
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: autossh-80
  template:
    metadata:
      labels:
        app: autossh-80
    spec:
      containers:
        - name: autossh-80
          image: jnovack/autossh:2.0.1
          env:
            - name: SSH_REMOTE_USER
              value: "root"
            - name: SSH_REMOTE_HOST
              value: "<remote_server_ip>"
            - name: SSH_REMOTE_PORT
              value: "22"
            - name: SSH_TUNNEL_PORT
              value: "80"
            - name: SSH_BIND_IP
              value: "0.0.0.0"
            - name: SSH_TARGET_HOST
              value: "<local_server_ip>"
            - name: SSH_TARGET_PORT
              value: "80"
            # -R creates a reverse tunnel, just like the manual ssh test above
            - name: SSH_MODE
              value: "-R"
          volumeMounts:
            # Private key authorized on the remote server
            - name: keys
              mountPath: /id_rsa
      nodeName: <node_with_ingress_enabled>
      hostNetwork: true
      volumes:
        - name: keys
          hostPath:
            path: <node_path_for_ssh_keys>
            type: File
```
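Assuming you saved the manifests to a file such as autossh-80.yaml (the filename is arbitrary), applying and checking them looks like this:

```sh
kubectl apply -f autossh-80.yaml
# The pod should reach Running once the SSH connection is established
kubectl -n proxy-router get pods
```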
To redirect port 443 as well, create another deployment using the previous one as a reference.
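As a rough sketch, only the names, labels, and port-related variables need to change; everything else stays identical to the port 80 deployment:

```yaml
# autossh-443: same structure as autossh-80, with these values swapped in
metadata:
  name: autossh-443
  labels:
    app: autossh-443   # also update spec.selector.matchLabels and the pod template labels
env:
  - name: SSH_TUNNEL_PORT
    value: "443"
  - name: SSH_TARGET_PORT
    value: "443"
```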
Any public URLs should, of course, have their DNS entries pointing to the remote server’s IP.
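For example, a plain A record per public hostname (hypothetical domain shown) is all that's needed:

```
home.example.com.    300    IN    A    <remote_server_ip>
```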