AutoTunnel: On-Demand Kubernetes Port Forwarding
- The Problem
- What AutoTunnel Does
- Dynamic Routing (Zero Config)
- How It Works Under the Hood
- TCP Forwarding & Jump Hosts
- Comparison
- Install & Quick Start
TL;DR: AutoTunnel is an open-source tool that replaces per-service kubectl port-forward commands with a single local proxy. Tunnels are created lazily when traffic arrives and torn down after idle timeout. HTTP, TLS passthrough, TCP, and jump-host routing — all on one port.
GitHub: github.com/atas/autotunnel
The Problem
kubectl port-forward does one thing: forward one port to one pod. That's fine for a quick debug session. But when your daily workflow involves 10+ microservices — Grafana, ArgoCD, internal APIs, databases — it becomes untenable. One terminal per service, no idle cleanup, manual lifecycle, and if the pod restarts, you start over.
kubectl proxy is the other option, but it only speaks HTTP and gives you URLs like /api/v1/namespaces/argocd/services/argocd-server:443/proxy/. Not exactly bookmark-friendly.
What AutoTunnel Does
AutoTunnel listens on a single local port (default 127.0.0.1:8989) and routes traffic to Kubernetes services based on hostname. Tunnels are created lazily — only when the first request arrives — and automatically torn down after a configurable idle timeout (default 60 minutes).
Instead of juggling terminals, you get friendly URLs:
- `http://grafana.localhost:8989` — pre-configured static route
- `https://argocd.localhost:8989` — TLS passthrough, end-to-end encryption preserved
- `http://nginx-80.svc.default.ns.microk8s.cx.k8s.localhost:8989` — dynamic routing, zero config
Dynamic Routing (Zero Config)
For frequently-used services, you configure static routes in ~/.autotunnel.yaml. But for ad-hoc access, dynamic routing needs no config at all. The URL pattern encodes everything AutoTunnel needs:
{service}-{port}.svc.{namespace}.ns.{context}.cx.k8s.localhost:8989
Static routes take priority, so your clean grafana.localhost URLs keep working while you can still reach any service on the fly.
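For illustration only, a static route entry might look something like this. The field names below are an assumption modeled on the TCP route schema further down, not AutoTunnel's documented format; check the README for the real schema.

```yaml
# Hypothetical ~/.autotunnel.yaml fragment; field names are assumed,
# not taken from AutoTunnel's documented schema.
routes:
  grafana:                 # would serve http://grafana.localhost:8989
    context: my-cluster
    namespace: monitoring
    service: grafana
    port: 3000
```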
How It Works Under the Hood
Protocol Multiplexing on a Single Port
AutoTunnel serves both HTTP and HTTPS on the same port. For every incoming connection, it peeks at the first byte without consuming it. If it's 0x16 (TLS handshake record type), the connection goes to the TLS handler. Otherwise, it's routed to the HTTP reverse proxy.
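Here's a minimal sketch of that trick in Go (illustrative, not AutoTunnel's actual source): wrap each accepted connection in a `bufio.Reader`, `Peek` one byte, and hand the connection, with the peeked bytes replayed, to the right handler.

```go
// Illustrative sketch of single-port protocol multiplexing; not
// AutoTunnel's actual source.
package mux

import (
	"bufio"
	"net"
)

// peekedConn replays buffered bytes before reading from the socket, so
// the downstream handler still sees the complete byte stream.
type peekedConn struct {
	net.Conn
	r *bufio.Reader
}

func (c peekedConn) Read(p []byte) (int, error) { return c.r.Read(p) }

// Serve dispatches each connection by its first byte: 0x16 is the TLS
// handshake record type, anything else is treated as plain HTTP.
func Serve(l net.Listener, handleTLS, handleHTTP func(net.Conn)) error {
	for {
		conn, err := l.Accept()
		if err != nil {
			return err
		}
		go func(conn net.Conn) {
			br := bufio.NewReader(conn)
			first, err := br.Peek(1) // look at one byte without consuming it
			if err != nil {
				conn.Close()
				return
			}
			c := peekedConn{Conn: conn, r: br}
			if first[0] == 0x16 {
				handleTLS(c)
			} else {
				handleHTTP(c)
			}
		}(conn)
	}
}
```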
TLS Passthrough with SNI
For TLS connections, AutoTunnel doesn't terminate encryption. It parses the TLS ClientHello message by hand — walking through the binary structure to find the SNI (Server Name Indication) extension — then routes the entire connection to the correct backend based on the hostname. The backend sees the original TLS handshake. End-to-end encryption stays intact.
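AutoTunnel walks the raw bytes itself; a shorter way to get the same answer in Go, sketched below, is to let crypto/tls parse the hello against a read-only wrapper and abort before replying. The recorded bytes are then replayed to the backend so it sees the original handshake.

```go
// Illustrative SNI extraction; AutoTunnel hand-parses the ClientHello,
// this sketch lets crypto/tls do the parsing instead.
package sniff

import (
	"bytes"
	"crypto/tls"
	"errors"
	"io"
	"net"
)

// readOnlyConn feeds recorded bytes to crypto/tls and refuses writes,
// so the fake handshake never answers the real client.
type readOnlyConn struct {
	net.Conn
	r io.Reader
}

func (c readOnlyConn) Read(p []byte) (int, error)  { return c.r.Read(p) }
func (c readOnlyConn) Write(p []byte) (int, error) { return 0, io.ErrClosedPipe }

// PeekSNI returns the SNI from the ClientHello plus the bytes consumed,
// which must be replayed to the backend before streaming the rest.
func PeekSNI(conn net.Conn) (sni string, consumed []byte, err error) {
	var buf bytes.Buffer
	tee := io.TeeReader(conn, &buf) // record everything we read

	var hello *tls.ClientHelloInfo
	cfg := &tls.Config{
		GetConfigForClient: func(h *tls.ClientHelloInfo) (*tls.Config, error) {
			hello = h
			return nil, errors.New("stop after ClientHello")
		},
	}
	// The handshake fails by design; we only want the parsed hello.
	_ = tls.Server(readOnlyConn{Conn: conn, r: tee}, cfg).Handshake()
	if hello == nil {
		return "", buf.Bytes(), errors.New("no TLS ClientHello")
	}
	return hello.ServerName, buf.Bytes(), nil
}
```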
If something goes wrong (no matching route, tunnel fails to start), it generates a short-lived self-signed certificate on the fly, completes the TLS handshake, and returns a proper error page instead of silently resetting the connection.
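Minting such a throwaway certificate is a few lines of stdlib Go. A sketch, with assumed lifetimes and field values:

```go
// Illustrative ephemeral self-signed certificate for the error path;
// lifetimes and fields are assumptions, not AutoTunnel's actual values.
package fallback

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"time"
)

// ephemeralCert builds a short-lived self-signed certificate for the
// requested hostname, just long enough to complete the handshake and
// serve an error page.
func ephemeralCert(host string) (tls.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return tls.Certificate{}, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: host},
		DNSNames:     []string{host},
		NotBefore:    time.Now().Add(-time.Minute),
		NotAfter:     time.Now().Add(10 * time.Minute), // short-lived on purpose
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return tls.Certificate{}, err
	}
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, nil
}
```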
Service-to-Pod Resolution
When you target a Kubernetes service, AutoTunnel resolves it exactly like kubectl would: fetch the Service object, extract its label selector, list running pods matching those labels, pick a ready one, and resolve named ports from the pod spec. If no pod is fully ready, it falls back to the first running one.
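In client-go terms, that resolution looks roughly like this (an illustrative sketch, not AutoTunnel's actual source):

```go
// Illustrative service-to-pod resolution with client-go.
package resolve

import (
	"context"
	"errors"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// pickPod resolves a Service to a target pod: fetch the Service, list
// running pods matching its selector, prefer a ready one, otherwise
// fall back to the first running pod.
func pickPod(ctx context.Context, cs kubernetes.Interface, ns, svcName string) (*corev1.Pod, error) {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, svcName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	sel := labels.SelectorFromSet(svc.Spec.Selector).String()
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return nil, err
	}
	var firstRunning *corev1.Pod
	for i := range pods.Items {
		p := &pods.Items[i]
		if p.Status.Phase != corev1.PodRunning {
			continue
		}
		if firstRunning == nil {
			firstRunning = p
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return p, nil // a ready pod wins
			}
		}
	}
	if firstRunning != nil {
		return firstRunning, nil // fallback: running but not fully ready
	}
	return nil, errors.New("no running pod matches the service selector")
}
```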
On-Demand Tunnel Lifecycle
Each tunnel follows a state machine: Idle → Starting → Running → Stopping → back to Idle. The port-forward itself uses SPDY via client-go (the same mechanism kubectl port-forward uses internally). The local port is auto-allocated by the OS — no port conflicts.
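A sketch of those client-go mechanics, with assumed function names and a hardcoded example port:

```go
// Illustrative SPDY port-forward via client-go; not AutoTunnel's
// actual source.
package forward

import (
	"net/http"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/portforward"
	"k8s.io/client-go/transport/spdy"
)

// startForward opens a port-forward to a pod. The empty local side of
// ":5432" asks the OS for a free local port, which avoids conflicts.
func startForward(cfg *rest.Config, cs kubernetes.Interface, ns, pod string,
	stopCh chan struct{}) (*portforward.PortForwarder, error) {

	transport, upgrader, err := spdy.RoundTripperFor(cfg)
	if err != nil {
		return nil, err
	}
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("portforward")
	dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport},
		http.MethodPost, req.URL())

	readyCh := make(chan struct{})
	fw, err := portforward.New(dialer, []string{":5432"}, stopCh, readyCh, nil, nil)
	if err != nil {
		return nil, err
	}
	go fw.ForwardPorts() // blocks until stopCh is closed
	<-readyCh            // listening now; GetPorts() reports the local port
	return fw, nil
}
```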
A background goroutine checks every 30 seconds for tunnels that have been idle longer than the configured timeout and tears them down.
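The reaper loop is simple enough to sketch; the tunnel type and names below are assumptions:

```go
// Illustrative idle-tunnel reaper; types and names are assumptions.
package reaper

import (
	"sync"
	"time"
)

// tunnel is a minimal stand-in for a tunnel's state: when it was last
// used and how to tear it down.
type tunnel struct {
	mu       sync.Mutex
	lastUsed time.Time
	stop     func() // Running -> Stopping -> Idle
}

// reap wakes every 30 seconds and tears down tunnels idle longer than
// timeout; run it in a background goroutine.
func reap(tunnels func() []*tunnel, timeout time.Duration) {
	tick := time.NewTicker(30 * time.Second)
	defer tick.Stop()
	for range tick.C {
		for _, tn := range tunnels() {
			tn.mu.Lock()
			idle := time.Since(tn.lastUsed)
			tn.mu.Unlock()
			if idle > timeout {
				tn.stop()
			}
		}
	}
}
```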
TCP Forwarding & Jump Hosts
Not everything speaks HTTP. For databases, Redis, and other TCP services, AutoTunnel provides direct port-forwarding with dedicated local ports:
```yaml
tcp:
  k8s:
    routes:
      5432:
        context: my-cluster
        namespace: databases
        service: postgresql
        port: 5432
```
Then just `psql -h localhost -p 5432`.
For VPC-internal services that aren't directly reachable from Kubernetes pods (RDS, Cloud SQL, ElastiCache), AutoTunnel supports jump-host routing. It opens a kubectl exec session to a pod that has network access to the target and pipes traffic through socat (with nc as automatic fallback):
```yaml
tcp:
  k8s:
    jump:
      3306:
        context: eks-prod
        namespace: default
        via:
          service: backend-api
        target:
          host: mydb.rds.amazonaws.com
          port: 3306
```
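Conceptually, each jump connection behaves like piping a local socket through a one-liner such as this (illustrative; AutoTunnel drives the exec API programmatically, and the pod name is a placeholder):

```sh
# Bridge stdin/stdout through a pod that can reach the database.
kubectl exec -i -n default <backend-api-pod> -- \
  socat - TCP:mydb.rds.amazonaws.com:3306
```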
It can even auto-create lightweight jump pods on demand if you don't have an existing pod to use.
Comparison
| | kubectl port-forward | kubectl proxy | AutoTunnel |
|---|---|---|---|
| Tunnels | 1 per command | N/A (API proxy) | Unlimited, multiplexed |
| Protocol | Raw TCP | HTTP only | HTTP + TLS + TCP |
| URLs | `localhost:{port}` | `/api/v1/namespaces/.../proxy/` | `{name}.localhost:8989` |
| Lifecycle | Manual | Manual | On-demand + auto-cleanup |
| Multi-context | No | No | Yes |
| TLS | Passthrough | No | SNI-based passthrough |
| VPC access | No | No | Jump-host routing |
Install & Quick Start
```sh
brew install atas/tap/autotunnel
brew services start autotunnel
```
This creates a default config at ~/.autotunnel.yaml. Edit it to add your routes and kubeconfig paths. The config auto-reloads on save (no restart needed unless you change the listen port).
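Once a route exists, a quick smoke test (assuming a grafana static route like the earlier example):

```sh
curl http://grafana.localhost:8989
# If your resolver doesn't map *.localhost names to 127.0.0.1:
curl --resolve grafana.localhost:8989:127.0.0.1 http://grafana.localhost:8989
```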
Also available via `go install github.com/atas/autotunnel@latest` or from GitHub releases.
GitHub: github.com/atas/autotunnel