Hosting at Scale for $4/mo
Introduction
Last week we posted a quick blog post. It was meant to be a throwaway: a barely technical update to show off some PoC code.
Well, it blew up. And somehow, 715,362 requests hit our tiny $4 VPS.
Thing is... it didn't crash. It barely flinched.
This is a blog post about how we built something tiny, cheap, and resilient without cutting corners on security or reliability.
It's not a "how to scale" story. It's a "how not to die when the internet notices you" story.
Let's get into it.
Step One: You Don't Serve Everything
When you're operating on a $4 budget, serving every single request yourself is suicide.
Initially, given our previous posts had near-zero views, we let the VPS handle everything directly. Setting up a CDN for 5 visitors seemed like overkill.
But we did it anyway. Cloudflare's free CDN tier is AMAZING, so we put it in front early.
We tuned it aggressively:
- Caching Level: Cache Everything
- Edge Cache TTL: 1 month
- Page Rules: aggressive caching for /static, /_next, /api/cacheable
- Traefik Ingress: handles smart routing internally
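On the origin side, "Cache Everything" behaves best when responses carry explicit cache headers. Here's a rough sketch of how you could do that with a Traefik Middleware (the name, header value, and TTL are illustrative, not our exact config):

# Illustrative Middleware: stamp long-lived Cache-Control headers on
# cacheable responses so Cloudflare's edge keeps them for the full TTL.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: static-cache-headers
spec:
  headers:
    customResponseHeaders:
      Cache-Control: "public, max-age=2592000, immutable"   # ~1 month

Attach it to the cacheable routes via the middlewares field on an IngressRoute and the edge handles the rest.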
Result:
- 715,362 total requests
- Only ~333,000 ever reached our VPS

Cloudflare did more than reduce bandwidth. It shielded us from dumb traffic. Bots, scrapers, stale browsers, even legitimate visitors were answered instantly by an edge node instead of touching our server.
The fewer packets that reach your backend, the fewer problems you have. Caching doesn't just save resources. It shrinks your attack surface.
Step Two: Kubernetes on a $4 VPS (Because Why Not?)
Yeah, we know. Kubernetes on a single VPS sounds dumb.
But hear us out:
- We love gitops.
- We want declarative deployments.
- We like easy local-to-prod mirroring.
So we run a lightweight Kubernetes (think k3s, but slightly customized).
Here's what a typical app deployment looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${DEPLOY_ENV}-app
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # bring the new pod up before the old one goes away
      maxUnavailable: 0    # zero-downtime rollouts, even with a single replica
  selector:
    matchLabels:
      app: ${DEPLOY_ENV}-app
  template:
    metadata:
      labels:
        app: ${DEPLOY_ENV}-app
    spec:
      containers:
        - name: ${DEPLOY_ENV}-app
          image: <ECR_REPO>/platform-security-${DEPLOY_ENV}-app:latest
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: "150m"
              memory: "128Mi"
            limits:          # hard ceilings so one pod can't starve the box
              cpu: "400m"
              memory: "384Mi"
We separate frontend and API services for better scaling and resource isolation.
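For illustration, here's roughly what the frontend Service could look like (the API Service mirrors it with its own selector and port; the names are chosen to line up with the Deployment above and the IngressRoute in the next step):

# Sketch of a ClusterIP Service fronting the app Deployment.
# Port 80 is what the IngressRoute targets; 3000 is the containerPort.
apiVersion: v1
kind: Service
metadata:
  name: ${DEPLOY_ENV}-app-service
spec:
  type: ClusterIP
  selector:
    app: ${DEPLOY_ENV}-app
  ports:
    - port: 80
      targetPort: 3000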
Strict container resource limits are not optional. On a tiny VPS, one greedy container can bring the whole thing down.
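If you'd rather have the cluster enforce that discipline than rely on code review, a namespace-level LimitRange is a cheap guardrail. This is a sketch, not lifted from our manifests:

# Illustrative LimitRange: any container created without explicit
# requests/limits gets these defaults instead of running unbounded.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: "100m"
        memory: "64Mi"
      default:
        cpu: "250m"
        memory: "256Mi"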
Platform security starts with discipline, not "trust."
Step Three: Treat Staging and Production Differently
Even with $4 of infrastructure, we refuse to YOLO into prod.
We use a parameterized deployment system with ${DEPLOY_ENV} variables that give us staging and production environments from the same manifests.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: platformsecurity-ingress
spec:
  entryPoints:
    - websecure
  routes:
    - match: "Host(`platformsecurity.com`)"
      kind: Rule
      services:
        - name: prod-app-service
          port: 80
    - match: "Host(`staging.platformsecurity.com`)"
      kind: Rule
      services:
        - name: stag-app-service
          port: 80
Production gets platformsecurity.com, staging runs on staging.platformsecurity.com.
The code is identical, but they're completely isolated environments.
Whenever we open a pull request, GitHub Actions automatically deploys the new code to staging. We test it there first, under real conditions, but without risking anything customer-facing.
Only after explicit review and approval do we promote changes to production. No merging straight to main. No "hope it works" releases. We engineer discipline into the workflow because hope is not a strategy.
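To make that concrete, here's a minimal sketch of the staging leg of such a pipeline (the file layout, secret name, and envsubst rendering step are illustrative, not our exact workflow):

# .github/workflows/deploy-staging.yml (sketch)
name: deploy-staging
on:
  pull_request:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: staging            # secrets and rules scoped per environment
    env:
      DEPLOY_ENV: stag              # same manifests, different parameter
    steps:
      - uses: actions/checkout@v4
      - name: Render and apply manifests
        run: |
          # KUBE_CONFIG_STAGING is a hypothetical secret holding cluster access
          echo "${{ secrets.KUBE_CONFIG_STAGING }}" > kubeconfig
          export KUBECONFIG=./kubeconfig
          for f in k8s/*.yaml; do
            envsubst < "$f" | kubectl apply -f -
          done

The production leg would be the same job pointed at DEPLOY_ENV=prod, gated behind a protected environment with required reviewers; that's where the explicit approval lives.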
Step Four: Auto-Scaling When It Matters
Even tiny servers need to breathe.
We wired up Horizontal Pod Autoscalers (HPA) to elastically grow when things got hot:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ${DEPLOY_ENV}-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ${DEPLOY_ENV}-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70
1 pod normally. Up to 5 when things get spicy.
Autoscaling is not just for "big infra." It is about resilience under unpredictable load. If your platform cannot react to change without human help, you are not really secure.
Step Five: AWS ECR and Secrets Hygiene
All our containers are built and pushed to AWS ECR.
ECR credentials expire every 12 hours. Rather than manually refreshing logins and risking outages, we automate ECR secret refreshing every 6 hours with a Kubernetes CronJob:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ecr-secret-refresh
spec:
  schedule: "0 */6 * * *"   # runs every 6 hours
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: default
          restartPolicy: OnFailure   # required for Job pods; retry on failure
          containers:
            - name: refresh-ecr
              # note: the image also needs kubectl available on PATH
              image: amazon/aws-cli:latest
              command:
                - /bin/sh
                - -c
                - |
                  # Get fresh ECR credentials (tokens are valid for 12 hours)
                  TOKEN=$(aws ecr get-login-password --region us-east-1)
                  # Recreate the pull secret idempotently with the new token
                  kubectl create secret docker-registry aws-ecr-readonly-secret \
                    --docker-server=<ECR_REPO> \
                    --docker-username=AWS \
                    --docker-password="$TOKEN" \
                    -n default \
                    --dry-run=client -o yaml | kubectl apply -f -
Secrets rotation is boring until you forget to do it and everything breaks. Automate your security basics.
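And for completeness, the pods have to actually reference that secret. Here's roughly what that looks like in the Deployment's pod spec (a sketch; the Deployment in Step Two would carry this stanza, or you can patch the default service account instead):

# Pod spec excerpt (illustrative): point kubelet at the rotated pull secret
# so it can authenticate against ECR when pulling images.
spec:
  template:
    spec:
      imagePullSecrets:
        - name: aws-ecr-readonly-secret
      containers:
        - name: ${DEPLOY_ENV}-app
          image: <ECR_REPO>/platform-security-${DEPLOY_ENV}-app:latest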
Building for Survival
Constraint-driven architecture is not about saving $20 per month. It is about survival discipline.
When you design for minimal attack surfaces, prioritize edge-first defenses, establish clear trust boundaries, and ensure zero-drama deployments, resilience becomes a natural side effect. You do not "add security later"; you build it in from day one.
When money is tight, every watt, every packet, and every permission matters.
Our $4 VPS took hundreds of thousands of visitors without breaking because we treated it seriously from the start.
Was it necessary? No.
Was it fun? Absolutely.
P.S. I promise this one wasn't written by AI (lol).
"When money is tight, discipline gets real."