When you're a consulting company helping clients build infrastructure, there's an obvious question: what does YOUR infrastructure look like?

This is the story of how we built DataLux's production platform—running on our own hardware in Houston, serving datalux.dev to you right now. We'll cover the architecture decisions, real costs, lessons learned, and why we chose self-hosting over cloud.

This isn't a theoretical blog post. This is the actual system powering our business, with real numbers and real tradeoffs. If you're considering self-hosted infrastructure for your business, this case study will show you exactly what it takes.

Why Self-Host? The Decision Framework

Before we dive into the technical details, let's address the obvious question: why not just use AWS, Azure, or Vercel like everyone else?

The Cloud Cost Reality Check

We ran the numbers for hosting our platform (website, API, database, email service) on popular cloud providers:

Annual cost: $960-$3,000 depending on provider and growth.

Meanwhile, our self-hosted setup:

Annual cost: about $192 after year one; measured against an AWS-sized bill, the upfront hardware pays for itself by month 3-4.

Over 3 years, cloud hosting would cost us $2,880-$9,000. Our self-hosted setup costs $1,268 total.

Savings: roughly $1,600-$7,700 over 3 years, depending on the provider. For a consulting business, that's real money.

But It's Not Just About Cost

The financial ROI was compelling, but we had other motivations:

  • Credibility: when we recommend self-hosted infrastructure to clients, we can point at the exact stack running our own business and this very page
  • Learning: owning every layer, from the hardware to DNS, keeps our consulting advice grounded in day-to-day operational experience
  • Data sovereignty: contact submissions and subscriber data sit on hardware we control, not on someone else's platform
  • Predictable costs: a one-time CAPEX purchase instead of a usage-based bill that grows with traffic

When Self-Hosting Makes Sense

Good fit if you:

  • Have technical expertise in systems administration
  • Need predictable costs (CAPEX vs OPEX preference)
  • Have stable, predictable traffic patterns
  • Value data sovereignty and control
  • Can handle your own backups and disaster recovery

Bad fit if you:

  • Need auto-scaling for unpredictable traffic spikes
  • Lack in-house technical expertise
  • Need multi-region deployment from day one
  • Prioritize "someone else's problem" over cost savings
  • Are pre-revenue and need to move fast without infrastructure work

The Technology Stack

Here's what we built with and why we chose each component:

Docker Compose

Container orchestration. Simple, reliable, perfect for single-server deployments.

FastAPI

Python backend framework. Fast, modern, excellent for APIs with automatic OpenAPI docs.

PostgreSQL

Production database. Rock-solid, feature-rich, perfect for relational data.

nginx

Reverse proxy and static file server. Industry standard for good reason.

Cloudflare Tunnel

Secure external access without port forwarding. Replaces traditional VPN + dynamic DNS.

Ubuntu Server

Base operating system. Stable, well-documented, LTS support.

Why These Choices?

Docker Compose over Kubernetes: K8s is overkill for a single-server setup. Docker Compose gives us reproducible deployments, easy rollbacks, and isolated environments without the complexity overhead.

FastAPI over Node/Django: We're a Python shop, and FastAPI's async capabilities and automatic API documentation made it perfect for our API-first architecture.

PostgreSQL over MySQL/MongoDB: Postgres handles both relational and JSON data beautifully. The ecosystem is mature, and it's what most clients use anyway.

nginx over alternatives: Battle-tested, performant, extensive documentation. Can serve static files and reverse proxy equally well.

Cloudflare Tunnel over traditional VPN: This was the game-changer. No port forwarding, no dynamic DNS updates, automatic SSL/TLS, built-in DDoS protection. Just works.

The Architecture: How It All Fits Together

Here's the actual architecture running at datalux.dev right now:

Internet Traffic
      ↓
[Cloudflare Network]
  - DDoS Protection
  - SSL/TLS Termination
  - Global CDN
  - DNS Management
      ↓
[Cloudflare Tunnel] (cloudflared daemon on server)
  - Encrypted outbound connection
  - No inbound ports exposed
  - Automatic failover
      ↓
[nginx Container] :80
  - Reverse proxy
  - Static file serving
  - Request routing
      ↓
      ├─→ [Static HTML/CSS/JS] → Website content
      │
      ├─→ [FastAPI Container] :8000
      │     - REST API endpoints
      │     - Business logic
      │     - Form processing
      │          ↓
      │     [PostgreSQL Container] :5432
      │       - Contact submissions
      │       - Newsletter subscribers
      │       - Analytics data
      │
      └─→ [Email Service Integration]
            - SendGrid API
            - Contact notifications
            - Newsletter delivery

[Backup System]
  - Daily PostgreSQL dumps
  - Weekly full system snapshots
  - Offsite encrypted backups
                

Traffic Flow Explained

  1. User visits datalux.dev: DNS resolves to Cloudflare's network, not our IP
  2. Cloudflare receives request: Handles SSL/TLS, checks for DDoS patterns, serves cached static content when possible
  3. Cloudflare Tunnel routes to server: Encrypted connection over outbound tunnel (no inbound firewall rules needed)
  4. nginx receives request: Routes based on path—static files served directly, API requests proxied to FastAPI
  5. FastAPI processes dynamic requests: Contact form, newsletter signup, etc.
  6. PostgreSQL stores data: Contact submissions, subscriber info, usage analytics
  7. Response flows back: FastAPI → nginx → Cloudflare Tunnel → Cloudflare → User
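
You can watch most of this flow from the outside with nothing but curl. A quick sketch; it assumes the /api/health endpoint we use for deploy checks is also reachable publicly, and the headers shown are the standard ones Cloudflare adds:

# Static page: served by nginx through Cloudflare
# (look for "server: cloudflare" and a "cf-ray" header)
curl -sI https://datalux.dev/ | grep -iE 'server|cf-ray'

# Dynamic request: nginx proxies this path to the FastAPI container
curl -s https://datalux.dev/api/health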

Implementation Details: The Docker Compose Setup

Let's look at the actual implementation. Here's our production docker-compose.yml structure (simplified for clarity):

version: '3.8'

services:
  nginx:
    image: nginx:alpine
    container_name: datalux-nginx
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - api
    restart: unless-stopped
    networks:
      - datalux-network

  api:
    build: ./backend
    container_name: datalux-api
    environment:
      - DATABASE_URL=postgresql://datalux_user:${DB_PASSWORD}@db:5432/datalux
      - SENDGRID_API_KEY=${SENDGRID_API_KEY}
      - ENVIRONMENT=production
    depends_on:
      - db
    restart: unless-stopped
    networks:
      - datalux-network

  db:
    image: postgres:15-alpine
    container_name: datalux-db
    environment:
      - POSTGRES_DB=datalux
      - POSTGRES_USER=datalux_user
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./backups:/backups
    restart: unless-stopped
    networks:
      - datalux-network

  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: datalux-tunnel
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    restart: unless-stopped
    networks:
      - datalux-network

volumes:
  postgres-data:
    driver: local

networks:
  datalux-network:
    driver: bridge
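
With that file in place, day-to-day operation is a couple of commands. A minimal sketch, assuming a .env file sits alongside docker-compose.yml with DB_PASSWORD, SENDGRID_API_KEY, and CLOUDFLARE_TUNNEL_TOKEN defined:

# Build images and start everything in the background
docker-compose up -d --build

# Confirm all four containers are running with their restart policies applied
docker-compose ps

# Tail an individual service's logs, e.g. the tunnel
docker-compose logs -f cloudflared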

Key Configuration Decisions

Alpine-based images: Smaller attack surface, faster pulls, lower memory footprint. Production nginx runs in ~10MB RAM.

Named volumes for database: Persists data across container restarts and upgrades. Backup-friendly.

Environment variables for secrets: Never commit credentials. Load from .env file that's gitignored.

restart: unless-stopped: Automatic recovery from crashes or server reboots. Containers come back up without manual intervention.

Custom network: Isolated internal communication. Only nginx exposed to Cloudflare Tunnel.
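
On the secrets point, a .env file next to docker-compose.yml is all it takes; the values below are placeholders, not real credentials:

# .env (gitignored) -- placeholder values only
cat > .env <<'EOF'
DB_PASSWORD=change-me
SENDGRID_API_KEY=SG.xxxxxxxxxxxx
CLOUDFLARE_TUNNEL_TOKEN=paste-token-from-dashboard
EOF

# Keep it out of version control and verify the rule matches
echo ".env" >> .gitignore
git check-ignore -v .env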

Cloudflare Tunnel: The Magic Sauce

This deserves special attention because it's what makes self-hosting practical without complex networking.

The Old Way (Painful)

  • Open and forward ports on your router, exposing your home IP to the internet
  • Keep dynamic DNS updated every time your residential IP changes
  • Manage SSL/TLS certificates yourself, with Let's Encrypt renewals and DNS challenges
  • Run a VPN like WireGuard just to reach the box safely for administration

The Cloudflare Tunnel Way (Easy)

  1. Create a Cloudflare Tunnel in the dashboard (5 minutes)
  2. Copy the tunnel token
  3. Add tunnel token to docker-compose as environment variable
  4. Start cloudflared container
  5. Configure DNS to point to the tunnel

That's it. Cloudflare handles SSL/TLS automatically, provides DDoS protection, caches static content globally, and hides your actual IP address. No inbound firewall rules needed.
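
If you'd rather drive it from the CLI than the dashboard, the same setup is a handful of cloudflared commands. A rough sketch; the tunnel name and hostname are placeholders, and you still map the hostname to the nginx container with an ingress rule (config file or dashboard):

# Authenticate cloudflared against your Cloudflare account
cloudflared tunnel login

# Create a named tunnel (name is a placeholder)
cloudflared tunnel create datalux-prod

# Point a public hostname at the tunnel (creates the DNS record for you)
cloudflared tunnel route dns datalux-prod www.example.com

# Run it locally to test; in production the cloudflared container does this
cloudflared tunnel run datalux-prod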

The cloudflared daemon establishes an outbound connection to Cloudflare's network. Traffic flows through this encrypted tunnel. To the outside world, your server doesn't exist—only Cloudflare's network is visible.

Lesson Learned: Don't Overthink Networking

We initially planned a complex setup with WireGuard VPN, Traefik reverse proxy, and Let's Encrypt automation, then spent two days fighting DNS challenges and certificate renewals.

Switched to Cloudflare Tunnel. Had it working in 20 minutes. Sometimes the modern solution really is better than the "traditional" way.

Security Considerations

Self-hosting means you're responsible for security. Here's what we implemented:

Network Layer

  • No inbound ports open at the firewall; the Cloudflare Tunnel only makes outbound connections
  • DDoS filtering and TLS termination happen at Cloudflare's edge before traffic ever reaches us
  • Containers communicate over an isolated Docker bridge network, with only nginx reachable from the tunnel

Application Layer

  • Secrets (database password, API keys, tunnel token) injected as environment variables from a gitignored .env file
  • Minimal Alpine-based images to keep the attack surface and dependency count small
  • The API and database are never exposed directly; everything is proxied through nginx

Data Layer

  • PostgreSQL listens only on the internal Docker network, never on a public interface
  • Daily database dumps and weekly full system snapshots
  • Offsite copies are encrypted before they leave the server
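
As a concrete example, the daily dump can be a short cron-driven script along these lines; this is a sketch rather than our exact script, and the GPG recipient and rclone remote names are placeholders:

#!/usr/bin/env bash
# Daily PostgreSQL dump: compress, encrypt, ship offsite
set -euo pipefail
cd /opt/datalux

STAMP=$(date +%F)
mkdir -p backups

# Dump from inside the db container, compress on the host
docker-compose exec -T db pg_dump -U datalux_user datalux \
  | gzip > "backups/datalux-$STAMP.sql.gz"

# Encrypt before it leaves the machine (recipient key is a placeholder)
gpg --encrypt --recipient backups@datalux.dev "backups/datalux-$STAMP.sql.gz"

# Ship only the encrypted copy to Backblaze B2 (remote name is a placeholder)
rclone copy "backups/datalux-$STAMP.sql.gz.gpg" b2:datalux-db-backups/
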
The Real Costs: Full Breakdown

Let's talk numbers. Here's what it actually cost to build and run this infrastructure:

| Item | One-Time Cost | Annual Cost | Notes |
|---|---|---|---|
| Mini PC Hardware | $600 | - | Refurbished Dell OptiPlex, 16GB RAM, 512GB SSD |
| External Backup Drive | $80 | - | 2TB encrypted drive for local backups |
| Domain Registration | $12 | $12 | .dev domain through Google Domains |
| Electricity | - | $120 | Budgeted at roughly $10/month for the server's share of power |
| Cloudflare (Free Tier) | $0 | $0 | DNS, CDN, Tunnel all free |
| SendGrid (Free Tier) | $0 | $0 | 100 emails/day sufficient for contact forms |
| Cloud Backup Storage | - | $60 | Backblaze B2, ~50GB encrypted backups |
| Development Time | $0* | - | *Learning investment, not out-of-pocket |
| Total | $692 | $192 | Year 1 total: $884 |

Cost Comparison: Self-Hosted vs Cloud (3-Year TCO)

| Approach | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|
| Self-Hosted (Our Setup) | $884 | $192 | $192 | $1,268 |
| AWS (t3.medium + RDS) | $2,400 | $2,400 | $2,400 | $7,200 |
| Vercel + Supabase | $1,200 | $1,200 | $1,200 | $3,600 |
| Savings vs AWS | $1,516 | $2,208 | $2,208 | $5,932 |

Over three years, self-hosting saves us nearly $6,000 compared to AWS. Even compared to budget-friendly serverless options, we save $2,300+.

For a bootstrapped consulting company, that's a meaningful amount. But more importantly, we own the entire stack and learned everything from scratch.

Deployment Workflow: How We Ship Updates

Here's our actual deployment process for pushing updates to production:

1. Development and Testing

# Work on feature branch locally
git checkout -b feature/new-blog-post

# Make changes, test locally with docker-compose
docker-compose up --build

# Commit and push to GitHub
git add .
git commit -m "Add new blog post about infrastructure"
git push origin feature/new-blog-post

2. Merge to Main

# After review, merge to main branch
git checkout main
git merge feature/new-blog-post
git push origin main

3. Deploy to Production

# SSH into production server
ssh datalux-prod

# Pull latest code
cd /opt/datalux
git pull origin main

# Rebuild and restart containers (only changed containers are recreated; a few seconds of interruption at most)
docker-compose up -d --build

# Verify health
curl http://localhost/api/health

Total deployment time: 2-3 minutes from git push to live.
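
When retyping those commands gets old, the whole flow collapses into one small script run from a laptop. A sketch under the same assumptions as above (SSH alias datalux-prod, code in /opt/datalux, health endpoint at /api/health):

#!/usr/bin/env bash
# deploy.sh: update production to the current main branch and verify it
set -euo pipefail

ssh datalux-prod 'bash -s' <<'EOF'
set -euo pipefail
cd /opt/datalux
git pull origin main
docker-compose up -d --build

# Give containers a moment to start, then hit the health endpoint
sleep 5
curl -fsS http://localhost/api/health && echo "deploy OK"
EOF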

Rollback Procedure

If something breaks:

# Revert to previous git commit
git reset --hard HEAD~1

# Rebuild containers with old code
docker-compose up -d --build

Rollback time: Under 60 seconds.

Lesson Learned: You Don't Need Complex CI/CD (Yet)

Initially planned to set up GitHub Actions, automated testing, and complex deployment pipelines. Realized this was over-engineering for a single-server setup with minimal traffic.

Simple SSH deploy works perfectly fine. When we scale to multiple servers or team members, we'll add automation. But don't let tooling complexity delay shipping.

Performance and Uptime

Let's talk real numbers from our monitoring:

Response Times (30-Day Average)

Uptime and Reliability

For a self-hosted setup running on a single server, 99.7% uptime is respectable. We're not running a bank or hospital—a few hours of downtime per quarter is acceptable for our use case.

Resource Utilization

Current usage on our 16GB / 512GB mini PC is a small fraction of what the hardware can deliver. We're nowhere near capacity; this setup could easily handle 10-20x our current traffic.

Lessons Learned: What We'd Do Differently

1. Buy a UPS from Day One

We lost 2 hours of uptime during a Houston thunderstorm. A $100 UPS would have prevented this entirely. It's now on the shopping list.

2. Automate Backup Verification Earlier

We tested our first backup restore in week 3. Should have been day 1. Backups you haven't tested are just files that make you feel better.
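
Verification doesn't need to be fancy: restore the latest dump into a scratch database and make sure the data is actually there. A sketch that pairs with the backup script earlier; the table in the sanity check is a placeholder:

#!/usr/bin/env bash
# Restore the newest dump into a throwaway database and sanity-check it
set -euo pipefail
cd /opt/datalux

LATEST=$(ls -t backups/datalux-*.sql.gz | head -n 1)

# Create a scratch database and load the dump
docker-compose exec -T db createdb -U datalux_user restore_test
gunzip -c "$LATEST" | docker-compose exec -T db psql -U datalux_user -d restore_test

# Placeholder sanity check: confirm rows actually came back
docker-compose exec -T db psql -U datalux_user -d restore_test \
  -c "SELECT count(*) FROM contact_submissions;"

# Clean up
docker-compose exec -T db dropdb -U datalux_user restore_test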

3. Start with Monitoring from the Beginning

We added proper monitoring (Prometheus + Grafana) in month 2. Wish we had it from the start. You can't optimize what you don't measure.
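
Even before a full Prometheus + Grafana stack, a cron job and the tools already on the box cover the basics. A sketch of the kind of check worth scheduling every few minutes; it assumes cron's MAILTO (or a log you actually read) is how the alert reaches you:

#!/usr/bin/env bash
# Minimal monitoring: complain if the API is down, keep a rolling resource log
set -euo pipefail
cd /opt/datalux

LOG=/opt/datalux/health.log

# Alert path: cron mails any output if MAILTO is set
curl -fsS --max-time 10 http://localhost/api/health > /dev/null \
  || echo "ALERT: /api/health failed at $(date -Is)"

# Quiet path: append a snapshot of container CPU/memory and disk usage
{
  date -Is
  docker stats --no-stream --format '{{.Name}} {{.CPUPerc}} {{.MemUsage}}'
  df -h / | tail -n 1
} >> "$LOG"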

4. Document Everything As You Build

Future you will forget why you made certain decisions. Document your architecture, configurations, and workarounds in a README immediately.

5. Don't Optimize Prematurely

Our first version had database connection pooling, Redis caching, and complex nginx configurations. Removed 90% of it because our traffic didn't justify the complexity.

Start simple, measure, optimize only what's actually slow.

Is Self-Hosting Right for Your Business?

After building and running this infrastructure for several months, here's our honest assessment:

Self-Hosting is Great If:

  • You have (or are willing to build) in-house systems administration skills
  • Your traffic is stable and predictable
  • You prefer a one-time CAPEX purchase over a monthly bill that grows with usage
  • You care about data sovereignty and owning the full stack
  • You can take responsibility for your own backups and disaster recovery

Use Cloud Hosting If:

  • You need auto-scaling for unpredictable traffic spikes
  • You need multi-region deployment from day one
  • You're pre-revenue and need to ship without spending time on infrastructure
  • You'd rather pay a premium to make infrastructure someone else's problem

Hybrid Approach (What We Recommend for Most)

For many Houston businesses, the best answer is hybrid: self-host the steady, predictable workloads (website, API, database) on your own hardware, and keep cloud services where they genuinely earn their cost, the way we use Cloudflare for DNS/CDN, SendGrid for email delivery, and Backblaze B2 for offsite backup storage.

This gives you cost savings where it makes sense and cloud benefits where you need them.

We Can Build This for Your Business

This case study shows what's possible with modern self-hosted infrastructure. The setup we built for ourselves is the same one we can build for your Houston business.

What you get:

  • Hardware selection, procurement, and setup
  • A containerized application stack (Docker Compose, nginx, your application, PostgreSQL)
  • Cloudflare Tunnel configuration for secure external access
  • Backup and recovery procedures, tested end to end
  • Documentation and hands-on training for your team

Typical investment: $8,000-$15,000 for full setup (hardware, software, configuration, documentation, training)

Typical savings: $1,500-$3,000 annually compared to equivalent cloud hosting, paying for itself in 3-5 years while giving you complete control.

Final Thoughts

Building DataLux's infrastructure on self-hosted hardware was one of the best technical decisions we made. We saved money, learned valuable skills, and created something we can demonstrate to clients.

Is it for everyone? No. It requires technical knowledge, ongoing maintenance, and accepting slightly lower uptime than cloud services.

But for the right business—one with technical expertise, predictable traffic, and a desire for cost control—self-hosting is incredibly powerful.

The cloud isn't going anywhere, and we still use it for certain workloads. But self-hosted infrastructure deserves a place in your decision-making process, especially as cloud costs continue to rise.

We're running this infrastructure in production, right now, serving you this blog post. That's not a proof of concept. That's confidence.

Want to Build Your Own Self-Hosted Infrastructure?
Let's discuss your specific needs and determine if self-hosting makes sense for your Houston business. We'll provide a detailed cost-benefit analysis and implementation roadmap.

Schedule a Free Infrastructure Consultation →