When you're a consulting company helping clients build infrastructure, there's an obvious question: what does YOUR infrastructure look like?
This is the story of how we built DataLux's production platform—running on our own hardware in Houston, serving datalux.dev to you right now. We'll cover the architecture decisions, real costs, lessons learned, and why we chose self-hosting over cloud.
This isn't a theoretical blog post. This is the actual system powering our business, with real numbers and real tradeoffs. If you're considering self-hosted infrastructure for your business, this case study will show you exactly what it takes.
Why Self-Host? The Decision Framework
Before we dive into the technical details, let's address the obvious question: why not just use AWS, Azure, or Vercel like everyone else?
The Cloud Cost Reality Check
We ran the numbers for hosting our platform (website, API, database, email service) on popular cloud providers:
- AWS: $180-$250/month for equivalent resources (t3.medium instance, RDS PostgreSQL, load balancer, data transfer)
- Azure: $170-$230/month (similar configuration)
- Vercel + managed DB: $80-$150/month (serverless functions + Supabase or similar)
Annual cost: $960-$3,000 depending on provider and growth.
Meanwhile, our self-hosted setup:
- Hardware: $600 one-time (refurbished mini PC with 16GB RAM, 512GB SSD)
- Electricity: ~$10/month budgeted (the mini PC itself draws about 20W under load)
- Internet: $0 additional (already had business internet)
- Domain/DNS: $12/year for .dev domain
- Cloudflare Tunnel: $0 (free tier is more than sufficient)
Annual cost: roughly $132 after year one ($120 electricity + $12 domain), with break-even against cloud hosting around month 3-4.
Over 3 years, cloud hosting would cost us $2,880-$9,000. Our self-hosted setup costs roughly $996 total ($600 hardware plus about $132/year in ongoing costs; the fuller breakdown later in this post adds backup hardware and offsite storage).
Savings: roughly $1,900-$8,000 over 3 years. For a consulting business, that's real money.
But It's Not Just About Cost
The financial ROI was compelling, but we had other motivations:
- Practice what we preach: We help clients build infrastructure. We should understand every layer of the stack, not just abstract cloud services.
- Learning value: Self-hosting forces you to understand networking, security, reverse proxies, SSL/TLS, and systems administration. These skills make us better consultants.
- Client demonstration: "We run our production systems the same way we'd build yours" is a powerful sales message.
- Control and flexibility: No vendor lock-in, no surprise pricing changes, no service deprecations.
- Data sovereignty: All our data stays on hardware we control in Houston. No compliance questions about cloud regions.
When Self-Hosting Makes Sense
Good fit if you:
- Have technical expertise in systems administration
- Need predictable costs (CAPEX vs OPEX preference)
- Have stable, predictable traffic patterns
- Value data sovereignty and control
- Can handle your own backups and disaster recovery
Bad fit if you:
- Need auto-scaling for unpredictable traffic spikes
- Lack in-house technical expertise
- Need multi-region deployment from day one
- Prioritize "someone else's problem" over cost savings
- Are pre-revenue and need to move fast without infrastructure work
The Technology Stack
Here's what we built with and why we chose each component:
Docker Compose
Container orchestration. Simple, reliable, perfect for single-server deployments.
FastAPI
Python backend framework. Fast, modern, excellent for APIs with automatic OpenAPI docs.
PostgreSQL
Production database. Rock-solid, feature-rich, perfect for relational data.
nginx
Reverse proxy and static file server. Industry standard for good reason.
Cloudflare Tunnel
Secure external access without port forwarding. Replaces traditional VPN + dynamic DNS.
Ubuntu Server
Base operating system. Stable, well-documented, LTS support.
Why These Choices?
Docker Compose over Kubernetes: K8s is overkill for a single-server setup. Docker Compose gives us reproducible deployments, easy rollbacks, and isolated environments without the complexity overhead.
FastAPI over Node/Django: We're a Python shop, and FastAPI's async capabilities and automatic API documentation made it perfect for our API-first architecture.
PostgreSQL over MySQL/MongoDB: Postgres handles both relational and JSON data beautifully. The ecosystem is mature, and it's what most clients use anyway.
nginx over alternatives: Battle-tested, performant, extensive documentation. Can serve static files and reverse proxy equally well.
Cloudflare Tunnel over traditional VPN: This was the game-changer. No port forwarding, no dynamic DNS updates, automatic SSL/TLS, built-in DDoS protection. Just works.
The Architecture: How It All Fits Together
Here's the actual architecture running at datalux.dev right now:
```
Internet Traffic
      ↓
[Cloudflare Network]
  - DDoS Protection
  - SSL/TLS Termination
  - Global CDN
  - DNS Management
      ↓
[Cloudflare Tunnel]  (cloudflared daemon on server)
  - Encrypted outbound connection
  - No inbound ports exposed
  - Automatic failover
      ↓
[nginx Container] :80
  - Reverse proxy
  - Static file serving
  - Request routing
      ↓
  ├─→ [Static HTML/CSS/JS] → Website content
  │
  ├─→ [FastAPI Container] :8000
  │     - REST API endpoints
  │     - Business logic
  │     - Form processing
  │         ↓
  │     [PostgreSQL Container] :5432
  │       - Contact submissions
  │       - Newsletter subscribers
  │       - Analytics data
  │
  └─→ [Email Service Integration]
        - SendGrid API
        - Contact notifications
        - Newsletter delivery

[Backup System]
  - Daily PostgreSQL dumps
  - Weekly full system snapshots
  - Offsite encrypted backups
```
Traffic Flow Explained
- User visits datalux.dev: DNS resolves to Cloudflare's network, not our IP
- Cloudflare receives request: Handles SSL/TLS, checks for DDoS patterns, serves cached static content when possible
- Cloudflare Tunnel routes to server: Encrypted connection over outbound tunnel (no inbound firewall rules needed)
- nginx receives request: routes by path; static files are served directly, API requests are proxied to FastAPI (see the routing sketch after this list)
- FastAPI processes dynamic requests: Contact form, newsletter signup, etc.
- PostgreSQL stores data: Contact submissions, subscriber info, usage analytics
- Response flows back: FastAPI → nginx → Cloudflare Tunnel → Cloudflare → User
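The path-based split in step 4 is the whole job of the nginx container. The sketch below shows roughly what that routing looks like, written as a provisioning step so it sits next to the compose file; the upstream name `api` matches the compose service defined in the next section, but the directives are a simplified stand-in for our production nginx.conf, not a copy of it.

```bash
# A minimal nginx.conf sketch: static files from disk, /api/* proxied to FastAPI.
# Written via heredoc so it can be dropped into a provisioning script.
cat > nginx.conf <<'EOF'
events {}
http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Static site: HTML/CSS/JS served straight from the mounted ./html directory
    location / {
      root  /usr/share/nginx/html;
      index index.html;
    }

    # Dynamic requests: proxy /api/* to the FastAPI container on the Docker network
    location /api/ {
      proxy_pass http://api:8000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
EOF
```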
Implementation Details: The Docker Compose Setup
Let's look at the actual implementation. Here's our production docker-compose.yml structure (simplified for clarity):
```yaml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    container_name: datalux-nginx
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - api
    restart: unless-stopped
    networks:
      - datalux-network

  api:
    build: ./backend
    container_name: datalux-api
    environment:
      - DATABASE_URL=postgresql://datalux_user:${DB_PASSWORD}@db:5432/datalux
      - SENDGRID_API_KEY=${SENDGRID_API_KEY}
      - ENVIRONMENT=production
    depends_on:
      - db
    restart: unless-stopped
    networks:
      - datalux-network

  db:
    image: postgres:15-alpine
    container_name: datalux-db
    environment:
      - POSTGRES_DB=datalux
      - POSTGRES_USER=datalux_user
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./backups:/backups
    restart: unless-stopped
    networks:
      - datalux-network

  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: datalux-tunnel
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${CLOUDFLARE_TUNNEL_TOKEN}
    restart: unless-stopped
    networks:
      - datalux-network

volumes:
  postgres-data:
    driver: local

networks:
  datalux-network:
    driver: bridge
```
Key Configuration Decisions
Alpine-based images: Smaller attack surface, faster pulls, lower memory footprint. Production nginx runs in ~10MB RAM.
Named volumes for database: Persists data across container restarts and upgrades. Backup-friendly.
Environment variables for secrets: Never commit credentials. Load them from a .env file that's gitignored (a short bootstrap sketch follows this list).
restart: unless-stopped: Automatic recovery from crashes or server reboots. Containers come back up without manual intervention.
Custom network: Isolated internal communication. Only nginx exposed to Cloudflare Tunnel.
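As a concrete illustration of the secrets point above, here's roughly what bootstrapping that .env file looks like. The variable names come straight from the compose file; the values are placeholders, and docker-compose picks the file up automatically from the project directory.

```bash
# Create the git-ignored .env that docker-compose reads automatically
# (values are placeholders, not real credentials)
cat > .env <<'EOF'
DB_PASSWORD=change-me
SENDGRID_API_KEY=SG.replace-me
CLOUDFLARE_TUNNEL_TOKEN=replace-me
EOF

# Lock it down and make sure it never lands in git
chmod 600 .env
grep -qxF '.env' .gitignore || echo '.env' >> .gitignore
```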
Cloudflare Tunnel: The Magic Sauce
This deserves special attention because it's what makes self-hosting practical without complex networking.
The Old Way (Painful)
- Configure router for port forwarding (ports 80 and 443)
- Set up dynamic DNS to track changing residential IP
- Configure firewall rules and security groups
- Manage Let's Encrypt SSL certificates manually
- Deal with ISP blocking common ports
- No DDoS protection (your home IP is exposed)
- VPN for secure remote management
The Cloudflare Tunnel Way (Easy)
- Create a Cloudflare Tunnel in the dashboard (5 minutes)
- Copy the tunnel token
- Add tunnel token to docker-compose as environment variable
- Start cloudflared container
- Configure DNS to point to the tunnel
That's it. Cloudflare handles SSL/TLS automatically, provides DDoS protection, caches static content globally, and hides your actual IP address. No inbound firewall rules needed.
The cloudflared daemon establishes an outbound connection to Cloudflare's network. Traffic flows through this encrypted tunnel. To the outside world, your server doesn't exist—only Cloudflare's network is visible.
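If you prefer the CLI to the dashboard, the same setup looks roughly like this. The tunnel name and hostname are ours; the exact subcommands and token handling can vary a little between cloudflared releases, so treat this as a sketch rather than a recipe.

```bash
# Authenticate cloudflared against your Cloudflare account (opens a browser)
cloudflared tunnel login

# Create a named tunnel and point the site's DNS at it
cloudflared tunnel create datalux
cloudflared tunnel route dns datalux datalux.dev

# Recent cloudflared versions can print the tunnel token; store it in .env
# as CLOUDFLARE_TUNNEL_TOKEN so the compose service above can run it
cloudflared tunnel token datalux
```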
Lesson Learned: Don't Overthink Networking
I initially planned a complex setup with a WireGuard VPN, Traefik reverse proxy, and Let's Encrypt automation. Spent two days fighting DNS challenges and certificate renewals.
Switched to Cloudflare Tunnel. Had it working in 20 minutes. Sometimes the modern solution really is better than the "traditional" way.
Security Considerations
Self-hosting means you're responsible for security. Here's what we implemented:
Network Layer
- Firewall default deny: UFW blocks all inbound traffic except SSH (key-only; a hardening sketch follows this list)
- No exposed web ports: Cloudflare Tunnel means ports 80/443 aren't open to the internet
- SSH hardening: Key-only authentication, non-standard port, fail2ban for brute force protection
- Automatic security updates: Unattended-upgrades for Ubuntu security patches
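Here's a condensed sketch of that hardening pass on a fresh Ubuntu install. The SSH port (2222) is just an example of a non-standard port, and details like fail2ban jail tuning and sshd socket activation vary by Ubuntu release, so adapt before copying.

```bash
# Firewall: deny everything inbound except SSH on a non-standard port (2222 is an example)
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp
sudo ufw enable

# SSH: keys only, no passwords, move off port 22
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# Brute-force protection and automatic security patches
sudo apt-get install -y fail2ban unattended-upgrades
sudo systemctl enable --now fail2ban
sudo dpkg-reconfigure -plow unattended-upgrades
```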
Application Layer
- Environment-based secrets: No credentials in code or configs. All loaded from .env files
- Database isolation: Postgres not exposed outside Docker network, no external access
- Input validation: FastAPI with Pydantic models validates all API inputs
- Rate limiting: Cloudflare handles rate limiting and bot protection
- CORS restrictions: browsers are only allowed to call the API from the datalux.dev origin; other origins get no CORS headers (a quick check follows this list)
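CORS is enforced by the browser rather than the network, so the useful check is whether the API hands out Access-Control-Allow-Origin for origins it shouldn't. A quick way to spot-check it from the command line (the /api/health path is the same one we use for deploy checks):

```bash
# An allowed origin should get an Access-Control-Allow-Origin header back...
curl -s -D - -o /dev/null -H 'Origin: https://datalux.dev' \
  https://datalux.dev/api/health | grep -i access-control-allow-origin

# ...and an unknown origin should get none (grep prints nothing)
curl -s -D - -o /dev/null -H 'Origin: https://not-our-site.example' \
  https://datalux.dev/api/health | grep -i access-control-allow-origin
```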
Data Layer
- Daily encrypted backups: Automated Postgres dumps to an encrypted external drive (see the backup sketch after this list)
- Offsite backup sync: Weekly encrypted backups to cloud storage (ironic, but necessary)
- Backup testing: Monthly restore drills to verify backup integrity
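The nightly dump is a short cron job. Below is a trimmed-down sketch that assumes the container and user names from the compose file, a backup drive mounted at /mnt/backup, a passphrase file at /root/.backup-passphrase, and an rclone remote for Backblaze B2; those paths and names are illustrative, not our exact ones.

```bash
#!/usr/bin/env bash
# Nightly backup sketch: dump, compress, encrypt, prune, optionally sync offsite.
set -euo pipefail

STAMP=$(date +%F)
DUMP="/mnt/backup/datalux-${STAMP}.sql.gz"

# Dump the database from inside the running container
docker exec datalux-db pg_dump -U datalux_user datalux | gzip > "${DUMP}"

# Encrypt with a passphrase kept outside the repo, then drop the plaintext dump
gpg --batch --yes --pinentry-mode loopback \
    --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-passphrase \
    --output "${DUMP}.gpg" "${DUMP}"
rm "${DUMP}"

# Weekly offsite sync to Backblaze B2 (rclone remote name is illustrative)
# rclone copy /mnt/backup b2-datalux:datalux-backups

# Keep only the 14 most recent local dumps
ls -1t /mnt/backup/datalux-*.sql.gz.gpg | tail -n +15 | xargs -r rm
```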
The Real Costs: Full Breakdown
Let's talk numbers. Here's what it actually cost to build and run this infrastructure:
| Item | One-Time Cost | Annual Cost | Notes |
|---|---|---|---|
| Mini PC Hardware | $600 | - | Refurbished Dell OptiPlex, 16GB RAM, 512GB SSD |
| External Backup Drive | $80 | - | 2TB encrypted drive for local backups |
| Domain Registration | $12 | $12 | .dev domain through Google Domains |
| Electricity | - | $120 | Budgeted at ~$10/month for 24/7 operation |
| Cloudflare (Free Tier) | $0 | $0 | DNS, CDN, Tunnel all free |
| SendGrid (Free Tier) | $0 | $0 | 100 emails/day sufficient for contact forms |
| Cloud Backup Storage | - | $60 | Backblaze B2, ~50GB encrypted backups |
| Development Time | $0* | - | *Learning investment, not out-of-pocket |
| Total | $692 | $192 | Year 1 total: $884 |
Cost Comparison: Self-Hosted vs Cloud (3-Year TCO)
| Approach | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|
| Self-Hosted (Our Setup) | $884 | $192 | $192 | $1,268 |
| AWS (t3.medium + RDS) | $2,400 | $2,400 | $2,400 | $7,200 |
| Vercel + Supabase | $1,200 | $1,200 | $1,200 | $3,600 |
| Savings vs AWS | $1,516 | $2,208 | $2,208 | $5,932 |
Over three years, self-hosting saves us nearly $6,000 compared to AWS. Even compared to budget-friendly serverless options, we save $2,300+.
For a bootstrapped consulting company, that's a meaningful amount. But more importantly, we own the entire stack and learned everything from scratch.
Deployment Workflow: How We Ship Updates
Here's our actual deployment process for pushing updates to production:
1. Development and Testing
```bash
# Work on feature branch locally
git checkout -b feature/new-blog-post

# Make changes, test locally with docker-compose
docker-compose up --build

# Commit and push to GitHub
git add .
git commit -m "Add new blog post about infrastructure"
git push origin feature/new-blog-post
```
2. Merge to Main
```bash
# After review, merge to main branch
git checkout main
git merge feature/new-blog-post
git push origin main
```
3. Deploy to Production
```bash
# SSH into production server
ssh datalux-prod

# Pull latest code
cd /opt/datalux
git pull origin main

# Rebuild and restart containers (zero-downtime with nginx)
docker-compose up -d --build

# Verify health
curl http://localhost/api/health
```
Total deployment time: 2-3 minutes from git push to live.
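In practice those production steps fit in one small script. The sketch below is a hypothetical deploy.sh that wraps the same commands and refuses to declare success until the health endpoint answers; the individual commands are the ones we run, the wrapper and its 30-second timeout are illustrative.

```bash
#!/usr/bin/env bash
# deploy.sh - pull, rebuild, and wait for the API health check to pass
set -euo pipefail
cd /opt/datalux

git pull origin main
docker-compose up -d --build

# Give the API up to 30 seconds to come back healthy
for i in $(seq 1 30); do
  if curl -fsS http://localhost/api/health > /dev/null; then
    echo "Deploy OK"
    exit 0
  fi
  sleep 1
done

echo "Health check failed - consider rolling back" >&2
exit 1
```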
Rollback Procedure
If something breaks:
```bash
# Revert to previous git commit
git reset --hard HEAD~1

# Rebuild containers with old code
docker-compose up -d --build
```
Rollback time: Under 60 seconds.
Lesson Learned: You Don't Need Complex CI/CD (Yet)
Initially planned to set up GitHub Actions, automated testing, and complex deployment pipelines. Realized this was over-engineering for a single-server setup with minimal traffic.
Simple SSH deploy works perfectly fine. When we scale to multiple servers or team members, we'll add automation. But don't let tooling complexity delay shipping.
Performance and Uptime
Let's talk real numbers from our monitoring:
Response Times (30-Day Average)
- Static pages (HTML/CSS/JS): 45ms median, served from Cloudflare CDN
- API endpoints: 120ms median, includes database query time
- Database queries: 8ms median for typical contact form insert
- Time to First Byte: 180ms from Houston, 220ms from NYC, 250ms from San Francisco
Uptime and Reliability
- Uptime (90 days): 99.7% (total downtime: ~6 hours)
- Planned maintenance: 4 hours for Ubuntu security updates
- Unplanned outages: 2 hours (power outage during storm, no UPS... yet)
For a self-hosted setup running on a single server, 99.7% uptime is respectable. We're not running a bank or hospital—a few hours of downtime per quarter is acceptable for our use case.
Resource Utilization
Current usage on our 16GB / 512GB mini PC:
- CPU: 5-15% average, peaks to 40% during backups
- RAM: 6GB used (38%), plenty of headroom
- Disk: 180GB used (35%), mostly logs and backups
- Network: 50-200MB daily transfer
We're nowhere near capacity. This setup could easily handle 10-20x our current traffic.
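Most of the numbers above come from a handful of ordinary commands (plus the Prometheus/Grafana stack mentioned in the lessons below). If you want to spot-check a box like this yourself:

```bash
docker stats --no-stream   # per-container CPU and memory usage
free -h                    # overall RAM usage
df -h /                    # disk usage on the root volume
vnstat -d                  # daily network transfer (requires vnstat to be installed)
```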
Lessons Learned: What We'd Do Differently
1. Buy a UPS from Day One
We lost 2 hours of uptime during a Houston thunderstorm. A $100 UPS would have prevented this entirely. It's now on the shopping list.
2. Automate Backup Verification Earlier
We tested our first backup restore in week 3. Should have been day 1. Backups you haven't tested are just files that make you feel better.
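Our drill is deliberately low-tech: restore the newest dump into a throwaway Postgres container and run one sanity query. A rough sketch, where the backup path matches the earlier backup script and the table name is illustrative:

```bash
# Pick the newest encrypted dump produced by the nightly job and decrypt it
# (gpg prompts for the passphrase when run interactively)
LATEST=$(ls -1t /mnt/backup/datalux-*.sql.gz.gpg | head -n 1)
gpg --decrypt "$LATEST" | gunzip > /tmp/restore.sql

# Throwaway Postgres, same major version as production
docker run -d --name restore-test -e POSTGRES_PASSWORD=scratch postgres:15-alpine
sleep 10   # crude wait for Postgres to start accepting connections

# Load the dump and run a sanity check (table name is illustrative)
docker exec restore-test psql -U postgres -c 'CREATE DATABASE datalux;'
docker exec -i restore-test psql -U postgres -d datalux < /tmp/restore.sql
docker exec restore-test psql -U postgres -d datalux \
  -c 'SELECT count(*) FROM contact_submissions;'

# Clean up
docker rm -f restore-test
rm /tmp/restore.sql
```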
3. Start with Monitoring from the Beginning
We added proper monitoring (Prometheus + Grafana) in month 2. Wish we had it from the start. You can't optimize what you don't measure.
4. Document Everything As You Build
Future you will forget why you made certain decisions. Document your architecture, configurations, and workarounds in a README immediately.
5. Don't Optimize Prematurely
Our first version had database connection pooling, Redis caching, and complex nginx configurations. Removed 90% of it because our traffic didn't justify the complexity.
Start simple, measure, optimize only what's actually slow.
Is Self-Hosting Right for Your Business?
After building and running this infrastructure for several months, here's our honest assessment:
Self-Hosting is Great If:
- You have technical skills and enjoy infrastructure work
- Your traffic is predictable and moderate (not massive spikes)
- You want cost predictability and lower long-term expenses
- Data control and sovereignty matter to you
- You're building a technical product and want to understand the full stack
- You have reliable internet and power (or invest in UPS/redundancy)
Use Cloud Hosting If:
- You need to focus 100% on product, not infrastructure
- You have unpredictable traffic with major spikes
- You need multi-region deployment globally
- You lack technical infrastructure expertise
- You're pre-revenue and need to move fast
- You need guaranteed 99.9%+ uptime with SLAs
Hybrid Approach (What We Recommend for Most)
For many Houston businesses, the best answer is hybrid:
- Self-host: Internal tools, data processing, analytics, development environments
- Cloud-host: Customer-facing applications, high-availability services, global content delivery
This gives you cost savings where it makes sense and cloud benefits where you need them.
We Can Build This for Your Business
This case study shows what's possible with modern self-hosted infrastructure. The setup we built for ourselves is the same one we can build for your Houston business.
What you get:
- Fully containerized application stack (Docker Compose)
- Production-ready database with automated backups
- Secure external access via Cloudflare Tunnel (no complex networking)
- SSL/TLS, DDoS protection, and CDN included
- Complete documentation and deployment procedures
- Training for your team to manage and update
Typical investment: $8,000-$15,000 for full setup (hardware, software, configuration, documentation, training)
Typical savings: $1,500-$3,000 annually compared to equivalent cloud hosting, paying for itself in 3-5 years while giving you complete control.
Final Thoughts
Building DataLux's infrastructure on self-hosted hardware was one of the best technical decisions we made. We saved money, learned valuable skills, and created something we can demonstrate to clients.
Is it for everyone? No. It requires technical knowledge, ongoing maintenance, and accepting slightly lower uptime than cloud services.
But for the right business—one with technical expertise, predictable traffic, and a desire for cost control—self-hosting is incredibly powerful.
The cloud isn't going anywhere, and we still use it for certain workloads. But self-hosted infrastructure deserves a place in your decision-making process, especially as cloud costs continue to rise.
We're running this infrastructure in production, right now, serving you this blog post. That's not a proof of concept. That's confidence.
Want to Build Your Own Self-Hosted Infrastructure?
Let's discuss your specific needs and determine if self-hosting makes sense for your Houston business. We'll provide a detailed cost-benefit analysis and implementation roadmap.
Schedule a Free Infrastructure Consultation →