We Built It, Then We Freed It: Telemetry Harbor Goes Open Source
We’re open-sourcing Telemetry Harbor: the same high-performance ingest stack we run in the cloud, now fully self-hostable. Built on Go, TimescaleDB, Redis, and Grafana, it’s production-ready out of the box. Your data, your rules: clone the repo, run docker compose, and start sending telemetry.

A couple of weeks ago, we published our story about rewriting our entire ingest pipeline from Python to Go, achieving a 10x performance improvement and eliminating the crashes that plagued our early system. The response was incredible: developers loved the technical deep-dive, and many asked to try Telemetry Harbor for their own projects.
But alongside the enthusiasm, we kept hearing the same concerns:
"Looks amazing, but I can't send our telemetry data to a third party."
"Our compliance team will never approve an external service."
"Can I self-host this? We need full control over our infrastructure."
We heard you. Today, we're excited to announce Telemetry Harbor OSS: the complete, production-ready, self-hostable version of our platform.
Why Open Source? Why Now?
The decision wasn't immediate. For a small team building a SaaS platform, open-sourcing your core technology feels counterintuitive. Aren't we giving away our competitive advantage?
But the more we thought about it, the more it made sense:
Trust Through Transparency: When you're asking teams to send their critical telemetry data somewhere, transparency builds trust. Now you can see exactly how we handle your data, audit our security practices, and modify anything that doesn't meet your standards.
Your Data, Your Rules: Some organizations simply cannot use cloud services for sensitive telemetry data. Period. We'd rather enable these teams with our technology than lose them to inferior solutions.
Community Over Competition: The telemetry space is fragmented with half-built solutions and abandoned projects. We believe a well-maintained, production-ready open source option will elevate the entire ecosystem.
Most importantly: We're not afraid of competition. Our cloud platform's value isn't just in the code—it's in the managed infrastructure, automatic scaling, enterprise features, and dedicated support that many teams prefer over self-hosting.
What You're Getting (Spoiler: Everything)
Telemetry Harbor OSS isn't a stripped-down "community edition." It's the same production-grade stack that powers our cloud platform:
The Full Stack
- Go Fiber API - The same high-performance ingest layer that handles 10x more load than our original Python implementation
- TimescaleDB - Production-ready time-series database with automatic partitioning
- Redis Queue - Reliable message queuing that keeps your data safe during traffic spikes
- Grafana Integration - Pre-configured dashboards and datasource connections
- Background Workers - Efficient data processing that won't crash under load
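To make this architecture concrete, here's a minimal sketch of the ingest pattern in Go. It is illustrative only: the /ingest route, the telemetry_queue list name, and the error handling are assumptions for the example, not the exact code in the repository.

package main

import (
    "context"
    "log"

    "github.com/gofiber/fiber/v2"
    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()

    // Redis sits between the HTTP layer and the database so traffic
    // spikes are buffered in a queue instead of overwhelming writes.
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    app := fiber.New()

    // Hypothetical ingest route: accept the raw JSON body and enqueue it.
    // Background workers pop from the queue and batch-insert into TimescaleDB.
    app.Post("/ingest", func(c *fiber.Ctx) error {
        if err := rdb.LPush(ctx, "telemetry_queue", c.Body()).Err(); err != nil {
            return c.Status(fiber.StatusServiceUnavailable).SendString("queue unavailable")
        }
        return c.SendStatus(fiber.StatusAccepted)
    })

    log.Fatal(app.Listen(":8000"))
}

The point of the queue is decoupling: the API keeps accepting data during bursts while the workers drain the backlog and write to TimescaleDB at their own pace.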
Production-Ready Out of the Box
git clone https://github.com/TelemetryHarbor/telemetry-harbor-oss.git
cd telemetry-harbor-oss
docker compose up -d
That's it. API at localhost:8000, Grafana at localhost:3000. Start sending data immediately.
SDK Compatibility
All our existing SDKs work with OSS; just change the endpoint URL. No code changes required.
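If you want to poke at the endpoint without an SDK, a minimal Go client might look like the sketch below. The /ingest path, the payload fields, and the X-API-Key header are assumptions for illustration, not the documented schema; check the repository docs for the exact endpoint and fields your deployment exposes.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "time"
)

// Reading is a hypothetical payload shape for a single telemetry point.
type Reading struct {
    DeviceID  string    `json:"device_id"`
    Metric    string    `json:"metric"`
    Value     float64   `json:"value"`
    Timestamp time.Time `json:"time"`
}

func main() {
    body, err := json.Marshal(Reading{
        DeviceID:  "weather-station-1",
        Metric:    "temperature",
        Value:     21.5,
        Timestamp: time.Now().UTC(),
    })
    if err != nil {
        log.Fatal(err)
    }

    // Point the client at your self-hosted instance instead of the cloud endpoint.
    req, err := http.NewRequest(http.MethodPost, "http://localhost:8000/ingest", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("X-API-Key", "your-api-key") // hypothetical auth header

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}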
The Complete Package (With Some Smart Omissions)
Telemetry Harbor OSS includes everything you need to run a production telemetry platform, but we've made some intentional decisions about what to include versus what belongs in our managed cloud offering.
What's Included
You get the full ingestion and visualization stack: the same Go-powered API that handles 10x more throughput than our original Python implementation, complete with TimescaleDB for time-series storage, Redis queuing, and pre-configured Grafana dashboards.
What We Left Out (And Why)
Some features are inherently tied to multi-tenant cloud infrastructure: rate limiting systems, automatic harbor provisioning, and billing integrations. These don't make sense in a self-hosted environment where you control your own resources and usage patterns.
The OSS version gives you one powerful, general-purpose harbor that can handle any telemetry workload. It's the same core technology, just optimized for single-tenant deployment.
When to Choose OSS vs Cloud
We're not trying to hide the ball here. Both options have clear use cases:
Choose OSS Self-Hosted If:
- You need full control over data and infrastructure
- Compliance requires on-premises deployment
- You want to customize the stack for specific needs
- You have the technical expertise to manage infrastructure
- You're building something where telemetry data absolutely cannot leave your environment
Choose Telemetry Harbor Cloud If:
- You want managed infrastructure with automatic updates
- You need multiple harbors with automatic provisioning
- You prefer dedicated support and SLAs
- You want to focus on your application, not infrastructure maintenance
- You need guaranteed uptime and automatic scaling
There's no wrong choice, just different priorities.
The Licensing Philosophy
We chose Apache 2.0 because we mean it when we say "free."
- Use it commercially: build products, sell services, make money
- Modify everything: fork it, customize it, make it yours
- No gotchas: just keep the copyright notice and attribution
If you build something cool with Telemetry Harbor OSS, we'd love to hear about it. A link back to us is appreciated.
What This Means for Our Cloud Platform
Absolutely nothing changes. Telemetry Harbor Cloud continues to operate exactly as before, with the same performance, features, and reliability our customers depend on.
If anything, OSS makes our cloud platform stronger:
- Better Code Quality: Open source code faces more scrutiny, making our entire codebase more robust
- Faster Innovation: Community contributions and feedback accelerate feature development
- Stronger Trust: Transparency in our OSS version builds confidence in our cloud offering
The Real Reason We Did This
Beyond the strategic benefits, there's a simpler truth: we built something genuinely useful, and we want more teams to benefit from it.
Every time we saw another team struggling with InfluxDB's limitations, building yet another custom time-series solution from scratch, or wrestling with deployment complexity that kept them from focusing on their actual data insights, we felt the same frustration that started Telemetry Harbor in the first place.
The telemetry infrastructure space is littered with half-finished projects, abandoned solutions, and tools that promise simplicity but deliver complexity. We built Telemetry Harbor to solve these problems for ourselves, then refined it based on real customer feedback and production workloads.
Now it's yours too.
What's Next
This is just the beginning of our open source journey. Here's what we're planning:
Harbor AI Open Source: The machine learning features that power insights in our cloud platform will be released as open source modules. Think anomaly detection, pattern recognition, and trend analysis, all running on your own infrastructure.
Management Dashboard: A dedicated admin interface for monitoring your OSS deployment, managing data retention, and configuring performance settings.
Community Harbor Types: Right now, OSS ships with our general-purpose harbor that handles any telemetry payload. But we know different industries have different needs. Want to ingest LoRaWAN data from The Things Network? Bluetooth sensor data? Industrial IoT protocols?
The community is welcome to contribute new harbor types for specialized data formats. While we can't guarantee these will make it upstream to our cloud platform (that depends on broader demand), any new harbor types we develop for the cloud will flow downstream to OSS.
The roadmap will be shaped by what the community actually builds and needs, not what we think you need.
Get Started Today
Ready to try it? The complete stack is waiting:
Repository: github.com/TelemetryHarbor/telemetry-harbor-oss
Quick Start:
git clone https://github.com/TelemetryHarbor/telemetry-harbor-oss.git
cd telemetry-harbor-oss
docker compose up -d
Join the Community: Issues, PRs, and feature requests welcome
We can't wait to see what you build with it.