🌩️ The outage in Iberia — and my moment of clarity
A few months ago, a major power outage swept across Iberia. I was out of town at the time. Everything went dark — not just the electricity grid, but entire telecommunications networks. Data centers stayed up, but the “last mile” was cut off. Users couldn’t communicate or fetch information, businesses went offline — and centralized CDN architecture couldn’t save us.
That outage wasn’t just a blackout — it was a wake-up call. No matter how powerful or redundant central systems are, if they don’t reach the user when it matters, they’re irrelevant.
From central-centric to last-mile-centric: A Datacenter and CDN vet’s epiphany
Having spent years working with the world's largest telcos and data-center vendors, and with one of the world's largest CDNs, I've seen the gaps:
- High latency for underserved regions.
- Backhaul congestion during peak events.
- Single points of failure closer to users—ISP aggregation, local routers, power availability.
Centralized PoPs — regional data centers, IXPs, metro clouds — they get close… but not close enough. That outage in Iberia crystallized it: truly distributed compute must live in the last mile — ideally inside ISP infrastructure.
Qwilt Open Edge: the real deal?
That's when Qwilt caught my interest. Their Open Edge architecture seems to get the engineering right:
- Federated ISP-embedded nodes
Over 2,196 nodes in 38 countries, with 150+ Tbps of capacity, directly inside ISP access networks: central offices, aggregation points, base stations.
- Sub-5 ms latency across the board
These last-mile nodes are on average 10× closer to users than CDNs and metro clouds, enabling ultra-low-latency compute and delivery.
- Cloud-managed, API-driven orchestration
Qwilt's Cloud Service Controller and open APIs give publishers one global interface to 2,000+ edge PoPs, enabling centralized control of truly distributed workloads.
- Composable compute + storage fabric
Built on commodity Xeon/Atom servers, each node can be dynamically tuned: storage-heavy for caching 4K/8K video, CPU/GPU-heavy for AI inference or gaming.
- Seamless hybrid fabric
Qwilt's architecture isn't anti-cloud; it intelligently complements centralized resources, choosing where a workload runs (edge vs. cloud) based on performance needs.
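The hybrid placement idea above can be sketched as a simple policy: pick the nearest site that meets a workload's latency budget, hardware needs, and capacity headroom, and fall back gracefully otherwise. This is a hypothetical illustration, not Qwilt's actual placement logic; all names (`Site`, `place`, the RTT numbers) are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_ms: float         # measured round-trip time to the user
    has_gpu: bool
    free_capacity: float  # fraction of capacity free, 0.0-1.0

def place(latency_budget_ms: float, needs_gpu: bool, sites: list[Site]) -> Site:
    """Pick the nearest site that meets the latency budget, hardware
    needs, and has headroom; otherwise fall back to the nearest site."""
    eligible = [s for s in sites
                if s.rtt_ms <= latency_budget_ms
                and (s.has_gpu or not needs_gpu)
                and s.free_capacity > 0.1]
    return min(eligible or sites, key=lambda s: s.rtt_ms)

sites = [
    Site("isp-edge-node", rtt_ms=4.0,  has_gpu=True, free_capacity=0.4),
    Site("metro-cloud",   rtt_ms=18.0, has_gpu=True, free_capacity=0.9),
    Site("central-cloud", rtt_ms=60.0, has_gpu=True, free_capacity=0.95),
]

# A 10 ms interactive workload lands on the ISP-embedded node.
print(place(10.0, needs_gpu=True, sites=sites).name)   # isp-edge-node
# If the edge node saturates, a looser budget shifts the work to metro.
sites[0].free_capacity = 0.05
print(place(25.0, needs_gpu=True, sites=sites).name)   # metro-cloud
```

The point of the sketch is the decision itself: edge when the latency budget demands it, centralized resources when they suffice, exactly the complement-not-replace relationship described above.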
The result? A programmable edge that actually lives where people connect—not just marketing copy.
Deep into the Tech: what makes it tick
Common compute + storage at scale
Every edge node runs the same software stack (QN), managed from the cloud, so upgrades, analytics, and policy enforcement are all delivered globally with zero local-ops burden.
Open Caching integration
Standardized via Streaming Video Technology Alliance specs, edge caching works seamlessly with existing clients: fully transparent and deployable at network scale.
Federated yet unified
Though nodes sit inside different ISPs across continents, Qwilt's API presents one consistent platform; developers don't need to integrate with each ISP separately.
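The "federated yet unified" idea can be sketched as a single control plane that fans one deploy call out to nodes hosted inside many ISPs. This is a toy model under my own assumptions (the class and method names are invented), not Qwilt's real Cloud Service Controller API.

```python
# Hypothetical sketch: one control plane, many ISP-hosted nodes,
# a single deploy() call instead of per-ISP integrations.
class ControlPlane:
    def __init__(self):
        self.nodes = {}  # node_id -> {"isp": str, "workloads": set}

    def register(self, node_id: str, isp: str) -> None:
        """ISP-embedded nodes join the federation."""
        self.nodes[node_id] = {"isp": isp, "workloads": set()}

    def deploy(self, workload: str, isps=None) -> list[str]:
        """Push a workload to every node (or only nodes inside the
        named ISPs) through one global interface."""
        targets = [nid for nid, meta in self.nodes.items()
                   if isps is None or meta["isp"] in isps]
        for nid in targets:
            self.nodes[nid]["workloads"].add(workload)
        return targets

cp = ControlPlane()
cp.register("lis-01", isp="ISP-A")
cp.register("mad-07", isp="ISP-B")
cp.register("por-03", isp="ISP-A")

print(cp.deploy("cache-v2"))                  # lands on all three nodes
print(cp.deploy("ai-infer", isps={"ISP-A"}))  # only ISP-A's nodes
```

The publisher-facing contract is the key design choice: the ISP boundary is metadata inside the platform, not a seam the developer has to code against.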
Alignment with telco innovations
With vRAN and MEC deployed in 4G/5G base stations, these nodes can co-reside with network functions — opening opportunities for slicing, real-time analytics, AR/VR offloading, and network-aware AI at the edge.
Why it matters — Beyond the outage
Imagine Iberia again, but this time power flickers at a local aggregation point. If edge compute lives only in centralized data centers, applications fail. With Qwilt's nodes embedded in the ISP network, even if one node fails, thousands of others nearby can continue serving, and distribution shifts automatically via the federated control plane. That's resilience born of physical distribution and smart orchestration.
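The failover behavior described above reduces to a small routing decision: send the user to the nearest healthy node, and only give up on the edge tier when every nearby node is down. A minimal sketch, with invented node names and RTTs (not Qwilt's actual control-plane logic):

```python
def pick_serving_node(node_rtts: dict[str, float], failed: set[str]) -> str:
    """Route the user to the nearest healthy node; if the whole edge
    tier is down, the caller falls back to a metro or central cloud."""
    healthy = {n: rtt for n, rtt in node_rtts.items() if n not in failed}
    if not healthy:
        raise RuntimeError("no edge capacity; fall back to cloud origin")
    return min(healthy, key=healthy.get)

# Nearby ISP-embedded nodes and their RTTs to one user, in ms.
nearby = {"lisbon-1": 3.0, "lisbon-2": 5.0, "madrid-4": 9.0}

print(pick_serving_node(nearby, failed=set()))         # lisbon-1
print(pick_serving_node(nearby, failed={"lisbon-1"}))  # traffic shifts to lisbon-2
```

With thousands of nodes, losing one aggregation point just reshuffles this choice; the application never notices, which is the resilience argument in miniature.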
It’s not enough to say “we have edge nodes.” You need:
- Depth: nodes in the last km, one hop from users.
- Breadth: thousands, globally.
- Intelligence: API-driven orchestration, application-aware placement.
- Integration: with telco stacks and standards like Open Caching.
Qwilt seems to deliver on all fronts.
The CDN engineer’s closing reflection
That Iberian blackout changed my worldview. We assumed power outages were anomalies. But if your platform isn't distributed enough, your resiliency is a myth.
I've built centralized DNS authorities and resolvers, as well as content caches serving millions, and they failed spectacularly when the last mile broke. Qwilt's model offers something different: compute that doesn't need the metro cloud when it matters. It's not buzzword edge; it's true edge, anchored in telco infrastructure, orchestrated globally, programmable by design.
I would love to see us go beyond edge-by-name, and this seems to be the blueprint. And I’m here to help shape this — let’s make that edge story real.
Reference articles by Qwilt and Computer Weekly.