Blog | May 08, 2026

I Created SD-WAN - But the Industry Needs to Move On

By Khalid Raza, founder of Viptela and Graphiant

A Necessary Apology

I need to start with an apology.

When we started Viptela in 2012 and pioneered what became known as SD-WAN, we solved a real problem. We helped enterprises break free from expensive MPLS circuits and gave them the flexibility to use internet connectivity for their wide area networks. It was the right solution for the right time.

But I've watched with growing concern as the technology I helped create has become a liability for the very organizations it was meant to serve.

The world has changed. Cloud changed. AI emerged. And SD-WAN? It stayed the same.

Today, in the era of data sovereignty, AI inference, and agentic computing, the architecture we built in 2012 has become the problem, not the solution.

The Problem We Solved in 2012

When we designed SD-WAN, the enterprise world looked very different:

  • Most data lived in customer data centers, not in the cloud
  • Traffic patterns were simple: branches connected to headquarters, end of story
  • Security meant firewalls, not zero-trust, not end-to-end encryption
  • Data sovereignty wasn't a concern because data rarely left your premises

The problem was straightforward: enterprises needed to connect branches to their data centers without paying MPLS premiums.

Our solution was equally straightforward: build encrypted tunnels over cheap internet connections. Scale by adding more headend routers. Decrypt, route, encrypt. Simple.

And it worked, for that era.

What Changed: The Three Forces That Broke SD-WAN

1. Cloud Became the Data Center

When we designed SD-WAN, around 15-20% of enterprise workloads were in the cloud. Today? It's inverted. 85% of enterprise data now resides outside traditional data centers, spread across AWS, Azure, GCP, SaaS applications, edge locations, and increasingly, AI inference endpoints.

SD-WAN was built for a hub-and-spoke architecture. The world moved to partial mesh.

2. AI Created Entirely New Traffic Patterns

Here's what we didn't anticipate: the shift from centralized to distributed AI inference, and the emergence of agentic AI.

Traditional software is request-response: the user asks and the system answers. Traffic flows north-south; it's predictable and simple.

But AI inference is moving to the edge. Latency requirements and GPU capacity constraints are forcing inference to run at distributed locations. This shift creates east-west traffic as distributed inference endpoints communicate with each other for complex workloads.

This is compounded by AI agents. These agents are distributed everywhere: across clouds, colocation sites, and edge environments. They're continuously communicating with distributed inference endpoints and other agents, coordinating decisions and sharing context in real time.

This creates massive east-west traffic over wide geographic distances. It's not branch-to-datacenter anymore. It's everywhere-to-everywhere, all the time, with sub-100ms latency requirements.

SD-WAN was designed for north-south traffic. Distributed inference has created east-west traffic. Agentic AI demands full-mesh east-west at global scale.
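The scaling gap between the two topologies is easy to quantify. A hub-and-spoke overlay grows linearly in tunnel count, while a full mesh grows quadratically. A back-of-the-envelope sketch (illustrative only, counting one tunnel per site pair):

```python
def hub_and_spoke_tunnels(n_sites: int) -> int:
    """Each branch keeps a single tunnel to the hub (headend)."""
    return n_sites - 1

def full_mesh_tunnels(n_sites: int) -> int:
    """Every site pair keeps a direct tunnel: n * (n - 1) / 2."""
    return n_sites * (n_sites - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} sites: hub-and-spoke={hub_and_spoke_tunnels(n)}, "
          f"full mesh={full_mesh_tunnels(n)}")
```

At 1,000 sites that is 999 tunnels versus nearly half a million, which is why an architecture sized around headend routers struggles with everywhere-to-everywhere traffic.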

3. Data Sovereignty Became Non-Negotiable

And here's where SD-WAN's architecture becomes actively dangerous:

In the classic SD-WAN model, whether deployed by service providers or middle-mile providers like Equinix and Megaport, there's a fundamental flaw: your data gets decrypted at third-party locations.

Think about that for a moment.

Your enterprise data - or worse, your nation's data - is decrypted, inspected, and re-encrypted at locations you don't control, by infrastructure you don't own, potentially in jurisdictions where you have no legal authority.

This destroys the entire premise of data sovereignty.

The Power Problem Driving Distributed AI

AI demand is exploding, and the biggest constraint isn't models or software; it's power.

Training and running modern AI systems requires enormous amounts of compute, which in turn requires enormous amounts of electricity. But the infrastructure needed to support that demand simply can’t keep up. Building new data centers is slow, expensive, and heavily constrained by power availability and permitting timelines.

As a result, the industry is being forced to rethink where compute lives.

Instead of concentrating AI infrastructure in a small number of massive data centers, compute will increasingly be deployed wherever power and space are available: telco central offices, edge locations, regional hubs, and smaller distributed facilities. We are already seeing the rise of GPU-as-a-service providers and regional AI clusters appearing outside traditional hyperscale locations.

In other words, AI compute will become highly distributed.

This shift creates a new requirement for networking. When compute lives in many different places, the network becomes the system that ties everything together, connecting enterprises to distributed AI infrastructure and enabling workloads to move between locations.

The future of AI will not be powered by a few centralized data centers. It will be powered by a distributed network of compute nodes, connected by the network fabric that makes them behave like a single platform.

The Three Imperatives of the AI Era

1. AI Is Distributed Intelligence with Sovereignty Requirements

Only 15% of the world's data is public, and that is what current LLMs are trained on.

The other 85% sits behind firewalls: customer data, intellectual property, national secrets, healthcare records, financial transactions.

Organizations face an impossible choice:

  • Don't adopt AI → Existential business threat
  • Adopt AI without controls → Lose IP or national sovereignty

The network must solve this dilemma.

2. Latency Becomes Strategic

Amazon discovered in 2006 that 100ms of latency cost them 1% in sales. For a company doing billions in revenue, this meant tens of millions lost, not to competitors, but to physics and bad architecture.

With AI inference, latency is even more critical:

  • Real-time fraud detection needs <50ms response
  • Medical diagnostics need <100ms for doctor workflows
  • Autonomous systems need <20ms for safety-critical decisions
  • AI agents communicating need predictable, consistent latency
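These budgets can be treated as hard SLAs rather than aspirations. A minimal sketch of a latency-budget check (the workload names and thresholds simply mirror the figures above; this is not any vendor's API):

```python
# Per-workload latency budgets in milliseconds, taken from the list above.
SLA_BUDGETS_MS = {
    "fraud_detection": 50,
    "medical_diagnostics": 100,
    "autonomous_systems": 20,
}

def meets_sla(workload: str, measured_rtt_ms: float) -> bool:
    """True if the measured round-trip time fits the workload's budget."""
    return measured_rtt_ms <= SLA_BUDGETS_MS[workload]

print(meets_sla("fraud_detection", 42))     # within the 50 ms budget -> True
print(meets_sla("autonomous_systems", 35))  # exceeds the 20 ms budget -> False
```

The point is that the network, not the application, has to keep paths inside these envelopes, continuously and per destination.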

3. Data Becomes a National Asset

Data sovereignty is no longer just about compliance. It's about:

  • Legal oversight of data based on where it was generated
  • Control of where it's stored and computed
  • Visibility into how it moves across jurisdictions
  • Cryptographic proof for regulators and auditors

What We Need: A New Network Architecture for the AI Era

The network we built for static content caching over public internet, or for point-to-point SD-WAN connections, will not scale to the AI era.

We're transitioning from SaaS (software as a service) to Agentic AI (autonomous, distributed intelligence).

Just as microservices architecture transformed the data center, we need to look at the WAN differently.

We need a network for distributed AI built around:

  • Security: End-to-end encryption, zero-trust by default
  • Sovereignty: Path control, jurisdictional guarantees, cryptographic audit trails
  • Observability: Complete visibility into where data travels and how it's processed
  • Full-mesh fabric: Anywhere-to-anywhere connectivity running over public last-mile for agility
  • Edge intelligence: AI caching at telco central offices and edge locations
  • Policy-aware routing: Traffic steered based on data classification, sovereignty requirements, and latency SLAs
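To make "policy-aware routing" concrete, here is a minimal sketch of steering traffic by sovereignty and latency constraints. All names, fields, and values are hypothetical illustrations, not any product's interface:

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    jurisdictions: tuple[str, ...]  # regions the path transits
    latency_ms: float
    end_to_end_encrypted: bool

def select_path(paths, allowed_jurisdictions, max_latency_ms):
    """Pick the lowest-latency path that satisfies sovereignty and SLA policy."""
    eligible = [
        p for p in paths
        if p.end_to_end_encrypted                               # no mid-path decryption
        and set(p.jurisdictions) <= set(allowed_jurisdictions)  # stays in-region
        and p.latency_ms <= max_latency_ms                      # meets the SLA
    ]
    return min(eligible, key=lambda p: p.latency_ms) if eligible else None

paths = [
    Path("via-eu-hub", ("DE", "FR"), 38.0, True),
    Path("via-us-transit", ("DE", "US"), 24.0, True),   # faster, but leaves the region
    Path("legacy-sdwan", ("DE",), 30.0, False),         # decrypted at a third party
]
best = select_path(paths, allowed_jurisdictions={"DE", "FR"}, max_latency_ms=50)
print(best.name)  # → via-eu-hub
```

Note that the fastest path loses: the sovereignty constraint rules it out first, which is exactly the inversion of priorities that classic SD-WAN path selection, tuned purely for loss and latency, was never designed to express.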

This is not SD-WAN 2.0. This is a fundamentally different architecture.

The Path Forward

I helped create SD-WAN. It solved the problems of 2012 and solved them well.

But the assumptions SD-WAN was built on are no longer valid.

SD-WAN was designed for a world where:

  • Data stayed in your data center with limited cloud deployments
  • Traffic was north-south
  • Security meant perimeter firewalls
  • Sovereignty wasn't a concern

This architecture must evolve.

Today, as you plan for AI workloads, distributed intelligence, and sovereign computing, ask yourself:

  • Can my current WAN architecture and vendors' solutions handle massive east-west traffic at scale?
  • Does my network provide true end-to-end encryption without third-party decryption?
  • Does my network fabric support the mesh connectivity pattern that agentic AI demands?

If the answer to any of these is "no", then it's time to start planning your network transition.

Final Thoughts

AI workloads and the rapid rise of agentic AI are beginning to reshape the network landscape.

As AI systems scale, they are generating new traffic patterns and performance requirements. Distributed models, GPU clusters, data pipelines, and collaborating AI agents are driving far more east-west communication across clouds, edges, and data centers.

The lesson the industry has learned over time is simple: system architecture drives network architecture.

Today, AI architecture is becoming the next major driver of network transformation. As AI systems become more distributed and collaborative, the network must evolve to support them.

The shift is already underway. The real question is: are you ready for it?