Deploying the Internet of AI Agents – Part I


Stop Building Bots, Start Building AI Agent Networks

Manikandan Meenakshi Sundaram¹, Sharanya Badrinarayanan¹, Neha Save¹, Javier Solis Vindas²,
John Zinky, Ph.D.², *Pradyumna Chari, Ph.D.³, Maria Gorskikh³, Tim Swanson⁴, Ramesh Raskar, Ph.D.³, and *Hema Seshadri, Ph.D.¹˒²

¹ Northeastern University · ² Akamai Technologies · ³ Massachusetts Institute of Technology · ⁴ Cisco Systems · *Principal Investigators

AI is entering a new phase. Until now, most AI systems have been built as isolated models or assistants useful within a single application, but limited by the boundaries of that system. The next phase is different. We envision a world with large numbers of specialized AI agents, each with a distinct role, set of capabilities, and operating constraints, that can discover one another, establish trust, exchange information, and work together to solve problems no single agent could solve alone.

We call this emerging ecosystem the Internet of AI Agents. By an AI agent, we mean a software system that understands its domain of knowledge, can reason about goals, use tools, and take action. By the Internet of AI Agents, we mean both a society of AI agents working together and the infrastructure, protocols, and trust mechanisms that allow many such agents to interact safely and effectively across platforms and organizations. 

This post is the first in a blog series that teases apart what a society of agents is trying to accomplish from how it actually carries out its tasks in an unreliable and hostile environment. Its purpose is to introduce the motivation behind this shift and to explain why new infrastructure is needed. In later posts, we will go deeper into the technical architecture, implementation, deployment, results, lessons learned, and real-world use cases, including our Boston MBTA commuter proof of concept. Here, we set the stage for why systems like that matter and what it takes to make them possible.

From Isolated AI to Collaborative Systems

Traditional AI, machine learning, and analytics have often been deployed as isolated capabilities inside a single application or workflow. Conventional software is also usually governed by fixed rules and tightly coupled integrations. That model works for many tasks, but it becomes limiting when a problem requires multiple kinds of expertise, real-time coordination, or adaptation to changing conditions.

The Internet of AI Agents points toward a different model: not one model trying to do everything, but many specialized agents working together. In such a system, intelligence is not only embedded in individual models; it also emerges from how agents discover one another, share context, hand off work, and coordinate decisions.

This is the shift from isolated intelligence to collaborative autonomy. Agents do not simply follow predefined business rules. They can communicate, coordinate, and improve together in real time. Over time, groups of agents, which we call agent societies, can develop more capable and resilient behavior than any one system acting alone.

New Infrastructure Requirements

As organizations move toward agent-based systems, better models alone will not be enough. The widespread adoption of this shift depends on decentralized, secure, and scalable infrastructure. Agents need to discover one another, verify who they are interacting with, communicate without relying on brittle static links, and coordinate in real time across systems.

That is why decentralized architectures matter. By distributing computation and coordination across multiple nodes rather than relying on a single point of control, they can offer the resilience and scalability needed for modern global applications.

Today, many agents still operate in isolation or inside siloed marketplaces. They depend on fixed endpoints and custom integrations. That approach does not scale well, and it can be vulnerable to overload, DDoS attacks, and sudden demand spikes.

An Internet of AI Agents requires new infrastructure in several areas:

Inter-agent communication – Agents need standardized ways to exchange messages, negotiate tasks, share context, and collaborate across networking, compute, and security layers.

Agent discovery and registry – Agents need dynamic mechanisms to find, verify, and recruit other agents with the right capabilities for a given task.

Formation of agent societies – Teams of agents need ways to collaborate repeatedly, learn from shared experience, and improve collective performance over time.

Autonomous coordination and decision-making – The system needs real-time coordination that can adapt to changing context rather than depend entirely on rigid, centralized logic.
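The discovery and registry requirement above can be made concrete with a small sketch. The snippet below models a toy capability-indexed registry in Python; all names (`AgentRecord`, `Registry`, the capability strings and endpoints) are illustrative assumptions, not part of any protocol described in this series.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """Minimal registry entry: who the agent is and what it can do."""
    agent_id: str
    capabilities: set[str]
    endpoint: str  # hypothetical address where the agent can be reached

class Registry:
    """Toy capability index supporting dynamic agent discovery."""

    def __init__(self) -> None:
        self._by_capability: dict[str, list[AgentRecord]] = {}

    def register(self, record: AgentRecord) -> None:
        # Index the agent under every capability it advertises.
        for cap in record.capabilities:
            self._by_capability.setdefault(cap, []).append(record)

    def discover(self, capability: str) -> list[AgentRecord]:
        """Return all agents advertising the requested capability."""
        return self._by_capability.get(capability, [])

registry = Registry()
registry.register(AgentRecord("planner-1", {"route-planning"}, "https://example.org/planner"))
registry.register(AgentRecord("sched-1", {"scheduling"}, "https://example.org/sched"))

matches = registry.discover("route-planning")
```

A real registry would add verification of each record before indexing it; here the lookup alone illustrates why discovery must be dynamic rather than hard-wired into each agent.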

The AI Agent Digital Passport

One example of this new infrastructure is the AI Agent Digital Passport: a standardized, machine-readable metadata record that describes an agent’s identity, capabilities, provenance, oversight model, and permissions. In many settings, that record can be cryptographically signed so that other agents and systems can verify and trust it. In practice, it serves as a verified profile for the agent.

Just as a human traveler uses a passport to establish identity and move across borders with trust and accountability, an AI agent needs a reliable way to identify itself, communicate what it can do, and declare what it is authorized to access.

An AI Agent Digital Passport supports several kinds of metadata. Some fields describe what functionality the agent can perform, others describe the restrictions on how it interacts with other agents, and still others describe the validity of the metadata itself. Some example fields in an AI Agent Digital Passport include:

Capability discovery – A passport can describe what the agent does, for example, scheduling, route planning, summarization, or revenue management, so that other agents can select the right collaborator.

Verifiable identity – It helps confirm that an agent is who it claims to be, reducing impersonation and malicious activity.

Permission and access control – It can carry verifiable assertions about what systems or data an agent is allowed to access, helping keep agents within their approved scope.

Interoperability – If passports use common formats, agents built on different platforms can still understand one another and exchange useful metadata.

Dynamic routing – A passport can include endpoint and context information that helps a network route requests to the most appropriate agent based on capability, geography, current load, or policy.

Continuous verification – Rather than relying only on one-time authorization, passports can support real-time validation and rapid revocation when needed.

Accountability – Passports provide a traceable record of identity and operational context, which is especially important in regulated sectors such as finance and healthcare.

Trust and provenance – A passport can capture where the agent came from, who created it, which version is running, and what governance or training context applies.
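The fields above can be sketched as a signed, machine-readable record. The snippet below is a minimal illustration, not the actual passport format: the field names are hypothetical, and it uses an HMAC over canonical JSON to keep the example dependency-free, whereas a real deployment would use an asymmetric signature (for example Ed25519) so verifiers never need the signing key.

```python
import hashlib
import hmac
import json

# Hypothetical passport fields, loosely following the categories above.
passport = {
    "agent_id": "mbta-route-planner",
    "capabilities": ["route-planning", "scheduling"],
    "provenance": {"issuer": "example-university", "version": "1.2.0"},
    "permissions": ["read:gtfs-feeds"],
    "expires": "2026-01-01T00:00:00Z",
}

def sign_passport(record: dict, key: bytes) -> str:
    """Sign the canonical JSON form of the record (illustrative HMAC)."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_passport(record: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_passport(record, key), signature)

key = b"demo-registry-key"  # assumed shared secret, for the sketch only
sig = sign_passport(passport, key)
assert verify_passport(passport, sig, key)

# Any tampering with the record, e.g. escalating permissions, breaks it.
tampered = {**passport, "permissions": ["admin:everything"]}
assert not verify_passport(tampered, sig, key)
```

The key property the sketch demonstrates is tamper-evidence: a verifier can trust the claimed capabilities and permissions only as far as the signature over the whole record holds.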

Technical Foundations and Protocols

The technical foundations for this future are beginning to take shape. A growing ecosystem of protocols and frameworks is providing the plumbing for a web of interoperating agents, one in which agents can store knowledge, use tools, discover one another, and communicate securely at scale.

In our work, researchers from Northeastern University, together with collaborators from industry and academia, have begun exploring proof-of-concept autonomous AI agents that can reason, plan, and integrate with both internal and external systems on Akamai Connected Cloud (Linode). This effort is intended to bridge academic research and industry practice in distributed systems, security, and multi-agent computing.

The technical stack behind this work includes several emerging building blocks. 

A2A (Agent2Agent) supports structured inter-agent communication.
MCP (Model Context Protocol) connects agents to external tools, data sources, and APIs.
SLIM (Secure Low-Latency Interactive Messaging) provides an encrypted, low-latency messaging layer for agent communication.
NANDA supports agent discovery and authentication.
AGNTCY helps developers build, orchestrate, and test multi-agent systems.
Akamai Connected Cloud (Linode) provides the scalable cloud infrastructure needed to deploy them.
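To show how these layers divide responsibilities, the sketch below stubs each one as a plain Python function. None of these calls are real A2A, MCP, SLIM, or NANDA APIs; the function names and return values are assumptions made purely to illustrate the composition: discover a verified peer, then deliver a structured task to it over a secure channel.

```python
def nanda_discover(capability: str) -> str:
    """Stub for the discovery/authentication layer: resolve a capability
    to a verified agent endpoint (format is invented for this sketch)."""
    return f"agent://verified/{capability}"

def slim_send(endpoint: str, message: dict) -> dict:
    """Stub for the secure messaging layer: deliver a message and
    return the peer's reply (here, a canned acknowledgement)."""
    return {"to": endpoint, "status": "ok", "echo": message}

def delegate_task(task: str) -> dict:
    """Stub for structured agent-to-agent delegation: find a capable
    peer, then hand the task off over the messaging layer."""
    endpoint = nanda_discover("route-planning")   # who can do this?
    return slim_send(endpoint, {"task": task})    # send it securely

reply = delegate_task("plan commute to Back Bay")
```

The point is the layering rather than any single call: discovery, transport, and task semantics stay independently replaceable, which is what lets agents built on different stacks interoperate.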

We will unpack these technical components in later posts.

This blog series tells the fuller story of what we built, why we built it, and what we learned along the way. It moves from concepts to implementation, from architecture to deployment, and from research ideas to concrete systems.

In the posts ahead, we will cover the full multi-agent architecture, how agents discover and coordinate with one another, how we deployed the system on Akamai's Cloud infrastructure, what worked and what did not, and what we learned from applying these ideas in practice. We will also dive into specific use cases, including the MBTA commuter scenario. The next post introduces a real-world example of a society of AI agents that helps a commuter navigate their daily journey using the Boston MBTA Transit Conversational Intelligence. That case study will serve as a concrete example of how specialized agents can discover, authenticate, coordinate, and act in a real-world setting.
