In enterprise software development, distributed systems have been essential for the last 15 years. We've embraced SOA, microservices, actor models like Akka (Akka.NET), Microsoft Orleans, Erlang processes, and countless message brokers, frameworks, and architectures.
But two years ago, we started fresh.
With AI/LLM models, there were no established frameworks, no observability tools, and no automated testing solutions. We were starting from zero.
Brief Recap
An AI agent is a software entity powered by artificial intelligence, designed to perform tasks autonomously or semi-autonomously in a specific environment to achieve particular goals. It processes inputs, makes decisions, and takes actions based on predefined rules, learned patterns, or dynamic interactions.
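The perceive → decide → act loop in that definition can be sketched in a few lines. This is purely illustrative: the names (`decide`, `act`, the `Action` type) are mine, and a real agent would call an LLM inside `decide` instead of a hard-coded rule.

```typescript
// Minimal illustration of the process-inputs / make-decisions / take-actions loop.
// A predefined rule stands in here for an LLM call or learned policy.
type Action = "answer" | "escalate";

function decide(input: string): Action {
  return input.includes("refund") ? "escalate" : "answer";
}

function act(action: Action): string {
  return action === "escalate" ? "routing to a human" : "replying directly";
}

const perception = "Customer asks about a refund";
console.log(act(decide(perception))); // "routing to a human"
```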
An actor is a finer-grained, lightweight, isolated entity that encapsulates state and behavior, communicates via message passing, and processes one message at a time. Thousands or millions of actors can exist within a single system, often within the same process.
A microservice is independently deployable, highly maintainable, and typically communicates over a network using protocols like HTTP/REST or gRPC. It is coarser-grained and heavier compared to actors. Each microservice is typically a standalone application or process.
Actors can be used within microservices to manage internal state and concurrency, combining the strengths of both paradigms. For example, a microservice can implement the actor model for event processing while exposing an API to other microservices.
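The actor properties above (private state, message passing, one message at a time) fit in a short sketch. This is a minimal hand-rolled illustration, not Akka or Orleans API; the `CounterActor` class and its message shapes are invented for the example.

```typescript
// A minimal actor: encapsulated state, a mailbox, messages processed one at a time.
type Message =
  | { type: "increment" }
  | { type: "get"; reply: (n: number) => void };

class CounterActor {
  private count = 0;              // state is private to the actor
  private mailbox: Message[] = [];
  private processing = false;

  send(msg: Message) {            // the only way in is a message
    this.mailbox.push(msg);
    this.drain();
  }

  private drain() {
    if (this.processing) return;  // guarantees one-message-at-a-time semantics
    this.processing = true;
    while (this.mailbox.length > 0) {
      const msg = this.mailbox.shift()!;
      if (msg.type === "increment") this.count += 1;
      else msg.reply(this.count);
    }
    this.processing = false;
  }
}

const counter = new CounterActor();
counter.send({ type: "increment" });
counter.send({ type: "increment" });
counter.send({ type: "get", reply: (n) => console.log(n) }); // logs 2
```

A microservice could host thousands of such actors internally while exposing a single HTTP API to its peers, which is exactly the combination described above.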
The Evolving Role of AI Agents
The naming of AI agents depends on context, marketing, and sometimes misunderstandings.
- At the product level (e.g., a chatbot in your company), an AI agent is an actor.
- At the company level (e.g., Google Mariner), an AI agent is a service.
Over time, the community may establish more precise terminology, such as micro-agent, AI actor, or AI service, to distinguish these concepts.
| Aspect | Actor | Service / Microservice |
|---|---|---|
| Granularity | Fine-grained | Coarse-grained |
| State | Internal, encapsulated | External, often stateless |
| Communication | Messages | APIs over network |
| Concurrency | Built-in, per actor | Depends on service design |
| Scaling | Within system or distributed | Horizontal, per service |
| Fault Tolerance | Supervision hierarchies | External patterns/mechanisms |
| Use Cases | Real-time, event-driven | Enterprise, modular |
If you compare tools like CrewAI Agent, Autogen Agent, or LangChain Agent to this table, you'll see they function more like actors.
As for an AI service or AI microservice, I haven't fully defined this for myself yet. It might be something we don't need, or a concept still waiting to be built. I had hopes for Anthropic MCP to fill this role, but it's not quite there yet. (I wrote more about this here.)
LangGraph: Moving Toward Multi-Actor Applications
Recently, LangGraph introduced commands and redefined itself from a "multi-agent" to a "multi-actor" framework. It now focuses on stateful, multi-actor applications with LLMs for building workflows.
I believe this is absolutely the right direction. Let's take a closer look and build an example.
Source code on GitHub
// Consolidated imports from @langchain/langgraph
import { Annotation, Command, START, StateGraph } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import dotenv from "dotenv";
dotenv.config();
const StateAnnotation = Annotation.Root({
  customerInquiry: Annotation<string>({
    value: (_prev, newValue) => newValue,
    default: () => "",
  }),
  route: Annotation<string>({
    value: (_prev, newValue) => newValue,
    default: () => "",
  }),
});
const model = new ChatOpenAI({
  modelName: "gpt-4o-mini",
});
const routeUserRequest = async (state: typeof StateAnnotation.State) => {
  // withStructuredOutput takes the JSON schema directly, not a { schema: ... } wrapper
  const response = await model.withStructuredOutput<{ route: "quotation" | "refund" }>({
    type: "object",
    properties: {
      route: { type: "string", enum: ["quotation", "refund"] },
    },
    required: ["route"],
  }).invoke([
    new SystemMessage("Please categorize the user request"),
    new HumanMessage(state.customerInquiry),
  ]);

  const routeToFunctionName = {
    quotation: "quotationAgent",
    refund: "refundAgent",
  } as const;

  // Command carries both the state update and the next node to run
  return new Command({
    update: {
      route: response.route,
    },
    goto: routeToFunctionName[response.route],
  });
};
const quotationAgent = (state: typeof StateAnnotation.State) => {
  return {};
};

const refundAgent = (state: typeof StateAnnotation.State) => {
  return {};
};
const graph = new StateGraph(StateAnnotation)
  .addNode("routeUserRequest", routeUserRequest, { ends: ["quotationAgent", "refundAgent"] })
  .addNode("quotationAgent", quotationAgent)
  .addNode("refundAgent", refundAgent)
  .addEdge(START, "routeUserRequest")
  .compile();
async function main() {
  try {
    await graph.invoke({ customerInquiry: "Hi, I need a refund" });
    console.log("Done");
  } catch (error) {
    console.error("Error in main function:", error);
  }
}

main();
This approach removes explicit edge declarations, leaving only nodes (actors).
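The core idea — each node returns both a state update and the name of its successor, so no edge table exists — can be shown without the library. Below is a plain-TypeScript sketch of that routing style; `run`, `Node`, and the node names are my own invented stand-ins for LangGraph's `Command` mechanics, not its API.

```typescript
// Command-style routing in miniature: a node returns { update, goto },
// and the runner follows goto instead of consulting an edge table.
type Node<S> = (state: S) => { update?: Partial<S>; goto?: string };

function run<S>(nodes: Record<string, Node<S>>, start: string, state: S): S {
  let current: string | undefined = start;
  while (current) {
    const { update, goto } = nodes[current](state);
    state = { ...state, ...update };
    current = goto; // the node itself chose the successor; undefined ends the run
  }
  return state;
}

type State = { inquiry: string; route: string };

const nodes: Record<string, Node<State>> = {
  router: (s) => ({
    update: { route: s.inquiry.includes("refund") ? "refund" : "quotation" },
    goto: s.inquiry.includes("refund") ? "refundAgent" : "quotationAgent",
  }),
  refundAgent: () => ({}),    // terminal: no goto
  quotationAgent: () => ({}),
};

const result = run(nodes, "router", { inquiry: "Hi, I need a refund", route: "" });
console.log(result.route); // "refund"
```

Notice that `nodes` is just a flat registry of named actors; the control flow lives entirely inside the nodes, which is what makes the explicit edges disappear.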
In the future, LangGraph might go beyond its graph-based structure. By adding a message broker, actor addresses, and autodiscovery, it could evolve into something like Microsoft Orleans.
The Future of AI Service Communication
Tools like LangChain/LangGraph are still evolving. Right now, they focus on in-service design and lack inter-service communication features, but they're starting to add capabilities for broader integration. For example, LangChain recently added OpenTelemetry support, which is critical for distributed systems.
The next big step for the community will be enabling seamless AI-to-AI service communication. Whether it comes through Anthropic MCP, LangChain, or other innovations, this will define the future of AI in distributed systems.