The ACP SDK allows you to wrap an existing agent, regardless of its framework or programming language, into a reusable and interoperable service. By implementing a simple interface, your agent becomes compatible with the ACP protocol: it can communicate over HTTP, interact with other agents in workflows, and exchange structured messages using a shared format.
Once wrapped, your agent becomes:
- Remotely callable over REST
- Composable with other agents in workflows
- Discoverable
- Reusable without modifying its internal logic
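That shared format is simply a list of messages, each made of typed parts. As a rough sketch, using the `Message` and `MessagePart` models from `acp_sdk.models` (the same models imported in the examples below), a plain-text message looks like this:

```python
from acp_sdk.models import Message, MessagePart

# A message is a list of typed parts; plain text is the simplest case
greeting = Message(parts=[MessagePart(content="Hello, agent!", content_type="text/plain")])
```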
## Simple agent
If you haven’t already, install the SDK: `uv add acp-sdk`
Wrapping an agent with ACP is as simple as annotating a Python function. Use the `@server.agent()` decorator to define your agent. The name is inferred from the function name, and the description is pulled from the docstring:
```python
from collections.abc import AsyncGenerator

from acp_sdk.models import Message
from acp_sdk.server import Context, RunYield, RunYieldResume, Server

# Create a new ACP server instance
server = Server()


@server.agent()
async def echo(input: list[Message], context: Context) -> AsyncGenerator[RunYield, RunYieldResume]:
    """Echoes everything"""
    for message in input:
        yield message


# Start the ACP server
server.run()
```
This is the minimal structure needed for an ACP-compliant agent. You now have an agent that can receive messages from other agents or be called over HTTP using the ACP protocol.
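For example, once the server is running you can call the `echo` agent from another process using the client bundled with the SDK (`acp_sdk.client.Client`). This is a minimal sketch that assumes the server's default address of `http://localhost:8000`; adjust the URL if you configured the server differently:

```python
import asyncio

from acp_sdk.client import Client
from acp_sdk.models import Message, MessagePart


async def call_echo() -> None:
    # Connect to the ACP server started above (default: localhost:8000)
    async with Client(base_url="http://localhost:8000") as client:
        run = await client.run_sync(
            agent="echo",  # the agent name inferred from the function name
            input=[Message(parts=[MessagePart(content="Howdy!", content_type="text/plain")])],
        )
        print(run.output)


asyncio.run(call_echo())
```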
## Simple LLM agent
Let’s look at an example of how a simple agent can call an LLM using the `beeai-framework` and return a response.
```python
from collections.abc import AsyncGenerator

from acp_sdk.models import Message, MessagePart
from acp_sdk.server import RunYield, RunYieldResume, Server
from beeai_framework.backend.chat import ChatModel
from beeai_framework.agents.react.agent import ReActAgent
from beeai_framework.backend.message import UserMessage
from beeai_framework.memory.token_memory import TokenMemory

server = Server()


@server.agent()
async def llm(input: list[Message]) -> AsyncGenerator[RunYield, RunYieldResume]:
    """LLM agent that processes input and returns a response"""

    # Create an LLM instance
    llm = ChatModel.from_name("ollama:llama3.1")

    # Create a memory instance
    memory = TokenMemory(llm)

    # Add the incoming messages to memory
    for message in input:
        await memory.add(UserMessage(str(message)))

    # Create an agent with memory and tools
    agent = ReActAgent(llm=llm, tools=[], memory=memory)

    # Run the agent with the memory
    response = await agent.run()

    # Yield the response
    yield MessagePart(content=response.result.content[0].text)


# Start the ACP server
server.run()
```
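To try this agent you also need the framework itself (for example `uv add beeai-framework`) and a locally running Ollama server with the `llama3.1` model pulled; those are assumptions of this particular example rather than ACP requirements. Calling it then works the same way as with the `echo` agent above, for instance:

```python
import asyncio

from acp_sdk.client import Client
from acp_sdk.models import Message, MessagePart


async def ask_llm() -> None:
    async with Client(base_url="http://localhost:8000") as client:
        run = await client.run_sync(
            agent="llm",
            input=[Message(parts=[MessagePart(content="Say hello in one sentence.", content_type="text/plain")])],
        )
        # Each output item is an ACP Message; as in the server example above,
        # str(message) gives its textual content
        for message in run.output:
            print(str(message))


asyncio.run(ask_llm())
```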