Tutorials
Deploying a Remote Agent Server
In this tutorial, we'll deploy a remote agent using a FastAPI server.
Step 1: Set Up the Project Structure
Create a new Python file called remote_agent_server.py and import all required packages:
remote_agent_server.py
"""
A remote agent server using Moya.
"""
from moya.agents.openai_agent import OpenAIAgent, OpenAIAgentConfig
from moya.memory.in_memory_repository import InMemoryRepository
from moya.tools.tool_registry import ToolRegistry
from moya.tools.memory_tool import MemoryTool
import asyncio
import uvicorn
from fastapi import FastAPI, Request, HTTPException
from fastapi.responses import StreamingResponse
Step 2: Set Up an Agent
Next, we'll set up an OpenAI Agent that can answer queries:
remote_agent_server.py (continued)
def setup_agent():
    """Set up OpenAI agent with memory capabilities."""
    # Set up memory components
    memory_repo = InMemoryRepository()
    memory_tool = MemoryTool(memory_repository=memory_repo)
    tool_registry = ToolRegistry()
    tool_registry.register_tool(memory_tool)

    # Create and set up agent
    agent_config = OpenAIAgentConfig(
        system_prompt="You are a remote agent that specializes in telling jokes and being entertaining.",
        model_name="gpt-4o",
        temperature=0.8,
        max_tokens=1000
    )
    agent = OpenAIAgent(
        agent_name="remote_joke_agent",
        description="Remote agent specialized in humor",
        agent_config=agent_config,
        tool_registry=tool_registry
    )
    agent.setup()
    return agent

# Initialize agent at startup
agent = setup_agent()
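Before starting the server, make sure your OpenAI credentials are available. The agent uses the OpenAI SDK, which typically reads the key from the environment (check your Moya version for how the key is actually passed):

```shell
# Assumes the OpenAI SDK reads the key from the environment;
# replace the placeholder with your actual key.
export OPENAI_API_KEY="sk-..."
```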
Step 3: Create Endpoints to Interact with the Agent
We create two endpoints, health and chat. The health endpoint is used to check whether the server is running properly, and the chat endpoint is used to interact with the agent.
remote_agent_server.py (continued)
# Create the FastAPI application that the endpoint decorators below attach to
app = FastAPI()

@app.get("/health")
async def health_check():
    """Health check endpoint."""
    return {"status": "healthy", "agent": agent.agent_name}
@app.post("/chat")
async def chat(request: Request):
"""Handle normal chat requests using OpenAI agent."""
data = await request.json()
message = data['message']
thread_id = data.get('thread_id', 'default_thread')
# Store user message if memory tool is available
if agent.tool_registry:
try:
agent.call_tool(
tool_name="MemoryTool",
method_name="store_message",
thread_id=thread_id,
sender="user",
content=message
)
except Exception as e:
print(f"Error storing user message: {e}")
# Get response from agent
response = agent.handle_message(message, thread_id=thread_id)
# Store agent response if memory tool is available
if agent.tool_registry:
try:
agent.call_tool(
tool_name="MemoryTool",
method_name="store_message",
thread_id=thread_id,
sender=agent.agent_name,
content=response
)
except Exception as e:
print(f"Error storing agent response: {e}")
return {"response": response}
Step 4: Start the uvicorn server
Now, let's start the FastAPI server using uvicorn:
remote_agent_server.py (continued)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
You should now be able to chat with your agent by sending POST requests to http://localhost:8000/chat.
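To talk to the agent from Python, here is a minimal client sketch using only the standard library (the chat helper and SERVER_URL are our own names, and it assumes the server above is running on localhost:8000):

```python
import json
import urllib.request

SERVER_URL = "http://localhost:8000"  # assumes the server from this tutorial is running here

def chat(message: str, thread_id: str = "default_thread") -> str:
    """POST a message to the /chat endpoint and return the agent's reply."""
    body = json.dumps({"message": message, "thread_id": thread_id}).encode("utf-8")
    req = urllib.request.Request(
        f"{SERVER_URL}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires the server to be running:
# print(chat("Tell me a joke"))
```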
Note: This example is similar to the remote_agent_server.py example included in the Moya repository. You can run that example directly with python examples/remote_agent_server.py.