This guide helps you quickly create a Remote Rollout server. We provide a ready-to-use template repository so you can get started immediately.
Create Your Project from the Template
Clone and Install Dependencies
Clone your newly created repository and install its dependencies. If you don't have uv installed, you can also use pip:

pip install osmosis-ai[server]
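For reference, a typical clone-and-install flow looks like the following. The repository URL is a placeholder for your own copy of the template, and uv sync assumes the template's pyproject.toml already declares its dependencies:

git clone https://github.com/<your-username>/my-rollout-server.git
cd my-rollout-server
uv sync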
Login to Osmosis Platform
Authenticate with the Osmosis Platform to enable server registration. This opens your browser for authentication; after logging in, your credentials are saved to ~/.config/osmosis/credentials.json. You can verify your login status at any time with the CLI.

The server uses your credentials to register with the Osmosis Platform, enabling the training system to discover and connect to your server.
Run the Server
Start the server using the SDK CLI:

uv run osmosis serve -m server:agent_loop

You should see output similar to:

INFO: Osmosis RolloutServer starting...
INFO: Agent: calculator
INFO: Tools: 4
INFO: Uvicorn running on http://0.0.0.0:9000
Template Structure
The template repository contains the following key components:
my-rollout-server/
├── server.py # Agent Loop main file
├── tools.py # Tool definitions and execution logic
├── rewards.py # Reward computation (optional)
├── test_data.jsonl # Test dataset
├── pyproject.toml # Project dependencies
└── README.md # Documentation
Core Concept: Agent Loop
The Agent Loop is the core of Remote Rollout. It inherits from RolloutAgentLoop and implements two required methods:
from osmosis_ai.rollout import (
    RolloutAgentLoop,
    RolloutContext,
    RolloutResult,
    RolloutRequest,
    create_app,
)

class CalculatorAgent(RolloutAgentLoop):
    """Simple calculator agent example"""

    name = "calculator"  # REQUIRED: unique agent identifier

    def get_tools(self, request: RolloutRequest):
        """REQUIRED: Return available tools for this agent

        Called when the /v1/rollout/init request is received.
        The returned tools are included in the InitResponse sent to the training platform.
        """
        return CALCULATOR_TOOLS

    async def run(self, ctx: RolloutContext) -> RolloutResult:
        """REQUIRED: Execute the agent loop

        This is where you implement your agent logic:
        1. Call the LLM to get a response
        2. Execute tool calls
        3. Continue the conversation until completion
        """
        messages = list(ctx.request.messages)

        for turn in range(ctx.request.max_turns):
            # Call LLM
            result = await ctx.chat(messages, **ctx.request.completion_params)
            messages.append(result.message)

            # Check if done (no tool calls)
            if not result.has_tool_calls:
                break

            # Execute tool calls
            for tool_call in result.tool_calls:
                # ... parse and execute tool ...
                output = await execute_tool(tool_call)
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.get("id"),
                    "content": str(output)
                })
                ctx.record_tool_call()  # Optional: record metrics

        return ctx.complete(messages)

# Export agent instance for CLI
agent_loop = CalculatorAgent()

# Create FastAPI app
app = create_app(agent_loop)
Tools are defined using the OpenAI function calling format:
CALCULATOR_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "Multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number", "description": "First number"},
                    "b": {"type": "number", "description": "Second number"}
                },
                "required": ["a", "b"]
            }
        }
    }
]
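The run() loop above delegates to an execute_tool() helper that the template leaves for you to implement. Below is a minimal sketch, assuming each tool call is an OpenAI-style dictionary whose function.arguments field is a JSON-encoded string; the handler functions are hypothetical stand-ins for the calculator's four operations:

import json

# Hypothetical implementations of the calculator operations.
def add(a, b): return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b): return a / b

# Map tool names from CALCULATOR_TOOLS to their Python implementations.
TOOL_HANDLERS = {"add": add, "subtract": subtract, "multiply": multiply, "divide": divide}

async def execute_tool(tool_call: dict):
    """Parse an OpenAI-style tool call and dispatch it to the matching handler."""
    name = tool_call["function"]["name"]
    # Arguments arrive as a JSON-encoded string in the OpenAI format.
    args = json.loads(tool_call["function"].get("arguments") or "{}")
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return f"Unknown tool: {name}"
    return handler(**args)

Returning a plain value that str() can render keeps the output compatible with the tool message appended in run().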
Key Methods:
get_tools() - Returns tool definitions when /v1/rollout/init is called
run() - Executes your agent loop logic
ctx.chat() - Calls the training platform’s LLM endpoint
ctx.record_tool_call() - Optional, records tool call count for metrics
ctx.complete() - Returns the final result
Validation and Testing
Validate Agent Configuration
Before running, validate your agent implementation using the CLI:
uv run osmosis validate -m server:agent_loop
Expected output:
✓ Agent loop validated successfully
Name: calculator
Tools: 4 (add, subtract, multiply, divide)
Local Testing
The template includes a sample dataset, test_data.jsonl. Set your API key and run tests:
export OPENAI_API_KEY="your-key-here"
# Run batch tests
uv run osmosis test -m server:agent_loop -d test_data.jsonl
# Limit number of tests
uv run osmosis test -m server:agent_loop -d test_data.jsonl --limit 10
# Use a different model
uv run osmosis test -m server:agent_loop -d test_data.jsonl --model anthropic/claude-sonnet-4-20250514
Interactive Debugging
Step through agent execution for debugging:
uv run osmosis test -m server:agent_loop -d test_data.jsonl --interactive
Interactive commands:
n - Execute next LLM call
c - Continue to completion
m - Show message history
t - Show available tools
q - Quit
Development Tips
Enable Hot Reload
Enable auto-reload during development to automatically restart the server when code changes:
uv run osmosis serve -m server:agent_loop --reload
Enable Debug Logging
Write execution traces to files for debugging:
uv run osmosis serve -m server:agent_loop --log ./logs
Log file structure:
logs/
└── {timestamp}/
├── rollout-abc123.jsonl
└── rollout-def456.jsonl
Local Mode (No Login Required)
Skip login and platform registration for local development:
uv run osmosis serve -m server:agent_loop --local
Local mode disables API key authentication and platform registration. This is intended for development only and should not be used in production.
Test datasets must include the following columns:
| Column | Description |
|---|---|
| system_prompt | System prompt for the LLM |
| user_prompt | User message to start the conversation |
| ground_truth | Expected output (for reward computation) |
Supported formats: .json, .jsonl, .parquet
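For example, a single line of a JSONL dataset might look like the following (the prompts and ground truth are illustrative, not taken from the actual template):

{"system_prompt": "You are a helpful calculator assistant. Use the provided tools to compute answers.", "user_prompt": "What is 12 multiplied by 7?", "ground_truth": "84"}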
Next Steps