@osmosis_rubric decorator and delegate scoring to a language model based on a rubric description.
Basic Example
File: reward_rubric/reward_rubric_openai.py
Function Signature
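The exact signature is defined in the file above; purely as a hypothetical sketch (the stand-in decorator and the parameter names here are assumptions, not the library's actual API), a rubric-decorated reward function might look like:

```python
# Hypothetical sketch -- the decorator below is a local stand-in, not the
# real @osmosis_rubric implementation, and the reward function's parameter
# names are assumptions for illustration.
def osmosis_rubric(fn):
    """Stand-in: the real decorator delegates scoring to an LLM judge."""
    fn.is_rubric_reward = True  # illustrative marker only
    return fn

@osmosis_rubric
def helpfulness_reward(solution_str: str, ground_truth: str = "") -> float:
    # Placeholder scoring; the real library would send a rubric and
    # solution_str to the configured model instead of string matching.
    return 1.0 if ground_truth and ground_truth in solution_str else 0.0
```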
The evaluate_rubric Function
The evaluate_rubric helper function handles the LLM evaluation:
Parameters
| Parameter | Type | Description |
|---|---|---|
| rubric | str | Natural language description of the evaluation criteria |
| solution_str | str | The LLM output to evaluate |
| model_info | dict | Provider, model, and API key configuration |
| ground_truth | str | Expected correct answer or reference |
| metadata | dict | Optional additional context |
| score_min | float | Minimum score value (default: 0.0) |
| score_max | float | Maximum score value (default: 1.0) |
| return_details | bool | Whether to return a detailed explanation |
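Putting the parameters above together, a call might look like the following. This is a self-contained sketch with a stubbed scorer (the real helper sends the rubric and output to the configured provider for judging); everything beyond the documented parameter names is an assumption.

```python
# Stubbed sketch of evaluate_rubric -- parameter names follow the table
# above; the body is a local fake, not the library's implementation.
def evaluate_rubric(rubric, solution_str, model_info,
                    ground_truth=None, metadata=None,
                    score_min=0.0, score_max=1.0,
                    return_details=False):
    # The real helper asks the configured LLM to score solution_str
    # against the rubric; here we fake the judge's raw score.
    raw = 0.9
    score = max(score_min, min(score_max, raw))  # clamp into [score_min, score_max]
    if return_details:
        return {"score": score, "explanation": "stubbed judge rationale"}
    return score

result = evaluate_rubric(
    rubric="Answer must correctly name the capital of France.",
    solution_str="The capital of France is Paris.",
    model_info={"provider": "openai", "model": "gpt-4o", "api_key": "sk-..."},
    ground_truth="Paris",
    return_details=True,
)
```

With `return_details=True` the sketch returns a dict carrying both the clamped score and an explanation; with the default `False` it returns just the float.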
Supported Providers
OpenAI
Anthropic
For additional providers (Google Gemini, xAI Grok, OpenRouter, Cerebras), see the API Reference.
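As an illustration of the provider setup, the `model_info` dicts below show plausible configurations for the two providers above. The key names follow the parameter table's "Provider, model, and API key configuration" description but are assumptions, and the model names are examples only:

```python
import os

# Illustrative model_info configurations -- key names and model names
# are assumptions for this sketch, not an exhaustive or verified list.
openai_info = {
    "provider": "openai",
    "model": "gpt-4o",
    "api_key": os.environ.get("OPENAI_API_KEY", ""),
}

anthropic_info = {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514",
    "api_key": os.environ.get("ANTHROPIC_API_KEY", ""),
}
```

Reading keys from environment variables keeps secrets out of source control; either dict would then be passed as the `model_info` argument.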