# Verifiers
Environments for LLM Reinforcement Learning
## Overview
Verifiers is a library of modular components for creating RL environments and training LLM agents. Verifiers includes an async GRPO implementation built around the `transformers` Trainer, is supported by `prime-rl` for large-scale FSDP training, and can easily be integrated into any RL framework which exposes an OpenAI-compatible inference client. In addition to RL training, Verifiers can be used directly for building LLM evaluations, creating synthetic data pipelines, and implementing agent harnesses.
Full documentation is available here.
## Setup
We recommend using `verifiers` along with uv for dependency management in your own project:
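(The commands below are a sketch; the project name is a placeholder.)

```bash
# create and enter a new uv-managed project (name is illustrative)
uv init my-rl-project && cd my-rl-project
```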
For local (CPU) development and evaluation with API models, do:
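(Sketch; assumes the package is published on PyPI as `verifiers`.)

```bash
# add the core library as a project dependency
uv add verifiers
```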
For training on GPUs with `vf.GRPOTrainer`, do:
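(Sketch; the `[all]` extra name and the separate flash-attn step are assumptions about the packaging, so check the docs for the exact extras.)

```bash
# install with GPU/training extras (extra name is assumed)
uv add 'verifiers[all]'
# flash-attn typically needs to be installed without build isolation
uv pip install flash-attn --no-build-isolation
```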
To use the latest `main` branch, do:
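(Sketch using a direct Git reference.)

```bash
# track the latest main branch from GitHub
uv add "verifiers @ git+https://github.com/willccbb/verifiers.git"
```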
To use with `prime-rl`, see here.
To install `verifiers` from source for core library development, do:
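(Sketch; the extras flag and any additional dev-tooling steps are assumptions.)

```bash
git clone https://github.com/willccbb/verifiers.git
cd verifiers
uv sync --all-extras  # create a local virtualenv with all optional dependencies
```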
In general, we recommend that you build and train Environments *with* `verifiers`, not *in* `verifiers`. If you find yourself needing to clone and modify the core library in order to implement key functionality for your project, we’d love for you to open an issue so that we can try and streamline the development experience. Our aim is for `verifiers` to be a reliable toolkit to build on top of, and to minimize the “fork proliferation” which often pervades the RL infrastructure ecosystem.
## Environments
Environments in Verifiers are installable Python modules which can specify dependencies in a `pyproject.toml`, and which expose a `load_environment` function for instantiation by downstream applications (e.g. trainers). See `environments/` for examples.
To initialize a blank Environment module template, do:
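(Sketch; the environment name is a placeholder, and `vf-init --help` lists the available options.)

```bash
# scaffold a blank environment module (name is illustrative)
vf-init my-new-environment
```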
To install an Environment module into your project, do:
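(Sketch; the module name is a placeholder.)

```bash
# install a local environment module into the current project
vf-install my-new-environment
```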
To install an Environment module from this repo’s `environments/` folder, do:
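(Sketch; the `--from-repo` flag and environment name are assumptions, so check `vf-install --help`.)

```bash
# install one of the example environments shipped in the repo (flag is assumed)
vf-install math-python --from-repo
```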
Once an Environment module is installed, you can create an instance of the Environment using `load_environment`, passing any necessary args:
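(Sketch; the environment name and keyword argument are placeholders.)

```python
import verifiers as vf

# load an installed environment by name; extra kwargs are forwarded to its load_environment()
env = vf.load_environment("my-new-environment", difficulty="easy")
```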
To run a quick evaluation of your Environment with an API-based model, do:
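(Sketch; the model name and flags are illustrative.)

```bash
# run a small evaluation against an OpenAI-compatible API model
vf-eval my-new-environment -m gpt-4.1-mini -n 10
```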
The core elements of Environments in `verifiers` are:
- **Datasets**: a Hugging Face `Dataset` with a `prompt` column for inputs, and either `answer` (str) or `info` (dict) columns for evaluation
- **Rollout logic**: interactions between models and the environment (e.g. `env_response` + `is_completed` for any `MultiTurnEnv`)
- **Rubrics**: an encapsulation for one or more reward functions
- **Parsers**: optional; an encapsulation for reusable parsing logic
We support both `/v1/chat/completions`-style and `/v1/completions`-style inference via OpenAI clients, though we generally recommend `/v1/chat/completions`-style inference for the vast majority of applications. Both the included `GRPOTrainer` and `prime-rl` support the full set of `SamplingParams` exposed by vLLM (via its OpenAI-compatible server interface), and leveraging this will often be the appropriate way to implement rollout strategies requiring finer-grained control, such as interrupting and resuming generations for interleaved tool use, or enforcing reasoning budgets.
The primary constraint we impose on rollout logic is that token sequences must be increasing, i.e. once a token has been added to a model’s context in a rollout, it must remain as the rollout progresses. Note that this causes issues with some popular reasoning models such as the Qwen3 and DeepSeek-R1-Distill series; see Footguns for guidance on adapting these models to support multi-turn rollouts.
### SingleTurnEnv
For tasks requiring only a single response from a model for each prompt, you can use `SingleTurnEnv` directly by specifying a Dataset and a Rubric. Rubrics are sets of reward functions, which can be either sync or async.
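A minimal sketch assuming a `question`/`answer` dataset; the exact constructor arguments for `Rubric` and `SingleTurnEnv` may differ slightly across versions:

```python
import verifiers as vf
from datasets import Dataset

# toy dataset with the question/answer columns described below
dataset = Dataset.from_dict({
    "question": ["What is 2 + 2?", "What is 3 * 7?"],
    "answer": ["4", "21"],
})

def exact_match(completion, answer, parser, **kwargs):
    # 1.0 if the parsed final answer matches the reference answer, else 0.0
    return 1.0 if parser.parse_answer(completion) == answer else 0.0

parser = vf.ThinkParser()
rubric = vf.Rubric(funcs=[exact_match], parser=parser)
env = vf.SingleTurnEnv(dataset=dataset, parser=parser, rubric=rubric)
```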
Datasets should be formatted with columns for:
- `'prompt' (List[ChatMessage])` OR `'question' (str)` fields
  - `ChatMessage` = e.g. `{'role': 'user', 'content': '...'}`
  - if `question` is set instead of `prompt`, you can also pass `system_prompt (str)` and/or `few_shot (List[ChatMessage])`
- `answer (str)` AND/OR `info (dict)`
- `task (str)`: optional, used by `EnvGroup` and `RubricGroup` for orchestrating composition of Environments and Rubrics
The following named attributes are available for use by reward functions in your Rubric:
- `prompt`: sequence of input messages
- `completion`: sequence of messages generated during rollout by the model and the Environment
- `answer`: primary answer column; optional if `info` is used
- `state`: can be modified during rollout to accumulate any metadata (`state['responses']` includes full OpenAI response objects by default)
- `info`: auxiliary info needed for reward computation (e.g. test cases); optional if `answer` is used
- `task`: tag for task type (used by `EnvGroup` and `RubricGroup`)
- `parser`: the declared parser object. Note: `vf.Parser().get_format_reward_func()` is a no-op (always 1.0); use `vf.ThinkParser` or a custom parser if you want a real format-adherence reward.
For tasks involving LLM judges, you may wish to use `vf.JudgeRubric()` for managing requests to auxiliary models.
Note on concurrency: environment APIs accept `max_concurrent` to control parallel rollouts. The `vf-eval` CLI currently exposes `--max-concurrent-requests`; ensure this maps to your environment’s concurrency as expected.
`vf-eval` also supports specifying `sampling_args` as a JSON object, which is sent to the vLLM inference engine:
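(Sketch; the flag name is an assumption, so check `vf-eval --help`.)

```bash
# forward sampling parameters to the inference engine (values are illustrative)
vf-eval my-new-environment -m gpt-4.1-mini \
  --sampling-args '{"temperature": 0.7, "max_tokens": 2048}'
```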
Use `vf-eval -s` to save outputs as dataset-formatted JSON, and view all locally-saved eval results with `vf-tui`.
### ToolEnv
For many applications involving tool use, you can use `ToolEnv` to leverage models’ native tool/function-calling capabilities in an agentic loop. Tools can be specified as generic Python functions (with type hints and docstrings), which will then be passed in JSON schema form to each inference request.
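A minimal sketch; the tool is a toy function and the `max_turns` argument name is an assumption:

```python
import verifiers as vf
from datasets import Dataset

def ascii_sum(text: str) -> int:
    """Return the sum of the ASCII codepoints of a string (toy tool for illustration)."""
    return sum(ord(c) for c in text)

# toy dataset; columns follow the format described in the SingleTurnEnv section
dataset = Dataset.from_dict({
    "question": ["Use the tool to compute the ASCII codepoint sum of 'abc'."],
    "answer": ["294"],
})

def correct_answer(completion, answer, **kwargs):
    # crude check on the final message content (illustrative only)
    return 1.0 if answer in completion[-1]["content"] else 0.0

rubric = vf.Rubric(funcs=[correct_answer])

# tools are plain Python functions; their signatures/docstrings are converted to JSON schemas
env = vf.ToolEnv(
    dataset=dataset,
    rubric=rubric,
    tools=[ascii_sum],
    max_turns=4,  # assumed argument name for capping the agentic loop
)
```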
In cases where your tools require heavy computational resources, we recommend hosting your tools as standalone servers (e.g. MCP servers) and creating lightweight wrapper functions to pass to `ToolEnv`. Parallel tool call support is enabled by default.
For training or self-hosted endpoints, you’ll want to enable auto tool choice in vLLM with the appropriate parser. If your model does not support native tool calling, you may find the `XMLParser` abstraction useful for rolling your own tool-call parsing on top of `MultiTurnEnv`; see `environments/xml_tool_env` for an example.
### MultiTurnEnv
Both `SingleTurnEnv` and `ToolEnv` are instances of `MultiTurnEnv`, which exposes an interface for writing custom Environment interaction protocols. The two methods you must override are `env_response` and `is_completed`.
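A sketch of what a custom subclass might look like; the exact signatures (and whether the methods are async) are assumptions, so check the `MultiTurnEnv` base class:

```python
import verifiers as vf

class RefinementEnv(vf.MultiTurnEnv):
    """Toy protocol: ask the model to refine its answer once before stopping."""

    async def is_completed(self, messages, state, **kwargs) -> bool:
        # stop once the model has produced two assistant turns (signature assumed)
        assistant_turns = sum(1 for m in messages if m["role"] == "assistant")
        return assistant_turns >= 2

    async def env_response(self, messages, state, **kwargs):
        # return the environment's next message(s) along with the (possibly updated) state
        return [{"role": "user", "content": "Please refine your answer."}], state
```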
If your application requires more fine-grained control than is allowed by `MultiTurnEnv`, you may want to inherit from the base `Environment` class directly and override the `rollout` method.
## Training
### GRPOTrainer
The included trainer (`vf.GRPOTrainer`) supports running GRPO-style RL training via Accelerate/DeepSpeed, and uses vLLM for inference. It supports both full-parameter finetuning and LoRA, and is optimized for efficiently training dense transformer models on 2-16 GPUs.
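A sketch of a typical training script; the helper names (`get_model_and_tokenizer`, `grpo_defaults`) and argument names are assumptions, and the model is illustrative:

```python
# train.py -- minimal GRPO training sketch (helper/argument names may differ across versions)
import verifiers as vf

env = vf.load_environment("my-new-environment")
model, tokenizer = vf.get_model_and_tokenizer("Qwen/Qwen2.5-7B-Instruct")

trainer = vf.GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    env=env,
    args=vf.grpo_defaults(run_name="my-run"),
)
trainer.train()
```

In practice you would launch a script like this with `accelerate` or `torchrun`, alongside a separately-hosted vLLM server for rollouts.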
Alternatively, you can train environments with the external `prime-rl` project (FSDP-first orchestration). See the `prime-rl` README for installation and examples. For example:
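(Sketch; the `prime-rl` entry point and flags below are assumptions, so follow its README for the real commands.)

```bash
# install your environment module where prime-rl can import it
vf-install my-new-environment
# launch trainer + orchestrator with prime-rl-style TOML configs (flags are assumed)
uv run rl \
  --trainer @ configs/my-env/train.toml \
  --orchestrator @ configs/my-env/orch.toml
```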
### Troubleshooting
- Ensure your `wandb` and `huggingface-cli` logins are set up (or set `report_to=None` in `training_args`). You should also have something set as your `OPENAI_API_KEY` in your environment (it can be a dummy key for vLLM).
- If using high max concurrency, increase the number of allowed open sockets (e.g. `ulimit -n 4096`).
- On some setups, inter-GPU communication can hang or crash during vLLM weight syncing. This can usually be alleviated by setting (or unsetting) `NCCL_P2P_DISABLE=1` in your environment (or potentially `NCCL_CUMEM_ENABLE=1`). Try this as your first step if you experience NCCL-related issues.
- If problems persist, please open an issue.
### Resource Requirements
`GRPOTrainer` is optimized for setups with at least 2 GPUs, scaling up to multiple nodes. 2-GPU setups with sufficient memory to enable small-scale experimentation can be rented for <$1/hr.
### PRIME-RL
If you do not require LoRA support, you may want to use the `prime-rl` trainer, which natively supports Environments created using `verifiers`, is more optimized for performance and scalability via FSDP, includes a broader set of configuration options and user-experience features, and has more battle-tested defaults. Both trainers support asynchronous rollouts, and use a one-step off-policy delay by default for overlapping training and inference. See the `prime-rl` docs for usage instructions.
## Further Documentation
See the full docs for more information.
## Contributions
Verifiers warmly welcomes community contributions! Please open an issue or PR if you encounter bugs or other pain points during your development, or start a discussion for more open-ended questions.
Please note that the core `verifiers/` library is intended to be a relatively lightweight set of reusable components rather than an exhaustive catalog of RL environments. For applications of `verifiers` (e.g. “an Environment for XYZ task”), you are welcome to submit a PR for a self-contained module that lives within `environments/` if it serves as a canonical example of a new pattern. Stay tuned for more info shortly about our plans for supporting community Environment contributions 🙂
## Citation
If you use this code in your research, please cite:
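(The entry below is a best-effort placeholder assembled from the repo metadata; prefer the entry in the upstream README if it differs.)

```bibtex
@misc{brown_verifiers_2025,
  title  = {Verifiers: Environments for LLM Reinforcement Learning},
  author = {Brown, William},
  year   = {2025},
  url    = {https://github.com/willccbb/verifiers}
}
```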
## Roadmap
- A community Environments hub for crowdsourcing, sharing, and discovering new RL environments built with `verifiers`
- Default patterns for hosted resources such as code sandboxes, auxiliary models, and MCP servers
- Multimodal input support
- Non-increasing token sequences via REINFORCE