Specify which Large Language Model providers and models Plexe should use for its agent system.
The `provider` argument in `model.build()` controls which LLM is used. Pass a single model string in the form `"vendor/model_name"`; this model will be used for all agent tasks by default. If the `provider` argument is omitted, Plexe defaults to `"openai/gpt-4o-mini"`.
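A minimal sketch of passing a provider string to `model.build()`. The `Model` constructor arguments and the `datasets` parameter shown here are illustrative assumptions and may differ from the actual API; only the `provider` argument is the point of the example.

```python
import plexe

# Assumed constructor arguments; adjust to your actual use case.
model = plexe.Model(
    intent="Predict whether a customer will churn",
    input_schema={"tenure_months": int, "monthly_spend": float},
    output_schema={"churn": bool},
)

# A single "vendor/model_name" string applies to all agent tasks.
# Omitting `provider` falls back to "openai/gpt-4o-mini".
model.build(
    datasets=[training_df],   # assumed: a pandas DataFrame defined elsewhere
    provider="openai/gpt-4o",
)
```

Consult your LLM provider's documentation (or the underlying LLM client library) for the exact model identifier strings it accepts.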
## `ProviderConfig` for Granular Control

For finer-grained control over which model handles which agent role, pass an instance of the `ProviderConfig` class instead of a single provider string. This allows you to use potentially stronger models for complex tasks like planning or coding, and faster, cheaper models for simpler tasks like tool usage or review.
The roles you can configure are:
- `default_provider`: Fallback provider used when a specific role isn't set.
- `orchestrator_provider`: For the main agent managing the workflow.
- `research_provider`: For the agent planning the ML solution.
- `engineer_provider`: For the agent writing the training code.
- `ops_provider`: For the agent writing the inference code.
- `tool_provider`: For agents performing internal tool calls (such as schema inference and metric selection).

`ProviderConfig` lets you optimize for both cost and capability by assigning different models to roles according to their complexity; see the sketch below. Refer to your LLM provider's documentation for available model identifiers and their capabilities.
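A sketch of a role-based configuration using the role names listed above. The import path, the exact model identifiers, and the assumption that the config object is passed through the same `provider` argument of `model.build()` are illustrative and may differ from the actual API.

```python
from plexe import ProviderConfig  # assumed import path

# Assign stronger models to planning and code-generation roles,
# and a cheaper model to orchestration and internal tool calls.
provider_config = ProviderConfig(
    default_provider="openai/gpt-4o-mini",
    orchestrator_provider="openai/gpt-4o-mini",
    research_provider="openai/gpt-4o",
    engineer_provider="openai/gpt-4o",
    ops_provider="openai/gpt-4o",
    tool_provider="openai/gpt-4o-mini",
)

# Assumed: the config object replaces the plain provider string.
model.build(
    datasets=[training_df],
    provider=provider_config,
)
```

The trade-off to tune here is cost versus capability: roles that write or plan code tend to benefit most from stronger models, while tool-calling and review roles are usually well served by smaller, faster ones.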