# Configure LLM Providers

Specify which Large Language Model providers and models Plexe should use for its agent system.
Plexe utilizes Large Language Models (LLMs) extensively through its agent system to perform tasks like planning, code generation, analysis, and schema inference. You can configure which LLM provider and model Plexe uses.
## Setting API Keys
Before configuring providers, ensure the necessary API keys are set as environment variables. Plexe uses LiteLLM to connect to various providers.
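For example, for OpenAI and Anthropic the standard environment variable names that LiteLLM reads are shown below; substitute your own keys for the placeholders:

```shell
# Set the API key for whichever provider you plan to use.
# LiteLLM reads these standard environment variables.
export OPENAI_API_KEY="sk-..."          # for openai/... models
export ANTHROPIC_API_KEY="sk-ant-..."   # for anthropic/... models
```

Only the providers you actually select need a key; an unset key for an unused provider is harmless.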
## Specifying the Provider in `model.build()`

The `provider` argument in `model.build()` controls which LLM is used.
### Simple Provider String

You can provide a single string in the format `"vendor/model_name"`. This model will be used for all agent tasks by default. Plexe defaults to `"openai/gpt-4o-mini"` if the `provider` argument is omitted.
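A minimal sketch of the string format (the split is shown only to illustrate the two parts; in practice you pass the whole string straight to `model.build(provider=...)`, and the model name here is just an example choice):

```python
# A provider string names a vendor and a model, separated by "/".
provider = "openai/gpt-4o"  # example; the default is "openai/gpt-4o-mini"

vendor, model_name = provider.split("/", 1)
print(vendor)      # openai
print(model_name)  # gpt-4o
```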
### Using `ProviderConfig` for Granular Control

For more advanced control, you can specify different models for different agent roles using the `ProviderConfig` class. This lets you use stronger models for complex tasks like planning or coding, and faster, cheaper models for simpler tasks like tool usage or review.
The roles you can configure are:

- `default_provider`: Fallback provider used when a specific role isn't set.
- `orchestrator_provider`: For the main agent managing the workflow.
- `research_provider`: For the agent planning the ML solution.
- `engineer_provider`: For the agent writing the training code.
- `ops_provider`: For the agent writing the inference code.
- `tool_provider`: For agents performing internal tool calls (such as schema inference and metric selection).
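The role-to-fallback behavior can be sketched with a stand-in dataclass. The field names mirror the roles above, but the class itself and its resolution logic are illustrative, not plexe's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for ProviderConfig: one field per agent role,
# with default_provider as the fallback (not plexe's real class).
@dataclass
class ProviderConfigSketch:
    default_provider: str = "openai/gpt-4o-mini"
    orchestrator_provider: Optional[str] = None
    research_provider: Optional[str] = None
    engineer_provider: Optional[str] = None
    ops_provider: Optional[str] = None
    tool_provider: Optional[str] = None

    def for_role(self, role: str) -> str:
        # Fall back to default_provider when a role isn't set explicitly.
        return getattr(self, f"{role}_provider") or self.default_provider

cfg = ProviderConfigSketch(
    research_provider="openai/gpt-4o",   # stronger model for planning
    engineer_provider="openai/gpt-4o",   # stronger model for code generation
    tool_provider="openai/gpt-4o-mini",  # cheaper model for tool calls
)
print(cfg.for_role("research"))  # openai/gpt-4o
print(cfg.for_role("ops"))       # unset, falls back: openai/gpt-4o-mini
```

The real `ProviderConfig` instance would then be passed as the `provider` argument to `model.build()` in place of a plain string.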
Using `ProviderConfig` allows optimizing for cost and capability by assigning different models to roles based on their complexity. Refer to your LLM provider's documentation for model identifiers and capabilities.