Understanding the role of LLM providers in the Plexe Python library.
Plexe specifies LLM providers as strings in the `"vendor/model_name"` format, for example `"openai/gpt-4o-mini"`.

By default a single provider string is used for every task, but you can assign different models to different agent roles through the `ProviderConfig` class. This lets you optimize for performance in critical stages like code writing while using faster, cheaper models for more routine tasks.
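Provider strings can be sanity-checked before building a model. The helper below is purely illustrative and not part of the Plexe API; it just demonstrates the `"vendor/model_name"` convention:

```python
def parse_provider(provider: str) -> tuple[str, str]:
    """Split a provider string into (vendor, model_name).

    Raises ValueError if the string is not in "vendor/model_name" form.
    """
    # partition splits at the first "/", keeping any further slashes in the model name
    vendor, sep, model_name = provider.partition("/")
    if not sep or not vendor or not model_name:
        raise ValueError(f"expected 'vendor/model_name', got {provider!r}")
    return vendor, model_name

print(parse_provider("openai/gpt-4o-mini"))  # → ('openai', 'gpt-4o-mini')
```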
A sensible starting point is `"openai/gpt-4o-mini"` or similar for most roles, reserving a stronger model for the engineering (code-writing) role:

```python
provider_config = ProviderConfig(
    default_provider="openai/gpt-4o-mini",
    engineer_provider="openai/gpt-4o",
)
```
A similar setup using Anthropic models, with the stronger Opus model handling both research and engineering:

```python
provider_config = ProviderConfig(
    default_provider="anthropic/claude-3-sonnet-20240229",
    research_provider="anthropic/claude-3-opus-20240229",
    engineer_provider="anthropic/claude-3-opus-20240229",
)
```
For fully local inference, you can point every role at a model served by Ollama:

```python
provider_config = ProviderConfig(default_provider="ollama/llama3")
```
Once defined, use the configuration by calling `model.build()` with a provider configuration.
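Putting it together, a minimal end-to-end sketch might look like the following. The `Model` constructor argument (`intent`) and the `provider` parameter name on `build()` are assumptions based on common Plexe usage, not verified against a specific release; check the library's reference for the exact signature.

```python
import plexe
from plexe import ProviderConfig

# Assumed API surface: verify parameter names against the Plexe docs.
provider_config = ProviderConfig(
    default_provider="openai/gpt-4o-mini",  # cheap model for routine roles
    engineer_provider="openai/gpt-4o",      # stronger model for code writing
)

model = plexe.Model(intent="Predict customer churn from account features.")
model.build(
    datasets=[...],            # your training data, e.g. a pandas DataFrame
    provider=provider_config,  # the role-to-model mapping defined above
)
```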