Callbacks Reference
Detailed reference documentation for the callback system in the Plexe Python library.
Plexe provides a flexible callback system that allows you to monitor and interact with the model building process. This reference documents all built-in callbacks and provides information on creating custom callbacks.
Base Callback Class
All callbacks inherit from the `Callback` base class:
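The exact signatures live in the library source; as a rough sketch of the interface, assuming each hook receives a `BuildStateInfo` object and uses the hook names listed under Callback Execution Order below:

```python
from plexe.callbacks import BuildStateInfo  # import path assumed


class Callback:
    """Illustrative sketch of the base class; all hooks default to no-ops."""

    def on_build_start(self, info: BuildStateInfo) -> None:
        """Called once, before the build process begins."""

    def on_iteration_start(self, info: BuildStateInfo) -> None:
        """Called at the start of each build iteration."""

    def on_iteration_end(self, info: BuildStateInfo) -> None:
        """Called at the end of each build iteration."""

    def on_build_end(self, info: BuildStateInfo) -> None:
        """Called once, after the build process finishes."""
```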
BuildStateInfo
The `BuildStateInfo` dataclass provides context about the current state of the build process and is passed to all callback methods:
| Attribute | Type | Description |
|---|---|---|
| `intent` | `str` | The natural language description of the model’s intent |
| `provider` | `str` | The provider (LLM) used for generating the model |
| `input_schema` | `Optional[Type[BaseModel]]` | The input schema for the model |
| `output_schema` | `Optional[Type[BaseModel]]` | The output schema for the model |
| `run_timeout` | `Optional[int]` | Maximum time in seconds for each individual training run |
| `max_iterations` | `Optional[int]` | Maximum number of iterations for the model building process |
| `timeout` | `Optional[int]` | Maximum total time in seconds for the entire model building process |
| `iteration` | `int` | Current iteration number (0-indexed) |
| `datasets` | `Optional[Dict[str, TabularConvertible]]` | Dictionary of datasets used for training |
| `node` | `Optional[Node]` | The solution node being evaluated in the current iteration |
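For example, an iteration hook can read these fields to report progress. A minimal sketch, assuming the import paths shown throughout this page (see Creating Custom Callbacks below for the subclassing pattern):

```python
from plexe.callbacks import Callback, BuildStateInfo  # import paths assumed


class ProgressPrinter(Callback):
    """Hypothetical callback that prints a one-line progress summary per iteration."""

    def on_iteration_start(self, info: BuildStateInfo) -> None:
        total = info.max_iterations if info.max_iterations is not None else "?"
        dataset_names = sorted(info.datasets) if info.datasets else []
        print(f"Iteration {info.iteration + 1}/{total} | datasets: {dataset_names} | intent: {info.intent!r}")
```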
Built-in Callbacks
ChainOfThoughtModelCallback
This callback logs detailed steps of the agent’s reasoning during the build process:
| Parameter | Type | Description |
|---|---|---|
| `emitter` | `Optional[Emitter]` | Object that handles outputting the chain of thought logs. Default: `ConsoleEmitter()` |
| `include_code` | `bool` | Whether to include generated code in the logs. Default: `True` |
This callback is automatically added when `chain_of_thought=True` is set in `model.build()`.
Example:
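A sketch of explicit usage; the import paths, the example data, and the `callbacks` argument to `model.build()` are assumptions, so adjust them to your installation:

```python
import pandas as pd

from plexe import Model  # import paths assumed
from plexe.callbacks import ChainOfThoughtModelCallback

# Placeholder data; assumes plain DataFrames are accepted as datasets.
df = pd.DataFrame({"age": [25, 40, 31], "churned": [0, 1, 0]})

model = Model(intent="Predict whether a customer will churn")
cot = ChainOfThoughtModelCallback(include_code=False)  # omit generated code from the logs

model.build(datasets=[df], callbacks=[cot])  # `callbacks` argument is an assumption
```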
MLFlowCallback
Integrates with MLflow for experiment tracking:
| Parameter | Type | Description |
|---|---|---|
| `tracking_uri` | `Optional[str]` | MLflow tracking server URI. Default: `None` (uses default MLflow URI) |
| `experiment_name` | `Optional[str]` | MLflow experiment name. Default: `None` (uses/creates “Default”) |
| `run_name_prefix` | `str` | Prefix for MLflow run names. Default: `"plexe_"` |
| `log_code` | `bool` | Whether to log generated code as artifacts. Default: `True` |
| `log_artifacts` | `bool` | Whether to log model artifacts. Default: `True` |
Example:
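A sketch of construction with the documented parameters; the import path is an assumption:

```python
from plexe.callbacks import MLFlowCallback  # import path assumed

mlflow_cb = MLFlowCallback(
    tracking_uri="http://localhost:5000",   # your MLflow tracking server
    experiment_name="plexe-experiments",    # experiment to log runs under
    run_name_prefix="plexe_",               # documented default, shown explicitly
    log_code=True,
    log_artifacts=True,
)

# Pass it to the build call like any other callback, e.g.:
# model.build(datasets=[df], callbacks=[mlflow_cb])
```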
Creating Custom Callbacks
You can create custom callbacks by subclassing `Callback` and implementing the desired methods:
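For instance, a callback that times each iteration. This is a sketch, assuming the `Callback` and `BuildStateInfo` import paths used earlier on this page:

```python
import logging
import time

from plexe.callbacks import Callback, BuildStateInfo  # import paths assumed

logger = logging.getLogger(__name__)


class TimingCallback(Callback):
    """Hypothetical callback that logs how long each build iteration takes."""

    def __init__(self) -> None:
        self._started_at = None

    def on_iteration_start(self, info: BuildStateInfo) -> None:
        self._started_at = time.monotonic()

    def on_iteration_end(self, info: BuildStateInfo) -> None:
        if self._started_at is not None:
            elapsed = time.monotonic() - self._started_at
            logger.info("Iteration %d finished in %.1fs", info.iteration, elapsed)

    def on_build_end(self, info: BuildStateInfo) -> None:
        logger.info("Build finished for intent: %r", info.intent)
```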
Using Multiple Callbacks
You can use multiple callbacks simultaneously:
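A sketch reusing the objects from the examples above; as before, the `callbacks` argument to `model.build()` is an assumption:

```python
from plexe.callbacks import ChainOfThoughtModelCallback, MLFlowCallback  # import paths assumed

callbacks = [
    ChainOfThoughtModelCallback(),                         # agent reasoning in the console
    MLFlowCallback(experiment_name="plexe-experiments"),   # experiment tracking
    TimingCallback(),                                      # custom callback sketched above
]

model.build(datasets=[df], callbacks=callbacks)  # `model` and `df` as in the earlier examples
```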
Callback Execution Order
When multiple callbacks are provided:
- All callbacks’ `on_build_start` methods are called in the order they appear in the list
- For each iteration:
  a. All callbacks’ `on_iteration_start` methods are called in order
  b. The iteration runs
  c. All callbacks’ `on_iteration_end` methods are called in order
- All callbacks’ `on_build_end` methods are called in order
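To make the ordering concrete, here is a sketch of a print-only callback and the call sequence you would expect for a single iteration:

```python
from plexe.callbacks import Callback  # import path assumed


class Announce(Callback):
    """Prints which hook fired for which callback instance."""

    def __init__(self, name: str) -> None:
        self.name = name

    def on_build_start(self, info):
        print(f"{self.name}: on_build_start")

    def on_iteration_start(self, info):
        print(f"{self.name}: on_iteration_start")

    def on_iteration_end(self, info):
        print(f"{self.name}: on_iteration_end")

    def on_build_end(self, info):
        print(f"{self.name}: on_build_end")


# With callbacks=[Announce("first"), Announce("second")] and one iteration,
# the expected call order is:
#   first: on_build_start,     second: on_build_start
#   first: on_iteration_start, second: on_iteration_start
#   (the iteration runs)
#   first: on_iteration_end,   second: on_iteration_end
#   first: on_build_end,       second: on_build_end
```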
Emitters for Chain of Thought
The `ChainOfThoughtModelCallback` uses a `ChainOfThoughtEmitter` to output the chain of thought logs. Built-in emitters include:
- `ConsoleEmitter`: Outputs logs to the console (stdout).
- `LoggingEmitter`: Sends logs to the Python logging system.
- `MultiEmitter`: Combines multiple emitters into one.
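For example, to route the chain of thought through the standard `logging` module rather than stdout (the import path below is an assumption):

```python
from plexe.callbacks import ChainOfThoughtModelCallback, LoggingEmitter  # import paths assumed

# Route the agent's reasoning through Python logging instead of printing to stdout.
cot = ChainOfThoughtModelCallback(emitter=LoggingEmitter())
```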
Creating Custom Emitters
You can create custom emitters by subclassing `ChainOfThoughtEmitter`:
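A sketch of the idea; the hook name `emit_thought` and its signature are hypothetical, so check the `ChainOfThoughtEmitter` source for the actual method(s) to override:

```python
from pathlib import Path

from plexe.callbacks import ChainOfThoughtEmitter  # import path assumed


class FileEmitter(ChainOfThoughtEmitter):
    """Hypothetical emitter that appends each chain-of-thought message to a file."""

    def __init__(self, path: str = "chain_of_thought.log") -> None:
        self.path = Path(path)

    def emit_thought(self, agent: str, message: str) -> None:  # hook name and signature are assumptions
        with self.path.open("a", encoding="utf-8") as fh:
            fh.write(f"[{agent}] {message}\n")
```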
Best Practices
- Choose callbacks based on your needs: Use MLflow for experiment tracking, TensorBoard for visualization, or custom callbacks for specialized logging
- Limit callback overhead: Complex callbacks can slow down the build process
- Combine callbacks strategically: Multiple callbacks can provide different views of the same process
- Handle exceptions gracefully: Callbacks should catch their own exceptions to avoid disrupting the build process