Initiate and configure machine learning model builds using the Plexe Platform REST API.
Model builds are initiated with a POST request to the model creation endpoint.
Base URL: `https://api.plexe.ai`
Before creating a model, upload your data to obtain the required `upload_id`(s). Every request must be authenticated with your API key in the `x-api-key` header.
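As a minimal sketch of the request setup, the base URL and `x-api-key` header below come from this page; the helper function and its name are illustrative, not part of the Plexe API:

```python
# Base URL and auth header for the Plexe Platform REST API.
BASE_URL = "https://api.plexe.ai"

def auth_headers(api_key: str) -> dict:
    """Headers to attach to every Plexe API request.

    The "x-api-key" header is documented above; "Content-Type" is a
    standard choice for JSON request bodies.
    """
    return {"x-api-key": api_key, "Content-Type": "application/json"}
```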
Send a POST request to the endpoint for creating models, typically including the desired model name in the path (e.g., `/models/{model_name}`).
The request body contains the core configuration for the build:
- `goal`: (Required) Natural language description of the model's goal.
- `upload_id`: (Required unless relying purely on synthetic generation from the goal/schema alone) Reference to your uploaded data.
- `input_schema`: (Optional) Dictionary defining the input features and their types (e.g., `{"feature1": "float", "feature2": "str"}`). Plexe will try to infer it if omitted and an `upload_id` is provided.
- `output_schema`: (Optional) Dictionary defining the output prediction(s) and their types (e.g., `{"prediction": "int", "probability": "float"}`). Plexe will try to infer it if omitted.
- `metric`: (Optional) A suggested primary metric to optimize (e.g., `"accuracy"`, `"rmse"`, `"f1"`). Plexe selects an appropriate default if omitted.
- `max_iterations`: (Optional) Maximum number of different modeling approaches the agent system should try (the default may be 1 or 3; check the API reference). Higher values increase build time and cost but may yield better models.
- `provider`: (Optional) The LLM provider/model to use (e.g., `"openai/gpt-4o-mini"`). Uses the platform default if omitted. See Configure LLM Providers (the concepts apply similarly here).

Once the model's status is `"completed"`, you can proceed to making inferences with it using the deployed model inference API.
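Putting the pieces together, the sketch below assembles a model-creation request using only the Python standard library. The base URL, `x-api-key` header, `/models/{model_name}` path, and body field names come from this page; the helper function, model name, upload ID, and schema values are hypothetical examples, and the actual network call is left commented out:

```python
import json

BASE_URL = "https://api.plexe.ai"  # from this page

def build_model_request(model_name, api_key, goal, upload_id=None, **options):
    """Assemble (url, headers, body) for a model-creation POST.

    `goal` is required; `upload_id` is required unless relying purely on
    synthetic generation. Optional fields (input_schema, output_schema,
    metric, max_iterations, provider) pass through `options`.
    """
    body = {"goal": goal, **options}
    if upload_id is not None:
        body["upload_id"] = upload_id
    url = f"{BASE_URL}/models/{model_name}"
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    return url, headers, body

# Hypothetical example values; replace with your own before sending.
url, headers, body = build_model_request(
    "churn-predictor",            # hypothetical model name
    "YOUR_API_KEY",
    goal="Predict customer churn from account activity",
    upload_id="upl_123",          # hypothetical upload reference
    input_schema={"tenure_months": "int", "monthly_spend": "float"},
    output_schema={"churn": "int", "probability": "float"},
    metric="f1",
    max_iterations=3,
)
print(json.dumps(body, indent=2))

# To actually send the request (requires a valid API key):
# import urllib.request
# req = urllib.request.Request(url, data=json.dumps(body).encode(),
#                              headers=headers, method="POST")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

After submitting, poll the model's status (see the API reference for the exact status endpoint) and proceed to inference once it reports `"completed"`.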