Build Models via API
Initiate and configure machine learning model builds using the Plexe Platform REST API.
You can start the automated model building process on the Plexe Platform by making a `POST` request to the model creation endpoint.
Base URL: https://api.plexe.ai
Prerequisites
- You have a Plexe Platform account and a valid API Key.
- (Optional but recommended) You have uploaded your data and have the resulting `upload_id`(s).
Authentication
Include your API key in the `x-api-key` header.
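The examples on this page use Python with the `requests` library. Here is a minimal sketch of an authenticated call; note that the `GET /models` path is only an illustrative placeholder, not a documented endpoint:

```python
import requests

BASE_URL = "https://api.plexe.ai"
HEADERS = {"x-api-key": "YOUR_API_KEY"}  # replace with your actual API key

# Every request to the platform carries the key in the x-api-key header.
# NOTE: the GET /models listing path below is a placeholder for illustration;
# check the API reference for the real endpoints.
response = requests.get(f"{BASE_URL}/models", headers=HEADERS)
print(response.status_code)
```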
Starting a Build Job
Make a `POST` request to the model creation endpoint, typically including the desired model name in the path (e.g., `/models/{model_name}`).
The request body contains the core configuration for the build (a full example request follows this list):
- `goal`: (Required) Natural language description of the model’s goal.
- `upload_id`: (Required unless the build relies purely on synthetic generation from the goal/schema alone) Reference to your data. This can be:
  - An ID obtained from the data upload process.
  - A publicly accessible URL to a dataset (CSV, JSON, etc.; check the API reference for supported URL types).
- `input_schema`: (Optional) Dictionary defining the input features and types (e.g., `{"feature1": "float", "feature2": "str"}`). Plexe will try to infer it if omitted and an `upload_id` is provided.
- `output_schema`: (Optional) Dictionary defining the output prediction(s) and types (e.g., `{"prediction": "int", "probability": "float"}`). Plexe will try to infer it if omitted.
- `metric`: (Optional) A primary metric to optimize (e.g., `"accuracy"`, `"rmse"`, `"f1"`). Plexe will select an appropriate default if omitted.
- `max_iterations`: (Optional) Maximum number of different modeling approaches the agent system should try (the default may be 1 or 3; check the API reference). Higher values increase build time and cost but may yield better models.
- `provider`: (Optional) The LLM provider/model to use (e.g., `"openai/gpt-4o-mini"`). Uses the platform default if omitted. See Configure LLM Providers (the concepts apply similarly here).
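Putting it together, a build request might look like the sketch below. The model name, payload values, and printed response shape are illustrative assumptions, not values prescribed by the API:

```python
import requests

BASE_URL = "https://api.plexe.ai"
HEADERS = {"x-api-key": "YOUR_API_KEY"}  # replace with your actual API key

model_name = "churn-predictor"  # hypothetical model name
payload = {
    "goal": "Predict whether a customer will churn in the next 30 days",
    "upload_id": "upload-abc123",  # ID from the upload step, or a public dataset URL
    "input_schema": {"tenure_months": "int", "monthly_spend": "float"},
    "output_schema": {"churn": "int", "probability": "float"},
    "metric": "f1",
    "max_iterations": 3,
    "provider": "openai/gpt-4o-mini",
}

response = requests.post(f"{BASE_URL}/models/{model_name}", headers=HEADERS, json=payload)
response.raise_for_status()
print(response.json())  # exact response shape: see the API reference
```

Submitting the request starts the build asynchronously, which is why you then monitor progress as described below.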
Checking Build Status
After submitting a build request, you’ll want to monitor its progress. Use the status endpoint to check on your model’s build status.
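A minimal polling sketch, assuming the build status is exposed at `GET /models/{model_name}/status` and returned in a `status` field (both are assumptions; confirm the exact endpoint and response shape in the API reference):

```python
import time

import requests

BASE_URL = "https://api.plexe.ai"
HEADERS = {"x-api-key": "YOUR_API_KEY"}
model_name = "churn-predictor"

# Poll the status endpoint until the build finishes.
# NOTE: the /models/{model_name}/status path and the "status" response field
# are assumptions for illustration; confirm both in the API reference.
while True:
    response = requests.get(f"{BASE_URL}/models/{model_name}/status", headers=HEADERS)
    response.raise_for_status()
    status = response.json().get("status")
    print(f"Build status: {status}")
    if status in ("completed", "failed"):
        break
    time.sleep(30)  # builds can take a while; poll politely
```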
Once your model’s status is `"completed"`, you can proceed to making inferences with it using the deployed model inference API.