What is the Plexe Platform?

The Plexe Platform is a managed service that builds on the core capabilities of the Plexe Python library, providing a full-featured, scalable system for creating, managing, and deploying machine learning models. It offers both a web-based Console UI and a comprehensive REST API.

While the Plexe library is designed for users who want to integrate model creation directly into their Python workflows, the Platform provides a service-oriented approach with additional features for deployment, management, and monitoring.

Key Components

The Plexe Platform consists of several interconnected components:

Console UI

Plexe's web-based interface, available at console.plexe.ai, lets users:

  • Create and manage models visually
  • Upload and analyze datasets
  • Monitor model training
  • View metrics and logs
  • Manage deployments
  • Handle API keys and account settings

REST API

Available at api.plexe.ai, the REST API provides programmatic access to all platform features, allowing for integration with applications, scripts, and workflows. All actions possible in the Console UI can also be performed via the API.
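
As a sketch of what programmatic access might look like, the snippet below builds an authenticated request using only the Python standard library. The endpoint path (`/models`) and the bearer-token header are illustrative assumptions, not documented values; consult the API reference for the actual scheme.

```python
import json
import urllib.request
from typing import Optional

API_BASE = "https://api.plexe.ai"  # documented base URL

def build_request(path: str, api_key: str,
                  payload: Optional[dict] = None) -> urllib.request.Request:
    """Build an authenticated request against the Plexe REST API.

    The header name and endpoint paths here are assumptions for
    illustration; check the API reference for the exact scheme.
    """
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )

# Example: list models (hypothetical endpoint)
req = build_request("/models", api_key="px-test")
```

The request would then be sent with `urllib.request.urlopen(req)` or any HTTP client of your choice.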

Model Building System

The core engine that powers model creation, using multi-agent AI systems to:

  • Analyze user intent and data
  • Generate appropriate ML code
  • Execute training processes
  • Evaluate models
  • Produce optimized inference code

Serving Infrastructure

Once a model is built, it can be deployed to Plexe’s serving infrastructure, which provides:

  • Scalable inference endpoints
  • Load balancing
  • Monitoring and logging
  • High availability
  • Performance optimization

Storage System

Manages various types of data and artifacts:

  • User data uploads
  • Trained models and artifacts
  • Inference logs
  • Analytics data

Core Workflow

The standard workflow for using the Plexe Platform follows these steps:

  1. Authentication: Access the platform using API keys or Console UI login
  2. Data Upload: Provide your training data (optional but recommended)
  3. Model Building: Define your model requirements using natural language
  4. Status Monitoring: Track the model building progress
  5. Deployment: Deploy successful models for production use
  6. Inference: Use deployed models to make predictions via API
  7. Monitoring & Maintenance: Monitor performance and update as needed
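
The steps above could be orchestrated in code roughly as follows. The client methods (`upload_data`, `build_model`, and so on) are hypothetical stand-ins rather than the real SDK surface, and step 1 (authentication) is assumed to be handled when the client is constructed:

```python
def run_workflow(client, data_path: str, intent: str):
    """Drive the standard platform workflow end to end.

    `client` is any object exposing the hypothetical methods used
    below; the real API surface may differ from this sketch.
    """
    dataset = client.upload_data(data_path)            # 2. Data Upload
    job = client.build_model(intent, dataset=dataset)  # 3. Model Building
    status = client.job_status(job)
    while status not in ("completed", "failed"):       # 4. Status Monitoring
        status = client.job_status(job)                # (sleep between polls in practice)
    if status == "failed":
        raise RuntimeError(f"model build {job} failed")
    deployment = client.deploy(job)                    # 5. Deployment
    return client.predict(deployment, {"feature": 1})  # 6. Inference
```

Step 7 (monitoring and maintenance) is ongoing rather than a single call, so it is omitted from the sketch.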

Key Concepts

Jobs

Any long-running operation (like model building) is represented as a job. Jobs have:

  • A unique ID
  • Status tracking
  • Progress information
  • Results upon completion
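
A client typically polls a job until it reaches a terminal status. The sketch below models the job fields listed above and a polling helper; the field names and status values are assumptions about the job record's shape, not the documented schema:

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Job:
    """Hypothetical shape of a platform job record."""
    id: str
    status: str                     # e.g. "pending", "running", "completed", "failed"
    progress: float = 0.0           # fraction complete
    result: Optional[dict] = None   # populated on completion

def wait_for_job(fetch: Callable[[], Job], interval: float = 2.0,
                 sleep: Callable[[float], None] = time.sleep) -> Job:
    """Poll `fetch` until the job reaches a terminal status."""
    while True:
        job = fetch()
        if job.status in ("completed", "failed"):
            return job
        sleep(interval)
```

Injecting the `sleep` function keeps the helper easy to test and lets callers substitute backoff strategies.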

Models

A model in the Plexe Platform represents a specific ML solution created to address your needs. Each model has:

  • A unique name
  • One or more versions (iterations)
  • Associated metadata
  • Performance metrics
  • Input/output schemas
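
To make the schema concept concrete, here is one way input/output schemas for a hypothetical churn model might look, with a minimal check that a payload matches. The schema format and field names are purely illustrative; the platform's actual schema representation is not specified here:

```python
# Illustrative schemas for a hypothetical churn-prediction model.
input_schema = {"age": "int", "plan": "str", "monthly_spend": "float"}
output_schema = {"churn_probability": "float"}

def matches_schema(payload: dict, schema: dict) -> bool:
    """Check that a payload supplies exactly the fields the schema names."""
    return set(payload) == set(schema)
```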

Versions

Each model can have multiple versions, representing different iterations or updates. Each version:

  • Has unique metrics and characteristics
  • Can be deployed independently
  • Maintains its own artifacts and code

Deployments

A deployment makes a specific model version available for inference. Deployments:

  • Have a unique URL endpoint
  • Can be scaled up or down
  • Can be monitored for performance and usage
  • Can be updated or rolled back
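
The update/rollback behavior can be pictured as a version history where the newest entry is the one being served. This is a conceptual model for illustration, not the platform's actual deployment API:

```python
class Deployment:
    """Minimal model of a deployment's version history (illustrative only)."""

    def __init__(self, model: str, version: str):
        self.model = model
        self._history = [version]  # newest version last

    @property
    def current_version(self) -> str:
        return self._history[-1]

    def update(self, version: str) -> None:
        """Point the deployment at a new model version."""
        self._history.append(version)

    def rollback(self) -> str:
        """Revert to the previously deployed version."""
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()
        return self.current_version
```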

API Keys

API keys are used for authentication with the Plexe API. Each key:

  • Has specific permissions (read-only or read-write)
  • Is associated with your account
  • Can be revoked if compromised
  • Is used to track usage and credit consumption
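
The permission and revocation rules above imply checks along these lines. The field names and the mapping of read-only scope to safe HTTP methods are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ApiKey:
    """Illustrative model of an API key record; fields are assumptions."""
    key_id: str
    scope: str = "read-only"   # or "read-write"
    revoked: bool = False

    def allows(self, method: str) -> bool:
        """Read-only keys may only perform safe (GET-style) requests."""
        if self.revoked:
            return False
        if self.scope == "read-write":
            return True
        return method.upper() in ("GET", "HEAD")
```

In practice, keys should be stored outside your code (for example in an environment variable) rather than hard-coded.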

Architecture Overview

At a high level, the Plexe Platform architecture follows a microservices approach:

  1. API Gateway: Handles authentication, rate limiting, and request routing
  2. Auth Service: Manages accounts, permissions, and API keys
  3. Data Service: Handles data uploads, storage, and pre-processing
  4. Model Service: Coordinates model building and versioning
  5. Execution Service: Runs the generated code in secure environments
  6. Inference Service: Handles prediction requests to deployed models
  7. Billing Service: Tracks usage and manages credits

Differences from the Python Library

While the Plexe Platform builds on the same core technology as the Python library, there are some key differences:

| Feature        | Python Library                           | Platform                                   |
|----------------|------------------------------------------|--------------------------------------------|
| Environment    | Runs in your local or custom environment | Fully managed cloud environment            |
| Infrastructure | You manage compute resources             | Plexe manages all infrastructure           |
| Scalability    | Limited by your local resources          | Auto-scaling based on demand               |
| Authentication | Environment variables for API keys       | JWT tokens & API key management            |
| Persistence    | Manual save/load to files                | Automatic versioning and storage           |
| Deployment     | Manual integration with your services    | One-click deployment to endpoints          |
| Monitoring     | Basic callbacks and logging              | Comprehensive monitoring & alerting        |
| Collaboration  | Code-based sharing                       | Team access and permissions                |
| Billing        | Pay only for LLM API usage               | Platform subscription & usage-based billing |

Security Model

The Plexe Platform implements several security measures:

  1. Authentication: API keys and user authentication (JWT tokens)
  2. Authorization: Role-based permissions for API keys
  3. Isolation: Secure execution environments for code generation and training
  4. Encryption: Data encryption in transit and at rest
  5. Monitoring: Logging of access patterns and usage

Deployment Options

The Platform can be deployed in different configurations depending on your needs:

  • Hosted Service: The standard managed offering at api.plexe.ai
  • Enterprise Mode: For self-hosted deployments without authentication requirements

Multiple environment support (Development/Staging/Production) is planned for a future release.

Next Steps

Now that you understand the core concepts of the Plexe Platform, you can: