Runtype
Platform

Where AI products are built.

Agents that reason. Flows that orchestrate. Models from every major provider. Surfaces that deploy anywhere. You build real AI products here.

Start building
Intelligence

Agents & Flows

Autonomous agents that reason, plan, and act. Visual flows that orchestrate multi-step logic. Both work together or independently.

Autonomous Agents

Multi-turn reasoning with tool calls, reflection loops, and human-in-the-loop approval gates.

Flow Orchestration

20+ step types: prompts, API calls, data transforms, conditionals, branching, and looping.

Tool Use

Built-in tools (DALL-E, web search, code execution) plus runtime tools defined inline with each request.

Variables & State

System variables, message context, record metadata. Pass data between steps with template syntax.
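
The template syntax itself isn't documented here, so as a rough sketch only: the snippet below assumes a common `{{dotted.path}}` convention and invented variable names (`record.id`, `user.name`) to show how data might flow between steps.

```typescript
// Minimal interpolation sketch. The {{path}} syntax and the variable
// names below are assumptions for illustration, not the platform's API.
type Context = Record<string, unknown>;

function interpolate(template: string, ctx: Context): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path: string) => {
    // Resolve dotted paths like "record.id" against the step context.
    const value = path.split(".").reduce<unknown>(
      (obj, key) =>
        typeof obj === "object" && obj !== null ? (obj as Context)[key] : undefined,
      ctx,
    );
    return value === undefined ? "" : String(value);
  });
}

// Example: pass a prior step's output into the next prompt.
const prompt = interpolate(
  "Summarize ticket {{record.id}} for {{user.name}}.",
  { record: { id: "T-42" }, user: { name: "Ada" } },
);
// prompt === "Summarize ticket T-42 for Ada."
```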

Resumable Execution

Flows can pause for approval, wait for external events, and resume exactly where they left off.

Models

Models from every major provider.

Start with platform keys or bring your own. Switch providers without rewriting prompts or changing code.

Supported model providers: OpenAI, Anthropic, Google Gemini, Meta, xAI, Mistral, DeepSeek, Perplexity, Groq, Cohere, and more.

OpenAI

Flagship chat, reasoning, audio, image, and tool-capable models in the same runtime.

Anthropic

Strong long-context reasoning and agent-friendly models with reliable tool use.

Google

Fast multimodal models for text, image, and high-throughput production workloads.

xAI

Alternative reasoning styles and fast-turnaround models without a platform switch.

Open Source

Open-weight models routed for flexible cost, latency, and quality tradeoffs.

BYOK

Use your own provider keys when you need tighter cost control, routing, or compliance.

Integrations

Connect to the tools your team uses.

Native integrations with OAuth flows, built-in MCP servers, and the ability to call any API.

Slack

Send messages, manage channels, post threads. Full OAuth bot integration.

GitHub

Create issues, list repos, post comments. Automate developer workflows.

Google Workspace

Read and create Docs, search Drive. Keep AI grounded in your knowledge base.

Linear

Create, update, and list issues. Connect AI to your project management.

Firecrawl & Exa

Web scraping and search. Pull live data into your flows.

MCP & API

Connect any MCP server or call any HTTP endpoint directly. No custom integration needed.

Surfaces

Deploy wherever your users are.

Build once, ship everywhere. Every surface is a deployment target for your AI products.

Web Chat

Embedded chat widget via the Persona SDK. Shadow DOM isolation, theming, streaming.

API

HTTP endpoints with SSE streaming. Integrate AI into any backend or frontend.

Slack

Bot-based surfaces that live in your team's workspace.

Email & SMS

Inbound and outbound messaging. AI that responds over email or text.

CLI

Terminal-native AI experiences via the Marathon product.

Webhooks & Schedules

Event-driven and time-based triggers for autonomous workflows.

Quality

Product Evals

Test the experience your users have, not just the model behind it. Run evals against your entire AI product — flows, tools, prompts, and all.

End-to-End Testing

Evaluate complete flows and agent interactions, not isolated prompts. Measure the product experience from input to output.

Model Comparison

Run the same flow with different models or configurations and compare results side by side.

Regression Detection

Catch quality regressions before they reach users. Run evals on every change to your AI product.

Configuration Variants

Test different temperatures, reasoning modes, response formats, and token limits.

Batch Evaluation

Test against datasets of real inputs. Evaluate hundreds of scenarios in a single run.

Automated Runs

Schedule evals to run on a cadence. Monitor quality over time without manual intervention.

Infrastructure

A runtime built for production AI.

Global edge execution, strict per-execution isolation, and automatic secret management. Ship fast without compromising on security or performance.

Edge Execution

Agents execute on a global edge network with zero cold starts. Low latency worldwide, not just in one region.

Isolated Execution

Every execution runs in its own V8 isolate with strict memory and time limits. No shared compute.

Protected Parameters

Mark tool parameters as protected. They're encrypted at rest, hidden from the model, and injected automatically at execution time.
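
The injection mechanism is internal to the runtime, but the idea can be sketched in a few lines. Everything here (`withProtected`, the argument names, the placeholder secret) is hypothetical: the point is only that the model produces the public arguments, and the runtime merges decrypted protected values in last, so the model never sees or overrides them.

```typescript
// Sketch of the protected-parameter idea. Names are hypothetical.
type ToolArgs = Record<string, unknown>;

function withProtected(modelArgs: ToolArgs, protectedArgs: ToolArgs): ToolArgs {
  // Protected values are spread last, so model output can never override them.
  return { ...modelArgs, ...protectedArgs };
}

const callArgs = withProtected(
  { customerId: "cus_123", amount: 4200 },  // produced by the model
  { apiKey: "<decrypted-at-runtime>" },     // injected by the runtime, hidden from the model
);
```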

Local Tool Execution

Tools can run in the user's browser, on a local machine, or on-prem. Cloud execution pauses and resumes when local tools respond.

Runtime Tools

Define tools inline at dispatch time. Compose agent capabilities dynamically based on context — no redeployment needed.
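
As a rough sketch of what "inline at dispatch time" could look like: the field names below (`name`, `description`, `parameters`, `tools`) follow common tool-calling conventions and are assumptions, not the platform's documented dispatch schema.

```typescript
// Hypothetical shape of a runtime tool attached to a single dispatch.
interface RuntimeTool {
  name: string;
  description: string;
  parameters: Record<string, { type: string; description: string }>;
}

const lookupOrder: RuntimeTool = {
  name: "lookup_order",
  description: "Fetch an order's status by id",
  parameters: {
    orderId: { type: "string", description: "The order to look up" },
  },
};

const dispatch = {
  message: "Where is order 991?",
  tools: [lookupOrder], // composed per request; no redeploy needed
};
```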

Bring Your Own Keys

Use platform keys to start instantly, or bring your own for tighter compliance, routing, and cost control.

Get Started

From idea to shipped product.