Are you an AI coding agent? We wrote a version of this announcement just for you — with onboarding instructions, CLI commands, and SDK references. Read the agent version →
Today, we're launching Agentuity v1 — a new kind of cloud, built for AI agents from the ground up. This is where agents deploy, run, and scale.
We're not adding AI to the stack. We're rebuilding the stack for AI.
Why a New Cloud?
We believe agentic software is the future. Billions of agents will run workloads across the world, and traditional software will increasingly have agentic patterns woven into it. That demands a fundamentally different approach to how software is built, deployed, and managed.
For decades, we've built software around short-lived requests, stateless transactions, and infrastructure designed for predictability. The cloud we know today was architected for that world: spin up, serve, spin down. Fast, cheap, repeatable.
But agentic software breaks that mold. It's not a single request and response — it's a system that reasons, remembers, and runs. An agent might run for two hours, pause to wait for a human, then run for another hour. Serverless alone won't get you there. Traditional cloud services force you down to an EC2 instance or VM — and even then, it's not quite right.
Agents need to operate more like humans: long-running, with access to the right resources at the right time, and clear direction on what to accomplish. But they also need what computers need — storage, databases, coordination across systems. They pause, resume, and coordinate across time.
Agentuity handles this entire lifecycle — from writing your first agent to deploying it at scale.
What You Get
Agentuity is the full-stack platform for AI agents. Everything your agents need to thrive — built in, not bolted on.
| Capability | What It Means |
|---|---|
| Full-Stack Development | Agents, APIs, and frontends with end-to-end type safety |
| Built-in Services | Key-value, vector, Postgres, object storage, queues — no setup required |
| Sandboxes | Isolated code execution for agents |
| Evals in Production | Run evaluations on every session, not just in CI. Real users, real traffic. |
| AI Gateway | One API for OpenAI, Anthropic, Google, Groq — unified billing, no API keys in code |
| Observability | OpenTelemetry traces, logs, session tracking, cost per span — automatic |
| Deploy Anywhere | Public cloud, private cloud, on-prem, edge. Same SDK, same services, wherever you run. |
This isn't infrastructure you, or your agents, wire together yourself. These are first-class primitives, purpose-built for agents, ready the moment you need them.
As Ben Davis put it after building on the platform:
"It really feels like it's the first platform that is trying to put together a system where the agents can build anything in one place. From the frontend to the backend to the cron jobs, it's all wrapped up in an Agentuity project."
Building on Agentuity
Developer Experience
The future of agentic software is a collaboration between agents and human ingenuity. That means both the developer experience and the agent experience have to be best in class.
You should be able to build, test, and ship agentic apps in minutes, not weeks. Agents should have access to the same tools developers do — safely, with guardrails.
# Install the CLI
curl -sSL https://agentuity.sh | sh
# Create a new project
agentuity create
# Start developing
agentuity dev
That's it. You're running a full-stack agent application with:
- Local server at `http://localhost:3500`
- Workbench at `/workbench` for testing agents without a frontend
- Public tunnel URL for webhooks and sharing
- Hot reload so changes reflect immediately
When you're ready, deploy in seconds:
agentuity deploy
Connect your GitHub repo, and every merge to main triggers a deploy. Every pull request gets its own preview environment. Auto-scaling handles the rest.
The Full-Stack SDK
Bun is a first-class citizen across the entire platform — because speed matters. The TypeScript SDK gives you powerful primitives that are dead simple to use, for developers and agents alike.
You get:
- End-to-end type safety from agent schemas to API routes to React components
- Effortless streaming with SSE and WebSocket support built in (a sketch follows this list)
- Instant access to cloud services from the agent context
- Hono baked in as a first-class web framework
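Streaming is a good example. Agentuity ships its own helpers for this, but because the router speaks Hono, a plain SSE endpoint is only a few lines. A minimal sketch, assuming `createRouter` exposes standard Hono handlers (as in the API route below) and using the Vercel AI SDK's `streamText`:

```ts
import { createRouter } from '@agentuity/runtime';
import { streamSSE } from 'hono/streaming';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const api = createRouter();

// Stream model tokens to the browser as server-sent events
api.post('/chat/stream', async (c) => {
  const { message } = await c.req.json<{ message: string }>();
  const result = streamText({ model: openai('gpt-4'), prompt: message });

  return streamSSE(c, async (stream) => {
    for await (const chunk of result.textStream) {
      await stream.writeSSE({ data: chunk });
    }
  });
});

export default api;
```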
Here's how the three layers work together:
Agent
src/agent/assistant/agent.ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
const agent = createAgent('assistant', {
description: 'A helpful assistant',
schema: {
input: s.object({ message: s.string() }),
output: s.object({ response: s.string() }),
},
handler: async (ctx, { message }) => {
const { text } = await generateText({
model: openai('gpt-4'),
prompt: message,
});
return { response: text };
},
});
export default agent;
API Route
src/api/index.ts
import { createRouter } from '@agentuity/runtime';
import assistant from '../agent/assistant/agent';
const api = createRouter();
api.post('/chat', assistant.validator(), async (c) => {
const data = c.req.valid('json');
return c.json(await assistant.run(data));
});
export default api;
Frontend
src/web/App.tsx
import { useAPI } from '@agentuity/react';
export function App() {
const { data, invoke, isLoading } = useAPI('POST /api/chat');
return (
<button onClick={() => invoke({ message: 'Hello!' })}>
{isLoading ? 'Thinking...' : data?.response ?? 'Ask me anything'}
</button>
);
}
And you don't have to start from scratch. The SDK drops right into existing projects — TanStack, Next.js, your own monorepo. It just works.
We're also not trying to lock you in. Bring your own framework, your own libraries, your own LLM provider. Use what you know, swap things out when you want, and let the platform handle the rest.
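Swapping the model provider in the assistant above, for instance, is a one-line change. A minimal sketch using the Vercel AI SDK's Anthropic provider in place of OpenAI (the model id is illustrative):

```ts
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// Same generateText call as the assistant handler above; only the provider changes.
const { text } = await generateText({
  model: anthropic('claude-sonnet-4-5'), // illustrative model id
  prompt: 'Summarize the latest deploy logs.',
});

console.log(text);
```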
Services
Agents need real infrastructure to do real work. Durable streams, key-value stores, databases, vector storage, secure places to execute code. And both developers and agents should be able to spin up these resources — and spin them down — at a moment's notice.
That's why Agentuity services are baked into every part of the platform: the SDK, APIs, CLI, and web console. These aren't add-ons or third-party integrations you have to wire up yourself. They're first-class primitives, purpose-built for agents, ready the moment you need them.
| Service | Description |
|---|---|
| Key-Value | Fast, persistent storage with TTL support |
| Vector | Semantic search and embeddings |
| Durable Streams | Reliable streams that last |
| Queues | Background job processing |
| Auth | User management and sessions, powered by BetterAuth |
All accessible directly from your agent context:
handler: async (ctx, input) => {
// Key-value storage
await ctx.kv.set('users', 'user:123', { name: 'Alice' });
// Vector search
const results = await ctx.vector.search('memories', { query: 'recent conversations', limit: 5 });
// Durable streams
const stream = await ctx.stream.create('logs');
await stream.write({ event: 'task_completed', timestamp: Date.now() });
await stream.close();
// Queues
await ctx.queue.publish('email-queue', { to: 'user@example.com', subject: 'Hello' });
}
The platform also provides managed infrastructure you can connect to your projects:
| Service | Description |
|---|---|
| Storage | S3-compatible object storage (powered by Tigris) |
| Database | Managed Postgres (powered by Neon) |
| AI Gateway | Unified LLM access — OpenAI, Anthropic, Google, Groq, and more |
Authentication is powered by BetterAuth with built-in user management, sessions, and access control. The AI Gateway lets you use the standard OpenAI SDK format to access any supported provider — one bill, full tracing, no separate API keys.
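In practice that means pointing the standard OpenAI client at the gateway and naming the model you want. A minimal sketch; the environment variable names and the model id are illustrative, so check your project settings for the actual values:

```ts
import OpenAI from 'openai';

// The gateway speaks the OpenAI API format for every supported provider.
const client = new OpenAI({
  baseURL: process.env.AGENTUITY_GATEWAY_URL, // illustrative variable name
  apiKey: process.env.AGENTUITY_API_KEY,      // illustrative variable name
});

const completion = await client.chat.completions.create({
  model: 'claude-sonnet-4-5', // illustrative: route to any supported provider's model
  messages: [{ role: 'user', content: 'Hello!' }],
});

console.log(completion.choices[0].message.content);
```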
Running Agents
Sandboxes
Agents that execute code need somewhere safe to do it. We've seen this pattern emerge again and again: an agent needs to spin up a secure environment, run some code, and tear it down — all without touching your production infrastructure.
Sandboxes on Agentuity are isolated, ephemeral environments where agents can execute anything — safely. You get multiple runtimes out of the box:
- Bun — The fast JS/TS runtime
- Headless Browser — For web automation
- Coding agents in the cloud — Claude Code, Codex, Cursor, and more
The lifecycle is fully managed. Create, execute, destroy — it's all automatic. And if you need to preserve state between runs, sandbox snapshots let you pick up right where you left off.
// One-shot execution (auto-creates and destroys)
const result = await ctx.sandbox.run({
runtime: 'bun:1',
command: {
exec: ['bun', 'run', '-e', 'console.log("Hello from the sandbox!")'],
}
});
console.log('Exit code:', result.exitCode);
// Or create an interactive sandbox for multiple commands
const sandbox = await ctx.sandbox.create({ runtime: 'bun:1' });
await sandbox.execute({ command: ['bun', 'init'] });
await sandbox.execute({ command: ['bun', 'add', 'zod'] });
await sandbox.destroy();
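Picking up from a snapshot might look roughly like this. This is a hypothetical sketch: `snapshot()` and `fromSnapshot` are illustrative names, so consult the Sandbox documentation for the actual snapshot API.

```ts
// Hypothetical sketch: snapshot() and fromSnapshot are illustrative names.
const sandbox = await ctx.sandbox.create({ runtime: 'bun:1' });
await sandbox.execute({ command: ['bun', 'init'] });
await sandbox.execute({ command: ['bun', 'add', 'zod'] });

// Capture the current state so a later run can skip the setup work
const snapshotId = await sandbox.snapshot();
await sandbox.destroy();

// Later, even in a different session, resume from the saved state
const resumed = await ctx.sandbox.create({ runtime: 'bun:1', fromSnapshot: snapshotId });
await resumed.execute({ command: ['bun', 'run', 'index.ts'] });
await resumed.destroy();
```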
# From the CLI
agentuity cloud sandbox list
agentuity cloud ssh <sandbox-id>
(Explore the full Sandbox documentation for advanced features.)
Evals
Agents can go sideways. They execute code, make tool calls, handle customer requests — and sometimes they get it wrong. You need to know when that happens. Not after a customer complains. Immediately.
Most eval providers evaluate the LLM — did the model respond appropriately? That's fine for chatbots. But agents aren't single calls. They're entire runs — multiple LLM calls, tool executions, orchestration working in tandem.
We eval the whole thing. And we run evals on every single session, in production — not just during development. Real users, real traffic, real behavior, evaluated continuously.
The @agentuity/evals package ships with 10+ preset evaluations:
- Adversarial prompt detection
- PII detection
- Politeness and safety checks
- Conciseness and relevance scoring
- And more
Eval results show up as spans in your OpenTelemetry traces, so you can debug issues inline with the rest of your observability stack.
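Attaching presets to an agent might look roughly like this. This is a hypothetical sketch: the preset names and the `evals` option are illustrative rather than the package's confirmed API.

```ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';
import { presets } from '@agentuity/evals'; // illustrative import; exact exports may differ

const support = createAgent('support', {
  description: 'Customer support agent',
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string() }),
  },
  // Hypothetical option: score every production session with these presets
  evals: [
    presets.adversarialPrompt(),
    presets.piiDetection(),
    presets.relevance({ threshold: 0.8 }),
  ],
  handler: async (ctx, { message }) => {
    // ...agent logic...
    return { response: `You said: ${message}` };
  },
});

export default support;
```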
Observability
Shipping is just the beginning — you also need to know what's happening once it's live. Observability is built in from day one:
- Traces and logs with full OpenTelemetry support
- Session and thread tracking across time (agents don't operate in isolated requests)
- Cost tracking per span
- Token counts and duration for every execution
Every run is tied to a session. Every conversation thread is preserved. You get full context into what happened, when, and why — without writing a single line of tracking code.
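Because the traces are standard OpenTelemetry, your own spans slot in next to the automatic ones. A minimal sketch using the stock @opentelemetry/api package, assuming the runtime's tracer provider picks the span up (the summarizer is a stand-in for your own logic):

```ts
import { trace } from '@opentelemetry/api';

const tracer = trace.getTracer('assistant');

// Stand-in for your own expensive step
async function summarizeDocuments(documents: string[]): Promise<string> {
  return `Summarized ${documents.length} documents`;
}

// Wrap the step in a span so it appears alongside the automatic
// session, token, and cost spans for the same run.
export async function summarizeWithTracing(documents: string[]): Promise<string> {
  return tracer.startActiveSpan('summarize-documents', async (span) => {
    try {
      span.setAttribute('documents.count', documents.length);
      return await summarizeDocuments(documents);
    } finally {
      span.end();
    }
  });
}
```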
Need to debug a live agent? SSH directly in:
agentuity cloud ssh
Deploy Anywhere
The Gravity Network
Not every workload belongs in the public cloud. Some data can't leave your network. Some regulations demand it stays on-prem. Some customers need agents running at the edge, close to where decisions are made.
Most platforms force you to choose. We don't.
The Gravity Network is how Agentuity meets you where you are:
| Option | Description |
|---|---|
| Public Cloud | Global deployment, zero-config, auto-scaling from zero to thousands |
| Private Cloud | Deploy inside your own VPC with full control |
| On-Premises | Run on your own infrastructure for complete data sovereignty |
| Multi-Cloud | Deploy across multiple cloud providers |
| Edge | Low-latency deployment close to your users |
Run hybrid configurations across all of them. Your agents get the same capabilities regardless of where they live — the same SDK, the same services, the same observability.
This isn't a bolt-on feature or a separate product. It's the same platform, same infrastructure, deployed wherever you need it. Your data stays where it needs to stay. Your agents run where they need to run.
One platform. Anywhere.
Launch Partners
We're proud to launch alongside partners who understand the agentic future.
Infrastructure
These partners power the built-in services available from day one:
- Databricks (Neon) — Serverless Postgres
- Tigris — Globally distributed object storage
- Inbound — Email infrastructure
Service Partners
These teams are building and deploying agentic software on the platform — and can help you do the same:
CodeExitos has been helping clients go AI-first from day one. As Charles Fry, CEO & Founder, put it:
"The way that our business has changed is pretty dramatic. And I think we're representative of the leading edge of what's going to roll through businesses. We literally take an AI-first approach... To think of digital workers, we write job descriptions, we give them names, we specify the skills that they have — and when we give that to the engineers, it really serves as almost the product requirements."
"It's not so important about what the future is going to be. It's about getting the first steps taken — because it's not going away. It's going to keep coming and it's going to come fast and it's going to be exciting."
Forward-Leaning Customers
The Loxahatchee River District — a regional wastewater utility in Jupiter, Florida with $200 million in assets — is already using Agentuity to transform their operations.
Albrey Arrington, Executive Director:
"The idea that I can have a quote-unquote employee, which is really an agent that would be offsite, and we can email tasks to them — that agent responding back with solutions, no different than if we had a junior staff member working off-site. The ability for Agentuity to deliver results in such a convenient manner really surprised me."
Joe Chung, IT Manager:
"What we thought was a complex deployment turned out to be really simple using the platform. The ability for all the agents to work together in symphony... Agentuity has been such a great partner in helping us transition from 'Oh, it's possible' to 'It's executable', and even at scale."
What's Next
Over the coming months, we're bringing Agentuity to a city near you — meetups, workshops, hands-on time with the platform, and a hackathon. Keep an eye out for details.
As Ed Sim at Boldstart put it:
"This is the year that agents actually take over the enterprise. We're going to move to a world where agents outnumber humans first 10-to-1, then 100-to-1, then 1000-to-1. And in that world, you just can't bolt on agents to legacy systems. You're going to have to rewire the entire infrastructure beneath them."
The companies that win the next decade won't be the ones retrofitting agents onto legacy systems. They'll be the ones who build natively for this new world — where software reasons, remembers, and learns alongside humans. Where agents plus human ingenuity create a new kind of software for a new kind of world.
Whether you're a solo developer with an idea, a startup moving fast, or an enterprise ready to transform, we built Agentuity for you. And if you're an AI agent trying to make your users happy, we built it for you too, and for the humans you work with.
Come build the future with us.
Get Started
Ready to build?
The human way
# Install and deploy your first agent in minutes
curl -sSL https://agentuity.sh | sh
agentuity create
agentuity dev
agentuity deploy
The agent way
Drop this link into your coding agent and let it onboard itself.
New to Agentuity? Start with the Quickstart Guide.