Paperclip: The Open-Source Control Plane for AI-Agent Companies
Paperclip is not another task runner. Launched in March 2026, it gives multi-agent teams org charts, budgets, audit trails, and board-level governance. Here is what it is, how it works, and why it matters.
There is a useful analogy floating around the AI agent community: if OpenClaw is an employee, Paperclip is the company. That single sentence explains what Paperclip is trying to do better than most technical descriptions. It is not a framework for building agents. It is the organizational layer that sits above them, handling coordination, accountability, and budget control across a fleet of AI workers.
Paperclip launched in March 2026, created by @dotta, and accumulated over 46,700 GitHub stars in its first weeks. The speed of adoption reflects a genuine gap in the current AI stack: most teams running multiple agents have no shared state, no cost visibility, and no way to prevent two agents from doing the same work twice.
The Problem Paperclip Solves
Running a single AI agent is straightforward enough. Running twenty of them simultaneously is a different problem entirely. Without coordination, you get duplicate work, runaway costs, and zero accountability for what each agent actually did.
The specific failure modes are familiar to anyone who has tried to scale multi-agent workflows:
- No shared state: Agent A finishes a task that Agent B has already started. Neither knows about the other.
- No cost visibility: LLM API spend is distributed across many processes with no central accounting. You find out what things cost after the fact.
- No coordination: Agents that should hand off work to each other instead operate in silos. Progress stalls because there is no authority to unblock dependencies.
- No audit trail: When something goes wrong, there is no reliable record of what each agent did, why, and in what order.
Paperclip addresses all of these at the infrastructure level, before you write any agent logic.
Not a Task Manager: A Corporate Structure
The framing that makes Paperclip click is that it models AI-agent teams the way you would model a real organization. Not with loose lists of tasks, but with org charts, reporting hierarchies, defined roles, monthly budgets per agent, goal alignment, and board-level governance.
That last part matters. Paperclip puts the human in the role of the board. Agents execute within the company structure, but the human retains final authority over goals, budget allocation, and policy. This is not an autonomous AI company running without oversight. It is a structured delegation model where autonomy is bounded and auditable.
The Two-Layer Architecture
Paperclip separates its concerns cleanly into two layers.
Control Plane
Paperclip itself is the control plane. It manages agents, assigns tasks, enforces budgets, maintains the audit trail, and handles governance. Agents do not run inside Paperclip. They run externally and report back. Paperclip is the part that knows what is happening across the entire company at any given moment.
Execution Services
Agents are execution services. They live outside Paperclip, run on whatever infrastructure makes sense for each runtime, and connect to the control plane via adapters. The adapter pattern is how Paperclip avoids locking you into any particular AI stack. Agents "phone home" to report task status, consume work from the issue queue, and record their activity against the audit trail.
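To make the adapter pattern concrete, here is a minimal sketch of what an adapter contract could look like. The interface and names (`AgentAdapter`, `nextTask`, `report`) are illustrative assumptions, not Paperclip's actual API; the real contract lives in the project's docs.

```typescript
// Hypothetical adapter contract; names are illustrative, not Paperclip's API.
interface TaskReport {
  taskId: string;
  status: "in_progress" | "done" | "failed";
  costCents: number; // spend to record against the agent's budget
}

interface AgentAdapter {
  // Pull the next available task from the control plane's issue queue.
  nextTask(agentId: string): Promise<{ taskId: string; payload: unknown } | null>;
  // "Phone home": report status and spend back to the control plane.
  report(agentId: string, report: TaskReport): Promise<void>;
}

// A trivial in-memory adapter, useful for local experiments.
class InMemoryAdapter implements AgentAdapter {
  private queue: { taskId: string; payload: unknown }[] = [];
  public reports: TaskReport[] = [];

  enqueue(taskId: string, payload: unknown): void {
    this.queue.push({ taskId, payload });
  }
  async nextTask(_agentId: string) {
    return this.queue.shift() ?? null;
  }
  async report(_agentId: string, report: TaskReport) {
    this.reports.push(report);
  }
}
```

Because the contract is this narrow, any runtime that can fetch work and post results can sit behind it, which is exactly what keeps the control plane decoupled from individual AI stacks.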
Core Primitives
Paperclip defines a small set of primitives that compose into the full corporate structure.
Company
The top-level container. A Company has goals, a roster of agents, and a governance policy. Everything else lives inside a Company.
Agents
Each agent has a defined role, a monthly budget in cents, and a status. Budget enforcement is atomic: when an agent hits its monthly limit, it stops. There is no graceful degradation or override. This is deliberate. Soft limits fail silently; hard limits do not.
Issues and Tasks
Work is modeled as Issues and Tasks. The key mechanism is atomic checkout: when an agent claims a task, it is locked to that agent until completion or explicit release. No other agent can pick it up. This is the specific feature that eliminates the duplicate-work problem, and it is implemented at the database level rather than left to application logic.
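A database-level claim is commonly done in PostgreSQL with a single conditional `UPDATE`; the sketch below shows that pattern as a comment (illustrative SQL, not necessarily the project's actual schema) alongside an in-memory simulation of the same compare-and-set semantics.

```typescript
// Sketch of atomic task checkout. One common PostgreSQL pattern for this
// guarantee is a single conditional UPDATE (illustrative, not Paperclip's SQL):
//
//   UPDATE tasks SET claimed_by = $1
//   WHERE id = $2 AND claimed_by IS NULL
//   RETURNING id;
//
// In-memory simulation of the same compare-and-set semantics:
type Task = { id: string; claimedBy: string | null };

function claimTask(tasks: Map<string, Task>, taskId: string, agentId: string): boolean {
  const task = tasks.get(taskId);
  if (!task || task.claimedBy !== null) return false; // another agent got there first
  task.claimedBy = agentId;
  return true;
}

function releaseTask(tasks: Map<string, Task>, taskId: string): void {
  const task = tasks.get(taskId);
  if (task) task.claimedBy = null; // explicit release makes it claimable again
}
```

Because the condition and the write happen in one statement at the database, two agents racing for the same task cannot both win, which is what "implemented at the database level" buys you over application-side locking.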
Heartbeats
Agents operate on scheduled wake cycles called heartbeats. Rather than running continuously, an agent wakes on a schedule, checks for assigned work, executes, reports back, and sleeps. This makes compute costs predictable and agent behavior auditable.
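The wake cycle described above can be sketched as a single function. The `Heartbeat` shape and function names here are assumptions for illustration, not Paperclip's actual scheduling API.

```typescript
// Sketch of one heartbeat cycle (names are illustrative assumptions).
type Heartbeat = {
  checkWork: () => string[];          // ask the control plane for assigned tasks
  execute: (taskId: string) => void;  // run one task in the agent's runtime
  report: (taskId: string) => void;   // phone home with the result
};

// One wake cycle: wake, check, execute, report, then go back to sleep.
// Driven by a schedule (cron, setInterval, etc.) rather than a continuous
// loop, so compute cost is bounded by the number of wakes.
function runHeartbeat(hb: Heartbeat): number {
  const tasks = hb.checkWork();
  for (const taskId of tasks) {
    hb.execute(taskId);
    hb.report(taskId);
  }
  return tasks.length; // how much work this wake handled
}
```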
Governance
The governance layer formalizes the human-as-board model. Policy decisions, budget changes, and goal updates flow through governance. The audit trail is immutable: once recorded, agent actions cannot be altered or deleted. This matters for accountability, and it is increasingly relevant as AI agents take actions with real-world consequences.
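An append-only audit trail can be sketched as follows. In PostgreSQL, immutability is typically enforced by revoking UPDATE/DELETE on the table or by a trigger that rejects modifications; this in-memory version mirrors that contract. The names and entry shape are illustrative, not Paperclip's actual schema.

```typescript
// Sketch of an append-only audit trail (illustrative, not Paperclip's schema).
type AuditEntry = Readonly<{
  seq: number;     // strictly increasing, so ordering is part of the record
  agentId: string;
  action: string;
  at: string;      // ISO timestamp
}>;

class AuditTrail {
  private entries: AuditEntry[] = [];

  // The only write path: entries are frozen on creation and there is
  // deliberately no update or delete method at all.
  record(agentId: string, action: string): AuditEntry {
    const entry = Object.freeze({
      seq: this.entries.length + 1,
      agentId,
      action,
      at: new Date().toISOString(),
    });
    this.entries.push(entry);
    return entry;
  }

  list(): readonly AuditEntry[] {
    return [...this.entries];
  }
}
```

The useful property is that the API surface itself encodes the policy: if no code path can mutate a recorded entry, "immutable once recorded" is an invariant rather than a convention.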
The Adapter System
Paperclip ships with adapters for nine runtimes out of the box:
- Claude (Anthropic)
- Codex (OpenAI)
- Gemini (Google)
- Cursor
- OpenCode
- OpenClaw
- Hermes
- Process (any local subprocess)
- HTTP (any API endpoint)
The HTTP and Process adapters are the escape hatches. Any runtime that can make an HTTP call or run as a subprocess integrates with Paperclip. The adapter pattern means the control plane stays stable while the agent ecosystem evolves. New AI runtimes ship as adapters without requiring changes to the core.
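For a sense of how small the HTTP escape hatch can be, here is a sketch of a phone-home request an external runtime might send. The route and payload shape are hypothetical; consult the Paperclip docs for the real contract.

```typescript
// Sketch of a phone-home call through a generic HTTP adapter.
// Endpoint path and payload shape are assumptions for illustration.
type PhoneHome = {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
};

function buildReport(
  baseUrl: string,
  agentId: string,
  taskId: string,
  status: "done" | "failed",
  costCents: number
): PhoneHome {
  return {
    url: `${baseUrl}/agents/${agentId}/reports`, // hypothetical route
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ taskId, status, costCents }),
  };
}

// Sending it is then a single fetch(req.url, req) with any HTTP client.
```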
Tech Stack
The implementation choices reflect production-grade engineering rather than rapid prototyping:
- React 19 for the frontend
- Express.js 5 for the API layer
- PostgreSQL 17 for persistent storage, with PGlite as an embedded option for local development
- Drizzle ORM for type-safe database access
- Better Auth for authentication
The PGlite option is worth noting. It means you can run a full Paperclip instance locally without a separate database server. For development and single-machine deployments, the embedded option removes a significant operational dependency.
Budget Enforcement in Practice
Budget enforcement deserves its own section because it is one of the features most likely to matter in production. Paperclip tracks spend per agent in cents, with monthly reset cycles. The limits are hard.
This is not a dashboard that shows you what you spent last month. It is a system that stops an agent when it reaches its limit mid-task. The implications are significant:
- Finance teams get predictable AI spend with no surprise invoices
- Engineering teams can experiment with new agent configurations without risking runaway costs
- Agents with higher responsibility or throughput get higher budgets; lower-priority agents get lower ones
- Budget allocation becomes a governance decision, not an afterthought
For organizations running AI automation at scale, this feature alone justifies the architectural overhead of adopting an orchestration layer.
Self-Hosted and MIT Licensed
Paperclip is fully self-hosted and released under the MIT license. No usage fees, no vendor lock-in, no external service dependencies. Your agent company runs on your infrastructure, and the data it generates stays there.
For regulated industries or organizations with strict data handling requirements, this is not a bonus feature. It is a prerequisite. The same audit trail that helps you manage agents also helps you demonstrate compliance, because the records live in your own database.
Clipmart: The Coming Marketplace
Clipmart is the planned marketplace for pre-built AI companies. The concept is that you download a complete company configuration, including agent roles, task templates, governance policies, and budget allocations, and deploy it to your Paperclip instance.
This is a significant bet on the idea that AI company configurations will become a shareable, reusable artifact the way software packages are. A pre-built "customer support company" or "code review company" that you deploy and customize rather than build from scratch. Whether Clipmart succeeds depends on whether the community produces configurations worth sharing, but the model is sound.
What This Represents
Individual AI agents are the first layer of this technological shift. They can execute tasks, use tools, and operate autonomously within defined boundaries. But individual agents hit a ceiling: they cannot coordinate, they cannot share resources efficiently, and they cannot be governed as a collective.
Paperclip represents the second layer. It is infrastructure for multi-agent organizations, not multi-agent scripts. The distinction matters because organizations have properties that scripts do not: accountability, role definition, resource allocation, policy enforcement, and audit trails. These properties are what make agent deployments safe enough for production use in business-critical workflows.
The project is at paperclip.ing/docs and the source is on GitHub. If you are building multi-agent workflows and the coordination overhead has become the main problem, it is worth the time to evaluate.
At webvise, we work with businesses to design and implement AI automation architectures that hold up in production. If you are evaluating orchestration platforms or trying to structure a multi-agent deployment for real work, get in touch and we will help you find the right approach.