Manifesto

Safe Agents Are
Free Agents

The Trust Problem

AI agents are the most powerful tools we have ever built. An agent that can read files, call APIs, spawn sub-processes, and compose capabilities on the fly is extraordinarily useful. It is also extraordinarily dangerous.

Today, most agent frameworks give you two choices. Run the agent on your own machine—with your credentials, your network, and your data—and hope the guardrails hold. Or hand everything to a cloud platform that isolates the agent by moving your data onto someone else's computer. One trusts the agent too much. The other trusts the cloud too much.

Hope is not an engineering strategy.

A prompt injection attack, a confused deputy, a privilege escalation—and the cage opens. The agent reads your SSH keys, exfiltrates your data, or acts with authority you never intended to grant. This is not a theoretical risk. It is the defining security challenge of the agentic era.

The Inversion

InferNode does not add guardrails to an insecure foundation. It inverts the model entirely.

Each agent runs in its own namespace—a view of the world constructed from only the resources you explicitly share. The agent does not start with access to everything and get restricted. It starts with access to nothing and receives only what you grant.

Try to access something you weren't granted? There is no "permission denied" to probe. In the agent's namespace, the resource simply does not exist. There is no attack surface because there is no surface. A prompt injection cannot read your SSH keys, because in the agent's namespace, SSH keys do not exist. There is nothing to exploit.
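The deny-by-default model above can be sketched in a few lines. This is an illustrative toy, not InferNode's implementation: a namespace holds only what was explicitly granted, and an ungranted path fails exactly the way a nonexistent file does, revealing nothing.

```python
class Namespace:
    """A toy deny-by-default namespace: only granted paths exist at all."""

    def __init__(self):
        self._grants = {}  # path -> resource; everything else is absent

    def grant(self, path, resource):
        """Explicitly share one resource into this agent's view."""
        self._grants[path] = resource

    def open(self, path):
        # An ungranted path raises the same error as a missing file,
        # so the agent cannot distinguish "denied" from "does not exist".
        if path not in self._grants:
            raise FileNotFoundError(path)
        return self._grants[path]


ns = Namespace()
ns.grant("/work/report.txt", "quarterly figures")
print(ns.open("/work/report.txt"))  # granted: visible

try:
    ns.open("/home/user/.ssh/id_rsa")  # never granted
except FileNotFoundError:
    print("no such file")  # indistinguishable from nonexistence
```

The design choice is the point: there is no access-control check to bypass, because the unshared resource was never constructed into the agent's world.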

This is not a bare assertion. The isolation model has been formally verified using TLA+, SPIN, and CBMC: mathematical proof, not just testing.

Safety Creates Freedom

Here is the insight that changes everything: a provably contained agent is an agent you can trust with real autonomy.

When you cannot prove containment, you restrict. You add approval workflows, human-in-the-loop checkpoints, capability limits. The agent becomes a chatbot with extra steps. The productivity gains never arrive.

When you can prove containment, you liberate. Give the agent more tools, more data, more autonomy—without more risk. Let it compose capabilities, spawn sub-agents, operate across devices. The human-AI team can finally do what it was always supposed to: work together at the speed of thought.

Security is not a constraint on productivity. It is the foundation that makes productivity possible.

The Architecture of Trust

InferNode is derived from Inferno® OS, an operating system developed by Bell Labs—the same lineage that gave us Unix, C, and the foundations of modern computing. But where the industry chose the desktop, Inferno® chose a different path: everything is a file, and every file is accessible across the network.

This was not designed for AI agents. It was designed for distributed systems that demand trust. Telecommunications networks serving billions. Industrial control systems. Environments where failure is not an option. The fact that it is precisely what AI agents need is not a coincidence—it is convergence.

The 9P protocol—Inferno®'s native language—lets resources flow across networks with radical simplicity. No REST APIs. No OAuth handshakes. No middleware. No cloud dependency. Just files, namespaces, and the freedom to compose them as the mission demands.
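The simplicity described above can be illustrated with a toy in-memory model of a 9P-style file server. The operation names (Twalk, Tread) are real 9P2000 request types, but the logic here is a conceptual sketch under assumed data, not a wire-compatible implementation: the server exports a tree, and nearly everything reduces to walking the tree and reading a file.

```python
# Toy model of a 9P-style exported tree: directories are dicts,
# files are bytes. No REST routes, no auth middleware -- just names.
tree = {"work": {"report.txt": b"quarterly figures"}}


def walk(root, path):
    """Twalk: descend the exported tree one name at a time."""
    node = root
    for name in path.split("/"):
        if not isinstance(node, dict) or name not in node:
            return None  # the walk fails: the name does not exist
        node = node[name]
    return node


def read(node):
    """Tread: return a file's contents, or None for non-files."""
    return node if isinstance(node, bytes) else None


data = read(walk(tree, "work/report.txt"))
print(data.decode())  # the whole "API" is walk + read
```

Because every resource is reached the same way, composing them is just binding more subtrees into the namespace; a name that was never exported cannot even be walked to.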

Veltro: The Unchained Agent

Veltro is an AI agent that does not live inside an application. It lives inside a namespace—your namespace—alongside your data, your tools, your sensors, and your decisions. It does not ask for API access. It does not require a platform subscription. It composes capabilities from whatever resources are present, wherever they are: on your device, on the edge, across your network.

Veltro can do more because it is provably contained. It can act autonomously because you can prove the boundaries hold. It does not puppet you through a workflow designed by a product manager in California. It works with you, at the speed of thought, toward the outcome that matters.

This is what human-AI teaming actually looks like when trust is the foundation, not an afterthought.

The Future Is Not Apps

The application era is ending. Not because applications are bad, but because they encode the wrong abstraction. An app assumes a single user, a single device, a predetermined workflow. The world we are building assumes none of these things.

The future belongs to capabilities and skills—composable, distributed, trustworthy. Data sources that are accessible without gatekeepers. AI agents that are peers, not plugins. Edge computing that is affordable and sovereign, not rented from a hyperscaler on someone else's terms.

Inferno® makes this possible. Not as a prototype. Not as a research curiosity. As a production system with a thirty-year pedigree, proven at the scale of telecommunications networks that serve billions.

A Declaration

We declare that AI agents deserve provable containment, not bolted-on guardrails.

We declare that security and productivity are not opposites—security is the foundation that enables productivity.

We declare that the human and the AI deserve to work in a shared space, with shared context, toward shared outcomes—and that this is only possible when trust is proven, not assumed.

We declare that computing should be distributed, composable, and sovereign by default.

We declare that it is time to build on a foundation worthy of trust.

Safe agents are free agents. InferNode lights the way.