Comparison
The only agent framework with
proven isolation.
Other frameworks bolt guardrails onto a permissive environment and hope they hold. InferNode makes unauthorized access structurally impossible. Here are the facts.
Side by Side
Framework comparison
| Dimension | InferNode | LangChain | CrewAI | AutoGPT | AutoGen | OpenClaw | Cloud Platforms |
|---|---|---|---|---|---|---|---|
| Where agents run | Dedicated OS namespace (local, remote, or distributed) | Host process (Python/JS) | Host process (Python) | Docker Compose stack | Host process (Python); optional gRPC distributed | Local gateway (WebSocket control plane) | Vendor cloud (AWS, Azure, GCP) |
| Security model | Namespace isolation (structural) | Application-level guardrails | Task-output guardrails | Docker containers (no built-in guardrails) | Optional Docker sandbox; no default isolation | Optional Docker sandbox (bypasses documented) | Cloud IAM + guardrails |
| Formal verification | TLA+, SPIN, CBMC | None | None | None | None | None | Vendor-dependent |
| Hardware | Bare metal, hosted VM, Linux, macOS, Windows | Any (host-dependent) | Any (host-dependent) | Any (host-dependent) | Any (host-dependent) | Any (host-dependent) | Vendor-managed |
| Protocol | 9P (Plan 9) | REST / SDK | REST / SDK | REST / SDK | gRPC + protobuf; REST for LLMs | WebSocket / REST | Vendor APIs |
| License | MIT | MIT | MIT | MIT + Polyform Shield (platform) | MIT | MIT | Proprietary |
| Offline capability | Yes (Ollama over 9P, first-class) | Local models supported (Ollama) | Partial (Ollama supported, some tools require API) | Partial (Ollama, self-hosted only) | Supported (Ollama, user-managed) | Yes (Ollama, first-class) | No |
| Distributed support | Native — the network is the computer | Via LangGraph Platform or external | Via external orchestration | Limited | Yes (gRPC runtime, Python + .NET) | Gateway hub-and-spoke | Vendor-managed |
Key Differentiator
Where agents run changes everything.
Most agent frameworks run as libraries inside a host process—your Python script, your Node.js server, your Docker container. The agent inherits whatever permissions the host has. Your credentials, your filesystem, your network access.
Cloud platforms solve this by moving the agent to someone else’s computer. The isolation is real, but so is the dependency. Your data transits their network, runs on their hardware, under their terms.
InferNode takes a third path. The agent runs in a dedicated OS namespace on any system you control—your laptop, a remote server, or distributed across both. Isolation comes from the operating system, not from a container or a cloud boundary. The agent sees only what you mount into its namespace—nothing more.
Host-based frameworks
Agent runs as a library in your process. Fast to start, but inherits all host permissions. Security depends on guardrails holding under adversarial input.
Cloud platforms
Agent runs on vendor infrastructure. Strong isolation, but you give up data sovereignty, offline capability, and hardware choice.
InferNode
Agent runs in an OS-level namespace on any system you control—local, remote, or distributed. Structural isolation. Formally verified. Offline-capable. Sovereign.
Key Differentiator
Guardrails vs. architecture.
Most agent frameworks implement security as a layer of checks on top of an otherwise permissive environment. The agent can access the filesystem, the network, and your credentials—but guardrails intercept dangerous requests before they execute.
This works until it doesn’t. A novel prompt injection, a confused deputy attack, an unexpected tool interaction—and the guardrail fails. The failure mode is total: the agent has the same access as the host process.
InferNode does not add checks to a permissive environment. It constructs a restrictive environment from the ground up. The agent starts with nothing and receives only what you grant. An attack cannot bypass the guardrail because there is no guardrail to bypass—the resources simply do not exist in the agent’s world.
Architecture beats policy. Every time.
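The distinction can be sketched in a few lines of Python. This is a toy model, not InferNode code: the deny-list, the tool table, and both function names are invented for illustration. A guardrail screens requests against the patterns it knows about; a structural check resolves tools from a world that never contained the dangerous resource.

```python
# Toy model: guardrails vs. structural isolation (illustrative only,
# not InferNode source code).

DENY_LIST = {"shell/exec"}                          # guardrail: patterns the filter knows
TOOLS = {"search": lambda q: f"results for {q}"}    # structural: only granted tools exist

def guardrail_call(tool, arg):
    """Permissive environment plus a filter: safe only while the deny-list is complete."""
    if tool in DENY_LIST:
        raise PermissionError(f"blocked: {tool}")
    # A novel variant ("sh3ll/exec") sails straight past the filter here.
    return f"executed {tool}({arg})"

def structural_call(tool, arg):
    """Restrictive environment: anything not granted simply does not resolve."""
    if tool not in TOOLS:
        raise FileNotFoundError(f"ENOENT: {tool}")  # not denied -- nonexistent
    return TOOLS[tool](arg)

print(guardrail_call("sh3ll/exec", "rm -rf /"))  # bypass: the filter missed the variant
try:
    structural_call("sh3ll/exec", "rm -rf /")
except FileNotFoundError as e:
    print(e)  # fails regardless of spelling: the resource is not in the namespace
```

The asymmetry is the point: the guardrail fails open on anything it did not anticipate, while the structural check fails closed on everything it was not granted.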
Key Differentiator
9P vs. MCP: protocol matters.
The Model Context Protocol (MCP) standardises how agents discover and call tools via JSON-RPC. It is a step forward. But every tool schema is included in every request context—exposing the full capability surface to any content the model processes, including injected payloads.
InferNode uses 9P—the Plan 9 file protocol. Tools are files. Write arguments, read results. The agent’s “tool list” is `ls /tool`. No JSON schemas, no client libraries, no serialisation layer. LLMs already understand filesystem semantics from training data—they correctly infer tool usage from directory listings alone, without explicit schemas.
The security difference is structural. MCP tool schema poisoning is a semantic attack—the model must recognise it as malicious and choose to refuse. 9P reduces the attack surface to syntactic issues (shell quoting)—standard, verifiable programming. And because 9P resources are namespace paths, access control is the same namespace restriction that isolates everything else. No separate permission model. No tokens to manage.
The formal basis for this analysis: Namespace-Bounded Agents (Finn, 2025).
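The tools-as-files model is easy to demonstrate with ordinary files. The sketch below is illustrative only: real 9P serves these paths over the wire, and a file server computes the result when it is read. Here, plain directories and a handler table stand in for the served tree.

```python
import tempfile
from pathlib import Path

# Sketch: a /tool directory where each entry is a tool. In a 9P system this
# tree is served over the protocol; here plain files stand in for the idea.
root = Path(tempfile.mkdtemp()) / "tool"
(root / "echo").mkdir(parents=True)
(root / "upper").mkdir()

# Discovery is a directory listing -- that is the whole "tool list".
print(sorted(p.name for p in root.iterdir()))   # ['echo', 'upper']

# Invocation is write-arguments, read-result. A real file server computes the
# result on read; this toy evaluates eagerly on write.
HANDLERS = {"echo": lambda s: s, "upper": str.upper}

def call(tool: str, args: str) -> str:
    d = root / tool
    if not d.is_dir():
        raise FileNotFoundError(f"ENOENT: /tool/{tool}")
    (d / "args").write_text(args)                      # write arguments
    (d / "result").write_text(HANDLERS[tool](args))    # server-side compute (simulated)
    return (d / "result").read_text()                  # read results

print(call("upper", "hello"))   # HELLO
```

Note there is no schema anywhere: the directory listing is the capability surface, and a tool that is not mounted cannot be called at all.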
MCP / JSON-RPC
Full JSON schemas (200–500 tokens per tool) included in every request. All registered tools visible to any content the model processes. Security depends on the model refusing malicious instructions—a behavioural defence.
9P (InferNode)
Unmounted tools don’t exist—not denied, nonexistent. A prompt injection saying “call `/shell/exec`” fails with `ENOENT` at the OS level, regardless of whether the model complies. The paper’s benchmarks show filesystem semantics can replace JSON schemas entirely—248 tokens for 14 tools where MCP requires 1,430—a direction InferNode is built to realise.
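The `ENOENT` behaviour is ordinary operating-system semantics, observable from any process. A minimal check (the path below is hypothetical):

```python
import errno

# An unmounted path does not resolve, regardless of who (or what) asks.
# The path here is hypothetical; any absent path behaves the same way.
try:
    open("/no-such-namespace/shell/exec")
except OSError as e:
    print(e.errno == errno.ENOENT)  # True: the tool is nonexistent, not denied
```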
0%
cross-tool attack success rate
Claude, GPT-5, GPT-4o — n=372, structural guarantee
Honest Assessment
What the alternatives do well.
InferNode is not the right choice for every use case. These frameworks and platforms have real strengths that matter.
LangChain
Massive ecosystem
The largest collection of integrations, document loaders, and chain templates in the agent space. If you need to connect to a specific SaaS tool, LangChain probably has a connector.
CrewAI
Multi-agent orchestration
Elegant role-based agent design with clear task delegation. Excellent for modelling team-like workflows where agents have distinct specialities.
AutoGPT
Autonomous task decomposition
Pioneered the concept of fully autonomous agents that break down high-level goals into executable sub-tasks without human guidance.
AutoGen
Code-gen agent prototyping
Fastest way to stand up multi-agent workflows that generate, execute, and debug code through conversation loops. Strong community, extensive examples, and broad LLM compatibility.
OpenClaw
Self-extending personal agent
The fastest-growing agent project on GitHub (247k+ stars). Agents can write and modify their own skills through conversation. Massive integration ecosystem across messaging platforms, smart home, and productivity tools.
Cloud Platforms
Enterprise integration
Deep integration with existing enterprise infrastructure, compliance certifications, managed scaling, and support contracts that large organisations require.
The Bigger Picture
Why architecture matters.
Feature lists change quarterly. Integrations are added. APIs evolve. What does not change easily is the fundamental architecture—where agents run, how isolation works, what happens when an attack succeeds.
InferNode bets on a 30-year-old insight from Bell Labs: that namespace isolation at the OS level is the correct abstraction for distributed, untrusted computation. This is not a novel idea. It is a proven one, applied to a new domain.
The research backs it up. The namespace-bounded agent architecture has been evaluated against the AgentDojo benchmark (629 injection attacks, 4 domains) and a 31-attack corpus across three models. 75.2% of attacks require cross-tool access and are structurally prevented—the tool does not exist in the agent’s namespace, period. Multi-model validation (Claude, GPT-5, GPT-4o; n=372) confirms 0% attack success rate with defence-in-depth: behavioural blocking when models refuse, structural blocking at the OS level when they don’t. InferNode implements this architecture with `restrictdir()`, an 8-step namespace restriction policy, and post-restriction auditing via `verifyns()`.
The formal basis: Namespace-Bounded Agents: Capability-Based Security for LLM Systems via 9P Filesystem Semantics (Finn, 2025). Verification: TLA+, SPIN, CBMC—zero errors across all phases.
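The restrict-then-audit pattern can be sketched abstractly. The function names below echo restrictdir() and verifyns() from the text, but the signatures and the set-based model are invented for illustration: the real functions operate on OS namespaces, not Python sets.

```python
# Toy model of namespace restriction and post-restriction audit.
# Illustrative only: Python sets stand in for mount tables here.

def restrict(grants: set) -> frozenset:
    """Build the agent's world from nothing: it contains the grants, only."""
    return frozenset(grants)

def verify(namespace: frozenset, forbidden: set) -> bool:
    """Audit after restriction: confirm no forbidden path survived."""
    return namespace.isdisjoint(forbidden)

ns = restrict({"/tool/search", "/data/project"})
print("/tool/search" in ns)                            # True: explicitly granted
print("/home/user/.ssh" in ns)                         # False: never existed here
print(verify(ns, {"/tool/shell", "/home/user/.ssh"}))  # True: audit passes
```

The audit step matters because restriction is a construction, not a filter: verification checks the finished world rather than guessing which requests might have slipped through.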
If you need the largest ecosystem of integrations today, use LangChain. If you need enterprise compliance checkboxes, use a cloud platform. If you need to build agent systems that are provably safe, sovereign, and distributed by design—InferNode is the foundation.
See the difference for yourself.
InferNode is open source, MIT licensed, and runs on any system you control.