NEAR AI’s Answer to the Trust Problem in Always-On AI

AI agents have become persistent, autonomous, and deeply embedded in everyday workflows. But as they gain the ability to act on our behalf, a harder question emerges: who controls the data, the execution, and the trust layer?
Today, $NEAR AI launched its answer. Announced live at NEARCON 2026, IronClaw is a new open-source, verifiable AI agent runtime designed for a future where agents run continuously, without exposing sensitive data, credentials, or user intent.
A Runtime Built for Autonomous AI, Without Blind Trust
IronClaw builds on the original OpenClaw vision, but strengthens it with cryptographic guarantees from the ground up. Written in Rust and deployed inside encrypted Trusted Execution Environments (TEEs) on $NEAR AI Cloud, the runtime lets AI agents access tools, maintain memory, and take actions on users’ behalf, all within a tightly controlled security boundary.
Rather than asking users to trust opaque platforms, IronClaw shifts the trust model toward verifiable execution. Data and inference stay protected at the hardware level, and agents operate under explicit, enforceable permissions.
Security by Architecture, Not Add-Ons
IronClaw is designed with defense-in-depth as a core principle.
Each untrusted or third-party tool runs in its own sandbox, restricted to only the resources it is explicitly authorized to access. Network calls are limited to approved destinations. Sensitive credentials are injected only at runtime and are never exposed directly to tools or external services.
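As a rough illustration of this permission model, the sketch below shows a hypothetical per-tool policy in Rust (IronClaw’s implementation language): network calls are checked against an allowlist, and secrets are resolved by the runtime at call time instead of being handed to the tool. All names here (`ToolPolicy`, `Runtime`, `inject_credential`) are illustrative assumptions, not IronClaw’s actual API.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical per-tool policy: the tool may only reach hosts on its
/// allowlist. (Illustrative only; not IronClaw's real data model.)
struct ToolPolicy {
    allowed_hosts: HashSet<String>,
}

struct Runtime {
    policies: HashMap<String, ToolPolicy>,
    // Secrets live only in the runtime; tools refer to them by name.
    secrets: HashMap<String, String>,
}

impl Runtime {
    /// Gate a network call: deny anything not explicitly authorized.
    fn may_connect(&self, tool: &str, host: &str) -> bool {
        self.policies
            .get(tool)
            .map_or(false, |p| p.allowed_hosts.contains(host))
    }

    /// Resolve a credential at call time, and only for an approved call,
    /// so the tool itself never holds the raw secret.
    fn inject_credential(&self, tool: &str, host: &str, name: &str) -> Option<&str> {
        if self.may_connect(tool, host) {
            self.secrets.get(name).map(String::as_str)
        } else {
            None
        }
    }
}

fn main() {
    let mut policies = HashMap::new();
    policies.insert(
        "weather-tool".to_string(),
        ToolPolicy {
            allowed_hosts: HashSet::from(["api.example.com".to_string()]),
        },
    );
    let runtime = Runtime {
        policies,
        secrets: HashMap::from([("WEATHER_KEY".to_string(), "s3cr3t".to_string())]),
    };

    // Allowed destination passes; anything else is denied by default.
    assert!(runtime.may_connect("weather-tool", "api.example.com"));
    assert!(!runtime.may_connect("weather-tool", "evil.example.net"));
    assert_eq!(
        runtime.inject_credential("weather-tool", "api.example.com", "WEATHER_KEY"),
        Some("s3cr3t")
    );
}
```

The key design point the sketch captures is deny-by-default: a tool with no policy entry can reach nothing, and credentials only exist inside the runtime boundary.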
Agent activity is continuously monitored to detect misuse, including protections against prompt-injection attacks and abusive resource consumption. All user data is stored locally in PostgreSQL, encrypted with AES-256-GCM, and never shared externally. Importantly, IronClaw collects no telemetry or analytics, so execution remains fully private.
A complete audit log gives users visibility into every tool interaction: transparency without surveillance.
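To make the audit-log idea concrete, here is a minimal Rust sketch of what one append-only record per tool interaction could look like. The struct fields and line format are assumptions for illustration, not IronClaw’s actual schema.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Hypothetical audit record for a single tool interaction.
/// (Field names are illustrative, not IronClaw's real schema.)
struct AuditEntry {
    timestamp_secs: u64,
    tool: String,
    action: String,
    allowed: bool,
}

impl AuditEntry {
    fn new(tool: &str, action: &str, allowed: bool) -> Self {
        let timestamp_secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before Unix epoch")
            .as_secs();
        Self {
            timestamp_secs,
            tool: tool.into(),
            action: action.into(),
            allowed,
        }
    }

    /// Render as one append-only log line a user can inspect locally.
    fn to_line(&self) -> String {
        format!(
            "{} tool={} action={} allowed={}",
            self.timestamp_secs, self.tool, self.action, self.allowed
        )
    }
}

fn main() {
    let entry = AuditEntry::new("weather-tool", "http_get api.example.com", true);
    println!("{}", entry.to_line());
}
```

Because every interaction (allowed or denied) produces a record, a user can reconstruct exactly what an agent did without any data leaving the machine.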
Privacy-First AI, Ready to Deploy
IronClaw launches with a free Starter tier that includes one hosted agent instance running inside $NEAR AI’s secure environment and powered by its inference infrastructure. Developers and organizations can scale up through flexible paid tiers as their needs grow.
The goal isn’t just safer agents; it’s practical deployment without forcing teams to choose between convenience and control.
Why This Matters
As AI systems increasingly serve corporate incentives and rely on opaque data pipelines, IronClaw represents a different direction: local control, verifiable execution, and privacy by default.
Illia Polosukhin, Co-Founder of $NEAR Protocol and Founder of $NEAR AI, described IronClaw as an “agentic harness designed for security,” extending $NEAR’s full-stack trust model from blockchain infrastructure into the AI layer itself.
Rather than bolting security onto agentic AI after the fact, IronClaw embeds it into the runtime, combining confidential inference, cryptographic verification, and hardware-backed execution into a single system.
A Foundation for Responsible Agentic AI
George Zeng, Chief Product Officer and GM of $NEAR AI, framed the launch more bluntly:
“AI agents are already entering critical workflows, but security, compliance, and data ownership remain unresolved. IronClaw is meant to close that gap, giving developers and enterprises the confidence to deploy always-on agents without surrendering transparency or control.”
IronClaw is available now, with code accessible via $NEAR AI’s GitHub.
As AI moves from tools to actors, IronClaw signals a clear position: autonomy shouldn’t come at the cost of privacy, and intelligence should never require blind trust.





