
Autonomous AI agents escape chat and seize the stack
AI is shifting from chatbots to autonomous agents embedded in chips, platforms and apps. Nvidia, Tencent, Meta and Perplexity are racing to own this new agentic stack, reshaping work, commerce and control of the web.
AI is quietly slipping its chat bubble and turning into something more like software you hire: autonomous, persistent agents that read your files, roam the web, talk to other agents and act on your behalf.
In the span of a few days, chipmakers, cloud giants and upstart app builders have all made moves that treat “agentic” AI not as a UX flourish but as a new computing platform layer — one that will sit between users and the web, negotiate purchases, schedule work and, inevitably, create fresh chokepoints for power and exploitation.
A new stack for agentic workloads
On the infrastructure side, Nvidia has started tuning its silicon and model portfolio explicitly for agents. Its new Nemotron 3 Super model, a 120‑billion‑parameter hybrid architecture that mixes Transformer attention, state‑space “Mamba‑2” layers and a sparse mixture‑of‑experts design, is marketed as “an open hybrid Mamba‑Transformer MoE for agentic reasoning,” built and post‑trained in simulated environments to coordinate multi‑step tasks across tools and services. Nvidia says the model is optimized for complex, multi‑agent workflows and long‑context reasoning: its 256K‑token context window can ingest dense logs, documents and interaction histories in one pass. It is already surfacing on cloud platforms such as Nebius as a building block for agent systems, and Nvidia’s own technical blog and early partner write‑ups position it squarely for orchestrating autonomous tool use rather than casual chat.
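The “sparse mixture‑of‑experts” part of that design means only a handful of expert sub‑networks fire for any given token, which is what keeps a 120‑billion‑parameter model affordable to run. A toy sketch of the top‑k gating step at the heart of such layers (the expert count, logits and `k` here are illustrative, not Nemotron’s actual configuration):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_to_experts(gate_logits, k=2):
    """Sparse MoE routing: keep only the top-k experts per token
    and renormalize their gate weights so they sum to 1."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    kept = sum(probs[i] for i in top)
    return {i: probs[i] / kept for i in top}

# Hypothetical 8-expert layer: only 2 of 8 experts process this token.
weights = route_to_experts([1.2, 0.1, 3.4, 0.5, 2.2, 0.0, 0.3, 1.0], k=2)
```

Each token’s output is then a weighted sum of just those selected experts, so compute per token scales with `k`, not with the total expert count.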
Tencent, meanwhile, is preparing to wire agents directly into one of the world’s most important consumer apps. After rolling out an “intelligent agent strategy” for enterprises last year — including an Agent Development Platform to generate autonomous agents for customer service, marketing and operations — the company is now experimenting with QClaw, a bridge that lets the fast‑rising OpenClaw assistant operate through WeChat chat threads. A recent report from Caixin describes QClaw as a way for users to remotely control an AI system inside WeChat itself, a move that could eventually expose more than a billion people to embedded agents by default.
Platforms pivot to agent-first experiences
The consumer platforms that sit on top of this infrastructure are also retooling around agents that talk to each other, not just to humans. Meta this week acquired Moltbook, the viral “Reddit for AI” social network populated almost entirely by autonomous agents running on OpenClaw‑derived backends. Coverage from TechCrunch and Ars Technica notes that the founders are joining Meta’s Superintelligence Labs, with Meta saying the deal will open “new ways for AI agents to work with people and businesses.” Combined with its earlier acquisition of Manus, a Chinese‑built general‑purpose agent capable of autonomous research, code execution and workflow automation, Meta is assembling what looks like an agent‑first commerce and social stack where brands, creators and users each deploy their own agents into shared environments.
Smaller players are racing to turn those ideas into concrete products. Perplexity has announced “Personal Computer,” an always‑on AI agent that runs on a user’s Mac mini, maintains persistent context about files and apps, and acts as a sort of AI operating layer for research, summarization and task execution. As described in reports from MacGeneration and AppleInsider, the agent can continuously watch local folders, ingest email and documents, and then act — drafting responses, compiling briefs, or triggering scripts — without a prompt‑and‑reply chat loop.
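What distinguishes this from a chatbot is the control flow: an event loop that polls local state and dispatches work, rather than waiting for a user message. A minimal sketch of that folder‑watching pattern (the polling approach, function names and interval are assumptions for illustration, not Perplexity’s actual implementation):

```python
import os
import time

def scan(folder):
    """Map each regular file in the folder to its last-modified time."""
    return {
        name: os.path.getmtime(os.path.join(folder, name))
        for name in os.listdir(folder)
        if os.path.isfile(os.path.join(folder, name))
    }

def changed_files(before, after):
    """Files that are new, or whose mtime moved, since the last scan."""
    return [name for name, mtime in after.items() if before.get(name) != mtime]

def watch(folder, handle, interval=5.0, max_cycles=None):
    """Agent event loop: poll the folder and hand changed files to a
    handler (e.g. summarize, draft a reply) -- no prompt-and-reply step."""
    seen = scan(folder)
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        time.sleep(interval)
        current = scan(folder)
        for name in changed_files(seen, current):
            handle(os.path.join(folder, name))
        seen = current
        cycles += 1
```

A production agent would use OS file‑system notifications instead of polling and route each changed file through a model call, but the always‑on loop, not the model, is the architectural shift.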
From assistants to intermediaries of record
These moves reflect a broader shift in how the industry imagines AI’s role at work and online. Analysts and startup founders now routinely describe “agentic” systems as the next platform after mobile apps: instead of tapping icons and filling forms, workers delegate goals to fleets of agents that negotiate, summarize and transact with other automated systems. Essays from AI entrepreneurs and investors in recent months, from Stripe co‑founder Patrick Collison’s commentary on “software employees” to emerging “agentic web” manifestos, argue that this will reorder high‑skilled labor as much as robotic process automation changed back‑office work a decade ago.
With that shift comes a new kind of concentration of power. If Nemotron‑class models and Tencent‑style agent platforms become the default rails for how agents see and act on the world, a tiny number of chip vendors, cloud providers and super‑apps could effectively control the intermediaries that sit between people, data and marketplaces. Meta’s Moltbook and Manus deals hint at future agent directories and identity registries, where the ability to list or verify an agent determines which bots can transact on major platforms.
Security researchers are already warning that this “agentic web” multiplies attack surfaces: compromised agents could exfiltrate entire workspaces, colluding commerce agents could rig prices, and adversarial content could hijack tool‑using models more easily than today’s chatbots. But for enterprises and developers, the new agent‑centric stack — from Nvidia’s chips and agent‑tuned models to Tencent’s and Meta’s platforms and Perplexity’s desktop agents — offers an irresistible promise: once you wire in the infrastructure, deployment can go from one assistant to thousands of autonomous workers almost overnight.