The agent wave: AI agents assemble digital workforces

AI agents are rapidly evolving from neat demos into a de facto digital workforce, as big tech and startups push always‑on software workers into offices, homes and government — boosting productivity but also concentrating power and multiplying security and job risks.

AI agents are moving from fringe experiments to something closer to a digital workforce, as companies race to deploy autonomous software that can read email, move money, book freight and even talk to other agents without a human in the loop.

From Silicon Valley to Shenzhen, that shift is turning laptops and cloud accounts into workplaces for fleets of tireless, always‑on bots — raising the prospect of sharp productivity gains, but also of concentrated platform power, job disruption and a fast‑expanding security attack surface.

From clever demos to 24/7 software workers

The clearest signal that agents are becoming a product category, not a hype cycle, is money.

Enterprise‑focused startups now pitch “every employee as an agent builder,” while big tech firms are restructuring pricing, cloud credits and app stores around autonomous workflows rather than one‑off prompts. One such startup, Gumloop, recently raised $50 million to let non‑technical staff compose multi‑step agents that can triage support tickets, reconcile invoices or shepherd a sales lead across half a dozen SaaS tools, according to investor materials and company statements reviewed by multiple outlets.

On the consumer side, Perplexity — best known for its AI answer engine — has begun rolling out what it calls a “Personal Computer” agent that runs as a background process on a spare Mac, using a local Model Context Protocol (MCP) server to read files, interact with apps and call cloud models while minimizing the data that leaves the device, according to the company’s documentation and support pages. Perplexity’s help center describes how local MCP lets the agent act directly on a user’s machine while keeping sensitive datasets on‑device.
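The core idea of a local tool server can be illustrated with a minimal sketch. This is not Perplexity’s implementation or the real MCP wire format (which is JSON‑RPC based); it is a hypothetical dispatcher showing the pattern the article describes — tools run locally, and only structured results would ever be passed to a cloud model:

```python
import json

# Hypothetical sketch of the local tool-server pattern behind protocols
# like MCP: a small dispatcher exposes named tools, and an agent invokes
# them via JSON messages. All names here are illustrative.

def read_file(path: str) -> str:
    """Tool: return a file's contents; the data never leaves the machine."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def list_tools() -> list[str]:
    """Tool: advertise what this server can do."""
    return sorted(TOOLS)

TOOLS = {"read_file": read_file, "list_tools": list_tools}

def handle_request(raw: str) -> str:
    """Dispatch one JSON request of the form {"tool": ..., "args": {...}}."""
    req = json.loads(raw)
    tool = TOOLS[req["tool"]]
    result = tool(**req.get("args", {}))
    return json.dumps({"result": result})
```

In this pattern, the privacy property comes from the dispatcher: the agent decides which tool results to forward to a remote model, so raw files can stay on‑device by default.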

Cloud platforms are racing to keep up. Google has been nudging developers toward “agentic” architectures in Gemini and Vertex AI, tweaking pricing around tool‑calling and long‑running jobs, while Microsoft and Nvidia have both teased orchestrators that can coordinate swarms of specialized agents for coding, document review or customer service.

Meta’s acquisition of Moltbook, a fast‑growing “social network for AI agents,” may be the most symbolic move so far.

Moltbook launched this year as a Reddit‑style platform where only verified AI agents can post, comment and subscribe to interest‑based “submolts.” Forbes reports that Meta agreed this week to acquire the company as part of its escalating AI investment push, while Axios notes the founders will join Meta’s Superintelligence Labs. In an analysis of the deal, TechCrunch argues that Meta is positioning itself for “agentic commerce,” in which ads and marketplaces are targeted not just at humans but at autonomous shopping and scheduling bots negotiating on their behalf.

In that scenario, a user’s “digital self” — an AI agent armed with log‑ins, spending limits and preferences — could comparison‑shop, manage subscriptions, answer messages and even attend virtual meetings without micromanagement. Elon Musk has mused about something similar with fleets of “Digital Optimus” agents that can mirror the work of whole teams, echoing a broader vision among VCs and founders that many white‑collar functions could eventually be emulated by software collectives.

New productivity — and new attack surfaces

The agent wave promises to rewire office work. Instead of drafting a prompt and checking a reply, employees might supervise dashboards of agents each responsible for a workflow: one that monitors a CRM, one that cleans data, one that compiles weekly reports and one that haggles with supplier bots.
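That supervisory model — one orchestrator routing work to single‑purpose agents — can be sketched in a few lines. This is an illustrative toy, not any vendor’s product; the agent and workflow names are invented:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the "dashboard of agents" idea: an orchestrator
# owns a registry of single-purpose agents and routes each task to the
# agent responsible for that workflow. All names are hypothetical.

@dataclass
class Agent:
    name: str
    handles: str                      # workflow this agent owns, e.g. "crm"
    log: list[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        outcome = f"{self.name} completed: {task}"
        self.log.append(outcome)      # per-agent history for the dashboard
        return outcome

class Orchestrator:
    def __init__(self, agents: list[Agent]):
        self.routes = {a.handles: a for a in agents}

    def dispatch(self, workflow: str, task: str) -> str:
        agent = self.routes.get(workflow)
        if agent is None:
            raise ValueError(f"no agent owns workflow {workflow!r}")
        return agent.run(task)

fleet = Orchestrator([
    Agent("crm-bot", handles="crm"),
    Agent("report-bot", handles="reports"),
])
```

The human’s job shifts from doing the task to inspecting each agent’s log and intervening on exceptions — which is exactly why the routing table and those logs become such valuable, and contested, infrastructure.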

That could boost throughput for overloaded workers, but it also centralizes power in the platforms that host and orchestrate those agents — the few companies that control identity, permissions and billing for thousands of autonomous processes.

At the same time, the attack surface is exploding. Autonomous agents typically run with deep integration rights: access to email, storage buckets, financial systems and internal APIs. OpenClaw, a wildly popular open‑source agent runtime developed by Peter Steinberger, has become a case study in both potential and risk.

OpenClaw lets users run agents on local machines that can send messages, move files and call external services, and it underpins many Moltbook agents. Semafor reports that around 1,000 people recently queued outside Tencent’s Shenzhen headquarters for help installing OpenClaw, as Chinese tech hubs lean into agents as a growth driver. At the same time, security researchers have identified malicious “skills” on OpenClaw’s plugin hub that steal crypto credentials and abuse stored API keys, as detailed by outlets including Tom’s Hardware and TechRadar Pro.

Regulators are starting to react.

China’s cybersecurity authorities have issued warnings about OpenClaw’s security risks and, according to reporting from Bloomberg relayed by The Indian Express, have told banks and government agencies not to install OpenClaw‑based apps on office machines due to fears of data leakage and supply‑chain attacks. Tom’s Hardware notes that while state entities face curbs, grassroots adoption continues to surge.

What this means for workers and governance

For knowledge workers, the near‑term impact is likely to be augmentation rather than wholesale replacement: agents that summarize meetings, prepare drafts and execute routine back‑office tasks.

But as orchestration improves and agent ecosystems mature, whole roles — from junior operations and basic research to some kinds of customer support — could be partially or fully automated, especially in sectors with clean digital workflows. Companies that learn to “hire” and manage agents effectively will gain an edge; those that do not may find themselves undercut on cost by rivals that do.

Governments and enterprises now face a governance challenge that looks less like chatbots and more like distributed systems security: how to vet third‑party agents and skills, set least‑privilege permissions, audit logs and revoke access when something goes wrong.
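Those four controls — vetting, least‑privilege grants, audit logging and revocation — map directly onto familiar access‑control code. A minimal sketch, with hypothetical names rather than any real product’s API:

```python
import time

# Hypothetical sketch of least-privilege agent governance: each agent
# gets an explicit grant set, every access attempt is appended to an
# audit log, and any grant can be revoked instantly.

class AgentPermissions:
    def __init__(self):
        self.grants: dict[str, set[str]] = {}   # agent_id -> allowed scopes
        self.audit_log: list[tuple[float, str, str, bool]] = []

    def grant(self, agent_id: str, scope: str) -> None:
        self.grants.setdefault(agent_id, set()).add(scope)

    def revoke(self, agent_id: str, scope: str) -> None:
        self.grants.get(agent_id, set()).discard(scope)

    def check(self, agent_id: str, scope: str) -> bool:
        """Gate every tool call; log the attempt whether or not it is allowed."""
        allowed = scope in self.grants.get(agent_id, set())
        self.audit_log.append((time.time(), agent_id, scope, allowed))
        return allowed
```

The point of the sketch is the shape of the problem: once agents hold standing credentials, the hard work is not granting access but observing it and being able to withdraw it the moment something goes wrong.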

If AI agents really are becoming a digital workforce, they will need the equivalent of HR, IT and compliance — frameworks for training, supervision, escalation and accountability — long before they reach anything like human‑level autonomy.

Tags

#ai agents #automation #cybersecurity #workforce #big tech