
Cursor launches Automations to wire agents into real dev workflows
Cursor’s new Automations feature lets always-on coding agents trigger from code changes, Slack, and incident tools, promising faster iteration but raising fresh security and governance questions.
Cursor is wiring autonomous coding agents directly into the everyday tools of software teams, launching a feature called Automations that can trigger AI-driven work from code changes, Slack messages, or scheduled timers inside real production-style environments. The rollout, announced March 5, 2026, effectively turns Cursor’s AI-native IDE into a low‑code pipeline for agentic software maintenance and incident response, collapsing the gap between chatty copilots and fully automated development workflows.
Instead of waiting for a developer to type a prompt, Automations let teams define always‑on agents that respond to events from systems like GitHub, Linear, Slack, PagerDuty, and generic webhooks, according to Cursor’s changelog and documentation. When a trigger fires, the service spins up a cloud sandbox where an agent can refactor code, propose fixes, or summarize changes using the models and Model Context Protocol (MCP) tools the team has configured, Cursor says in its product notes.
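The trigger‑to‑agent flow described above can be pictured as a simple event dispatcher: a standing rule maps each event source to the instructions an agent should run with in its sandbox. The sketch below is purely illustrative — the trigger names, fields, and dispatch shape are assumptions for explanation, not Cursor’s actual API.

```python
# Hypothetical sketch of an event-driven automation dispatcher.
# Trigger names and the AgentJob shape are illustrative assumptions,
# not Cursor's real interface.
from dataclasses import dataclass


@dataclass
class AgentJob:
    trigger: str          # e.g. "github.push", "pagerduty.incident"
    instructions: str     # the standing prompt the agent runs with
    sandbox: bool = True  # each run happens in an isolated environment


# Standing rules: event source -> instructions for the agent
RULES = {
    "github.push": "Summarize the diff and flag risky changes.",
    "pagerduty.incident": "Query logs via MCP tools and draft a triage note.",
    "timer.weekly": "Produce a weekly summary of codebase changes.",
}


def dispatch(event_type: str, rules: dict = RULES):
    """Return an AgentJob for a recognized trigger, else None."""
    instructions = rules.get(event_type)
    if instructions is None:
        return None  # unrecognized events are ignored, not executed
    return AgentJob(trigger=event_type, instructions=instructions)
```

In this framing, the key property is that unrecognized events do nothing: only triggers a team has explicitly wired up ever spin up an agent run.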
How Automations change agentic coding
In its launch coverage, TechCrunch reports that Automations can kick off when new code lands in the codebase, when a Slack message arrives, or on a simple timer, with teams already using them for jobs like weekly codebase summaries and PagerDuty‑driven incident triage. The framework is pitched as a way to escape the “prompt‑and‑monitor” pattern that defines most agentic coding today: instead of dozens of ad‑hoc agents vying for a human’s attention, recurring tasks like regression sweeps or dependency updates can run on rails, looping human reviewers in only when needed.
Cursor’s own changelog describes Automations as a system for “always-on agents that run based on triggers and instructions you define,” emphasizing that each run happens in an isolated cloud environment and can leverage a memory tool to learn from past executions over time. The company says Automations can be created from templates or configured at cursor.com/automations, making it easier for teams to standardize patterns like “when a new Linear issue is created, attempt a first fix” — an early use case already surfacing in the company’s community forum. This effectively blends continuous integration practices with AI agents that can not only comment on code but attempt to modify it.
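The “memory tool” mentioned in the changelog suggests each automation can carry context forward between runs. How Cursor implements this is not public; the sketch below just illustrates the general idea — recent outcomes keyed by automation, replayed into the next run’s context — with all names being assumptions.

```python
# Illustrative sketch of per-automation run memory. The storage here is a
# plain in-process dict; Cursor's actual memory tool is not public, so this
# only demonstrates the concept of learning from past executions.
from collections import defaultdict


class RunMemory:
    def __init__(self):
        # automation name -> chronological list of run outcomes
        self._history = defaultdict(list)

    def record(self, automation: str, outcome: str) -> None:
        """Append the outcome of a finished run."""
        self._history[automation].append(outcome)

    def recall(self, automation: str, last_n: int = 3) -> list:
        """Return the most recent outcomes, e.g. to prepend to the
        agent's context on its next run."""
        return self._history[automation][-last_n:]
```

A pattern like this is what would let a “first fix for new Linear issues” automation avoid repeating an approach that failed review last week.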
The launch lands as Cursor deepens its footprint across developer tooling, including recent support as an AI agent inside JetBrains IDEs via the Agent Client Protocol, which lets JetBrains users call Cursor’s agentic workflows without leaving their existing environment, according to a JetBrains blog post. It also arrives amid surging competition from OpenAI, Anthropic and others in the race to turn coding agents from novelty into infrastructure, a shift chronicled in multiple recent funding and product announcements.
Security, governance and the new blast radius
The same features that make Automations appealing for busy teams — unattended execution, deep codebase access, and triggers from communication channels — also widen the blast radius if something goes wrong. As TechCrunch notes, Cursor envisions agents querying logs via MCP tools as soon as a PagerDuty incident opens, or continuously reviewing AI‑generated code, which means misconfigurations or prompt injections could now travel quickly from chat to code.
Security researchers have already warned that event‑driven agents embedded in CI/CD pipelines can amplify supply‑chain risks if they ingest untrusted inputs or operate with overly broad repository permissions, a concern echoed in recent software supply‑chain guidance from organizations like the OpenSSF and CISA. While Cursor’s documentation stresses isolated sandboxes and configurable tools, the practical safety of Automations will depend on how teams scope credentials, restrict which branches agents can touch, and log or gate their actions with human‑in‑the‑loop approval flows.
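Scoping of that kind can be expressed as an explicit policy check before any agent change lands unattended. The policy shape below is an assumption for illustration — Cursor does not document such an object — but it captures the two controls the guidance points at: branch restrictions and human‑in‑the‑loop approval.

```python
# Hypothetical governance gate for automation output. The AutomationPolicy
# shape is an assumption, not a documented Cursor feature: it models
# (1) which branches an agent may push to unattended and
# (2) whether a human reviewer must approve before anything merges.
from dataclasses import dataclass


@dataclass(frozen=True)
class AutomationPolicy:
    allowed_branches: frozenset   # branches the agent may write to
    require_approval: bool = True # default to human sign-off


def can_auto_apply(policy: AutomationPolicy, branch: str) -> bool:
    """A change lands unattended only on an allowed branch under a
    policy that does not demand human approval."""
    return branch in policy.allowed_branches and not policy.require_approval


# Example: dependency updates may land on a bot branch without review,
# but everything else stays gated.
DEPS_POLICY = AutomationPolicy(
    allowed_branches=frozenset({"bot/deps-update"}),
    require_approval=False,
)
```

Defaulting `require_approval` to true is the conservative choice: a misconfigured or prompt‑injected automation can still propose changes, but cannot merge them on its own.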
The near‑term impact is likely to be felt in everyday maintenance work: sweeping deprecations, API client updates, doc sync, and on‑call triage could increasingly be delegated to Automations, with human engineers stepping in for design decisions and complex debugging. But as more development work is routed through event‑triggered agents, observability and policy controls — who can create an automation, what it’s allowed to change, how its decisions are audited — may prove just as important as raw model quality. In other words, Automations push agentic coding closer to the center of the software factory, and they force teams to decide how much of that factory they are willing to hand over to machines.