Microsoft’s Copilot Health Puts AI Inside Your Medical Records

Microsoft’s new Copilot Health carves out a secure AI space for medical records, labs and wearables, promising personalized guidance while sharply raising questions about privacy, safety and liability in digital care.


Microsoft is carving out a dedicated space inside Copilot for your most intimate data: medical records, lab results and streams from wearables. The new Copilot Health service, unveiled on March 12, lets people upload and link clinical records and device data, then ask an AI assistant to interpret the lot in conversational form, from deciphering lab values to spotting patterns across years of care.

By design, this moves AI deeper into the core of diagnosis, triage and ongoing care — and squarely into the crosshairs of privacy law, safety oversight and healthcare liability.

What Microsoft is actually launching

In a preview shared with Axios, Microsoft said Copilot Health will allow users to combine electronic health records (EHRs), lab results and wearable data — including from Apple Health, Oura and Fitbit — and have the system generate personalized insights and explanations about their health history and current status, all from a single interface inside the Copilot app. Axios reports that Copilot Health can draw on records from more than 50,000 U.S. health providers and connect to about 50 categories of wearables, dramatically expanding the data surface the AI can see at once.

Microsoft says Copilot Health lives in a separate, encrypted space, with conversations firewalled from general Copilot chats and excluded from foundation-model training, an approach that echoes the “zero-knowledge” and locked‑tenant promises now common in ambient scribe tools such as Dragon Copilot. Mount Sinai and other systems already use Microsoft’s Dragon Copilot to generate clinical notes directly inside their EHRs, with the company emphasizing secure, healthcare‑specific infrastructure for handling protected health information (PHI) under HIPAA rules, as detailed by Mount Sinai and Healthcare IT News.

The launch comes as Microsoft markets a broader portfolio of “AI for healthcare” products — from clinical documentation to analytics — built on Azure Health Data Services and Microsoft Fabric, which are explicitly pitched as compliant ways to integrate remote monitoring and EHR data for AI‑driven insights, according to a recent Microsoft explainer and technical blogs on its healthcare agent orchestrator in Tech Community posts.

A powerful new workflow — and a bigger blast radius

For patients, the appeal is obvious: instead of Googling baffling lab codes or scrolling through fragmented portals, they can hand Copilot Health a longitudinal record and ask targeted questions — “How have my kidney numbers changed since 2021?” or “What does this cardiology note mean in plain language?” Microsoft’s own research on Copilot use shows health is already a top category of queries, especially on mobile devices late at night, where many people seek urgent, personal advice when clinicians aren’t available, as new data shared with Axios illustrates.

For clinicians and health systems, a patient‑facing Copilot that understands structured records and device feeds hints at faster pre‑visit triage, more informed consultations and new ways to manage chronic conditions between appointments. It also aligns with a wider industry trend toward AI “copilots” inside EHRs — from Microsoft’s own partnerships with Epic to bring GPT‑4 into clinical workflows for documentation and patient messaging, described by Epic, to experimental multi‑agent systems that answer complex questions using heterogeneous EHR data, such as the EHRNavigator framework in recent academic work on arXiv.

But bringing a general‑purpose large language model into direct contact with full medical histories also magnifies well‑known risks. Clinical safety remains a fundamental concern: even state‑of‑the‑art models still hallucinate, misinterpret ambiguous phrasing in notes or over‑generalize from incomplete context, which is why medical leaders like Stanford’s Lloyd Minor have warned in interviews with AP News that people should never rely solely on chatbot output for major medical decisions.

Privacy, governance and liability questions

The privacy stakes are higher when an AI system is fed complete, identifiable records instead of generic symptom queries. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) governs what hospitals, doctors and insurers can do with protected health information, but it generally does not cover consumer uploads to commercial AI services. As AP News has noted in a recent guide to AI health advice, anything shared directly with an AI company sits outside HIPAA’s umbrella, even though a provider could face fines or prison for disclosing the same records without consent, a gap experts have repeatedly flagged.

Microsoft says Copilot Health keeps data encrypted and isolated and that it will sign business associate agreements (BAAs) with health‑system customers using its cloud for PHI, much as it has done for Dragon Copilot and Azure‑based health bots. Yet recent reporting on how Copilot quietly personalizes experiences using data from other Microsoft products — including browser history and MSN content, unless users opt out — has already sparked questions about the company’s internal data‑sharing defaults, as documented by Windows Latest.

If Copilot Health is truly firewalled from the rest of the Copilot ecosystem, regulators and hospital compliance teams will want to see technical proofs: independent audits of access controls, clear logs showing which agents touched which data, and legally binding limits on using this information for product improvement or ad targeting. Healthcare‑focused AI systems already require sophisticated governance to mediate access between agents and EHRs — something Microsoft’s own engineers acknowledge in their description of a healthcare agent orchestrator that enforces auditing and fine‑grained permissions over sensitive records in Tech Community documentation.

Liability is equally unsettled. If Copilot Health downplays a symptom that turns out to signal a heart attack, is responsibility borne by Microsoft, the underlying model provider, the health system that integrated the tool, or the consumer who chose to rely on an app instead of emergency care? Existing malpractice frameworks largely presume a human clinician is exercising judgment; AI‑driven co‑diagnosis blurs those lines.

What this means for AI in medicine

Copilot Health lands in a crowded and fast‑moving field. OpenAI launched ChatGPT Health in January as a specialized GPT model for medical uses, while Amazon is expanding a health chatbot that began inside its One Medical service, moves that Axios describes as part of a broader race to own conversational interfaces to care. Legacy health‑IT vendors like Salesforce, with its Einstein Copilot for Health announced in 2024, have likewise pitched AI‑assisted, multi‑data‑source health workflows to hospitals and insurers, according to Salesforce.

In the short term, Copilot Health will likely function as a sophisticated companion around the edges of the formal care system — a way for millions of people already asking Copilot health questions to anchor those conversations in their actual records rather than generic web results. Over time, however, the gravitational pull of an always‑on “health copilot” that understands lab trajectories, medications, symptoms and lifestyle data could reshape how and when patients seek professional care in the first place.

That makes the next phase less about technical possibility and more about guardrails: how strictly hospitals and regulators constrain use; whether companies submit these systems to rigorous, domain‑specific evaluation instead of generic benchmarks; and whether patients are offered real transparency and choice about how their most intimate data is stored, processed and, crucially, limited. The tech to turn medical records into an AI playground is arriving fast. The question now is who sets the rules of the game.

Tags

#microsoft #healthcare #ai #privacy #copilot #ehr