
Microsoft confirms Copilot bug let it process & summarize confidential emails despite policies in place

Microsoft 365 Copilot

Microsoft has been under heavy scrutiny for some time over how aggressively it has integrated Copilot across Windows 11. After repeated complaints and feedback from users, the company has promised to step back from its AI-everywhere approach and focus on fixing the operating system's fundamentals. That said, Microsoft 365 Copilot has also drawn scrutiny for several other reasons.

Earlier this year, in early 2026, Britain’s West Midlands Police became embroiled in controversy after admitting that a flawed piece of intelligence came partly from Microsoft’s AI tool. Now, Microsoft 365 Copilot is under scrutiny again, this time for a serious bug that allowed the AI assistant to read and summarize customers’ confidential emails, even when those messages were labeled as sensitive and data loss prevention controls were in place.

Copilot ended up summarizing confidential emails due to a bug, Microsoft confirms

The issue, first reported by BleepingComputer, prompted Microsoft to publicly acknowledge the flaw. According to the company, the bug, tracked as CW1226324, affected the Copilot “Work tab” chat feature across Microsoft 365 apps, including Outlook, Word, Excel, and PowerPoint. Emails stored in Sent Items and Drafts that carried confidentiality labels were processed and summarized by Copilot Chat despite policies meant to block such processing.

In its advisory, Microsoft said the issue likely dates back to January 21 and persisted for several weeks before a fix began rolling out in early February. The bug reportedly stemmed from an unspecified code error that caused Copilot to ignore the data protection policies organizations had configured. Microsoft is still monitoring the deployment of the fix and has not disclosed how many customers or tenants may have been affected by the flaw.
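To illustrate the intended behavior at a purely conceptual level, here is a minimal, hypothetical Python sketch of the kind of label-aware gate that should run before an assistant ever touches a message. None of the names below come from Microsoft’s actual implementation; the reported bug effectively amounted to a check like this being skipped.

```python
# Illustrative sketch only (not Microsoft's code): a label-aware gate that is
# supposed to stop an AI assistant from summarizing protected content.
# All names here (EmailItem, is_extraction_allowed, summarize_for_chat) are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class EmailItem:
    folder: str                        # e.g. "Sent Items" or "Drafts"
    body: str
    sensitivity_label: Optional[str]   # e.g. "Confidential", or None if unlabeled


# Labels an organization's policy marks as off-limits for AI processing (assumed values).
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}


def is_extraction_allowed(item: EmailItem) -> bool:
    """Return True only if policy permits the assistant to read this item."""
    return item.sensitivity_label not in BLOCKED_LABELS


def summarize_for_chat(item: EmailItem) -> str:
    # The policy check must run *before* any content reaches the model.
    # The reported bug behaved as if this gate were bypassed, so labeled
    # emails in Sent Items and Drafts were summarized anyway.
    if not is_extraction_allowed(item):
        return "This item is protected by a sensitivity label and cannot be summarized."
    return f"Summary of {item.folder} message: {item.body[:80]}..."


if __name__ == "__main__":
    mail = EmailItem("Drafts", "Q3 acquisition terms ...", "Confidential")
    print(summarize_for_chat(mail))  # -> refusal message when the gate works as intended
```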

The good news is that the incident does not appear to have leaked any data outside a tenant’s environment: the generated summaries containing confidential information were reportedly only returned to users within the same organization. Still, the fact that Copilot processed content that should have remained protected raises significant concerns about AI governance and enterprise safeguards.


The bug comes amid increasing scrutiny of AI features embedded in productivity tools. Earlier this week, the European Parliament’s IT department reportedly blocked built-in AI features on lawmakers’ devices over fears such tools could inadvertently upload sensitive information to the cloud.

Rishaj Upadhyay
Rishaj is a tech journalist with a passion for AI, Android, Windows, and all things tech. He enjoys breaking down complex topics into stories readers can relate to. When he's not breaking the keyboard, you can find him on his favorite subreddits or listening to music and podcasts.