Mar 2, 2026

Microsoft Copilot Flaw Bypasses Sensitivity Labels, Exposing Confidential Emails to AI Processing

A new vulnerability in Microsoft 365 Copilot shows the ongoing difficulties of combining artificial intelligence with data protection. Found on 21 January 2026 and tracked as CW1226324, this flaw lets the AI process and summarise confidential emails, even when data loss prevention (DLP) policies and sensitivity labels are in place.

The issue affects the "work tab" chat feature in apps like Outlook and underscores how difficult it is to maintain compliance when deploying AI-powered systems. The flaw stems from a coding error that allows Copilot to access users' Sent Items and Drafts folders, even when settings are meant to block sensitive content. Microsoft confirmed the issue in an admin notice, saying that "users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat." Microsoft began deploying a fix in early February 2026 and is still working on a full solution, including reaching out to affected customers.

This issue poses serious risks to customer data security, especially in industries such as healthcare and finance that handle regulated information. Emails in these sectors often include personal details protected by laws such as the General Data Protection Regulation (GDPR) and India's Digital Personal Data Protection (DPDP) Act, 2023. The flaw weakens data minimisation and purpose limitation rules, which could lead to breaches and fines. Records also show that sensitivity labels are not consistently enforced across Microsoft apps, so Copilot can sometimes access protected content.

The incident also raises broader concerns about the governance of generative AI, highlighting the need for thorough audits of DLP integrations and adherence to ethical design principles. As enterprises adopt these tools for productivity, proactive vulnerability assessments are essential to meet evolving privacy standards.

MINI HEADLINES

EU Abandons Proposal to Amend GDPR Personal Data Definition

The European Union has decided not to change the definition of personal data in the General Data Protection Regulation (GDPR), keeping the current meaning of "information relating to an identified or identifiable individual." This decision was influenced by data protection authorities and civil society, who preferred more guidance over new laws. The European Data Protection Board is now working to clarify rules on pseudonymisation to support AI and big data compliance. Concerns about weakening privacy protections also played a role, and future updates will be aligned with the e-Privacy Directive to create a more unified privacy framework.

GDPR Review
Read More → https://cadeproject.org/updates/eu-drops-plan-to-revise-definition-of-personaldata-in-gdpr-review/

Kerala Employees Mount High Court Challenge Over Alleged Privacy Violation

Kerala government employees and teachers plan to petition the High Court, alleging a privacy breach due to unsolicited WhatsApp messages from the Chief Minister’s Office regarding Dearness Allowance adjustments. The messages used contact details collected through the SPARK portal for administrative purposes, in violation of consent requirements under the privacy protocols. Represented by advocate George Poonthottam, the petitioners reference a 2020 High Court ruling in the Sprinklr case, which addressed unauthorised data sharing. This action highlights risks of data misuse in public administration, with no official response from the government currently.

Data Breach

Read More → https://www.etvbharat.com/en/bharat/kerala-employees-to-challenge-govt-in-highcourt-over-privacy-breach-allegations-enn26022301549

UK Regulator Imposes £14.47 Million Penalty on Reddit for Children's Data Lapses

Britain’s Information Commissioner’s Office (ICO) fined Reddit £14.47 million for failing to properly verify the ages of users under 13, leading to the illegal processing of their personal data. Reddit also missed the January 2025 deadline for a required Data Protection Impact Assessment, breaking UK GDPR rules for protecting children online. The ICO said Reddit "had no right to use data from under-13s," stressing the need for parental consent and age checks on online platforms.

Children's Privacy
Read More → https://m.economictimes.com/tech/technology/uk-privacy-watchdog-fines-reddit20-million-over-childrens-data-failures/amp_articleshow/128750440.cms

© 2024-26 GoTrust

India

303, Tower C, ATS Bouquet, Noida Sector 132, U.P.

UAE

DIFC Innovation Hub, Gate Avenue, Zone D, Co-working Space Level 1 Al Mustaqbal St, Dubai

Netherlands

Cuserpark Amsterdam, De Cuserstraat 91, 1081CN, Amsterdam, Netherlands