Sep 26, 2025
Google Unveils VaultGemma, Privacy-Focused AI Model to Prevent Training Data Leaks
Google has launched VaultGemma, a new artificial intelligence model designed to prevent training data leaks, as part of its broader push toward privacy-centric AI development. The model was released under the Gemma family of open-weight models and is aimed at enterprise and regulated sectors that require strict data protection standards.
VaultGemma was trained from scratch using differential privacy, a technique that mathematically bounds how much any single training example can influence the model. The company said this design reduces the risk of memorization and unintended reproduction of training data during inference.
The model is available at the 1B-parameter scale and is optimized for on-device and edge deployments. It is expected to be used in sectors such as healthcare, finance, and legal services, where data confidentiality is critical. Google has also released a set of evaluation tools to help developers test for privacy risks, including metrics for data leakage and memorization.
The launch comes at a time when global regulators are tightening scrutiny over AI systems and their handling of personal data. In India, the Digital Personal Data Protection Act, 2023, has introduced new compliance requirements for data fiduciaries, including consent-based processing and restrictions on cross-border data transfers. VaultGemma’s privacy-first architecture may help developers align with such mandates.
VaultGemma is now available through Google’s AI repository, along with documentation to guide ethical deployment. The company has not disclosed commercial rollout timelines but indicated that the model will be supported across its cloud and enterprise platforms.
The launch follows a series of AI safety initiatives by Google, including watermarking tools and model interpretability frameworks, as part of its broader effort to build trustworthy and transparent AI systems.
📰 Mini Headlines
China Proposes Independent Oversight Committees for Data Protection
China’s Cyberspace Administration has proposed new rules requiring major online platforms to establish independent oversight committees to monitor personal data protection. The draft regulation mandates that committees comprise at least seven members, with two-thirds being external experts in data security. These bodies will oversee sensitive data handling, cross-border transfers, and regulatory compliance, while maintaining open communication with users. Platforms failing to act may face escalation to provincial regulators.
Data Protection
Read More → https://dig.watch/updates/china-proposes-independent-oversight-committees-to-strengthen-data-protection
Moncler Korea Fined Over 2021 Data Breach in South Korea
Moncler Korea has been fined ₩88 million (approximately $63,200) by South Korea’s Personal Information Protection Commission (PIPC) for violations tied to a significant data breach in December 2021. The breach exposed personal information of nearly 230,000 customers, including shopping preferences, delivery methods, body sizes, and partial payment details. The attackers reportedly gained access by compromising an administrator account and installing malware on the company’s servers. Although highly sensitive data like names and card numbers were not leaked, the regulator found Moncler Korea had failed to implement basic security measures such as two-factor authentication and did not report the breach within the legally mandated 24-hour window.
Data Breach
FTC Urged to Investigate Microsoft Over Ascension Data Breach
Privacy advocates have called on the Federal Trade Commission (FTC) to investigate Microsoft following a data breach involving Ascension, one of the largest U.S. healthcare systems. The breach reportedly compromised sensitive patient data and raised concerns over Microsoft’s cloud security protocols. Critics argue that Microsoft may have failed to meet contractual and regulatory obligations under HIPAA. The incident has prompted calls for stronger oversight of tech vendors in healthcare and renewed debate over accountability in third-party data handling.
Data Breach
China Fines Dior for Unlawful Cross-Border Data Transfers Under PIPL
Chinese regulators have fined Dior Shanghai for unlawfully transferring personal data of Chinese customers to its headquarters in France without complying with the legal requirements under China’s Personal Information Protection Law (PIPL). The Public Security Bureau (PSB) identified three major violations: failure to conduct a Cyberspace Administration of China (CAC) security assessment, lack of proper consent from individuals, and inadequate technical safeguards like encryption.
Data Protection
Next Newsletter
Jury Orders Google to Pay $425 Million in Landmark Data Privacy Lawsuit
A U.S. federal jury in California has ordered Google to pay $425 million in damages in a class-action lawsuit accusing the tech giant of violating user privacy through its data collection practices.
Austria Orders YouTube to Grant Users Access to Their Personal Data
Austrian authorities have ordered YouTube to give users full access to their personal data, a ruling with implications for GDPR access rights and the compliance obligations of online platforms across Europe.