Master the essentials of data privacy with our expert-led guide. From key laws and principles to consent tools and compliance tips, explore real-world examples to stay informed, build trust, and run privacy-first marketing campaigns with confidence.
Published by Usercentrics
12 mins to read
Mar 25, 2025

10 data privacy examples (and the lessons they teach us)

There’s no shortage of headlines about data breaches. But those stories rarely help you understand what actually happened — or how to prevent it.

This article isn’t meant to add to the hype. Instead, we want to provide you with clarity. We’ve broken down ten data privacy examples that show how different companies have handled personal data, for better or worse, and the lessons you can learn from them to avoid the same mistakes.

These data protection examples span various industries and include different technologies, such as AI and blockchain. But they all share one thing: they reveal the difference between treating data privacy as a priority versus an afterthought.

What is data privacy?

Data privacy, also called information privacy, refers to the right of individuals to control how their personal information is collected, used, and shared. It encompasses the practices and technologies that contribute to handling personal data appropriately and in compliance with applicable regulations.

Personal information includes not only obvious identifiers like names, email addresses, and phone numbers, but also more subtle data points — IP addresses, device IDs, geolocation, and even behavioral patterns online.

Read more to learn everything you need to know about data privacy.

Why data privacy matters

Poor data privacy isn’t just a technical problem — it’s a business risk and a trust issue.

When personal information is misused or exposed, the consequences are immediate. Individuals face identity theft, fraud, and loss of control over their data. Companies face the risk of regulatory fines, legal action, and reputational damage that can be difficult to repair.

But not all privacy failures involve breaches. Sometimes the issue is consent requests that are vague or unclear, or data that’s collected without people really understanding why. These less visible problems still erode trust. And once trust is lost, it’s hard to win back.

Regulations are also getting stricter. The EU’s General Data Protection Regulation (GDPR), California’s Privacy Rights Act (CPRA), Brazil’s Lei Geral de Proteção de Dados (LGPD), and others demand transparency, accountability, and clear user rights. Failing to meet these standards doesn’t just mean noncompliance — it can limit growth, delay product launches, or restrict your access to global markets.

Ultimately, data privacy matters because it shapes how people experience your brand. Done well, it builds confidence and long-term loyalty. Done poorly, it creates uncertainty, and uncertainty drives people away.

10 data privacy examples

Below, we’ll examine some practical data privacy examples. Some are high-profile cases, and others are everyday mistakes that could happen to anyone. In all cases, prioritizing data privacy is the only way to maintain compliance and protect sensitive information.

Facebook and Cambridge Analytica

Our first example is one of the most well-known data privacy scandals.

Yet the Cambridge Analytica story didn’t begin with a data breach. It began with a personality quiz. Users installed the app and unknowingly gave access not only to their own data but to the profiles of their Facebook friends — without those friends’ consent.

That’s how a third-party developer ended up collecting personal information from tens of millions of users. The data, which included likes, networks, and even psychological profiles, was then sold to a political consulting firm and used to target voters in various campaigns.

What made the situation worse was Facebook’s delayed response. The company had known about the misuse of data for years but didn’t inform affected users until the story broke publicly. The backlash was intense. Facebook CEO Mark Zuckerberg was called to testify before the U.S. Congress, and regulators around the world launched investigations. The company’s reputation took a lasting hit.

The damage wasn’t just legal or financial — it was relational. Users weren’t just angry that their data had been harvested. They were angry that it had been quietly sold, shared, and used in ways they never agreed to. The incident became a turning point in how people think about platform accountability, and a case study in what happens when data privacy is treated as an afterthought.

T-Mobile’s multiple data privacy breaches

Over the course of five years, T-Mobile reported multiple significant data breaches. The most widely cited information privacy example is the 2021 breach that compromised the data of over 40 million customers, many of whom weren’t even active users. This included full names, dates of birth, Social Security numbers, and driver’s license details.

The company took steps to improve its security infrastructure and offered identity protection services to those affected. Yet, in 2023, another breach occurred. The pattern led to growing skepticism among users and increased scrutiny from regulators.

In isolation, each incident might have been manageable. But repeated breaches suggested deeper issues — gaps in infrastructure, policy, or culture that hadn’t been addressed.

For any company, this illustrates an uncomfortable truth. Customers may forgive a single incident, but repeated failures quickly become reputational liabilities. Privacy protection needs to be sustainable — not reactive.

Clearview AI and data privacy

Clearview AI built one of the world’s largest facial recognition databases by scraping billions of publicly available photos from social media and other online sources. The goal was to create a tool that could identify individuals based on a single photo, to be used largely by law enforcement.

The company argued that since the data was publicly available, collecting and using it was legal. But critics — and regulators — disagreed. Privacy watchdogs in Canada, Australia, and several European countries declared Clearview’s practices unlawful. Investigations were launched, fines were issued, and the company was ordered to delete data in some regions. 

For example, the Dutch data protection authority fined Clearview EUR 30.5 million (USD 33.7 million) for automatically harvesting billions of photos of people from the internet.

What made Clearview different from other AI projects was its reach and opacity. Most users had no idea their images had been collected. They didn’t consent to it, and they weren’t given a way to opt out.

This case raises an important issue: just because data is accessible doesn’t mean it’s fair game. Privacy isn’t defined solely by visibility — it’s about context and consent. As AI and machine learning tools become more powerful, they must also be held more accountable.

Blockchain and information privacy

One of the core principles of blockchain technology is immutability. Once something is recorded, it can’t be altered or deleted. That’s what makes blockchain secure and trustworthy. But it also creates a challenge when it comes to privacy, especially in regions where laws like the GDPR guarantee individuals the “right to be forgotten.”

If personal data ends up stored directly on a blockchain, there’s no easy way to remove it. That creates a legal and technical dilemma. As a workaround, many blockchain-based platforms now store personal data off-chain and only place cryptographic proofs (not the data itself) on-chain.
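As a rough illustration of that workaround, here’s a minimal sketch in Swift (the record type and store names are hypothetical stand-ins) in which the personal data lives in an erasable off-chain store and only a SHA-256 fingerprint is anchored on the immutable ledger:

```swift
import Foundation
import CryptoKit

// Hypothetical customer record kept off-chain in an erasable store.
struct CustomerRecord: Codable {
    let name: String
    let email: String
}

// Off-chain store: a conventional, deletable database stand-in.
var offChainStore: [String: CustomerRecord] = [:]

// "On-chain" ledger stand-in: holds only cryptographic proofs, never the data itself.
var onChainProofs: [String] = []

// Register a record: hash the data, keep the data off-chain, anchor only the hash on-chain.
func register(_ record: CustomerRecord) throws -> String {
    let payload = try JSONEncoder().encode(record)
    let digest = SHA256.hash(data: payload)
    let proof = digest.map { String(format: "%02x", $0) }.joined()
    offChainStore[proof] = record   // erasable copy stays off-chain
    onChainProofs.append(proof)     // the immutable ledger only ever sees the fingerprint
    return proof
}

// Honoring a "right to be forgotten" request: delete the off-chain record.
// The on-chain hash remains, but without the original data it no longer points to anyone.
func forget(proof: String) {
    offChainStore.removeValue(forKey: proof)
}
```

In practice, platforms typically salt or otherwise blind these hashes, since a plain hash of low-entropy data such as an email address can sometimes be reversed by brute force.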

This solution helps align blockchain with privacy laws, but the underlying tension remains. Technologies that are designed to be permanent need to evolve in a world that increasingly prioritizes reversibility and user control.

The blockchain example offers a clear takeaway: compliance needs to be built in from the beginning, not retrofitted once conflicts emerge.

Giant Tiger data privacy mishap 

In 2023, Canadian retailer Giant Tiger made a costly internal mistake. During a routine HR process, an employee file was inadvertently shared with someone who was not authorized to see it. The file contained personal information, including names, Social Insurance Numbers, salaries, and employment details.

This wasn’t a cyberattack. It wasn’t sophisticated. It was a simple internal misstep — one that anyone could make. But the impact was real. Affected employees had their sensitive data exposed without warning, and the company was forced to notify those involved, investigate the breach, and offer support such as credit monitoring services.

What made the incident notable wasn’t the scale, but the cause. It highlighted how vulnerable companies can be to privacy breaches even without malicious intent or external threats. 

Handling personal data doesn’t just require guarding against outside attacks. It also involves building strong internal processes, implementing access controls, and training teams to manage data responsibly every day.

Equifax data breach of 2017

In 2017, Equifax, one of the largest credit reporting agencies in the United States, announced that attackers had gained access to the personal information of 147 million people. That number alone is staggering — it amounts to nearly half of the U.S. population. But what made the Equifax breach particularly damaging was the kind of data that was taken: Social Security numbers, dates of birth, addresses, and in some cases, driver’s license numbers and credit card details.

The breach was caused by a failure to patch a known vulnerability in Apache Struts, a common web application framework. There was a fix available, but Equifax failed to apply it in time. Worse still, the breach went undetected for weeks. And once discovered, the company took additional time before notifying the public.

That delay was costly. The company eventually agreed to a settlement of up to USD 700 million. But the reputational damage was harder to quantify. For many people, Equifax wasn’t just breached — they were blindsided by a company they never actively chose to engage with, but which held some of their most sensitive information anyway.

It was a reminder that data privacy requires ongoing vigilance and responsibility. Patching software might sound like a small operational task, but when you hold the keys to people’s financial identity, even small tasks carry weight.

Data privacy in healthcare

Unfortunately, certain sectors are more prone to data breaches due to their access to sensitive information.

For example, another breach of data privacy took place in 2015. Hackers targeted Anthem, the second-largest health insurer in the United States. They gained access to a database containing the personal information of nearly 80 million people, including names, birthdates, Social Security numbers, and employment information.

Notably, the breach didn’t expose medical records. But that didn’t lessen its severity. The type of data accessed was enough to enable identity theft and fraud. Anthem was fined USD 16 million by the U.S. Department of Health and Human Services — the largest Health Insurance Portability and Accountability Act (HIPAA) settlement at the time.

The company’s response was swift. They notified affected individuals, offered credit monitoring, and cooperated with regulators. But the breach still raised important questions about how healthcare organizations secure personal information.

Medical data isn’t the only kind of sensitive data in healthcare. Insurance details, employment information, and even basic identifiers can be just as damaging when exposed.

The Anthem case shows that safeguarding privacy in healthcare requires more than meeting basic regulatory requirements — it requires a proactive approach to data security, backed by regular audits and clear accountability.

When third-party access goes wrong

When Marriott acquired Starwood Hotels in 2016, it gained access to a global brand, a large customer base, and, unknowingly, a long-standing data breach. Attackers had already infiltrated Starwood’s reservation system as far back as 2014. The breach wasn’t discovered until 2018, two years after Marriott’s acquisition.

By that point, information on over 300 million guests had been exposed. This included names, addresses, phone numbers, passport numbers, and travel histories. Marriott disclosed the breach publicly and cooperated with regulators, but the consequences were steep — including GDPR fines and class-action lawsuits.

What makes this case particularly striking is the timing. The breach predated Marriott’s ownership, but the responsibility still fell on them. That’s because data doesn’t reset with an acquisition. Risk is cumulative.

For any business undergoing a merger or acquisition, this case is a clear warning: data privacy due diligence must be part of the process. Legacy systems and inherited vulnerabilities can have very real consequences, even years down the line.

Google and data protection

It may not come as a shock that a company as large as Google has faced issues with data privacy, including its own share of privacy violations.

Notably, Google became the first major tech company to be fined under the GDPR. France’s data protection authority (CNIL) imposed a penalty of EUR 50 million, citing a lack of transparency and inadequate consent around personalized advertising.

According to regulators, users weren’t provided with clear information about how their data would be used, nor were they given meaningful choices. Settings were spread across multiple screens, and key details were buried in layers of documentation.

This wasn’t a case of stolen data. It was about how information was presented — or rather, how it wasn’t. Even if users technically agreed to tracking, regulators determined the consent wasn’t valid because it wasn’t informed or specific.

For organizations building digital products, this is a vital lesson. Consent isn’t just a checkbox. It’s a dialogue. If users can’t understand what they’re agreeing to, then they haven’t truly agreed.

Apple’s approach to user privacy: Control as a feature

Not all digital security and privacy examples are negative. For example, in 2021, Apple introduced a simple prompt that gave users a choice: allow an app to track your activity across other apps and websites, or don’t. The feature — App Tracking Transparency — was part of iOS 14.5, and it marked a shift in how consumer privacy was treated by one of the world’s largest tech companies.
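Under the hood, that prompt is a single API call. Here’s a minimal sketch of how an app might request permission through the App Tracking Transparency framework; the function name and log messages are illustrative:

```swift
import AppTrackingTransparency

// Minimal sketch: ask the user for permission before tracking them across
// other companies' apps and websites. The app's Info.plist also needs an
// NSUserTrackingUsageDescription entry explaining why tracking is requested.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app access the device's advertising identifier (IDFA).
            print("User allowed tracking")
        case .denied, .restricted, .notDetermined:
            // No tracking: fall back to contextual or aggregated measurement.
            print("Tracking not permitted")
        @unknown default:
            print("Unhandled authorization status")
        }
    }
}
```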

Rather than framing data privacy as a compliance requirement, Apple positioned it as a user right. The impact was immediate. Major platforms like Facebook pushed back, noting it would affect their ad revenue. But for many users, this was a welcome change: a clear, easy-to-understand option that put control directly in their hands.

By embedding privacy into the operating system itself, Apple made privacy visible and accessible. It was no longer buried in settings menus or abstract policy language. Instead, it became a part of the everyday experience of using a phone.

For businesses, this shift made one thing clear: people care about privacy and will opt out if they can. That forced companies to rethink how they handle data. Privacy was no longer just a technical detail in the background — it became something users expect to see, understand, and manage directly.

How to recover from a data privacy breach

A data privacy breach is serious, but it doesn’t have to define your organization. How you respond in the hours and days after discovering the breach can determine whether users lose trust or feel reassured that you’re taking responsibility.

1. Contain the damage immediately

The first step is to stop the breach from spreading. Identify the systems that have been compromised and take them offline if necessary. Limit access to affected data to prevent further exposure. Your priority is to prevent more data from leaking while preserving evidence for later analysis.

2. Notify the people who need to know

Once the breach is contained, inform all affected parties as soon as possible. This includes your users, internal teams, and any regulators you’re legally required to notify. Be clear and direct in your communication. Let users know what kind of data was involved, how it might affect them, and what steps you’re taking in response. If you’re still investigating, say so — but don’t go silent.

As the Cambridge Analytica scandal showed us, delaying notification can damage trust more than the breach itself. People want honesty and accountability.

3. Investigate what went wrong

Launch a full investigation to determine the cause. Was it a technical vulnerability, human error, or a malicious attack? How did it go unnoticed? What systems were involved? Many organizations bring in an independent cybersecurity firm to provide an unbiased assessment. This helps ensure you don’t overlook internal issues and builds credibility with users and regulators.

4. Fix what the breach exposed

Once you understand the breach, address every weakness that it exposed. You may need to reset passwords, fix misconfigured servers, patch software, or revise how you handle sensitive information. If personal data was stolen, consider offering users practical support, like credit monitoring or identity theft protection.

What’s important is to demonstrate that you take user privacy seriously.

5. Learn and prevent

A breach is often a symptom of broader gaps in security practices. Use what you’ve learned to improve. Update your data protection policies. Strengthen your processes for storing, accessing, and sharing data. Train your team, especially those handling user data, on privacy and security best practices. Build or revise your incident response plan so you’re better prepared next time.

How Usercentrics can help you handle your data privacy

The data privacy examples in this guide demonstrate that even well-known brands can struggle when privacy isn’t treated as a core priority. Whether it’s due to unclear consent, outdated systems, or gaps in third-party oversight, the legal, financial, and reputational consequences can be severe.

Usercentrics Consent Management Platform (CMP) helps companies of all sizes avoid these pitfalls by making privacy management clear, scalable, and compliant from the start. Our solutions empower you to collect and manage user consent transparently, comply with global data protection regulations like the GDPR and CPRA, and build trust through proactive privacy practices.

From customizable consent banners to detailed audit logs, we give you the tools to turn privacy into a competitive advantage, so you’re not just reacting to issues after the fact, but actively preventing them. 

Because strong privacy practices don’t just protect data — they protect your brand.