In addition to being convenient and efficient, single sign-on is meant to be a more secure way to handle user logins. Instead of having to set up and record or remember separate usernames and passwords for every site, individuals can log in with account credentials they already have. Accounts from large tech platforms, like Google or Facebook, are popular options.
But that added security and convenience can result in a privacy violation if the requirements of data privacy laws like the European Union’s General Data Protection Regulation (GDPR) are not met. Where is the data that’s collected stored? And who may have access to it?
This type of issue is what happened when visitors to a website managed by the European Commission were able to log in using social platform credentials, resulting in an infringement of EU citizens’ privacy rights.
We look at what happened and how, how the Commission was penalized, and what companies can learn about employing single sign-on compliantly.
Conference on the Future of Europe website login and the complaint
The Conference on the Future of Europe ran in 2021 and 2022, and visitors to the website could register for the various related events there. The European Commission (the Commission) managed the conference website.
One of the login options for visitors interested in registering for conference events was single sign-on using social platform login credentials. Specifically, there was a “Sign in with Facebook” link on the login web page.
However, as Meta Platforms, Facebook’s parent company, is located in the United States, this login method created the conditions for an EU resident’s personal data to potentially be transferred to the United States without the individual’s knowledge or consent.
Who was affected by the GDPR violation?
An individual residing in Germany logged in to the conference website and registered for the “GoGreen” event using his Facebook account credentials. According to the individual, in doing so, his personal data was collected and transferred to the US, including IP address plus browser and device information.
Amazon Web Services, which is also based in the United States, operated the Amazon CloudFront content delivery network used by the conference website; this is how his personal data was transferred.
The individual who made the complaint maintained that the data transfers created a risk of his data being accessed by US security and intelligence services. An additional claim was that neither the Commission nor conference organizers indicated that appropriate measures were in place to prevent or justify those data transfers if visitors used that sign-in method.
How did the European Commission violate the GDPR?
The Court of Justice of the European Union (CJEU) found that the “Sign in with Facebook” link on the conference website created conditions for transferring the complainant’s personal data to Facebook, which, as noted, is based in the US. As the European Commission managed the conference website, it was responsible for the data transfer and contravened its own rules.
At the time the transfer occurred (the conference ran in 2021 and 2022), the US was not considered adequate for ensuring data protections for the personal data of EU residents. The EU-U.S. Privacy Shield framework had been struck down in 2020 and the EU-U.S. Data Privacy Framework, which introduced a new adequacy agreement between the two regions, was not enacted until 2023.
Additionally, the Commission was found to have neither demonstrated nor claimed that an appropriate safeguard, i.e. a standard contractual clause or data protection clause, was in place for personal data obtained and transferred via the login using Facebook account credentials. Facebook’s platform entirely governed the terms and conditions of displaying, and as a result logging in with, the “Sign in with Facebook” link.
The CJEU found that the Commission did not comply with the requirements of EU law for data transfers to a third country by “an EU institution, body, office or agency” (Chapter V GDPR).
How was the complaint resolved?
The complainant was awarded EUR 400 by the CJEU, to be paid by the European Commission, as compensation for non-material damage experienced due to the data transfers.
The complainant also sought several other methods of redress, including:
- annulment of the data transfers of his personal data
- a declaration from the Commission of unlawfully failing to define their position on a request for information
- EUR 800 as compensation for non-material damage resulting from the infringement of his right to access to information
The CJEU dismissed all three. For one of the connections, the Court found that the data was transferred to a server in Germany rather than to the United States, as Amazon Web Services is required to ensure that data remains in Europe in transit and at rest.
For another connection, the Court found that the complainant himself was responsible for the redirection of the data to US-based services via the Amazon CloudFront routing mechanism: a technical adjustment, such as using a VPN, made him appear to be located in the US at that time.
How can companies operating in digital spaces protect their operations?
Single sign-on options using popular tech platforms are convenient. But companies that process, or may process, personal data from EU residents need to be aware of how the login process works, what personal data is collected, where it may be transferred to and stored, and who may have access to it. Users whose personal data may be processed need to be informed as well and enabled to exercise their rights under relevant laws.
Facebook and Google are two popular platforms whose account credentials are used for single sign-on. Both are US-based companies, though they do have EU-based servers and data centers, as necessitated by certain legal requirements.
If providing such login options is necessary on your website, ensure that the agreements and/or contractual clauses required for adequate data protection are in place, that users are adequately informed, and that their privacy rights, including consent or opt-out, are maintained.
This also goes for other third-party services that process users’ personal data, which many companies use on their websites for advertising, analytics, ecommerce fulfillment, and other functions. Under the GDPR and other data privacy laws, controllers are responsible for the privacy compliance and data security of third-party processors working for them.
Obtain informed and explicit consent from website visitors and others whose personal data is collected and processed for various purposes, so that they know about the data processing and third parties that may have access to their data, and can exercise their rights and consent choices.
A consent management platform would have enabled the Commission to notify users about personal data collection and transfer and to obtain their consent, or to offer them another login option if they declined.
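To illustrate, here is a minimal sketch of how a site could gate a social login option behind consent. The getConsent helper and the "social-login" service name are hypothetical stand-ins for whatever your CMP exposes; this is a sketch, not a specific CMP’s API.

```typescript
// Hedged sketch: load the Facebook SDK only after explicit consent.
// getConsent() and "social-login" are illustrative, not a real CMP API.

type ConsentStatus = "granted" | "denied" | "unset";

async function getConsent(service: string): Promise<ConsentStatus> {
  return "unset"; // placeholder: query your consent management platform
}

async function renderLoginOptions(container: HTMLElement): Promise<void> {
  if ((await getConsent("social-login")) === "granted") {
    // Only now load the third-party SDK, so no personal data can flow
    // to the provider before the user has agreed.
    const script = document.createElement("script");
    script.src = "https://connect.facebook.net/en_US/sdk.js";
    script.async = true;
    container.appendChild(script);
  } else {
    // Fall back to a first-party login option instead.
    container.innerHTML = '<a href="/login">Sign in with email</a>';
  }
}
```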
Usercentrics does not provide legal advice, and information is provided for educational purposes only. We always recommend engaging qualified legal counsel or privacy specialists regarding data privacy and protection issues and operations.
The General Data Protection Regulation (GDPR) sets strict standards for how organizations must handle personal data collected from individuals in the European Union (EU) and European Economic Area (EEA). This comprehensive data protection regulation applies to all organizations that collect or process this data — regardless of where the organization is located — if they offer goods or services to EU/EEA residents or monitor their behavior.
Among its many requirements, the GDPR places specific legal obligations on how organizations may handle special categories of personal data or sensitive personal data. These data categories receive additional protections due to their potential impact on an individual’s rights and freedoms if they are misused.
In this article, we’ll look at what constitutes sensitive personal data under the GDPR, what additional protections it receives, and the steps organizations can take to achieve compliance with the GDPR’s requirements.
What is sensitive personal data under the GDPR?
Sensitive personal data includes specific categories of data that require heightened protection under the GDPR, because their misuse could significantly impact an individual’s fundamental rights and freedoms.
Under Art. 9 GDPR, sensitive personal data is:
- data revealing an individual’s racial or ethnic origin
- information related to a person’s political opinions or affiliations
- data concerning a person’s religious or philosophical beliefs
- information indicating whether a person is a member of a trade union
- data that provides unique insights into a natural person’s inherent or acquired genetic characteristics
- biometric data that can be used to uniquely identify a natural person, such as fingerprints or facial recognition data
- information regarding an individual’s past, current, or future physical or mental health
- data concerning a person’s sex life or sexual orientation
Recital 51 GDPR elaborates that the processing of photographs is not automatically considered processing of sensitive personal data. Photographs fall under the definition of biometric data only when processed through specific technical means that allow the unique identification or authentication of a natural person.
By default, the processing of sensitive personal data is prohibited under the GDPR. Organizations must meet specific conditions to lawfully handle such information.
This higher standard of protection reflects the potential risks associated with the misuse of sensitive personal data, which could lead to discrimination, privacy violations, or other forms of harm.
What is the difference between personal data and sensitive personal data?
Under the GDPR, personal data includes any information that can identify a natural person — known as a data subject under the regulation — either directly or indirectly. This may include details such as an individual’s name, phone number, email address, physical address, ID numbers, and even IP address and information collected via browser cookies.
While all personal data requires protection, sensitive personal data faces stricter processing requirements and heightened protection standards. Organizations must meet specific conditions before they can collect or process it.
The distinction lies in both the nature of the data and its potential impact if misused. Regular personal data helps identify an individual, while sensitive personal data can reveal intimate details about a person’s life, beliefs, health, financial status, or characteristics that could lead to discrimination or other serious consequences if compromised.
Conditions required for processing GDPR sensitive personal data
Under the GDPR, processing sensitive personal data is prohibited by default. However, Art. 9 GDPR outlines specific conditions under which processing is allowed.
- Explicit consent: The data subject can provide explicit consent for specific purposes, unless EU or member state law prohibits consent. Data subjects must also have the right to withdraw consent at any time (Art. 7 GDPR).
- Employment and social protection: Processing is required for employment, social security, and social protection obligations or rights under law or collective agreements.
- Vital interests: If processing protects the vital interests of the data subject or another natural person who physically or legally cannot give consent.
- Nonprofit activities: A foundation, association, or other nonprofit body with a political, philosophical, religious, or trade union aim can process sensitive data, but only in relation to members, former members, or individuals in regular contact with the organization. The data cannot be disclosed externally without consent.
- Public data: Data may be processed if the data subject has made the personal data publicly available.
- Legal claims: Processing is required for establishing, exercising, or defending legal claims, or when courts are acting in their judicial capacity.
- Substantial public interest: Processing may be necessary for substantial public interest reasons, based on law that is proportionate and includes safeguards.
- Healthcare: Processing may be required for medical purposes, including preventive or occupational medicine, medical diagnosis, providing health or social care treatment, or health or social care system management. The data must be handled by professionals bound by legal confidentiality obligations under EU or member state law, or by others subject to similar secrecy requirements.
- Public health: Processing may be necessary for public health reasons, such as ensuring high standards of quality and the safety of health care, medicinal products, or medical devices.
- Archiving and research: Processing may be required for public interest archiving, scientific or historical research, or statistical purposes.
The GDPR authorizes EU member states to implement additional rules or restrictions for processing genetic, biometric, or healthcare data. They may establish stricter standards or safeguards beyond the regulation’s requirements.
What is explicit consent under the GDPR?
Art. 4 GDPR defines consent as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”
Although the GDPR does not separately define explicit consent, it does require a clear and unambiguous action from users to express their acceptance of data processing. In other words, users must take deliberate steps to consent to their personal data being collected. Pre-ticked boxes, inactivity, or implied consent through continued use of a service do not meet GDPR requirements for explicit consent.
Common examples of explicit consent mechanisms include the following, illustrated in the sketch after the list:
- ticking an opt-in checkbox, such as selecting “I Agree” in a cookie banner
- confirming permission for marketing emails, particularly with a double opt-in process
- permitting location tracking for a map application by responding to a direct authorization request
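As a minimal sketch, the opt-in checkbox case could look like the following. The element IDs and the recordConsent helper are hypothetical; the point is that the box starts unchecked and nothing is recorded without a deliberate action.

```typescript
interface ConsentRecord {
  purpose: string;
  givenAt: string; // ISO timestamp, so consent can be evidenced later
}

function recordConsent(record: ConsentRecord): void {
  console.log("consent recorded", record); // placeholder: persist to your CMP
}

function setUpConsentCheckbox(): void {
  const checkbox = document.getElementById("marketing-opt-in") as HTMLInputElement;
  const submit = document.getElementById("consent-submit") as HTMLButtonElement;

  checkbox.checked = false; // never pre-ticked

  submit.addEventListener("click", () => {
    if (checkbox.checked) {
      // A clear affirmative action by the user
      recordConsent({ purpose: "marketing", givenAt: new Date().toISOString() });
    }
    // Unchecked box or inactivity: no consent is recorded.
  });
}
```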
Additional compliance requirements for processing sensitive personal data under the GDPR
Organizations processing personal data under the GDPR must follow several core obligations. These include maintaining records of processing activities, providing transparent information on data practices, and adhering to principles such as data minimization and purpose limitation. However, processing sensitive personal data requires additional safeguards due to the potential risks involved.
Data Protection Officer (DPO)
Organizations with core activities that involve large-scale processing of sensitive personal data must appoint a Data Protection Officer (DPO) under Art. 37 GDPR. The DPO may be an employee of the organization or an outside consultant.
Among other responsibilities, the DPO monitors GDPR compliance, advises on data protection obligations, and acts as a point of contact for regulatory authorities.
Data Protection Impact Assessment (DPIA)
Art. 35 GDPR requires a Data Protection Impact Assessment (DPIA) for processing operations that are likely to result in high risks to individuals’ rights and freedoms. A DPIA is particularly important when processing sensitive data on a large scale. This assessment helps organizations identify and minimize data protection risks before beginning processing activities.
Restrictions on automated processing and profiling
Art. 22 GDPR prohibits automated decision-making, including profiling, based on sensitive personal data unless one of the following applies:
- the data subject has explicitly consented
- the processing is necessary for reasons of substantial public interest under the law
If automated processing of sensitive personal data is permitted under these conditions, organizations must implement safeguards to protect individuals’ rights and freedoms.
Penalties for noncompliance with the GDPR
GDPR penalties are substantial. There are two tiers of fines, based on the severity of the infringement and whether it’s a repeat offense.
For severe infringements, organizations face fines up to:
- EUR 20 million, or
- four percent of total global annual turnover of the preceding financial year, whichever is higher
Less severe violations can result in fines up to:
- EUR 10 million, or
- two percent of global annual turnover of the preceding financial year, whichever is higher
While violations involving sensitive personal data are often categorized as severe, supervisory authorities will consider the specific circumstances of each case when determining penalties.
Practical steps for organizations to protect GDPR sensitive personal data
Organizations handling sensitive personal data must take proactive measures to meet GDPR requirements and protect data subjects’ rights.
Conduct data mapping
Organizations should identify and document all instances in which sensitive personal data is collected, processed, stored, or shared. This includes tracking data flows across internal systems and third-party services. A thorough data inventory helps organizations assess risks, implement appropriate safeguards, and respond to data subject requests efficiently.
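For illustration, one entry in such a data inventory might be modeled like this. The fields are our own suggestion, not a prescribed format; the goal is simply to make flows of sensitive data visible and queryable.

```typescript
// Hypothetical shape for one entry in a sensitive-data inventory.
interface DataFlowEntry {
  dataCategory: "health" | "biometric" | "genetic" | "other-sensitive";
  collectedVia: string;    // e.g. "appointment booking form"
  storageLocation: string; // system and region, e.g. "booking DB (EU)"
  processors: string[];    // third parties with access
  legalBasis: string;      // e.g. "explicit consent (Art. 9(2)(a) GDPR)"
  retentionPeriod: string; // e.g. "24 months"
}

const inventory: DataFlowEntry[] = [
  {
    dataCategory: "health",
    collectedVia: "appointment booking form",
    storageLocation: "booking DB (EU)",
    processors: ["email delivery vendor"],
    legalBasis: "explicit consent (Art. 9(2)(a) GDPR)",
    retentionPeriod: "24 months",
  },
];
```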
Develop internal policies
Establish clear internal policies and procedures to guide employees through the proper handling of sensitive personal data. These policies should cover, among other things, data access controls, storage limitations, security protocols, and breach response procedures, as well as specific procedures for data collection, storage, processing, and deletion. Organizations should conduct regular training programs to help employees understand their responsibilities and recognize potential compliance risks.
Obtain explicit consent
The GDPR requires businesses to obtain explicit consent before processing sensitive personal data. Consent management platforms (CMPs) like Usercentrics CMP provide transparent mechanisms for users to grant or withdraw explicit consent, which enables organizations to be transparent about their data practices and maintain detailed records of consent choices.
Manage third-party relationships
Many businesses rely on third-party vendors to process sensitive personal data, so it’s essential that these partners meet GDPR standards. Organizations should implement comprehensive data processing agreements (DPAs) that define each party’s responsibilities, outline security requirements, and specify how data will be handled, stored, and deleted. Businesses should also conduct due diligence on vendors to confirm their compliance practices before engaging in data processing activities.
Perform regular audits
Conducting periodic reviews of data processing activities helps businesses identify compliance gaps and address risks before they become violations. Review consent management practices, security controls, and third-party agreements on a regular basis to maintain GDPR compliance and respond effectively to regulatory scrutiny.
Checklist for GDPR sensitive personal data handling compliance
Below is a non-exhaustive checklist to help your organization handle sensitive personal data in compliance with the GDPR. This checklist includes general data processing requirements as well as additional safeguards specific to sensitive personal data.
For advice specific to your organization, we strongly recommend consulting a qualified legal professional or data privacy expert.
- Obtain explicit consent before processing sensitive personal data. Do so using a transparent mechanism that helps data subjects understand exactly what they’re agreeing to.
- Create straightforward processes for users to withdraw consent at any time, which should be as easy as giving consent. Stop data collection or processing immediately or as soon as possible if consent is withdrawn.
- Implement robust security measures such as encryption, access controls, and anonymization to protect sensitive personal data from unauthorized access or breaches.
- Keep comprehensive records of all data processing activities involving sensitive personal data. Document the purpose, legal basis, and retention periods.
- Publish clear and accessible privacy policies that inform users how their sensitive data is collected, used, stored, and shared.
- Update your data protection policies regularly to reflect changes in processing activities, regulations, or organizational practices.
- Train employees on GDPR requirements and proper data handling procedures, emphasizing security protocols and compliance obligations.
- Create clear protocols for detecting, reporting, and responding to data breaches. Include steps for notifying affected individuals and supervisory authorities when required.
- Conduct data protection impact assessments (DPIAs) before starting new processing activities involving sensitive data.
- Determine if your organization requires a Data Protection Officer based on the scale of sensitive personal data processing.
- Verify that all external processors that handle sensitive data meet GDPR requirements through formal agreements and regular audits.
Usercentrics does not provide legal advice, and information is provided for educational purposes only. We always recommend engaging qualified legal counsel or privacy specialists regarding data privacy and protection issues and operations.
Google Analytics is a powerful tool for understanding website performance, user behavior, and traffic patterns. However, its compliance with the General Data Protection Regulation (GDPR) has been a subject of concern and controversy, particularly in the European Union (EU). The data protection authorities of several EU countries have weighed in on privacy compliance issues with Google Analytics, with similar complaints focusing on its insufficient protections and data transfer practices.
In this article, we’ll examine the timeline of EU-US data transfers and the law, the relationship between Google Analytics and data privacy, and whether Google’s popular service is — or can be — GDPR-compliant.
Google Analytics and data transfers between the EU and US
One of the key compliance issues with Google Analytics is its storage of user data, including EU residents’ personal information, on US-based servers. Because Google is a US-owned company, the data it collects is subject to US surveillance laws, potentially creating conflicts with EU privacy rights.
The EU-US Privacy Shield was invalidated in 2020 by the Schrems II ruling, and there was no framework or Standard Contractual Clauses (SCCs) in place for EU-to-US data transfers until September 2021, when new SCCs were implemented. These were viewed as a somewhat adequate safeguard if additional measures, like encryption or anonymization, were in place to make data inaccessible to US authorities.
A wave of rulings against Google Analytics after the invalidation of the Privacy Shield
The Schrems II ruling sparked a series of legal complaints and decisions by European Data Protection Authorities (DPAs), which declared the use of Google Analytics noncompliant with the GDPR.
- Austria: The Austrian DPA, Datenschutzbehörde (DSB), ruled that the use of Google Analytics violated the GDPR as interpreted in the Schrems II ruling.
- France: Commission Nationale de l’Informatique et des Libertés (CNIL) found that the use of Google Analytics was not compliant with Art. 44 GDPR due to international data transfers without adequate protection; organizations were given one month to update their usage.
- Italy: The Garante ruled that the transfer of data to the US via Google Analytics violated the GDPR, and that legal bases and reasonable protections were required.
- Netherlands: Dutch data protection authority AP announced investigations into two complaints against Google Analytics, with the complaints echoing issues raised in other EU countries.
- United Kingdom: After Brexit, the UK implemented its own version of the GDPR; the UK data protection authority removed Google Analytics from its website after the Austrian ruling.
- Norway: Datatilsynet stated it would align with Austria’s decision against Google Analytics and publicly advised Norwegian companies to seek alternatives to the service.
- Denmark: Datatilsynet stated that lawful use of Google Analytics “requires the implementation of supplementary measures in addition to the settings provided by Google.” Companies that could not implement additional measures were advised to stop using Google Analytics.
- Sweden: IMY ordered four companies to stop using Google Analytics on the grounds that these companies’ additional security measures were insufficient for protecting personal data.
- European Parliament: European Data Protection Supervisor (EDPS) sanctioned the European Parliament for using Google Analytics on its COVID testing sites due to insufficient data protections.
The EDPS sanction came a week before the Austrian ruling and is viewed as one of the earliest post-Schrems II rulings; it set the tone for additional legal complaints.
The EU-U.S. Data Privacy Framework
On July 10, 2023, the European Commission adopted its adequacy decision for the EU-U.S. Data Privacy Framework, which covers data transfers among the EU, European Economic Area (EEA) and the US in compliance with the GDPR.
The framework received some criticism from experts and stakeholders. Some privacy watchdogs, including the European Data Protection Board (EDPB), pointed out striking similarities between the new and the previous agreements, raising doubts about its efficacy in protecting EU residents’ data.
As of early 2025, the EU-U.S. Data Privacy Framework and adequacy for EU/U.S. data transfers are in jeopardy. President Trump fired all of the Democratic party members of the Privacy and Civil Liberties Oversight Board (PCLOB). As a result, the number of PCLOB board members is below the threshold that enables it to operate as an oversight body for the EU-U.S. Data Privacy Framework.
This action will likely undermine the legal validity of the Framework for EU authorities, particularly the courts. The EU Commission could withdraw its adequacy decision for the EU-U.S. Data Privacy Framework, which would invalidate it. The Court of Justice of the EU (CJEU) could also overturn the Commission’s adequacy decision following a legal challenge. The last option is how the preceding agreements to the Framework were struck down, e.g. with Schrems II.
Should the EU-U.S. Data Privacy Framework be struck down, it could have significant effects on data transfers, cloud storage, and the function of platforms based outside of the EU, like those from Google, including Analytics. At the very least, Google may be required to make further changes to the function of tools like Google Analytics, along with related data storage, to meet European privacy standards.
Is Google Analytics GDPR compliant?
Google Analytics 4 introduces several significant changes compared to Universal Analytics. The new version adopts an event-based measurement model, in contrast to the session-based data model of Universal Analytics. This shift enables Google Analytics 4 to capture more granular user interactions, better reflecting the customer journey across devices and platforms. Website owners can turn off granular location and device data collection to stop the collection of data such as city, latitude, or longitude. Website owners also have the option to delete user data upon request.
Another notable feature is that Google Analytics 4 does not log or store IP addresses from EU-based users. According to Google, this is part of Google Analytics 4’s EU-focused data and privacy measures. This potentially addresses one of the key privacy concerns raised by the Data Protection Authorities, which found that anonymizing IP addresses was not an adequate level of protection.
The EU-U.S. Data Privacy Framework alone doesn’t make Google Analytics 4 GDPR-compliant. The framework can make data transfers to the US compliant, if they are with a certified US company, but the onus is on website owners to ensure that the data was collected in compliance with the legal requirements of the GDPR in the first place.
How to make Google Analytics GDPR compliant
1. Enable explicit or opt-in consent
All Google Analytics cookies should be set up and controlled so they only activate after users have granted explicit consent. Users should also have granular control so that they can choose to allow cookies for one purpose while rejecting cookies for another.
A consent management platform (CMP) like Usercentrics can block the activation of services until user consent has been obtained. Google Analytics can’t transfer user data that it never collected.
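A minimal sketch of that gating logic follows, assuming your CMP exposes a consent check. The measurement ID "G-XXXXXXX" and hasAnalyticsConsent() are placeholders for your own property ID and your CMP’s API.

```typescript
// Minimal sketch: inject the GA4 tag only after explicit consent.

const dataLayer: unknown[] = ((window as any).dataLayer =
  (window as any).dataLayer || []);

function gtag(..._args: unknown[]): void {
  dataLayer.push(arguments); // gtag.js expects the Arguments object itself
}

function loadGoogleAnalytics(measurementId: string): void {
  const script = document.createElement("script");
  script.src = `https://www.googletagmanager.com/gtag/js?id=${measurementId}`;
  script.async = true;
  document.head.appendChild(script);

  gtag("js", new Date());
  gtag("config", measurementId);
}

async function initAnalytics(): Promise<void> {
  // A tag that never loads never collects, so there is nothing to transfer.
  if (await hasAnalyticsConsent()) {
    loadGoogleAnalytics("G-XXXXXXX");
  }
}

async function hasAnalyticsConsent(): Promise<boolean> {
  return false; // placeholder: query your CMP here
}
```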
2. Use Google Consent Mode
Google Consent Mode allows websites to dynamically adjust the behavior of Google tags based on the user’s consent choices regarding cookies. This feature ensures that measurement tools, such as Google Analytics, are only used for specific purposes if the user has given their consent, even though the tags are loaded onto the webpage before the cookie consent banner appears. By implementing Google Consent Mode, websites can modify the behavior of Google tags after the user allows or rejects cookies so that they don’t collect data without consent.
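The consent mode calls themselves are small. The "default" and "update" commands and the analytics_storage/ad_storage consent types below follow Google’s documented consent mode API; how you wire them to your banner depends on your CMP.

```typescript
declare function gtag(...args: unknown[]): void; // provided by gtag.js

// Before any Google tags fire: deny storage by default.
gtag("consent", "default", {
  ad_storage: "denied",
  analytics_storage: "denied",
});

// Later, when the user accepts analytics cookies in the banner:
function onAnalyticsConsentGranted(): void {
  gtag("consent", "update", { analytics_storage: "granted" });
}
```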
3. Have a detailed privacy policy and cookie policy
Website operators must provide clear, transparent data processing information for users on the website. This information is included in the privacy policy. Information related specifically to cookies should be provided in the cookie policy, with details of the Google Analytics cookies and other tracking technologies that are used on the site, including the data collected by these cookies, provider, duration and purpose. The cookie policy is often a separate document, but can be a section within the broader privacy policy.
The GDPR requires user consent to be informed, which is what the privacy policy is intended to enable. To help craft a GDPR-compliant privacy policy, extensive information on the requirements can be found in Articles 12, 13 and 14 GDPR.
4. Enter into a Data Processing Agreement with Google
A data processing agreement (DPA) is a legally binding contract and a crucial component of GDPR compliance. The DPA covers important aspects such as confidentiality, security measures and compliance, data subjects’ rights, and the security of processing. It helps to ensure that both parties understand their responsibilities and take appropriate measures to protect personal data. Google has laid down step-by-step instructions on how to accept its DPA.
Can server-side tracking make Google Analytics more privacy-friendly?
Server-side tracking allows for the removal or anonymization of personally identifiable information (PII) before it reaches Google’s servers. This approach can improve data accuracy by circumventing client-side blockers, and it offers a way to better align with data protection regulations like the GDPR. By routing data through your own server first, you gain more control over what eventually gets sent to Google Analytics.
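Here is a sketch of that idea, assuming a first-party endpoint that receives events and forwards them via the GA4 Measurement Protocol. The endpoint URL is Google’s documented one; the payload shape is simplified, and MEASUREMENT_ID, API_SECRET, and the PII key list are placeholders.

```typescript
// Sketch only: strip PII server-side before forwarding events to GA4.

interface IncomingEvent {
  clientId: string;
  name: string;
  params: Record<string, unknown>;
}

const PII_KEYS = ["email", "phone", "user_id"]; // adjust to your data

function stripPii(event: IncomingEvent): IncomingEvent {
  const params = { ...event.params };
  for (const key of PII_KEYS) delete params[key];
  return { ...event, params };
}

async function forwardToGa4(event: IncomingEvent): Promise<void> {
  const cleaned = stripPii(event);
  // The request originates from our server, so the user's IP address
  // is never exposed to Google.
  await fetch(
    "https://www.google-analytics.com/mp/collect" +
      "?measurement_id=MEASUREMENT_ID&api_secret=API_SECRET",
    {
      method: "POST",
      body: JSON.stringify({
        client_id: cleaned.clientId,
        events: [{ name: cleaned.name, params: cleaned.params }],
      }),
    }
  );
}
```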
Impact of the Digital Markets Act on Google Analytics 4
The implementation of the Digital Markets Act (DMA) has had some impact on Google Analytics 4, affecting functions, data collection practices, and privacy policies. Website owners who use the platform have been encouraged to take the following steps for ongoing compliance.
- Audit your privacy policy, cookies policy and data practices.
- Conduct a data privacy audit to check compliance with GDPR, and take any corrective steps if necessary.
- Install a CMP that enables GDPR compliance to obtain valid user consent per the regulation’s requirements.
- Seek advice from qualified legal counsel and/or a privacy expert, like a Data Protection Officer, on measures required specific to your business.
How to use Google Analytics 4 and achieve GDPR compliance with Usercentrics CMP
To meet the conditions of Art. 7 GDPR for valid user consent, website operators must obtain explicit end-user consent for all Google Analytics cookies set by the website. Consent must be obtained before these cookies are activated and in operation. Usercentrics’ DPS Scanner helps identify and communicate to users all cookies and tracking services in use on a website to ensure full consent coverage.
Next steps with Google Analytics and Usercentrics
Google Analytics helps companies pursue growth and revenue goals, so understandably, businesses are caught between not wanting to give that up and not wanting to risk GDPR violation penalties or the ire of their users over lax privacy or data protection.
The Usercentrics team closely monitors regulatory changes and legal rulings, makes updates to our services and posts recommendations and guidance as appropriate.
However, website operators should always get relevant legal advice from qualified counsel regarding data privacy, particularly in jurisdictions relevant to them. This includes circumstances where there could be data transfers outside of the EU to countries without adequacy agreements for data privacy protection.
As the regulatory landscape and privacy compliance requirements for companies are complex and ever-changing, we’re here to help.
The Interactive Advertising Bureau (IAB) launched the Global Privacy Platform (GPP) in 2022 as a project of the IAB Tech Lab’s Global Privacy Working Group and part of its portfolio of solutions. The GPP is the result of significant collaboration among industry stakeholders, including leading tech companies and experts around the world.
In line with the evolution of data privacy, the GPP enables streamlined transmission of signals from websites and apps to ad tech vendors and advertisers. This includes consent, preferences, permissions, and other relevant and often legally required information that affects data handling tools and processing. We look at how this tool can benefit publishers as data privacy compliance requirements expand and evolve, especially across digital marketing platforms.
What is the Global Privacy Platform (GPP)?
The GPP provides a framework for publishers that works similarly to the TCF or Google Consent Mode. Where Consent Mode signals consent information to Google services’ tags to control use of cookies and trackers, the GPP is a protocol that enables simple and automated communication of users’ consent and preference choices via a signal to third parties like ad tech vendors.
The GPP enables advertisers, publishers, and technology vendors in the digital advertising industry to adapt to regulatory demands over time and across markets. It employs a GPP String, which encapsulates and encodes transparency details and consumer choices (like granular consent) as applicable to each region, helping enable compliance with privacy requirements by jurisdiction.
How does the GPP signal work?
Digital property owners, like companies running websites or apps, are responsible for generating, transmitting, and documenting the GPP String and the information it sends. This enables data integrity and contributes to compliance.
Usercentrics CMP generates and manages the GPP String in an HTTP-transferable and encoded format. Ad tech vendors receive user choice information for consent and preferences, and can decode the GPP String to determine compliance requirements and status for each user.
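On the receiving side, a vendor-side script can read the signal through the __gpp CMP API defined in the IAB’s GPP specification. The "ping" command below is part of that spec; the shape of the returned data object is simplified here.

```typescript
// Sketch: reading the GPP String via the IAB __gpp CMP API.

type GppCallback = (data: unknown, success: boolean) => void;

interface GppPingData {
  gppString: string;
  applicableSections: number[];
}

function readGppString(): void {
  const gpp = (window as unknown as {
    __gpp?: (command: string, callback: GppCallback, parameter?: string) => void;
  }).__gpp;

  gpp?.("ping", (data, success) => {
    if (!success) return;
    const ping = data as GppPingData;
    // The encoded string is what ad tech vendors decode, section by
    // section, to recover the user's choices for each jurisdiction.
    console.log("GPP String:", ping.gppString);
    console.log("Applicable sections:", ping.applicableSections);
  });
}
```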
The format’s flexibility enables granular regulatory coverage, e.g. state-specific strings for the US where data privacy laws are in effect. The GPP covers 15 states as of early 2025, and five more are expected to get coverage this year. Country and regional strings like for the US and EU are also supported, as are non-geographic signals like those from Global Privacy Control, which are browser-based, with recognition of them to date only required by some laws.
The GPP is designed to evolve as the data privacy and regulatory landscape does, not requiring significant redevelopment when requirements change. The IAB Tech Lab’s Global Privacy Working Group handles the ongoing work of the GPP’s technical specification.
Why do publishers and others in ad tech need the GPP?
The majority of the world’s population is now covered by at least one data privacy law. Some regions, like the European Union, have multiple laws that intersect in various ways. Additionally, these regulations affect major tech platforms, which are adopting more stringent requirements for their customers to enable privacy-compliant ecosystems. This has significant effects on digital advertising, as major players like Google and Facebook adapt their operations and requirements.
Additionally, in the United States, there isn’t one federal law to comply with. To date the data privacy laws are state-level, so a company could have to comply with one or ten or more as the regulatory landscape continues to evolve. However, many of these US regulations are fairly similar, which does support the “US National” signaling approach. Companies need tools, like a consent management platform and the Global Privacy Platform, designed to evolve with changes and expansion in regulations.
The GPP is designed for flexibility and scalability. It supports all current privacy signals and will be able to support future ones as new laws are passed and existing ones evolve. The architecture is designed to grow with companies’ operations, enabling publishers to better respect users’ privacy choices and more effectively signal them to vendors and partners.
Does the GPP affect the TCF?
The GPP isn’t the IAB’s first framework for publishers and ad tech. The Transparency & Consent Framework (TCF) was launched in 2018, the same year the EU’s General Data Protection Regulation (GDPR) came into effect. As of 2024, the TCF is now at version 2.2.
The GPP is designed to better meet the needs of publishers that need to signal consent across multiple jurisdictions, as many companies doing business around the world — or across the United States — need to do.
The plan is to ensure that updates made to the TCF over time are also reflected in the GPP, giving companies the best tools to achieve and maintain compliance with their digital advertising operations. Eventually, the goal is for the Global Privacy Platform — as the name suggests — to be the single framework for consent and preference signaling.
In Europe and the UK, Google will continue to use the TCF and will not be accepting the GPP signal. Using Ad Manager will still require the use of a certified consent management platform integrated with the TCF. TCF strings sent through the GPP won’t be accepted.
What is the Multi-State Privacy Agreement (MSPA) and how does the GPP affect it?
The Multi-State Privacy Agreement (MSPA) is an industry-centric contractual framework for companies doing business in the US, which covers 19 states as of early 2025. It’s meant to “aid advertisers, publishers, agencies, and ad tech intermediaries in complying with five state privacy laws.” The IAB Tech Lab is prioritizing updates to MSPA/US National before providing further state-specific strings, though that’s expected later in 2025.
The MSPA evolved from the IAB’s Limited-Service Provider Agreement (LSPA), which dated from 2020 and initially focused on CCPA/CPRA compliance. The evolution has focused on legal standards, protecting consumers’ privacy rights, and working with the GPP (including the specific privacy strings for each state). The MSPA is also designed for flexibility and scalability as US data privacy challenges become more complex.
What consent and preference signals does the GPP support?
The Global Privacy Platform currently supports various privacy signals around the world, both from the IAB’s own frameworks and from external ones. Some US state-level data privacy laws require recognizing a universal opt-out mechanism like Global Privacy Control, but not all of them do.
- EU TCF v2 signal (IAB Europe TCF)
- CA TCF signal (IAB Canada TCF)
- US National signal (IAB MSPA US National)

GPP and international privacy laws
The Global Privacy Platform was designed to address the increasing complexity of data privacy regulation and requirements. Many companies do business across international jurisdictions and have many partners and vendors that they work with. This is only going to increase.
GPP and the GDPR
Europe has led the way in modern data privacy with the GDPR, TCF, and other relevant regulations and frameworks. It was the IAB Europe that brought the TCF to the market, and the GPP supports the EU TCF v2 signal. As noted, Google does not currently support the TCF via the GPP, so until industry adoption changes, this implementation isn’t recommended.
One of the main goals of the TCF was to help organizations meet GDPR compliance requirements, and the GPP is meant to extend this mandate.
- Compliant consent handling: Using a CMP, companies can collect valid consent from users and clearly signal the information to relevant entities like ad tech vendors.
- International data transfers: There are stringent requirements to protect personal data when transferring it to other countries. The GPP helps compliantly manage cross-border data transfers of consent information to vendors.
- Secure and comprehensive documentation: In securely maintaining consent signaling records, the GPP helps uphold data subjects’ rights along with requirements for secure and comprehensive documentation of data processing in case of inquiry or complaint.
GPP and PIPEDA
In Canada, data privacy is governed by the Personal Information Protection and Electronic Documents Act (PIPEDA), which has been in effect since 2000, and a lot has changed since then. There are a number of requirements in PIPEDA and Quebec’s Law 25 that the GPP helps address, and the Platform already supports the CA TCF signal. Here are some of the benefits.
- Explicit consent: Like the GDPR, PIPEDA requires explicit user consent for various types of data processing. The GPP enables that consent to be signaled across ad tech and other martech platforms.
- Providing transparency: All data privacy laws require informing data subjects about how their data will be used, among other requirements. The GPP helps with this by supporting standardized privacy consent mechanisms which can be easily communicated.
- Data subject rights: Under PIPEDA, users have the rights to access their personal data and to revoke their consent. The GPP enables streamlined signaling to connected platforms if a data subject revokes consent for data processing. It also helps with tracking and providing consent data for data subject access requests (DSAR).
GPP and US privacy laws
The patchwork of data privacy laws and requirements in the US was a major factor in building out the Global Privacy Platform. As of the end of 2024, 21 data privacy laws have been passed by US state-level governments, which can introduce a lot of complexity into doing business.
The IAB Tech Lab created the US Privacy Specifications, which have been used to support the CCPA Compliance Framework. However, a lot more laws have been passed since the CCPA came into effect. As of 2023, the US Privacy Specifications are not being updated, and have been replaced by state-specific privacy strings available via the GPP.
However, IAB MSPA US National also provides a national approach to privacy compliance with state-level laws by utilizing the highest standard.
Additionally, the GPP is designed to evolve and scale with further data privacy regulatory requirements in the US, and to enable companies to manage consent and preferences with vendor relations in a streamlined way. This will also be relevant as more and more platforms evolve their data privacy requirements.
How Usercentrics supports the Global Privacy Platform
Usercentrics currently supports the GPP and is working toward additional regulatory coverage. Direct support from the Consent Management Platform’s Admin Interface is also being developed, along with further enhancements.
The Usercentrics CMP integrates with the GPP and generates the necessary GPP string to signal consent information.
Companies serving Google ads in the EU, EEA, or UK also continue to need a Google-certified CMP like Usercentrics CMP, which comes with the TCF v2.2 integrated, since, as noted, Google will continue to only support this format and is not accepting TCF strings sent through the GPP.
As complexity and requirements for data privacy continue to evolve, and as individuals become more invested in their privacy and choice, it’s never been more important to invest in reliable, scalable tools to obtain, manage, and signal valid consent — in every region where you do business. It’s becoming a key competitive advantage to grow trust and revenue.
As more and more digital platforms adapt to regulatory requirements as well, your company’s international advertising operations will increasingly depend on how well you’ve implemented consent and preference management with tools like Usercentrics CMP and the Global Privacy Platform. The era of Privacy-Led Marketing is here, and Usercentrics has the tools to help you embrace it and grow with confidence.
In 2019, New York’s data breach laws underwent significant changes when the SHIELD Act was signed into law. The regulation has continued to evolve, with new amendments in December 2024. This article outlines the SHIELD Act’s requirements for businesses and protecting and handling New York state residents’ private information, from security requirements to breach notifications.
What is the New York SHIELD Act?
The New York Stop Hacks and Improve Electronic Data Security Act (New York SHIELD Act) established data breach notification and security requirements for businesses that handle the private information of New York state residents. The law updated the state’s 2005 Information Security Breach and Notification Act with expanded definitions and additional safeguards for data protection.
The New York SHIELD Act introduced several requirements to protect New York residents’ data. These include:
- a broader definition of what constitutes private information
- updated criteria for what qualifies as a security or data breach
- specific notification procedures for data breaches
- implementation of administrative, technical, and physical safeguards
- expansion of the law’s territorial scope
The law also increased penalties for noncompliance with its data security and breach notification requirements.
The New York SHIELD Act was implemented in two phases:
- breach notification requirements became effective on October 23, 2019
- data security requirements became effective on March 21, 2020
Who does the New York SHIELD Act apply to?
The New York SHIELD Act applies to any person or business that owns or licenses computerized data containing the private information of New York state residents. It applies regardless of whether the business itself is located in New York. This scope marked a significant expansion from the previous 2005 law, which only applied to businesses operating within New York state. The law’s extraterritorial reach means that organizations worldwide must comply with its requirements if they possess private information of New York residents, even if they conduct no business operations within the state.
What is a security breach under the New York SHIELD law?
The New York SHIELD Act expanded the definition of a security breach beyond the 2005 law’s limited scope. The previous law only considered unauthorized acquisition of computerized data as a security breach. The New York SHIELD Act includes the following actions that compromise the security, confidentiality, or integrity of private information:
- unauthorized access to computerized data
- acquisition of computerized data without valid authorization
The law provides specific criteria to determine unauthorized access by examining whether an unauthorized person viewed, communicated with, used, or altered the private information.
What is private information under the New York SHIELD Act?
The New York SHIELD law defines two types of information: personal and private.
Personal information includes any details that could identify a specific person, such as their name or phone number.
Under the 2005 law, private information was defined as personal information concerning a natural person combined with one or more of the following:
- Social Security number
- driver’s license number
- account numbers with security codes or passwords
The New York SHIELD Act expands this definition of private information to include additional elements:
- account numbers and credit or debit card numbers that could enable access to a financial account without additional security codes, passwords, or other identifying information
- biometric information that is used to authenticate and ascertain an individual’s identity, such as a fingerprint, voice print, or retina or iris image
- email addresses or usernames combined with passwords or security questions and answers
The law specifically states that publicly available information is not considered private information.
This definition is set to expand once again. On December 21, 2024, Governor Kathy Hochul signed two bills that strengthened New York’s data breach notification laws. Under one of the amendments, effective March 21, 2025, private information will include:
- medical information, including medical history, conditions, treatments, and diagnoses
- health insurance information, including policy numbers, subscriber identification numbers, unique identifiers, claims history, and appeals history
What are the data security requirements under the New York SHIELD Act?
This New York data security law requires any person or business that maintains private information to implement reasonable safeguards for its protection. There are three categories of safeguards required: administrative, technical, and physical.
Administrative safeguards include:
- appointing one or more specific employees to manage security programs
- identifying potential risks from internal and external sources
- reviewing existing safeguards to check their effectiveness
- training employees on the organization’s security practices and procedures
- choosing qualified service providers who meet security requirements through contracts
- modifying security programs when business needs change
Technical safeguards include:
- assessing risks in network structure and software design
- evaluating risks in information processing, transmission, and storage
- detecting, preventing, and responding to attacks or system failures
- regularly testing and monitoring the effectiveness of key controls, systems, and procedures
Physical safeguards include:
- assessing risks related to information storage and disposal methods
- implementing systems to detect and prevent intrusions
- protecting private information from unauthorized access or use during collection, transportation, and disposal
- disposing of private information within a reasonable timeframe after it is no longer needed for business purposes, by erasing electronic media so that the information cannot be read or reconstructed
Businesses are deemed compliant with these safety requirements if they are subject to and compliant with certain federal laws, such as the Gramm-Leach-Bliley Act (GLBA), the Health Insurance Portability and Accountability Act (HIPAA), and the Health Information Technology for Economic and Clinical Health Act (HITECH).
What are the data breach notification requirements under the New York SHIELD law?
The New York SHIELD Act sets specific requirements for how and when businesses must notify individuals and authorities about data breaches involving private information.
The law previously required businesses that discover a security breach of computer data systems containing private information to notify affected consumers “in the most expedient time possible and without unreasonable delay.” The December 2024 amendment added a specific timeline to this requirement. Businesses now have a maximum of 30 days in which to notify affected New York state residents of data breaches. The 30-day time limit came into effect immediately upon the bill being signed.
The New York SHIELD Act also previously required businesses to notify three state agencies about security breaches:
- the Office of the New York State Attorney General
- the New York Department of State
- the New York State Police
The December 2024 amendment added a fourth state agency to be notified, with immediate effect: the New York State Department of Financial Services.
These notices must include information about the timing, content, distribution of notices, and approximate number of affected persons, as well as a copy of the template of the notice sent to affected persons. If more than 5,000 New York state residents are affected and notified, businesses must also notify consumer reporting agencies about the timing, content, distribution of notices, and approximate number of affected persons.
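As a rough illustration, a business could track what a regulator notice must contain with a structure like the following. The field names are our own, not statutory terms.

```typescript
// Hypothetical record of a SHIELD Act regulator notification.
interface RegulatorBreachNotice {
  noticesSentAt: string;          // timing of the notices
  noticeContentSummary: string;   // what affected persons were told
  distributionMethods: string[];  // e.g. ["mail", "email"]
  approxAffectedPersons: number;
  consumerNoticeTemplate: string; // copy of the template sent to consumers
  notifyCreditAgencies: boolean;  // required if > 5,000 NY residents affected
}
```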
The law introduced specific restrictions on methods for notifying affected consumers. Email notifications are not permitted if the compromised information includes an email address along with a password or security question and answer that could allow access to the online account.
All notifications must provide contact information for the person or business notifying affected persons as well as telephone numbers and websites for relevant state and federal agencies that offer guidance on security breach response and identity theft prevention.
Enforcement of the New York SHIELD Act and penalties for noncompliance
The New York Attorney General has the authority to enforce the New York SHIELD Act, with the power to pursue injunctive relief, restitution, and penalties against businesses that violate the law.
The law establishes different levels of penalties based on the nature and severity of the violations. When businesses fail to provide proper breach notifications, but their actions are not reckless or intentional, courts may require them to pay damages that cover the actual costs or losses experienced by affected persons.
More severe penalties apply to knowing and/or reckless violations of notification requirements. In these cases, courts can impose penalties of up to USD 5,000 or USD 20 per instance of failed notification, whichever amount is greater. These penalties are capped at USD 250,000.
Businesses that fail to implement reasonable safeguards as required by the law face separate penalties. Courts can impose fines of up to USD 5,000 for each violation of these security requirements.
Impact of the New York SHIELD Act on businesses
The New York SHIELD law imposes significant obligations for any organization handling New York residents’ private information, regardless of location. Businesses must implement comprehensive data security programs with specific safeguards, meet strict breach notification deadlines, and prepare for expanded data protection requirements.
Key impacts include:
- 30-day mandatory breach notification requirement (currently in effect)
- the implementation of administrative, technical, and physical security safeguards
- expanded private information definition, in effect March 21, 2025
- potential penalties up to USD 250,000 for notification violations and USD 5,000 per security requirement violation
New York SHIELD Act Compliance Checklist
Below is a non-exhaustive checklist to help your business comply with the New York SHIELD Act. For advice specific to your organization, it’s strongly recommended to consult a qualified legal professional.
- Implement reasonable administrative, technical, and physical safeguards to protect the private information of New York residents.
- Create and maintain a process to detect data breaches affecting private information.
- Establish procedures to notify affected New York state residents within 30 days of discovering a breach.
- Set up a system to report breaches to the Attorney General, Department of State, State Police, and Department of Financial Services.
- Include contact information and agency resources for breach response and identity theft prevention in all notifications.
- Use appropriate notification methods (for instance, do not use email if the breach involves email/password combinations).
- Notify consumer reporting agencies if more than 5,000 New York state residents are affected by a breach.
- Train employees on security practices and procedures.
- Review and update security programs when business circumstances change.
- Prepare to protect additional categories of private information (medical and health insurance data) starting March 21, 2025.
The Norwegian Electronic Communications Act (E-com Act / Ekomloven) has been updated, effective January 1, 2025. This follows the Norwegian Parliament (Stortinget) adopting a proposal submitted by the Norwegian Ministry of Digitalisation and Public Governance in November 2024. Previously, Norway’s cookie use and consent requirements were notably more lax than European standards.
The revision better aligns Norwegian regulation of cookie use with the GDPR and ePrivacy Directive (though Norway is not an EU Member State). It introduces stricter standards for obtaining and managing user consent for use of cookies and other tracking technologies.
Norway also has the Personal Data Act to protect data and privacy when data processing occurs, with oversight and enforcement by the Norwegian Data Protection Authority (Datatilsynet).
Who must comply with the new Norwegian cookie guidelines?
The updated guidelines affect all businesses operating websites or applications that have or target Norwegian users: both Norway-based businesses and international companies with platforms, products, or services used by Norwegians.
Specific platforms and parameters that are affected include:
- Websites with a domain name using the .no ccTLD (e.g. https://www.stortinget.no/)
- Websites or apps in the Norwegian language
- Websites or apps that target Norwegian users, including:
- Advertising to Norwegian users
- Pricing in the Norwegian kroner (NOK)
- Features tailored to Norwegian customers (e.g. local payment systems or shipping to Norway)
- Collecting personal data from Norwegian residents (businesses based outside Norway must also comply if they meet this criterion)
What are the requirements of the new Norwegian cookie guidelines?
The E-com Act’s consent requirements are now aligned with the stricter consent standards of the GDPR. As in Art. 4(11) GDPR, consent must be “freely given, informed, specific, and unambiguous.”
Active, explicit consent is mandatory, so users must perform a specific action to indicate that they are giving consent. Treating a user’s ignoring of a consent banner as consent is not allowed, nor are passive mechanisms like pre-checked boxes or reliance on browser settings. Previously, some of these passive approaches were acceptable under the law.
Businesses must also enable users to modify or withdraw previously granted consent at any time. The tools to do so must also be user-friendly to be compliant with the law’s requirements.
Only cookies or tracking technologies classified as “strictly necessary” can be used to collect data without obtaining user consent. What qualifies has been refined and now only includes cookies required for the basic operation of a website or app, e.g. shopping cart functionality or maintaining an active login session.
Analytics, marketing, and user preference cookies are not strictly necessary and do require valid user consent prior to being activated.
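To make the distinction concrete, here is a minimal TypeScript sketch of consent-gated script loading. The category names and script URL are illustrative assumptions, not part of the Norwegian guidelines; the point is that only strictly necessary functionality runs before consent.

```typescript
// Minimal sketch of consent-gated script loading (hypothetical category names).
// Strictly necessary code runs unconditionally; everything else waits for consent.

type CookieCategory = "strictlyNecessary" | "analytics" | "marketing" | "preferences";

// In practice this would be read from your stored consent state.
const consentState: Record<CookieCategory, boolean> = {
  strictlyNecessary: true, // always allowed, no consent needed
  analytics: false,
  marketing: false,
  preferences: false,
};

// Only inject a third-party script once its category has been consented to.
function loadScriptIfConsented(category: CookieCategory, src: string): void {
  if (!consentState[category]) {
    return; // no consent: the script (and its cookies) never loads
  }
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

// Example: an analytics tag stays blocked until the user opts in.
loadScriptIfConsented("analytics", "https://example.com/analytics.js");
```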
How can businesses achieve compliance with the new Norwegian cookie guidelines?
The law does not specify a maximum allowed lifespan for cookies, but it does require transparency from businesses about the cookies and trackers in use, including what data is collected and for what purposes, how long it will be retained, what parties it may be shared with, and what users’ rights are and how they can exercise them.
Additionally, companies that meet the law’s criteria must deploy a cookie consent banner that meets the new guidelines’ requirements. There must be mechanisms that equally enable users to consent to cookie use or decline it, as well as to manage consent at a granular level and to easily modify or withdraw it.
Companies must provide information about cookie use and consent in an easily accessible way on their website or app, including the E-com Act’s rules for cookie use, and details about which cookies or other tracking technologies are in use, what data is processed and why, and the processor’s identity.
Websites must remain accessible to users who refuse cookies, so cookie walls are not allowed, though it is acceptable for some functionality to be reduced slightly if a user declines cookies.
Companies also need to document and securely store users’ consent information over time, and be able to provide it in the event of a data request or audit.
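As a rough illustration of what such documentation might look like, here is a minimal TypeScript sketch of a stored consent record. The field names are our own assumptions rather than a prescribed schema; the law requires that consent be documented and retrievable, not any particular format.

```typescript
// Minimal sketch of a stored consent record (field names are illustrative).
// Records like this support audits and data requests over time.

interface ConsentRecord {
  userId: string;         // pseudonymous identifier for the user
  timestamp: string;      // ISO 8601 time the choice was made
  bannerVersion: string;  // which banner/policy text the user saw
  choices: {
    analytics: boolean;
    marketing: boolean;
    preferences: boolean;
  };
  withdrawnAt?: string;   // set if consent was later withdrawn
}

// Append-only storage preserves the consent history.
const consentLog: ConsentRecord[] = [];

function recordConsent(record: ConsentRecord): void {
  consentLog.push(record); // in production: write to durable, secured storage
}

recordConsent({
  userId: "user-123",
  timestamp: new Date().toISOString(),
  bannerVersion: "2025-01-v1",
  choices: { analytics: true, marketing: false, preferences: true },
});
```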
Businesses that are already GDPR-compliant are well positioned for compliance with the Norwegian cookie guidelines as well.
What are the penalties for noncompliance with the new Norwegian cookie guidelines?
Companies operating websites and apps that do not comply with the new guidelines risk daily fines and government orders to improve their compliance activities. Fines can be up to 5 percent of the business’s total sales revenue for the preceding year, depending on the duration and severity of the violation.
Compliance is overseen by the Norwegian Communications Authority (NKOM) and Norwegian Data Protection Authority (Datatilsynet).
How Usercentrics enables cookie compliance
Usercentrics has been enabling ongoing data privacy compliance since the GDPR was implemented. In addition to helping companies meet their legal obligations, Usercentrics Web CMP and Usercentrics App CMP enable you to deliver better transparency and a great user experience.
Collect, securely store, and document valid user consent that meets Norwegian, EU, and/or international regulatory requirements while building trust with your users, helping you get the data you need and grow engagement and revenue.
Setup is designed for ease of use by technical and non-technical teams. Use one of our high-quality pre-built templates, or fully customize your consent banner to match your brand.
Our powerful scanning technology detects and automatically categorizes the cookies and tracking technologies you’re using, and we provide over 2,200 legal templates for data processing services in use, saving you time and resources during implementation and maintenance. A/B testing and in-depth analytics help you understand user interactions and consent choices so you can optimize your banner for higher consent rates.
Plus, you always get our expert guidance and detailed documentation every step of the way, so you can stay focused on your core business and harness the competitive advantage of Privacy-Led Marketing.
The United States does not have a comprehensive federal data privacy law that governs how businesses access or use individuals’ personal information. Instead, privacy protections and regulation are currently left to individual states. California led the way with the California Consumer Privacy Act (CCPA), which took effect in 2020 and was later strengthened by the California Privacy Rights Act (CPRA). As of January 2025, 20 states have passed similar laws. The variances in consumers’ rights, companies’ responsibilities, and other factors make compliance challenging for businesses operating in multiple states.
The American Data Privacy and Protection Act (ADPPA) sought to simplify privacy compliance by establishing a comprehensive federal privacy standard. The ADPPA emerged in June 2022 when Representative Frank Pallone introduced HR 8152 to the House of Representatives. The bill gained strong bipartisan support in the House Energy and Commerce Committee, passing with a 53-2 vote in July 2022. It also received amendments in December 2022. However, the bill did not progress any further.
As proposed, the ADPPA would have preempted most state-level privacy laws, replacing the current multi-state compliance burden with a single federal standard.
In this article, we’ll examine who the ADPPA would have applied to, its obligations for businesses, and the rights it would have granted US residents.
What is the American Data Privacy and Protection Act (ADPPA)?
The American Data Privacy and Protection Act (ADPPA) was a proposed federal bill that would have set consistent rules for how organizations handle personal data across the United States. It aimed to protect individuals’ privacy with comprehensive safeguards while requiring organizations to meet strict standards for handling personal data.
Under the ADPPA, an individual is defined as “a natural person residing in the United States.” Organizations that collect, use, or share individuals’ personal data would have been responsible for protecting it, including measures to prevent unauthorized access or misuse. By balancing individual rights and business responsibilities, the ADPPA sought to create a clear and enforceable framework for privacy nationwide.
What data would have been protected under the American Data Privacy and Protection Act (ADPPA)?
The ADPPA aimed to protect the personal information of US residents, which it refers to as covered data. Covered data is broadly defined as “information that identifies or is linked, or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to an individual.” In other words, any data that would either identify or could be traced to a person or to a device that is linked to an individual. This includes data that may be derived from other information and unique persistent identifiers, such as those used to track devices or users across platforms.
The definition excludes:
- Deidentified data
- Employee data
- Publicly available information
- Inferences made exclusively from multiple separate sources of publicly available information, so long as they don’t reveal private or sensitive details about a specific person
Sensitive covered data under the ADPPA
The ADPPA, like other data protection regulations, would have required stronger safeguards for sensitive covered data that could harm individuals if it was misused or unlawfully accessed. The bill’s definition of sensitive covered data is extensive, going beyond many US state-level data privacy laws.
Protected categories of data include, among other things:
- Personal identifiers, including government-issued IDs like Social Security numbers and driver’s licenses, except when legally required for public display.
- Health information, including details about past, present, or future physical and mental health conditions, treatments, disabilities, and diagnoses.
- Financial data, such as account numbers, debit and credit card numbers, income, and balance information. The last four digits of payment cards are excluded.
- Private communications, such as emails, texts, calls, direct messages, voicemails, and their metadata. This does not apply if the device is employer-provided and individuals are given clear notice of monitoring.
- Behavioral data, including sexual behavior information when collected against reasonable expectations, video content selections, and online activity tracking across websites.
- Personal records, such as private calendars, address books, photos, and recordings, except on employer-provided devices with notice.
- Demographic details, including race, color, ethnicity, religion, and union membership.
- Biological identifiers, including biometric and genetic information, as well as precise geolocation data and information about minors.
- Security credentials, such as login details or security or access codes for an account or device.
Who would the American Data Privacy and Protection Act (ADPPA) have applied to?
The ADPPA would have applied to a broad range of entities that handle covered data.
Covered entity under the ADPPA
A covered entity is “any entity or any person, other than an individual acting in a non-commercial context, that alone or jointly with others determines the purposes and means of collecting, processing, or transferring covered data.” This definition matches similar terms like “controller” in US state privacy laws and the European Union’s General Data Protection Regulation (GDPR). To qualify as a covered entity under the ADPPA, the organization would have had to be in one of three categories:
- Businesses regulated by the Federal Trade Commission Act (FTC Act)
- Telecommunications carriers
- Nonprofits
Although the bill did not explicitly address international jurisdiction, its reach could have extended beyond US borders. Foreign companies would have needed to comply if they handled US residents’ data for commercial purposes and met the FTC Act’s jurisdictional requirements, such as conducting business activities in the US or causing foreseeable injury within the US. This type of extraterritorial scope is common among other international data privacy laws.
Service provider under the ADPPA
A service provider was defined as a person or entity that does either of the following:
- Collects, processes, or transfers covered data on behalf of a covered entity or government body
OR
- Receives covered data from or on behalf of a covered entity or government body
This role mirrors what other data protection laws call a processor, including most state privacy laws and the GDPR.
Large data holders under the ADPPA
Large data holders were not considered a third type of organization. Both covered entities and service providers could have qualified as large data holders if, in the most recent calendar year, they had gross annual revenues of USD 250 million or more, and collected, processed, or transferred:
- Covered data of more than 5,000,000 individuals or devices, excluding data used solely for payment processing, or
- Sensitive covered data of more than 200,000 individuals or devices
Large data holders would have faced additional requirements under the ADPPA.
Third-party collecting entity under the ADPPA
The ADPPA introduced the concept of a third-party collecting entity, which refers to a covered entity that primarily earns its revenue by processing or transferring personal data it did not collect directly from the individuals to whom the data relates. In other contexts, they are often referred to as data brokers.
However, the definition excluded certain activities and entities:
- A business would not be considered a third-party collecting entity if it processed employee data received from another company, but only for the purpose of providing benefits to those employees
- A service provider would also not be classified as a third-party collecting entity under this definition
An entity is considered to derive its principal source of revenue from data processing or transfer if, in the previous 12 months, either:
- More than 50 percent of its total revenue came from these activities
or
- The entity processed or transferred the data of more than 5 million individuals that it did not collect directly
Third-party collecting entities that process data from more than 5,000 individuals or devices in a calendar year would have had to register with the Federal Trade Commission by January 31 of the following year. Registration would require a fee of USD 100 and basic information about the organization, including its name, contact details, the types of data it handles, and a link to a website where individuals can exercise their privacy rights.
Exemptions under the ADPPA
While the ADPPA potentially would have had a wide reach, certain exemptions would have applied.
- Small businesses: Organizations with less than USD 41 million in annual revenue or those that process data for fewer than 50,000 individuals would be exempt from some provisions.
- Government entities: The ADPPA would not apply to government bodies or their service providers handling covered data. It also excluded congressionally designated nonprofits that support victims and families with issues involving missing and exploited children.
- Organizations subject to other federal laws: Organizations already complying with certain existing privacy laws, including the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), and the Family Educational Rights and Privacy Act (FERPA), among others, were deemed compliant with similar ADPPA requirements for the specific data covered by those laws. However, they would have still been required to comply with Section 208 of the ADPPA, which contains provisions for data security and protection of covered data.
Definitions in the American Data Privacy and Protection Act (ADPPA)
Like other data protection laws, the ADPPA defined several terms that are important for businesses to know. While many — like “collect” or “process” — can be found in other regulations, there are also some that are unique to the ADPPA. We look at some of these key terms below.
Knowledge under the ADPPA
“Knowledge” refers to whether a business is aware that an individual is a minor. The level of awareness required depends on the type and size of the business.
- High-impact social media companies: These are large platforms that are primarily known for user-generated content, with at least USD 3 billion in annual revenue and 300 million monthly active users for at least 3 months in the preceding year. They would be considered to have knowledge if they were aware or should have been aware that a user was a minor. This is the strictest standard.
- Large data holders: These are organizations that have significant data operations but do not qualify as high-impact social media. They have knowledge if they knew or willfully ignored evidence that a user was a minor.
- Other covered entities or service providers: Those that do not fall into the above categories are required to have actual knowledge that the user is a minor.
Some states — like Minnesota and Nebraska — define “known child” but do not adjust the criteria for what counts as knowledge based on the size or revenue of the business handling the data. Instead, they apply the same standard to all companies, regardless of their scale.
Affirmative express consent under the ADPPA
The ADPPA uses the term “affirmative express consent,” which refers to “an affirmative act by an individual that clearly communicates the individual’s freely given, specific, and unambiguous authorization” for a business to perform an action, such as collecting or using their personal data. Consent for data collection would have to be obtained after the covered entity provides clear information about how it will use the data.
Like the GDPR and other data privacy regulations, consent would have needed to be freely given, informed, specific, and unambiguous.
Under this definition, consent cannot be inferred from an individual’s inaction or continued use of a product or service. Additionally, covered entities cannot trick people into giving consent through misleading statements or manipulative design. This includes deceptive interfaces meant to confuse users or limit their choices.
Transfer under the ADPPA
Most data protection regulations include a definition for the sale of personal data or personal information. While the ADPPA did not define sale, it instead defined “transfer” as “to disclose, release, disseminate, make available, license, rent, or share covered data orally, in writing, electronically, or by any other means.”
What are consumers’ rights under the American Data Privacy and Protection Act (ADPPA)?
Under the ADPPA, consumers would have had the following rights regarding their personal data.
- Right of awareness: The Federal Trade Commission (referred to in the bill as the Commission) must publish and maintain a webpage describing the provisions, rights, obligations, and requirements of the ADPPA for individuals, covered entities, and service providers. This information must be:
- Published within 90 days of the law’s enactment
- Updated quarterly as needed
- Available in the ten most commonly used languages in the US
- Right to transparency: Covered entities must provide clear information about how consumer data is collected, used, and shared. This includes which third parties would receive their data and for what purposes.
- Right of access: Consumers can access their covered data (including data collected, processed, or transferred within the past 24 months), categories of third parties and service providers who received the data, and the purpose(s) for transferring the data.
- Right to correction: Consumers can correct any substantial inaccuracies or incomplete information in their covered data and instruct the covered entity to notify all third parties or service providers that have received the data.
- Right to deletion: Consumers can request that their covered data processed by the covered entity be deleted. They can also instruct the covered entity to notify all third parties or service providers that have received the data of the deletion request.
- Right to data portability: Consumers can request their personal data in a structured, machine-readable format that enables them to transfer it to another service or organization.
- Right to opt out: Consumers can opt out of the transfer of their personal data to third parties and its use for targeted advertising. Businesses are required to provide a clear and accessible mechanism for exercise of this right.
- Private right of action: Consumers can sue companies directly for certain violations of the act, with some limitations and procedural requirements. (California is the only state to provide this right as of early 2025.)
What are privacy requirements under the American Data Privacy and Protection Act (ADPPA)?
The ADPPA would have required organizations to meet certain obligations when handling individuals’ covered data. Here are the key privacy requirements under the bill.
Consent
Organizations must obtain clear, explicit consent through easily understood standalone disclosures. Consent requests must be accessible, available in all service languages, and give equal prominence to accept and decline options. Organizations must provide mechanisms to withdraw consent that are as simple as giving it.
Organizations must avoid using misleading statements or manipulative designs, and must obtain new consent for different data uses or significant privacy policy changes. While the ADPPA works alongside the Children’s Online Privacy Protection Act (COPPA)’s parental consent requirements for children under 13, it adds its own protections for minors up to age 17.
Privacy policy
Organizations must maintain clear, accessible privacy policies that detail their data collection practices, transfer arrangements, retention periods, and rights granted to individuals. These policies must specify whether data goes to countries like China, Russia, Iran, or North Korea, which could present a security risk, and they must be available in all languages where services are offered. When making material changes, organizations must notify affected individuals in advance and give them a chance to opt out.
Data minimization
Organizations can only collect and process data that is reasonably necessary to provide requested services or for specific allowed purposes. These allowed purposes include activities like completing transactions, maintaining services, protecting against security threats, meeting legal obligations, and preventing harm or risk of death, among others. Collected data must also be proportionate to these activities.
Privacy by design
Privacy by design is a default requirement under the ADPPA. Organizations must implement reasonable privacy practices that consider the organization’s size, data sensitivity, available technology, and implementation costs. They must align with federal laws and regulations and regularly assess risks in their products and services, paying special attention to protecting minors’ privacy and implementing appropriate safeguards.
Data security
Organizations must establish, implement, and maintain appropriate security measures, including vulnerability assessments, preventive actions, employee training, and incident response plans. They must implement clear data disposal procedures and match their security measures to their data handling practices.
Privacy and data security officers
Organizations with more than 15 employees must appoint both a privacy officer and data security officer, who must be two distinct individuals. These officers are responsible for implementing privacy programs and maintaining ongoing ADPPA compliance.
Privacy impact assessments
Organizations — excluding large data holders and small businesses — must conduct regular privacy assessments that evaluate the benefits and risks of their data practices. These assessments must be documented and maintained, and consider factors like data sensitivity and potential privacy impacts.
Loyalty with respect to pricing
Organizations cannot discriminate against individuals who exercise their privacy rights. While they can adjust prices based on necessary financial information and offer voluntary loyalty programs, they cannot retaliate through changes in pricing or service quality, e.g. if an individual exercises their rights and requests their data or does not consent to certain data processing.
Special requirements for large data holders
In addition to their general obligations, large data holders would have had unique responsibilities under the proposed law.
Privacy policy
Large data holders would have been required to maintain and publish 10-year archives of their privacy policies on their websites. They would need to keep a public log documenting significant privacy policy changes and their impact. Additionally, they would need to provide a short-form notice (under 500 words) highlighting unexpected practices and sensitive data handling.
Privacy and data security officers
At least one of the appointed officers would have been designated as a privacy protection officer who reports directly to the highest official at the organization. This officer, either directly or through supervised designees, would have been required to do the following:
- Establish processes to review and update privacy and security policies, practices, and procedures
- Conduct biennial comprehensive audits to ensure compliance with the proposed law and make them accessible to the Commission upon request
- Develop employee training programs about ADPPA compliance
- Maintain detailed records of all material privacy and security practices
- Serve as the point of contact for enforcement authorities
Privacy impact assessments
While all organizations other than small businesses would be required to conduct privacy impact assessments under the proposed law, large data holders would have had additional requirements.
- Timing: While other organizations must conduct assessments within one year of the ADPPA’s enactment, large data holders would have been required to do so within one year of either becoming a large data holder or the law’s enactment, whichever came first.
- Scope: Both must consider nature and volume of data and privacy risks, but large data holders would need to specifically assess “potential adverse consequences” in addition to “substantial privacy risks.”
- Approval: Large data holders’ assessments would need to be approved by their privacy protection officer, while other entities would have no specific approval requirement.
- Technology review: Large data holders would need to include reviews of security technologies (like blockchain and distributed ledger technology); this review would be optional for other entities.
- Documentation: While both would need to maintain written assessments until the next assessment, large data holders’ assessments would also need to be accessible to their privacy protection officer.
Metrics reporting
Large data holders would be required to compile and disclose annual metrics related to verified access, deletion, and opt-out requests. These metrics would need to be included in their privacy policy or published on their website.
Executive certification
An executive officer would have been required to annually certify to the FTC that the large data holder has internal controls and a reporting structure in place to achieve compliance with the proposed law.
Algorithm impact assessments
Large data holders using covered algorithms that could pose a consequential risk of harm would be required to conduct an annual impact assessment of these algorithms. This requirement would be in addition to privacy impact assessments and would need to begin no later than two years after the Act’s enactment.
American Data Privacy and Protection Act (ADPPA) enforcement and penalties for noncompliance
The ADPPA would have established a multi-layered enforcement approach that set it apart from other US privacy laws.
- Federal Trade Commission: The FTC would serve as the primary enforcer, treating violations as unfair or deceptive practices under the Federal Trade Commission Act. The proposed law required the FTC to create a dedicated Bureau of Privacy for enforcement.
- State Attorneys General: State Attorneys General and State Privacy Authorities could bring civil actions on behalf of their residents if they believed violations had affected their state’s interest.
- California Privacy Protection Authority (CPPA): The CPPA, established under the California Privacy Rights Act, would have special enforcement authority. The CPPA could enforce the ADPPA in California in the same manner as it enforces California’s privacy laws.
Starting two years after the law would have taken effect, individuals would gain a private right of action, or the right to sue for violations. However, before filing a lawsuit, they would need to notify both the Commission and their state Attorney General.
The ADPPA itself did not establish specific penalties for violations. Instead, violations of the ADPPA or its regulations would be treated as violations of the Federal Trade Commission Act, subject to the same penalties, privileges, and immunities provided under that law.
The American Data Privacy and Protection Act (ADPPA) compared to other data privacy regulations
As privacy regulations continue to evolve worldwide, it’s helpful to understand how the ADPPA would compare with other comprehensive data privacy laws.
The EU’s GDPR has set the global standard for data protection since 2018. In the US, the CCPA (as amended by the CPRA) established the first comprehensive state-level privacy law and has influenced subsequent state legislation. Below, we’ll look at how the ADPPA compares with these regulations.
The ADPPA vs the GDPR
There are many similarities between the proposed US federal privacy law and the EU’s data protection regulation. Both require organizations to implement privacy and security measures, provide individuals with rights over their personal data (including access, deletion, and correction), and mandate clear privacy policies that detail their data processing activities. Both also emphasize data minimization principles and purpose limitation.
However, there are also several important differences between the two.
Aspect | ADPPA | GDPR |
---|---|---|
Territorial scope | Would have applied to individuals residing in the US. | Applies to EU residents and any organization processing their data, regardless of location. |
Consent | Not a standalone legal basis; required only for specific activities like targeted advertising and processing sensitive data. | One of six legal bases for processing; can be a primary justification. |
Government entities | Excluded federal, state, tribal, territorial and local government entities. | Applies to public bodies and authorities. |
Privacy officers | Required “privacy and security officers” for covered entities with more than 15 employees, with stricter rules for large data holders. | Requires a Data Protection Officer (DPO) for public authorities or entities engaged in large-scale data processing. |
Data transfers | No adequacy requirements; focus on transfers to specific countries (China, Russia, Iran, North Korea). | Detailed adequacy requirements and transfer mechanisms. |
Children’s data | Extended protections to minors up to age 17. | Focuses on children under 16 (can be lowered to 13 by member states). |
Penalties | Violations would have been treated as violations of the Federal Trade Commission Act. | Imposes fines up to 4% of annual global turnover or €20 million, whichever is higher. |
The ADPPA vs the CCPA/CPRA
There are many similarities between the proposed US federal privacy law and California’s existing privacy framework. Both include comprehensive transparency requirements, including privacy notices in multiple languages and accessibility for people with disabilities. They also share similar approaches to prohibiting manipulative design practices and requirements for regular security and privacy assessments.
However, there are also differences between the ADPPA and CCPA/CPRA.
Aspect | ADPPA | CCPA/CPRA |
---|---|---|
Covered entities | Would have applied to organizations under jurisdiction of the Federal Trade Commission, including nonprofits and common carriers; excluded government agencies. | Applies only to for-profit businesses meeting any of these thresholds: gross annual revenue of over USD 26,625,000; receiving, buying, selling, or sharing personal information of 100,000 or more consumers or households; or earning more than half of their annual revenue from the sale of consumers’ personal information |
Private right of action | Broader right to sue for various violations. | Limited to data breaches only. |
Data minimization | Required data collection and processing to be limited to what is reasonably necessary and proportionate. | Similar requirement, but the CPRA allows broader processing for “compatible” purposes. |
Algorithmic impact assessments | Required large data holders to conduct annual assessments focusing on algorithmic risks, bias, and discrimination. | Requires risk assessments weighing benefits and risks of data practices, with no explicit focus on bias. |
Executive accountability | Required executive certification of compliance. | No executive certification requirement. |
Enforcement | Would have been enforced by the Federal Trade Commission, State Attorneys General, and the California Privacy Protection Authority (CPPA). | CPPA and local authorities within California. |
Consent management and the American Data Privacy and Protection Act (ADPPA)
The ADPPA would have required organizations to obtain affirmative express consent for certain data processing activities through clear, conspicuous standalone disclosures. These consent requests would need to be easily understood, equally prominent for either accepting or declining, and available in all languages where services are offered. Organizations would also need to provide simple mechanisms for withdrawing consent that would be as easy to use as giving consent was initially. The bill also required organizations to honor opt-out requests for practices like targeted advertising and certain data transfers. These opt-out mechanisms would need to be accessible and easy to use, with clear instructions for exercising these rights.
Organizations would need to clearly disclose not only the types of data they collect but also the parties with whom this information is shared. Consumers would also need to be informed about their data rights and how to act on them, such as opting out of processing, through straightforward explanations and guidance.
To support transparency, organizations would also be required to maintain privacy pages that are regularly updated to reflect their data collection, use, and sharing practices. These pages would help provide consumers with access to the latest information about how their data is handled. Additionally, organizations would have been able to use banners or buttons on websites and apps to inform consumers about data collection and provide them with an option to opt out.
Though the ADPPA was not enacted, the US does have an increasing number of state-level data privacy laws. A consent management platform (CMP) like the Usercentrics CMP for website consent management or app consent management can help organizations streamline compliance with the many existing privacy laws in the US and beyond. The CMP securely maintains records of consent, automates opt-out processes, and enables consistent application of privacy preferences across an organization’s digital properties. It also helps to automate the detection and blocking of cookies and other tracking technologies that are in use on websites and apps.
Usercentrics does not provide legal advice, and information is provided for educational purposes only. We always recommend engaging qualified legal counsel or privacy specialists regarding data privacy and protection issues and operations.
How far can companies go to get a user’s consent? When does inconvenience or questionable user experience tip over into legally noncompliant manipulation? These continue to be important questions across the data privacy landscape, especially with mobile apps, an area where regulatory scrutiny and enforcement have been ramping up.
French social networking app BeReal requests users’ consent to use their data for targeted advertising, which is very common. However, how they go about presenting (and re-presenting) that request has led to a complaint against them relating to their GDPR compliance. Let’s look at what BeReal is doing to get user consent, what the complaint is, and the legal basis for it.
BeReal’s consent request: A false sense of choice?
According to noyb’s complaint, BeReal introduced a new consent banner feature for European users in July 2024. The contention is that this banner requested user consent for use of their data for targeted advertising, which is not unusual or problematic in itself. However, the question is whether the banner provides users with real consent choice or not.
Based on the description from the complaint, BeReal designed their banner to be displayed to users when they open the app. If a user accepts the terms — giving consent for data use for targeted advertising — then they never see the banner again. However, if a user declines consent, the banner allegedly reappears every day when users attempt to post on the app. As the app requires users to snap photos multiple times a day, seeing a banner display every time one tries to do so could be understandably frustrating.
In addition to resulting in an annoying user experience, this alleged action is also potentially a GDPR violation. It’s questionable if user consent under these described conditions is actually freely given.
The GDPR does require organizations to ask users for consent again if, for example, there have been changes in their data processing operations, such as collecting new data or using data for a new purpose.
It’s also recommended that organizations refresh user consent data from time to time, even though the GDPR doesn’t specify an exact time frame, as some other laws and guidelines do. For example, a company could ask users for consent for specific data uses every 12 months, either to ensure consent is still current, or to see if users who previously declined have changed their minds.
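A minimal TypeScript sketch of such a re-prompt policy follows. This is our own illustration of the principle, not BeReal’s actual logic: a stored refusal is honored just like a grant until a refresh interval elapses.

```typescript
// Minimal sketch of a consent re-prompt policy (illustrative only).
// The key idea: respect a stored decision, including a refusal, and only
// re-ask after a defined refresh interval, not on every app open.

interface StoredDecision {
  granted: boolean;  // true = consented, false = declined
  decidedAt: number; // epoch milliseconds of the last choice
}

const TWELVE_MONTHS_MS = 365 * 24 * 60 * 60 * 1000;

function shouldShowConsentBanner(decision: StoredDecision | null): boolean {
  if (decision === null) {
    return true; // no choice recorded yet: ask once
  }
  // A refusal is honored exactly like a grant until the interval passes.
  const elapsed = Date.now() - decision.decidedAt;
  return elapsed >= TWELVE_MONTHS_MS;
}

// Example: a user who declined yesterday is not re-prompted today.
const declinedYesterday: StoredDecision = {
  granted: false,
  decidedAt: Date.now() - 24 * 60 * 60 * 1000,
};
console.log(shouldShowConsentBanner(declinedYesterday)); // false
```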
The noyb complaint against BeReal
In December 2024, privacy advocacy group noyb (the European Center for Digital Rights) filed a complaint against BeReal with French data protection authority Commission Nationale de l’Informatique et des Libertés (CNIL), arguing that the company’s alleged repeated banner displays for non-consenting users are a form of “nudging” or use of dark patterns.
The CNIL is one of the EU data protection authorities that has previously announced increased enforcement of data privacy for mobile apps, and released guidelines for better privacy protection for mobile apps in September 2024.
While regulators have increasingly taken a dim view of various design manipulations to obtain users’ consent, like hiding the “reject” option, noyb argues BeReal’s actions are a new dark pattern trend: “annoying people into consent”. Simply put, they contend that BeReal does not take no for an answer, meaning consent obtained through this repeated tactic is not freely given, and thus is a clear violation of the GDPR’s requirements.
The noyb legal team has requested that the CNIL order BeReal to delete the personal data of affected users, modify its consent practices to be GDPR-compliant, and impose an administrative fine as a deterrent to other companies that may consider similar tactics.
European regulators take a dim view of manipulations to obtain user consent
Whether it’s making users go hunting to find the “reject” button (or removing it entirely), or wearing them down with constant banner displays until they give in and consent to the requested data use, the European Data Protection Board (EDPB) has seen and addressed similar issues before.
It’s generally understood that users are likely to give in over time out of fatigue or frustration and consent to the requested data use. Companies get what they want, but not in a way that is voluntary or a good user experience. The EDPB has emphasized that in addition to being specific, informed, and unambiguous, consent must be freely given. Persistent prompts can be a form of coercion, and thus consent received that way may not be legally valid (Art. 4 GDPR).
As technologies change over time, the ways in which dark patterns can be deployed to manipulate users into giving consent are likely to further evolve and become more sophisticated.
A fine balance: Data monetization and privacy compliance
It is a common challenge for companies to try to find ways to increase consent rates for access to user data to drive monetization strategies via their websites, apps, and other connected platforms. Cases like the one against BeReal could potentially set the tone for regulators’ increasingly stringent expectations for online platforms’ data operations, and the company could serve as a cautionary tale for others considering questionable tactics where user privacy is concerned.
As more individuals around the world are protected by more data privacy laws, what data companies are allowed to access, and under what circumstances, is becoming more strictly controlled. This increases the challenge for companies that need data for advertising, analytics, personalization, and other uses to grow their businesses.
Fortunately, there is a way to strike a balance between data privacy and data-driven business: clear, user-friendly consent management, a shift to reliance on zero- and first-party data, and embracing Privacy-Led Marketing by employing preference management and other strategies that foster engagement and long-term customer satisfaction and loyalty.
How Usercentrics helps
Good consent practices require making user experience better, not more frustrating. Usercentrics App CMP helps your company deliver, building trust with users and providing a smooth, friendly user experience for consent management. You can obtain higher consent rates while achieving and maintaining privacy compliance.
Simple, straightforward setup for technical and non-technical teams automates integration of your vendors, SSPs, and SDKs with the App Scanner. We provide over 2,200 pre-built legal templates so you can provide clear, comprehensive consent choices to your users.
With extensive customization, you can make sure your banners fit your app or game’s design and branding and provide key information, enabling valid user consent without getting in their way or causing frustration. And you also get our expert guidance and detailed documentation every step of the way.
Oregon was the twelfth state in the United States to pass comprehensive data privacy legislation with SB 619. Governor Tina Kotek signed the bill into law on July 18, 2023, and the Oregon Consumer Privacy Act (OCPA) came into effect for most organizations on July 1, 2024. Nonprofits have an extra year to prepare, so their compliance is required as of July 1, 2025.
In this article, we’ll look at the Oregon Consumer Privacy Act’s requirements, who they apply to, and what businesses can do to achieve compliance.
What is the Oregon Consumer Privacy Act (OCPA)?
The Oregon Consumer Privacy Act protects the privacy and personal data of over 4.2 million Oregon residents. The law establishes rules for any individual or entity conducting business in Oregon or those providing goods and services to its residents and processing their personal data. Affected residents are known as “consumers” under the law.
The OCPA protects Oregon residents’ personal data when they act as individuals or in household contexts. It does not cover personal data collected in a work context. This means information about individuals acting in their professional roles, rather than as consumers, is not covered under this law.
Consistent with the other US state-level data privacy laws, the OCPA requires businesses to inform residents about how their personal data is collected and used. This notification — usually included in a website’s privacy policy — must cover key details such as:
- What data is collected
- How the data is used
- Whether the data is shared and with whom
- Information about consumers’ rights
The Oregon privacy law uses an opt-out consent model, which means that in most cases, organizations can collect consumers’ personal data without prior consent. However, they must make it possible for consumers to opt out of the sale of their personal data and its use in targeted advertising or profiling. The law also requires businesses to implement reasonable security measures to protect the personal data they handle.
Who must comply with the Oregon Consumer Privacy Act (OCPA)?
Similar to many other US state-level data privacy laws, the OCPA sets thresholds that determine which organizations must comply with its requirements. However, unlike some other laws, it does not contain a revenue-only threshold.
To fall under the OCPA’s scope, during a calendar year an organization must control or process the personal data of:
- 100,000 consumers, not including consumers only completing payment transactions
or
- 25,000 consumers if 25 percent or more of the organization’s annual gross revenue comes from selling personal data
Exemptions to OCPA compliance
The OCPA is different from some other data privacy laws because many of its exemptions focus on the types of data being processed and what processing activities are being conducted, rather than just on the organizations themselves.
For example, instead of exempting healthcare entities under the Health Insurance Portability and Accountability Act (HIPAA), the OCPA exempts protected health information handled in compliance with HIPAA. This means protected health information is outside of the OCPA’s scope, but other data that a healthcare organization handles could still fall under the law. Organizations that may be exempt from compliance with other state-level consumer privacy laws should consult a qualified legal professional to determine if they are required to comply with the OCPA.
Exempted organizations and their services or activities include:
- Governmental agencies
- Consumer reporting agencies
- Financial institutions regulated by the Bank Act and their affiliates or subsidiaries, provided they focus exclusively on financial activities
- Insurance companies
- Nonprofit organizations established to detect and prevent insurance fraud
- Press, wire, or other information services (and the non-commercial activities of media entities)
Personal data collected, processed, sold, or disclosed under the following federal laws is also exempt from the OCPA’s scope:
- Health Insurance Portability and Accountability Act (HIPAA)
- Gramm-Leach-Bliley Act (GLBA)
- Health Care Quality Improvement Act
- Fair Credit Reporting Act (FCRA)
- Driver’s Privacy Protection Act
- Family Educational Rights and Privacy Act (FERPA)
- Airline Deregulation Act
Definitions in the Oregon Consumer Privacy Act (OCPA)
This Oregon data privacy law defines several key terms related to the data it protects and relevant data processing activities.
What is personal data under the OCPA?
The Oregon privacy law protects consumers’ personal data, which it defines as “data, derived data or any unique identifier that is linked to or is reasonably linkable to a consumer or to a device that identifies, is linked to or is reasonably linkable to one or more consumers in a household.”
The law specifically excludes personal data that is:
- deidentified
- made legally available through government records or widely distributed media
- made public by the consumer
The law does not specifically list what constitutes personal data. Common types of personal data that businesses collect include a consumer’s name, phone number, email address, Social Security Number, or driver’s license number.
It should be noted that personal data (also called personal information under some state privacy laws) and personally identifiable information are not always the same thing, and distinctions between the two are often made in data privacy laws.
What is sensitive data under the OCPA?
Sensitive data is personal data that requires special handling because it could cause harm or embarrassment if misused or unlawfully accessed. It refers to personal data that would reveal an individual’s:
- Racial or ethnic background
- National origin
- Religious beliefs
- Mental or physical condition or diagnosis
- Genetic or biometric data
- Sexual orientation
- Status as transgender or non-binary
- Status as a victim of crime
- Citizenship or immigration status
- Precise present or past geolocation (within 1,750 feet or 533.4 meters)
All personal data belonging to children is also considered sensitive data under the OCPA.
Oregon’s law is the first US privacy law to include transgender or non-binary status, or status as a victim of crime, as sensitive data. The definition of biometric data excludes facial geometry or mapping unless it is done for the purpose of identifying an individual.
An exception to the law’s definition of sensitive data includes “the content of communications or any data generated by or connected to advanced utility metering infrastructure systems or equipment for use by a utility.” In other words, the law does not consider sensitive information to include communications content, like that in emails or messages, or data generated by smart utility meters and related systems used by utilities.
What is consent under the OCPA?
Like many other data privacy laws, the Oregon data privacy law follows the European Union’s General Data Protection Regulation (GDPR) regarding the definition of valid consent. Under the OCPA, consent is “an affirmative act by means of which a consumer clearly and conspicuously communicates the consumer’s freely given, specific, informed and unambiguous assent to another person’s act or practice…”
The definition also includes conditions for valid consent:
- the consumer’s inaction does not constitute consent
- the user interface used to request consent must not attempt to obscure, subvert, or impair the consumer’s choice
These conditions are highly relevant to online consumers and reflect that the use of manipulative dark patterns is increasingly frowned upon by data protection authorities and, in many cases, prohibited. The Oregon Department of Justice (DOJ) website also clarifies that the use of dark patterns may be considered a deceptive business practice under Oregon’s Unlawful Trade Practices Act.
What is processing under the OCPA?
Processing under the OCPA means any action or set of actions performed on personal data, whether manually or automatically. This includes activities like collecting, using, storing, disclosing, analyzing, deleting, or modifying the data.
Who is a controller under the OCPA?
The OCPA uses the term “controller” to describe businesses or entities that decide how and why personal data is processed. While the law uses the word “person,” it applies broadly to both individuals and organizations.
The OCPA definition of controller is “a person that, alone or jointly with another person, determines the purposes and means for processing personal data.” In simpler terms, a controller is anyone who makes the key decisions about why personal data is collected and how it will be used.
Who is a processor under the OCPA?
The OCPA defines a processor as “a person that processes personal data on behalf of a controller.” Like the controller, while the law references a person, it typically refers to businesses or organizations that handle data for a controller. Processors are often third parties that follow the controller’s instructions for handling personal data. These third parties can include advertising partners, payment processors, or fulfillment companies, for example. Their role is to carry out specific tasks without deciding how or why the data is processed.
What is profiling under the OCPA?
Profiling is increasingly becoming a standard inclusion in data privacy laws, particularly as it can relate to “automated decision-making” or the use of AI technologies. The Oregon privacy law defines profiling as “an automated processing of personal data for the purpose of evaluating, analyzing or predicting an identified or identifiable consumer’s economic circumstances, health, personal preferences, interests, reliability, behavior, location or movements.”
What is targeted advertising under the OCPA?
Targeted advertising may involve emerging technologies like AI tools. It is also becoming a standard inclusion in data privacy laws. The OCPA defines targeted advertising as advertising that is “selected for display to a consumer on the basis of personal data obtained from the consumer’s activities over time and across one or more unaffiliated websites or online applications and is used to predict the consumer’s preferences or interests.” In simpler terms, targeted advertising refers to ads shown to a consumer based on their interests, which are determined by personal data that is collected over time from different websites and apps.
However, some types of ads are excluded from this definition, such as those that are:
- Based on activities within a controller’s own websites or online apps
- Based on the context of a consumer’s current search query, visit to a specific website, or app use
- Shown in response to a consumer’s request for information or feedback
The definition also excludes processing of personal data solely to measure or report an ad’s frequency, performance, or reach.
What is a sale under the OCPA?
The OCPA defines sale as “the exchange of personal data for monetary or other valuable consideration by the controller with a third party.” This means a sale doesn’t have to involve money. Any exchange of data for something of value, even if it’s non-monetary, qualifies as a sale under the law.
The Oregon privacy law does not consider the following disclosures of personal data to be a “sale”:
- Disclosures to a processor
- Disclosures to an affiliate or a third party to help the controller provide a product or service requested by the consumer
- Disclosures or transfers of personal data as part of a merger, acquisition, bankruptcy, or similar transaction in which a third party takes control of the controller’s assets, including personal data
- Disclosures of personal data that occur because the consumer:
- directs the controller to disclose the data
- intentionally discloses the data while directing the controller to interact with a third party
- intentionally discloses the data to the public, such as through mass media, without restricting the audience
Consumers’ rights under the Oregon Consumer Privacy Act (OCPA)
The Oregon privacy law grants consumers a range of rights over their personal data, comparable to other US state-level privacy laws.
- Right to access: consumers can request confirmation of whether their personal data is being processed and the categories of personal data being processed, gain access to the data, and receive a list of the specific third parties it has been shared with (other than natural persons), all subject to some exceptions.
- Right to correction: consumers can ask controllers to correct inaccurate or outdated information they have provided.
- Right to deletion: consumers can request the deletion of their personal data held by a controller, with some exceptions.
- Right to portability: consumers can obtain a copy of the personal data they have provided to a controller, in a readily usable format, with some exceptions.
- Right to opt out: consumers can opt out of the sale of their personal data, targeted advertising, or profiling used for decisions with legal or similarly significant effects.
Consumers can designate an authorized agent to opt out of personal data processing on their behalf. The OCPA also requires controllers to recognize universal opt-out signals, further simplifying the opt-out process.
This Oregon data privacy law stands out by giving consumers the right to request a specific list of third parties that have received their personal data. Unlike many other privacy laws, this one requires controllers to maintain detailed records of the exact entities they share data with, rather than just general categories of recipients.
Children’s personal data has special protections under the OCPA. Parents or legal guardians can exercise rights for children under the age of 13, whose data is classified as sensitive personal data and subject to stricter rules. For minors between the ages of 13 and 15, opt-in consent is required for specific processing activities, including the use of their data for targeted advertising or profiling. “Opt-in” means that explicit consent is required before the data can be used for these purposes.
Consumers can make one free rights request every 12 months, to which an organization has 45 days to respond. Organizations can extend that period by another 45 days if reasonably necessary. Organizations can also deny consumer requests for a number of reasons. These include cases in which the consumer’s identity cannot reasonably be verified, or if the consumer has made too many requests within a 12-month period.
Oregon’s privacy law does not include a private right of action, so consumers cannot sue data controllers for violations. California remains the only state whose privacy law includes this provision.
What are the privacy requirements under the Oregon Consumer Privacy Act (OCPA)?
Controllers must meet the following OCPA requirements to protect the personal data they collect from consumers.
Privacy notice and transparency under the OCPA
The Oregon privacy law requires controllers to be transparent about their data handling practices. Controllers must provide a clear, easily accessible, and meaningful privacy notice for consumers whose personal data they may process. The privacy notice, also known as the privacy policy, must include the following:
- Purpose(s) for processing personal data
- Categories of personal data processed, including the categories of sensitive data
- Categories of personal data shared with third parties, including categories of sensitive data
- Categories of third parties with which the controller shares personal data and how each third party may use the data
- How consumers can exercise their rights, including:
- How to opt out of processing for targeted advertising or profiling
- How to submit a consumer rights request
- How to appeal a controller’s denial of a rights-related request
- The identity of the controller, including any business name the controller uses or has registered in Oregon
- At least one actively monitored online contact method, such as an email address, for consumers to directly contact the organization
- A “clear and conspicuous description” for any processing of personal data for the purpose of targeted advertising or profiling “in furtherance of decisions that produce legal effects or effects of similar significance”
According to the Oregon DOJ website, the third-party categories requirement must strike a particular balance. It should offer consumers meaningful insights into the relevant types of businesses or processing activities, without making the privacy notice overly complex. Acceptable examples include “analytics companies,” “third-party advertisers,” and “payment processors,” among others.
The privacy notice or policy must be easy for consumers to access. It is typically linked in the website footer for visibility and accessibility from any page.
Data minimization and purpose limitation under the OCPA
The OCPA requires controllers to limit the personal data they collect to only what is “adequate, relevant, and reasonably necessary” for the purposes stated in the privacy notice. If the purposes for processing change, controllers must notify consumers and, where applicable, obtain their consent.
Data security under the OCPA
The Oregon data privacy law requires controllers to establish, implement, and maintain reasonable safeguards for protecting “the confidentiality, integrity and accessibility” of the personal data under their control. The data security measures also apply to deidentified data.
Oregon’s existing laws about privacy practices remain in effect as well. These laws include requirements for reasonable administrative, technical, and physical safeguards for data storage and handling, IoT device security features, and truth in privacy and consumer protection notices.
Data protection assessments (DPAs) under the OCPA
Controllers must perform data protection assessments (DPAs), also known as data protection impact assessments, for processing activities that present “a heightened risk of harm to a consumer.” These activities include:
- Processing for the purposes of targeted advertising
- Processing sensitive data
- The sale of personal data
- Processing for the purposes of profiling if there is a reasonably foreseeable risk to the consumer of:
- Unfair or deceptive treatment
- Financial, physical, or reputational injury
- Intrusion into a consumer’s private affairs
- Other substantial injury
The Attorney General may also require a data controller to conduct a DPA or share the results of one in the course of an investigation.
Consent requirements under the OCPA
The OCPA primarily uses an opt-out consent model. This means that in most cases controllers are not required to obtain consent from consumers before collecting or processing their personal data. However, there are specific cases where consent is required:
- Processing sensitive data requires explicit consent from consumers.
- For children’s data, the OCPA follows the federal Children’s Online Privacy Protection Act (COPPA) and requires consent from a parent or legal guardian before processing the personal data of any child under 13.
- Controllers must obtain explicit consent to use the personal data of minors between the ages of 13 and 15 for targeted ads, profiling, or sale.
- Controllers must obtain consent to use personal data for purposes other than those originally disclosed in the privacy notice.
To help consumers make informed decisions about their consent, controllers must clearly disclose details about the personal data being collected, the purposes for which it is processed, who it is shared with, and how consumers can exercise their rights. Controllers must also provide clear, accessible information on how consumers can opt out of data processing.
Consumers must be able to revoke consent at any time, as easily as they gave it. Once consent is revoked, the controller must stop the related data processing as soon as possible, and no later than 15 days after receiving the revocation.
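To illustrate the 15-day outer limit, here is a minimal sketch of how an in-house consent store might track revocations and compute the date by which processing must stop. All type and function names are hypothetical, not part of the OCPA or any particular consent management tool.

```typescript
// Sketch: tracking consent revocations against the OCPA's 15-day outer limit.
// All names are hypothetical and not tied to any specific CMP or API.

interface ConsentRecord {
  purpose: string;  // e.g. "targeted_advertising"
  grantedAt: Date;
  revokedAt?: Date; // set when the consumer withdraws consent
}

// The OCPA's outer limit for stopping processing after a revocation.
const REVOCATION_GRACE_DAYS = 15;

// Latest date by which processing tied to a revoked consent must have stopped.
function processingStopDeadline(revokedAt: Date): Date {
  const deadline = new Date(revokedAt);
  deadline.setDate(deadline.getDate() + REVOCATION_GRACE_DAYS);
  return deadline;
}

// True if processing under this record is still permissible right now.
// Best practice is to stop immediately; 15 days is only the legal maximum.
function mayStillProcess(record: ConsentRecord, now: Date = new Date()): boolean {
  if (!record.revokedAt) return true;
  return now < processingStopDeadline(record.revokedAt);
}
```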
Nondiscrimination under the OCPA
The OCPA prohibits controllers from discriminating against consumers who exercise their rights under the law. This includes actions such as:
- Denying goods or services
- Charging different prices or rates than those available to other consumers
- Providing a different level of quality or selection of goods or services to the consumer
For example, if a consumer opts out of data processing on a website, that individual cannot be blocked from accessing that website or its functions.
Some website features and functions do not work without certain cookies or trackers, so if a consumer declines their use because they collect personal data, the site may not work as intended. This is not considered discriminatory.
This Oregon privacy law permits website operators and other controllers to offer voluntary incentives for consumers’ participation in activities where personal data is collected, such as newsletter signups, surveys, and loyalty programs. Offers must be reasonable and proportionate to the request and to the type and amount of data collected, so that they do not amount to bribes or payments for consent, which data protection authorities frown upon.
Third party contracts under the OCPA
Before starting any data processing activities, controllers must enter into legally binding contracts with third-party processors. These contracts govern how processors handle personal data on behalf of the controller, and must include the following provisions:
- The processor must ensure that all individuals handling personal data are bound by a duty of confidentiality
- The contract must provide clear instructions for data processing, detailing:
- The nature and purpose of processing
- The types of data being processed
- The duration of the processing
- The rights and obligations of both the controller and the processor
- The processor must delete or return the personal data at the controller’s direction or after the services have ended, unless legal obligations require the data to be retained
- Upon request, the processor must provide the controller with all necessary information to verify compliance with contractual obligations
- If the processor hires subcontractors, they must have contracts in place requiring the subcontractors to meet the processor’s obligations
- The contract must allow the controller or their designee to conduct assessments of the processor’s policies and technical measures to ensure compliance
These contracts are known as data processing agreements under some data protection regulations like the GDPR.
Universal opt-out mechanism under the OCPA
As of January 1, 2026, organizations subject to the OCPA must recognize universal opt-out mechanisms. Also called global opt-out signals, these include tools like the Global Privacy Control (GPC).
This mechanism enables a consumer to set their data processing preferences once and have those preferences automatically communicated to any website or platform that detects the signal. Preferences are typically set via a web browser setting or extension.
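As an illustration, the following minimal sketch shows how a website might detect the GPC signal client-side. The `globalPrivacyControl` property comes from the GPC specification; the opt-out handler is a hypothetical placeholder for whatever suppression logic a site actually needs.

```typescript
// Sketch: detecting the Global Privacy Control (GPC) signal client-side.
// Browsers that implement GPC expose navigator.globalPrivacyControl; the same
// preference also reaches servers as a "Sec-GPC: 1" request header.

// The property is not yet in all DOM type definitions, so we widen Navigator.
declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

function hasUniversalOptOut(): boolean {
  return navigator.globalPrivacyControl === true;
}

// Hypothetical handler: treat the visitor as opted out of data sales and
// targeted advertising, e.g. by skipping ad and analytics tags.
function applyOptOutPreferences(): void {
  console.log("GPC detected: suppressing sale and targeted-advertising processing");
}

if (hasUniversalOptOut()) {
  applyOptOutPreferences();
}

export {}; // keeps this file a module so the global augmentation is valid
```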
While this requirement is not yet standard across all US or global data privacy laws, it is becoming more common in newer legislation. Other states that require controllers to recognize global opt-out signals include California, Minnesota, Nebraska, Texas, and Delaware.
How to comply with the Oregon Consumer Privacy Act (OCPA)
Below is a non-exhaustive checklist to help your business and website address key OCPA requirements. For advice specific to your organization, consulting a qualified legal professional is strongly recommended.
- Provide a clear and accessible privacy notice detailing data processing purposes, shared data categories, third-party recipients, and consumer rights.
- Maintain a specific list of third parties with whom you share consumers’ personal data.
- Limit data collection to what is necessary for the specified purposes, and notify consumers if those purposes change.
- Obtain consent from consumers if you plan to process their data for purposes other than those that have been communicated to them.
- Implement reasonable safeguards to protect the confidentiality, integrity, and accessibility of personal and deidentified data.
- Conduct data protection assessments for processing activities with heightened risks, such as targeted advertising, activities involving sensitive data, or profiling.
- Implement a mechanism for consumers to exercise their rights, and communicate this mechanism to consumers.
- Obtain explicit consent for processing sensitive data, children’s data, or for purposes not initially disclosed.
- Provide consumers with a user-friendly method to revoke consent.
- Once consumers withdraw consent, stop all data processing related to that consent within the required 15-day period.
- Provide a simple and clear method for consumers to opt out of data processing activities.
- Avoid discriminatory practices against consumers exercising their rights, while offering reasonable incentives for data-related activities.
- Include confidentiality, compliance obligations, and terms for data return or deletion in binding contracts with processors.
- Comply with global opt-out signals like the Global Privacy Control by January 1, 2026.
Enforcement of the Oregon Consumer Privacy Act (OCPA)
The Oregon Attorney General’s office is the enforcement authority for the OCPA. Consumers can file complaints with the Attorney General regarding data processing practices or the handling of their requests. The Attorney General’s office must notify an organization of any complaint against it and of any investigation that is launched. During investigations, the Attorney General can request controllers to submit data protection assessments and other relevant information. Enforcement actions must be initiated within five years of the last violation.
Controllers have the right to have an attorney present during investigative interviews and can refuse to answer questions. The Attorney General cannot bring in external experts for interviews or share investigation documents with non-employees.
Until January 1, 2026, controllers have a 30-day cure period during which they can fix OCPA violations. If the issue is not resolved within this time, the Attorney General may pursue civil penalties. After the right to cure sunsets on January 1, 2026, offering an opportunity to cure will be solely at the discretion of the Attorney General.
Fines and penalties for noncompliance under the OCPA
The Attorney General can seek civil penalties up to USD 7,500 per violation. Additional actions may include seeking court orders to stop unlawful practices, requiring restitution for affected consumers, or reclaiming profits obtained through violations.
If the Attorney General succeeds, the court may require the violating party to cover legal costs, including attorney’s fees, expert witness fees, and investigation expenses. However, if the court determines that the Attorney General pursued a claim without a reasonable basis, the defendants may be entitled to recover their attorney’s fees.
How does the Oregon Consumer Privacy Act (OCPA) affect businesses?
The OCPA introduces privacy law requirements that are similar to other state data protection laws. These include obligations around notifying consumers about data practices, granting them access to their data, limiting data use to specific purposes, and implementing reasonable security measures.
One notable distinction is that the law sets different compliance timelines based on an organization’s legal status. The effective date for commercial entities is July 1, 2024, while nonprofit organizations are given an additional year and must comply by July 1, 2025.
Since the compliance deadline for commercial entities has already passed, businesses that fall under the OCPA’s scope should ensure they meet its requirements as soon as possible to avoid penalties. Nonprofits, though they have more time, should actively prepare for compliance.
Businesses covered by federal laws like HIPAA and the GLBA, which may exempt them from other state data privacy laws, should confirm with a qualified legal professional whether they need to comply with the OCPA.
The Oregon Consumer Privacy Act (OCPA) and consent management
Oregon’s law is based on an opt-out consent model. In other words, consent does not need to be obtained before collecting or processing personal data unless it is sensitive or belongs to a child.
Controllers do need to inform consumers about what data is collected and used and for what purposes, as well as with whom it is shared, and whether it is to be sold or used for targeted advertising or profiling.
Consumers must also be informed of their rights regarding data processing and how to exercise them. This includes the ability for consumers to opt out of processing of their data or change their previous consent preferences. Typically, this information is presented on a privacy page, which must be kept up to date.
As of January 1, 2026, organizations must also recognize and respect consumers’ consent preferences as expressed via a universal opt-out signal.
Websites and apps can use a banner to inform consumers about data collection and enable them to opt out. This is typically done using a link or button. A consent management platform (CMP) like the Usercentrics CMP for website consent management or app consent management also helps to automate the detection of cookies and other tracking technologies that are in use on websites and apps.
A CMP can streamline sharing information about data categories and the specific services in use by the controller and/or processor(s), as well as third parties with whom data is shared.
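As a rough illustration of how consent-gated tag loading commonly works (a generic pattern, not Usercentrics’ actual implementation), tracking scripts can be embedded in an inert form and only activated once the matching consent is given:

```typescript
// Generic sketch of consent-gated script activation (not a specific CMP's API).
// Tags are embedded with type="text/plain" plus a data attribute naming their
// consent category, so the browser does not execute them on page load, e.g.:
//   <script type="text/plain" data-consent="marketing" src="https://example.com/ads.js"></script>

function activateScripts(category: string): void {
  const selector = `script[type="text/plain"][data-consent="${category}"]`;
  document.querySelectorAll<HTMLScriptElement>(selector).forEach((blocked) => {
    const active = document.createElement("script");
    if (blocked.src) {
      active.src = blocked.src;   // external script
    } else {
      active.text = blocked.text; // inline script
    }
    blocked.replaceWith(active);  // default script type executes as JavaScript
  });
}

// Example: called by the banner once the visitor accepts marketing trackers.
// activateScripts("marketing");
```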
The United States still only has a patchwork of state-level privacy laws rather than a single federal law. As a result, many companies doing business across the country, or foreign organizations doing business in the US, may need to comply with a variety of state-level data protection laws.
A CMP can make this easier by enabling banner customization and geotargeting. Websites can display data processing and consent information, along with the choices required under specific regulations, based on the user’s location. Geotargeting can also improve clarity and user experience by presenting this information in the user’s preferred language.
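To sketch the idea behind geotargeted banner configuration, a simplified mapping from region to banner behavior might look like the following. The shape, region codes, and names are purely illustrative and are not the Usercentrics configuration format.

```typescript
// Illustrative sketch of region-based banner configuration (hypothetical format,
// not the Usercentrics configuration schema).
type ConsentModel = "opt-in" | "opt-out";

interface RegionBannerConfig {
  regulation: string;         // law referenced in the banner text
  consentModel: ConsentModel; // whether consent is required before processing
  language: string;           // banner display language ("auto" = browser locale)
}

const bannerByRegion: Record<string, RegionBannerConfig> = {
  "EU":    { regulation: "GDPR",      consentModel: "opt-in",  language: "auto" },
  "US-CA": { regulation: "CCPA/CPRA", consentModel: "opt-out", language: "en" },
  "US-OR": { regulation: "OCPA",      consentModel: "opt-out", language: "en" },
};

// Fall back to the strictest configuration when the visitor's region is unknown.
function bannerConfigFor(region: string): RegionBannerConfig {
  return bannerByRegion[region] ?? bannerByRegion["EU"];
}
```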
Usercentrics does not provide legal advice, and information is provided for educational purposes only. We always recommend engaging qualified legal counsel or a privacy specialist regarding data privacy and protection issues and operations.
Microsoft Universal Event Tracking (UET) with Consent Mode helps businesses responsibly manage data while optimizing digital advertising efforts. UET is a tracking tool from Microsoft Advertising that collects user behavior data to help businesses measure conversions, optimize ad performance, and build remarketing strategies.
Consent Mode works alongside UET. It’s a feature that adjusts how data is collected based on user consent preferences. This functionality is increasingly important as businesses address global privacy regulations like the GDPR and CCPA.
For companies using Microsoft Ads, understanding and implementing these tools helps them prioritize user privacy, build trust, and achieve better marketing outcomes while respecting data privacy standards.
What is Microsoft UET Consent Mode?
Microsoft UET Consent Mode is a feature designed to help businesses respect user privacy while maintaining effective advertising strategies. It works alongside Microsoft Universal Event Tracking (UET) by dynamically adjusting how data is collected based on user consent.
When visitors interact with your website, Consent Mode determines whether tracking is activated or limited, depending on their preferences. For instance, if a user opts out of tracking, Consent Mode restricts data collection. This function aligns the tracking process with privacy preferences and applicable regulations.
Consent Mode supports businesses as they balance privacy expectations with effective campaign management. It also helps businesses align their data practices with Microsoft’s advertising policies and regional privacy laws to create a more transparent and user-focused approach to data management.
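To make this concrete, the minimal sketch below shows the consent signals as documented for the UET tag: a default state of “denied” set before any tracking fires, followed by an update once the visitor consents. It assumes the UET tag snippet is already installed on the page; the CMP callback name is hypothetical.

```typescript
// Sketch of Microsoft UET Consent Mode signals; assumes the UET tag snippet
// is already installed and has defined the window.uetq queue.
declare global {
  interface Window {
    uetq?: unknown[];
  }
}

// Before any tracking fires: default ad storage consent to "denied" so UET
// limits data collection until the visitor makes a choice.
window.uetq = window.uetq || [];
window.uetq.push("consent", "default", { ad_storage: "denied" });

// Hypothetical CMP callback: once the visitor grants consent, update the
// signal so UET can resume full conversion tracking and remarketing.
function onMarketingConsentGranted(): void {
  window.uetq = window.uetq || [];
  window.uetq.push("consent", "update", { ad_storage: "granted" });
}

export {}; // keeps this file a module so the global augmentation is valid
```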
Why businesses need Microsoft UET Consent Mode
The role of UET in advertising
Microsoft Universal Event Tracking (UET) offers businesses the tools they need to optimize advertising strategies. With a simple tag integrated into a business’s website, UET helps advertisers monitor essential user actions like purchases, form submissions, and page views. This data is invaluable for building remarketing audiences, tracking conversions, and making data-backed decisions that improve ad performance.
However, effectively collecting and utilizing this data requires alignment with user consent preferences. Without proper consent, businesses risk operating outside privacy regulations, and could face penalties or restrictions. By integrating UET with Consent Mode, businesses can respect user choices while continuing to access the insights needed to run impactful advertising campaigns.
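For instance, a conversion such as a completed purchase is typically reported through the UET queue’s custom event call, roughly as sketched below; the parameter values are illustrative.

```typescript
// Sketch: reporting a purchase conversion through the UET queue
// (UET tag assumed installed; parameter values are illustrative).
declare global {
  interface Window {
    uetq?: unknown[];
  }
}

window.uetq = window.uetq || [];
window.uetq.push("event", "purchase", {
  event_category: "ecommerce", // optional grouping for reporting
  event_label: "checkout",     // optional descriptive label
  revenue_value: 49.99,        // conversion revenue used in reporting
  currency: "USD",
});

export {};
```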
Challenges in advertising compliance
In today’s digital age, businesses must carefully balance data-driven advertising with growing privacy expectations. Regulations like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA) have set clear rules about how user data can be collected, stored, and used. Non-compliance can lead to significant consequences, such as hefty fines, restricted access to ad platforms, or even account suspension.
Beyond financial and operational risks, non-compliance can damage a company’s reputation. When businesses fail to address privacy concerns, they risk losing customer trust—a resource that is difficult to rebuild. As users become more aware of how their data is used, businesses that fail to adopt transparent practices may struggle to retain their audience.
Enforcement of Microsoft UET Consent Mode
Microsoft Advertising requires that customers explicitly obtain consent and provide consent signals by May 5, 2025.
Providing consent signals enables Microsoft Ads customers to comply with the requirements of privacy laws like the GDPR, where violations can result in hefty fines and other penalties.
Obtaining explicit consent also demonstrates respect for users’ privacy and rights, building user trust. Consumers increasingly indicate concerns over access to and use of their data online.
Consent also benefits advertising performance as part of your Privacy-Led Marketing strategy, enabling you to continue generating valuable insights into campaigns for effective targeting and conversion tracking.
Benefits of using Microsoft UET Consent Mode
By integrating Microsoft UET Consent Mode, companies can address user expectations, improve data accuracy, and create a more transparent relationship with their audience. Let’s take a closer look at the benefits of using Microsoft UET Consent Mode.
Supporting privacy regulations
Privacy laws such as the GDPR, CCPA, and the ePrivacy Directive require businesses to handle user data responsibly. Microsoft UET Consent Mode adjusts data collection practices based on user preferences, helping companies better align with these requirements. By respecting user choices, businesses can reduce the risks associated with non-compliance.
Accurate data collection
Data accuracy is a key component of any successful advertising strategy. With Consent Mode, businesses only collect insights from users who agree to data tracking. This focus helps prevent skewed data caused by collecting information from users who have not consented. These insights are therefore more reliable and actionable.
Optimized ad campaigns
Consent Mode enables businesses to continue leveraging tools like remarketing and conversion tracking while honoring user privacy preferences. This functionality helps advertisers maintain the effectiveness of their campaigns by focusing on audiences who have opted into tracking. As a result, companies can make data-driven decisions without compromising privacy.
Building trust through transparency
Demonstrating respect for user privacy goes beyond privacy compliance — it also fosters trust. Transparency about how data is collected and used enables businesses to strengthen their relationships with customers. A privacy-first approach can set companies apart in a competitive advertising environment by showing users that their choices and rights are valued.
Why use Usercentrics Web CMP with Microsoft UET Consent Mode
Usercentrics Web CMP provides businesses with a practical solution for integrating Microsoft UET with Consent Mode. By leveraging Usercentrics Web CMP’s unique features, companies can manage user consent effectively while maintaining a seamless advertising strategy.
Streamlined implementation
Usercentrics Web CMP simplifies the process of integrating Microsoft Consent Mode. With automated configuration, businesses can set up their systems quickly and focus on optimizing their campaigns without the complexities of manual implementation.
Seamless compatibility
As one of the first consent management platforms to offer automated support for Microsoft Consent Mode, Usercentrics Web CMP is designed for smooth integration with Microsoft UET. This compatibility reduces technical challenges and supports reliable functionality.
Customizable consent banners
The CMP enables businesses to design consent banners that align with their branding, creating a consistent user experience. Clear, branded messaging helps communicate data collection practices effectively while maintaining professionalism.
Privacy-focused data management
Usercentrics Web CMP provides a centralized platform for managing user consent across different regions and regulations. Businesses can easily adapt to global privacy requirements and organize their data collection practices efficiently, all in one place.
How to set up Microsoft UET with Consent Mode using Usercentrics Web CMP
Usercentrics Web CMP simplifies the process of setting up Microsoft UET with Consent Mode. As one of the first platforms to offer automated implementation of Microsoft Consent Mode, Usercentrics Web CMP enables companies to focus on their marketing efforts while managing user consent effectively.
Integrating Microsoft UET with Consent Mode using Usercentrics Web CMP takes only a few steps. For a detailed walkthrough, refer to the support article.
Adapting to Privacy-Led Marketing with Microsoft UET Consent Mode
Microsoft UET with Consent Mode, supported by Usercentrics Web CMP, provides businesses with a practical approach to balancing effective advertising with user privacy. With this solution, companies can streamline consent management, enhance their advertising strategies, and adapt to ever-changing privacy expectations.
Respecting user choices isn’t just about privacy compliance—it’s an opportunity to build trust and demonstrate a commitment to transparency. Businesses that embrace Privacy-Led Marketing position themselves as trustworthy partners in a competitive digital marketplace.
Adopting Privacy-Led Marketing does more than support long-term customer relationships. It also enables companies to responsibly leverage valuable insights to optimize their campaigns. Microsoft UET with Consent Mode and Usercentrics Web CMP together create a strong foundation for businesses to effectively navigate the intersection of privacy and performance.