Artificial intelligence (AI) seems to be everywhere, attracting almost as much investment funding as media attention. From a data privacy perspective, however, regulators are still catching up with the technology.
Data is a critical part of developing AI, so data privacy (and, by extension, user consent) is an important part of the question. Governments are now starting to weigh in, with the passage of the European Union’s Artificial Intelligence Act and Colorado’s AI Act in the US, and the drafting of a US federal bill, the Future of Artificial Intelligence Innovation Act.
The European Union AI Act is likely to be highly influential, as the EU’s General Data Protection Regulation (GDPR) was when it came into effect in 2018. With AI being integrated into everything from recruitment to the marketing stack to cybersecurity, it’s important to understand what the EU AI Act includes and excludes, how it will affect businesses, and how data privacy compliance fits in.
EU AI Act summary
The European Union AI Act is a law on artificial intelligence (AI) adopted by the European Parliament in March 2024. It is the world’s first comprehensive law to regulate AI. It aims to balance the technology’s positive uses against its negative ones while codifying rights. A further goal is to clarify many current and future questions about AI development and to make the Act a global standard, as the GDPR has become.
The primary goals of the AI Act are twofold: to respect and protect the fundamental rights of EU citizens, while also boosting innovation. Parliamentarians agreed that how the Act is implemented will be key to achieving these goals.
European Commission President Ursula von der Leyen noted the Act’s historic and global potential: “Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”
EU AI Act timeline
The AI Act proposal was originally released in April 2021. In December 2023, the European Commission, Council of the European Union and European Parliament reached a political agreement on the AI Act. The Act was adopted in March 2024, with the plan for it to come into force 20 days after the Act’s publication in the EU Official Journal. The EU has 24 official languages, so the Act’s final text has to be extensively translated, which will take some time.
European AI Act overview of risk categories
The law assigns applications of AI technology to one of several risk-based categories, each carrying its own obligations, which are outlined below.
Political agreement on EU AI Act rules
All parties agreed on several main rule categories:
- safeguards regarding general purpose artificial intelligence
- limitations on law enforcement’s use of biometric identification systems
- a ban on social scoring using AI
- a ban on using AI to manipulate users or exploit their vulnerabilities
- the right for consumers to launch complaints and receive meaningful responses
Banned AI applications under the EU AI regulations
Certain applications of AI by corporations, governments, law enforcement, etc., have been banned under the Act, with some exceptions, based on recognized potential threats to the rights, health, and safety of citizens and democracy more generally.
- biometric categorization systems that use sensitive characteristics, aka sensitive data (e.g. political, religious or philosophical beliefs, sexual orientation, race, etc.)
- untargeted scraping of facial images from the internet or closed-circuit television (CCTV) footage to create facial recognition databases (remote biometric identification)
- emotion recognition in the workplace and educational institutions
- social scoring based on social behavior or personal characteristics
- AI systems that manipulate human behavior to circumvent people’s free will
- AI used to exploit the vulnerabilities of people (due to age, disability, social or economic situation, etc.)
Learn more: Not all data is created equal. Learn the differences between PII vs. personal data.
EU AI Act high risk categories
AI with reasonably high potential risks to health, safety, human rights, the environment, etc. is allowed, but is subject to certain requirements like maintaining use logs, ensuring transparency and accuracy, and ensuring human oversight, as well as assessments (before and after going on the market) to reduce risks. This category includes:
- critical infrastructure that could risk citizens lives and health (e.g. transportation)
- essential private and public services (e.g. healthcare and banking, such as credit scoring that affects loan qualification)
- education and vocational training (as it could influence access to education and professional opportunities, e.g. exam scoring)
- employment (including management of workers and access to self-employment, e.g. software for resume sorting)
- justice and democratic processes (e.g. court rulings, election processes)
- law enforcement (certain systems, e.g. evidence evaluation)
- migration, asylum, and border management (e.g. visa application examinations)
General purpose AI (GPAI) — risks and obligations under the EU AI regulation
General purpose AI includes tools and applications that tend to be widely available to academia, business, and consumers, e.g. ChatGPT and similar tools. There are further safeguards for more powerful AI models that pose greater systemic risks, including:
- additional risk management obligations
- monitoring of serious incidents
- evaluation of models/modeling
- red teaming (adopting an adversarial approach to rigorously challenge plans, policies, systems, etc.)
Codes of practice around these new requirements will be jointly developed by industry, the scientific community, the public, and others.
It is understood that GPAI systems can do a wide variety of tasks and analysis, and such systems’ capabilities are rapidly expanding. As a result, certain “guardrails” have been agreed upon as control mechanisms:
- transparency requirements making clear what the systems are designed to do, how, with what data, and for what purposes
- detailed summaries about content used to train AI systems will need to be disseminated
- adherence to EU copyright law
- comprehensive technical documentation
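These guardrails amount to a documentation checklist for each model. A minimal sketch of how a provider might track them internally follows; all class and field names here are illustrative assumptions, not terms defined in the Act.

```python
from dataclasses import dataclass


@dataclass
class GPAIModelRecord:
    """Hypothetical internal record of the Act's GPAI transparency guardrails."""
    model_name: str
    intended_purposes: list[str]   # what the system is designed to do, and why
    training_data_summary: str     # detailed summary of content used for training
    respects_eu_copyright: bool    # adherence to EU copyright law
    technical_docs_url: str        # comprehensive technical documentation

    def missing_disclosures(self) -> list[str]:
        """Return the guardrail items that are still undocumented."""
        gaps = []
        if not self.intended_purposes:
            gaps.append("intended purposes")
        if not self.training_data_summary:
            gaps.append("training data summary")
        if not self.respects_eu_copyright:
            gaps.append("EU copyright adherence")
        if not self.technical_docs_url:
            gaps.append("technical documentation")
        return gaps
```

A record with gaps would surface them before release, e.g. `GPAIModelRecord("demo", [], "", False, "").missing_disclosures()` lists all four items.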
GPAI models with potential high impact and systemic risks will have additional and more stringent requirements:
- conducting model/modeling evaluations
- assessing and mitigating systemic risks
- conducting adversarial testing
- reporting to the European Commission on serious incidents
- ensuring strong cybersecurity
- reporting on energy efficiency
- reliance on codes of practice for regulatory compliance (until harmonized EU standards are published)
A wide variety of industries, systems and tools can and will be identified as high risk under the Act, including healthcare, financial systems, public infrastructure, and the legal system.
Where AI is used in these areas, assessments will have to be done before and after launch, and risk mitigation will have to be implemented or bolstered. Datasets used will have to be of confirmed high quality, and summaries of copyrighted data will have to be published. Documentation and logging will have to be detailed, human oversight will be required, information for users will need to be clear, and strong cybersecurity measures will need to be taken and maintained. Authorities will also provide regulatory sandboxes to facilitate testing of organizations’ systems.
Individuals will be able to launch complaints about AI systems and have the right to receive explanations about decisions based on high-risk AI system activities that may impact their rights.
Support for innovation and SMEs with AI solutions under the EU AI Act
The legislators understand that AI tools and systems can be strong drivers of innovation in business, and do not want companies, especially SMEs, to be hamstrung by excessive regulation or pressured by giants with outsized industry influence.
To help mitigate these possibilities, the agreement under the Act promotes the use of regulatory “sandboxes” for development, as well as real-world testing for innovations. National authorities will establish these environments and initiatives to develop and train AI before it is launched to the market.
EU AI Act governance
An AI Office will be established at the EU level, within the European Commission. It will work to coordinate national governance among member countries and supervise enforcement of general purpose AI rules. National authorities within the EU will govern the Act more directly, through qualified market surveillance.
EU AI Act enforcement and fines
Under the Act there will be multiple levels of fines based on risk and severity of the violation. There are caps on potential fines for startups and SMEs.
Consent provisions in the EU AI regulation
User consent and data privacy and protection are addressed in the Act’s statutes on a number of fronts.
EU AI Act compliance
Companies that acquire data for AI training or other uses in the EU need to ensure that consent has been obtained from the sources or users. In some cases, proof of consent may be a requirement for doing business with partners or vendors.
Consent is also becoming important to monetization strategy. For example, increasingly, premium advertisers are insisting on proof of consent for collection of user data before partnering with app developers.
Companies that collect user data from their own platforms and users for AI training or other uses have direct responsibility for obtaining valid consent and complying with data protection laws. There are a number of ways companies can achieve compliance and valid consent.
Providing transparency to users for privacy compliance
Privacy laws require clear, accessible notifications, and companies should provide understandable information to users about how user data will be used and processed, including for AI training. As the uses for personal data change, companies need to update their privacy notices, inform users, and, under many privacy laws, get new consent for the new uses of personal data.
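The re-consent requirement described above can be sketched as a simple set comparison. This is a minimal illustration; the function name and purpose labels are assumptions, not any CMP’s actual API.

```python
def needs_new_consent(consented_purposes: set[str], current_purposes: set[str]) -> set[str]:
    """Return processing purposes the user has not yet consented to.

    A non-empty result means the privacy notice must be updated and fresh
    consent requested before processing for those purposes begins.
    """
    return current_purposes - consented_purposes
```

For example, if a user consented to `{"analytics", "ads"}` and the company later adds AI training as a purpose, `needs_new_consent({"analytics", "ads"}, {"analytics", "ads", "ai_training"})` returns `{"ai_training"}`, flagging the purpose that requires new consent.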
Enabling granular consent from users for privacy compliance
Users must be able to accept or decline the collection and processing of their personal data, but they should be able to do it at a detailed level, e.g. approving some kinds of processing, like targeted advertising or AI training, but not others, like sale of the data. This also helps ensure people are informed, which is a requirement for consent to be valid under most privacy laws. A Consent Management Platform (CMP) like Usercentrics CMP enables providing granular information and obtaining specific consent from users.
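In code, a granular consent record might look something like the following. This is a hypothetical sketch; the class names and purpose labels are illustrative and not Usercentrics CMP’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PurposeConsent:
    purpose: str          # e.g. "targeted_advertising", "ai_training", "data_sale"
    granted: bool
    timestamp: datetime   # when the choice was recorded, for auditability


class ConsentRecord:
    """Per-user, per-purpose consent: accept some purposes, decline others."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self._choices: dict[str, PurposeConsent] = {}

    def set_choice(self, purpose: str, granted: bool) -> None:
        self._choices[purpose] = PurposeConsent(purpose, granted, datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        # No recorded choice means no consent: processing must not proceed.
        choice = self._choices.get(purpose)
        return choice is not None and choice.granted
```

The key design point is the last line: a purpose the user was never asked about is treated as declined, which matches the opt-in consent model most EU privacy laws require.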
Employing user-friendly mechanisms to obtain user consent
Just as notifications must be clear and accessible, the way users accept or decline consent must be easy to understand and access. Information about data processing must be available there, along with the ability to consent or decline at a granular level. It must also be as easy to decline consent as to accept it, and under many privacy laws users must be able to easily change their consent preferences.
Achieve and maintain regulatory familiarity with the European AI Act and other laws
Different jurisdictions have different privacy laws with different requirements and consent models. It’s important for companies to know which laws they need to comply with, and how to do so. It can be important to consult with or appoint qualified legal counsel or a privacy expert, e.g. a data protection officer (DPO), which is also required by some privacy laws. Such a role helps to establish guidelines and processes, update operations, and manage security for data and processing.
ChatGPT AI Act coverage
The EU AI regulations don’t ban any specific technology or company, so ChatGPT and its parent company, OpenAI, can still do business in the EU. ChatGPT is a General Purpose AI (GPAI) model, which we’ve covered, and given its popularity it was most certainly considered by the European Commission and others involved in drafting and finalizing the European Artificial Intelligence Act. (Interestingly, when the EU AI regulation was first drafted, these technologies didn’t exist or weren’t widely available.)
GPAIs are now categorized under the AI Act as “conventional GPAIs” or “systemic-risk GPAIs”. There are minimal documentation requirements for conventional GPAIs. However, more rigorous oversight must be applied to systemic-risk GPAIs. The distinction is important to help ensure that GPAI models are governed, and that the framework for doing so still enables innovation while providing safety and accountability.
AI and cookies under the EU AI regulation
Use of cookies online has been declining as there are newer and better technologies to accomplish what cookies are used for. The question today and going forward is less how AI uses cookies, or may do so, and more how AI could accelerate the replacement of cookies.
Apple and Mozilla have blocked third-party cookies, and Google plans to deprecate them entirely. New tools and methods also enable better data privacy and consent, and can result in higher quality user data.
Current cookie consent models may not be sufficient to cover AI use, since AI systems may analyze large amounts of data in real-time, rather than tools analyzing data from active cookies over time. For consent to be obtained before data collection or use begins, with current pop-ups the user would have to be bombarded with consent banners faster and more often than a human could process them.
AI models can enable more effective ads or personalized user experiences without relying on collection of personally identifiable information, as they can analyze large amounts of data very quickly to group people into audiences based on behaviors. If the system doesn’t need to collect user data, then consent may not be needed, at least for the data collection.
Laws and best practices would likely still require users to be notified of how their behaviors could be tracked and analyzed, and what that analysis could be used for, e.g. personalized ads or shopping experiences. But people’s personal data couldn’t be sold if it was never collected.
European Artificial Intelligence Act and data protection
Research firm Gartner has predicted that by the end of 2024, 75 percent of the world’s population will be protected by at least one data privacy regulation. However, training AI requires huge amounts of data, much of which belongs to individuals or has been collected by companies. There have already been issues and a number of lawsuits launched over data scraping to train AI models done without owner consent or compensation.
AI and new consent requirements under data privacy law
Many data privacy laws also require companies to obtain new, specific consent from customers and users if the purposes for their collection and processing of personal data change. So if companies want to use consumers’ data for training AI or similar new uses, they would need to request new consent from everyone whose data would be used.
Consumers are increasingly savvy these days about data privacy and their rights where their personal data is concerned. Even if they may not understand how AI systems and other functions work in detail, they understand if they have or haven’t consented to such systems using their data, and are likely to want to know specifically what for beyond just “training”.
Many companies have paid for the intellectual property they use and aren’t inclined to simply let it be used (and possibly replicated and sold) by AI startups. We are starting to see more licensing deals through which AI startups like OpenAI gain access to the data stores of large companies like Apple, Microsoft, News Corp., and others.
Does it matter where AI training data sets come from?
There are ever more potential sources of user data, especially online, like social platforms and apps. It can also be tricky for companies to determine their data privacy responsibilities when the company is headquartered in one place but potentially has users around the world. This can make an organization responsible for complying with multiple different privacy regulations. Many such laws are extraterritorial, meaning that where users are located, not where the company is based, determines which rights and protections apply.
A lot of consumers don’t focus too much on just how much data they create on a daily basis, who might have access to it, and how it could be used. Children may not pay attention or fully understand user data generation or processing at all, even though most data privacy laws require extra protections and consent for access to their data. That consent must typically be obtained from a parent or legal guardian if the child is under a certain age threshold determined by the specific law.
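The age-threshold logic described above can be sketched as follows. The regime labels and values are illustrative: the GDPR’s default threshold is 16 (member states may lower it to as low as 13), and the US COPPA applies to children under 13.

```python
# Illustrative thresholds only; always confirm against the applicable law's text.
PARENTAL_CONSENT_AGE = {
    "GDPR_default": 16,  # GDPR Art. 8 default; member states may set 13-16
    "COPPA_US": 13,      # US COPPA covers children under 13
}


def requires_parental_consent(age: int, regime: str) -> bool:
    """True if consent must come from a parent or legal guardian under the given regime."""
    return age < PARENTAL_CONSENT_AGE[regime]
```

So a 12-year-old requires parental consent under both regimes, while a 13-year-old does under the GDPR default but not under COPPA.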
A number of data privacy laws do not cover personal data that people make publicly available, which could include that generated on social platforms. Perhaps posts, comments, and photos are not a big privacy concern to some. But what about private messages or chats? Those could contain far more sensitive material.
Once data has been collected, ideally with user consent, people should know what happens to it. It’s a condition of most privacy laws that the controller—the entity responsible for collecting and using the data—notify users about what data will be collected and for what purposes. If those purposes change, under many privacy laws the controller must notify users and get new consent. With AI training, this could require a lot of granular detail, and could change often.
Challenges with obtaining AI consent from users
Because AI systems are often still experimental and their results unpredictable, some data privacy requirements can be tricky to meet. Organizations can notify users about what they intend to use data for, but what the data actually gets used for, how it may be changed, or the results of using it may turn out to be different.
While users are supposed to be notified before any new purpose is put in place, those doing the work may not know of the change until it’s happened. If data is being analyzed in vast quantities in real time, traditional mechanisms for obtaining user consent, like cookie banners, may not be fast or granular enough, or otherwise sufficient.
User-facing AI systems can be potentially manipulative, resulting in users providing information they didn’t anticipate. Systems may also surface more sophisticated and nebulous connections between data points, enabling identification and profiling at a level we have not seen before. This could potentially turn just about any data into personally identifiable or sensitive data. Current consent requirements may not adequately address this.
While manipulative user interface and user experience tactics commonly known as dark patterns are increasingly frowned upon and, in some cases, regulated against, regulation tends to focus on tactics that are already familiar. AI-driven responsive design could enable the development of new and more sophisticated ways of manipulating users.
Transparency requirements for AI systems and use under the European AI Act
The agreed-upon rules for general purpose AI include requirements for transparency about data sources, purposes, etc. But transparency will be a requirement for many systems and uses of AI. Summaries of copyrighted data used for training need to be published. Users will have to be informed if they are interacting with a chatbot, for example. AI-generated or edited content must be labeled. If biometrics categorization or emotion recognition systems are in use, users who may be affected must be informed.
Exemptions for AI use by law enforcement under the European Artificial Intelligence Act
AI-powered tools and systems can be extremely useful to law enforcement, but risks to personal privacy and human rights are also recognized. So a series of safeguards and well-defined exemptions have been agreed upon with regards to the use of real-time biometric identification systems, such as facial recognition, by law enforcement in public spaces.
Such access will require prior judicial authorization, and will be limited to strictly defined lists of crimes. Use of biometric identification systems after the fact, e.g. reviewing footage and analysis, would be done only in the case of a targeted search for a person who has been convicted of a serious crime or is suspected of having committed one.
Use of real-time biometric identification systems would be limited by time and location, and for the following purposes:
- targeted searches for victims (e.g. of abduction, trafficking, or sexual exploitation)
- prevention of a specific and present terrorist threat
- localization or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, environmental crime)
Future of Artificial Intelligence Innovation Act
The Future of Artificial Intelligence Innovation Act is current federal legislation in the United States, which has had two readings and been referred to the Committee on Commerce, Science, and Transportation. Like the American Privacy Rights Act (APRA), the current federal privacy legislation in the US, it’s still a long way from becoming law.
The AI legislation recognizes that AI needs to be regulated, but also agrees that policies should “maximize the potential and development of AI to benefit all private and public stakeholders”. One section focuses on identifying regulatory barriers to innovation.
There would also be a focus on creating international alliances within the US government and with other countries to work together on AI innovation and to coordinate and promote the development and adoption of common AI standards.
Oversight and enforcement of the Future of Artificial Intelligence Innovation Act
The Act would authorize the establishment of the Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) for oversight, “with the mission of assisting the private sector and agencies in developing voluntary standards and best practices for AI.”
The Institute would have three primary functions:
- conducting research, evaluation, testing, and supporting voluntary standards development
- developing voluntary guidance and best practices for the use and development of AI
- engaging with the private sector, international standards organizations, and multilateral organizations to promote AI innovation and competitiveness
As yet, the world has far less AI regulation than data privacy regulation, but the latter has expanded significantly in only a few years, and AI regulation is likely to develop even faster. Clear, robust standards that protect people and companies, including their privacy and data, while enabling international collaboration and innovation, are the strongest and most sustainable way forward. The EU AI Act is a good start.
To learn more about EU data privacy requirements, AI Act’s stipulations for consent, and how you can comply, talk to our experts today.
What you need to know: More and more US states are passing data privacy laws. We compare data privacy laws by state and explain what you need to know for compliance.
Learn more: The EU and US once again have an agreement to govern international data transfers. We have everything you need to know about the EU-U.S. Data Privacy Framework.
Read more: All privacy laws require companies to notify users about data use and their rights. Learn what you need for a compliant privacy policy.
Usercentrics does not provide legal advice, and information is provided for educational purposes only. We always recommend engaging qualified legal counsel or privacy specialists regarding data privacy and protection issues and operations.