Artificial intelligence (AI) and data privacy

Businesses use artificial intelligence for data analysis to improve predictions and decisions. This brings data privacy and security responsibilities, as well as opportunities.
by Usercentrics
Apr 12, 2022

It is increasingly common for organizations to have at least heard of artificial intelligence (AI) and the capabilities it promises. Some would already consider us to live in an AI-driven world. Many companies use it to analyze data sets, enable predictions of customer behaviors, mimic human decision-making or categorize vast, highly complicated sets of information, among other functions.

 

AI can be a powerful asset, but it can also present a threat to data privacy and security, and issues with regulation, especially as artificial intelligence evolves. How can companies use AI to improve their business operations while also prioritizing user privacy and data protection? A good place to start is by defining just what artificial intelligence is, what we want it to do, and how AI can affect privacy.

Defining artificial intelligence for business leaders

AI can mean a lot of things depending on the usage of the term and the intended audience. It’s sometimes used interchangeably with “machine learning” and assumed to have something to do with “big data”. (Machine learning is a subset of AI and big data is generally just large, complex data sets, often from new sources.) For business leaders, AI can refer to different technologies, like automation and complex analytics, that make business operations more efficient. When we refer to “data-driven decision-making”, AI can help do the “driving”.

 

Organizations are usually looking for ways to streamline operations, generate better predictions and insights, and make smarter decisions to drive revenue. Artificial intelligence promises to do just that.

 

AI’s goal of streamlining businesses’ decisions and operations is often predicated on its ability to simulate the behavior of real people, ideally large numbers of them. This simulation is what makes AI “intelligent”, and it’s what contributes to the many different ways that AI is used in business, particularly when it comes to data usage, customer management, and marketing.

How businesses use artificial intelligence

AI systems need lots of data to simulate human behavior. When they have it, those systems can extract useful insights from previous customer behaviors that can be highly valuable to businesses. There will always be high-touch and customer-facing functions that can’t optimally be automated or handed over to AI. But there are plenty that can, freeing up resources to enable humans to focus where they need to, and to use the resulting analyses to help make big decisions.

 

Companies that are active on multiple online platforms, for example, can obtain data from many sources. Some data is provided by users directly, like contact information or purchase histories. Other data is collected “behind the scenes”, for example through the use of cookies and other tracking technologies.

 

The more data sources AI can draw on and analyze, the more quickly and accurately it can learn about and predict consumer patterns, such as navigation habits, grouped interests, purchasing behaviors, and more.
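To make the idea concrete, here is a minimal sketch of the kind of pattern learning described above: a simple purchase-propensity model trained on behavioral signals combined from several sources. The feature names and data are hypothetical illustrations, and scikit-learn is assumed to be available.

```python
# Minimal sketch: predicting purchase intent from combined behavioral signals.
# All feature names and data below are hypothetical illustrations.
from sklearn.linear_model import LogisticRegression

# Each row combines signals from different sources: pages viewed per session
# (site analytics), past purchases (order system), email clicks (marketing tool).
X = [
    [12, 3, 5],  # highly engaged user
    [2, 0, 0],   # low-engagement user
    [8, 1, 2],
    [1, 0, 1],
    [15, 4, 7],
    [3, 0, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = purchased within 30 days, 0 = did not

model = LogisticRegression().fit(X, y)

# Predict the purchase probability for a new visitor's combined profile.
new_visitor = [[10, 2, 3]]
print(model.predict_proba(new_visitor)[0][1])  # probability of purchase
```

In a real deployment those features would be assembled from analytics, order, and marketing systems, which is exactly why the consent and transparency questions discussed below matter.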

 

Moving from large-scale data analysis to more granular applications, AI can also influence customer management and marketing. AI systems continuously learn and get “smarter” the longer they analyze data sets, and the more data they have to work with. They can improve automated processes like customer relationship management and strategic marketing analysis.

 

However, these systems only learn from the instructions and data they are given, and there have already been cases where human biases skewed the resulting analysis. Preventing discriminatory or inaccurate analysis is also part of the responsibility to protect data and users.
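One way engineering teams can catch such skew is to compare a model’s outcomes across demographic groups. The following is a minimal sketch, not a complete fairness audit; the group labels, predictions, and the 0.8 threshold (a common rule of thumb) are illustrative assumptions.

```python
# Minimal sketch: comparing model outcomes across groups to surface possible bias.
# Groups, predictions, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

positives = defaultdict(int)
totals = defaultdict(int)
for group, outcome in predictions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # e.g. {'group_a': 0.75, 'group_b': 0.25}

# Flag a large disparity in favorable-outcome rates between groups.
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Warning: outcome rates differ substantially across groups")
```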

 

To make full use of AI as a tool, businesses require a lot of data to “feed” it. But the more data a business needs, the more it needs to ensure transparency about what the data will be used for, that it has a legitimate legal basis to use it, and/or the consent of the people the data belongs to. This is where data privacy regulations come in.

Artificial intelligence and privacy laws

European Union

 

AI processing makes use of vast amounts of data, some of which could be sensitive personal information, and some of which could be used to identify individuals through analysis. There is also the risk that anonymized data can be deanonymized (possibly using AI itself), or that it was not sufficiently anonymized to begin with.
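To illustrate the deanonymization risk, here is a minimal sketch of a linkage attack using hypothetical records: joining an “anonymized” data set to a publicly available one on quasi-identifiers such as postcode and birth year can re-attach names to supposedly anonymous records.

```python
# Minimal sketch of a linkage attack: "anonymized" records (no names) can be
# re-identified by joining on quasi-identifiers. All records are hypothetical.

# Released data set with direct identifiers removed.
anonymized = [
    {"postcode": "10115", "birth_year": 1985, "diagnosis": "condition_x"},
    {"postcode": "80331", "birth_year": 1992, "diagnosis": "condition_y"},
]

# Publicly available data (e.g. a voter or member register).
public = [
    {"name": "Person A", "postcode": "10115", "birth_year": 1985},
    {"name": "Person B", "postcode": "80331", "birth_year": 1992},
]

# Join on the quasi-identifiers to re-attach names to "anonymous" records.
for record in anonymized:
    for person in public:
        if (record["postcode"], record["birth_year"]) == (
            person["postcode"], person["birth_year"]
        ):
            print(person["name"], "->", record["diagnosis"])
```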

 

This raises the legal and ethical question of how to establish and maintain user privacy while still obtaining and processing the data companies need to power AI.

 

The European Union already has the General Data Protection Regulation (GDPR) to protect people’s privacy and data broadly. Article 15 addresses the “right of access by the data subject” and refers directly to “automated decision-making” in point 1(h). Article 22 also directly addresses “automated individual decision-making, including profiling”, which would include AI use. “Automated decision-making” is probably the most common AI-related language to appear in privacy laws.

 

In 2021 the European Commission proposed the Artificial Intelligence Act to govern the development, marketing and use of AI in the EU, to broadly “harmonize rules on artificial intelligence”.

 

This law would have a significant impact on how AI is regulated and used within the EU, but also among trading partners (anywhere EU residents’ data may be processed). Like the GDPR, it would also provide the ability to severely penalize noncompliance. The proposed regulation has four sub-objectives:

  • ensuring AI systems in the EU are safe and respect fundamental rights and values
  • fostering investment and innovation in AI
  • enhancing governance and enforcement
  • encouraging a single European market for AI

Overall, it aims to balance addressing risks without hindering innovation.

 

 

United States

 

The US does not yet have a federal privacy law. California’s Consumer Privacy Act (CCPA), like the GDPR, addresses data protection and user privacy broadly, but doesn’t specifically address AI or its use. Even so, companies would need to disclose AI usage among their purposes for collecting and processing data, e.g. that browser history will be used for decisions made using algorithms.

 

Companies also need to make a significant effort at clarity and transparency regarding more complex uses and insights from AI that users may not readily understand, like psychological or behavioral insights that can be determined from the data analysis.

 

Since AI needs a great deal of data, it is likely this data would come from a number of sources. This would mean a significant requirement for strong data protection and security practices when the data is collected, shared, and stored. If AI processing is done by a third party, they and the data controller for whom they’re working must also be careful to comply with regulatory requirements for the safeguarding and use of data for AI analysis.

 

The California Privacy Rights Act (CPRA), which goes into effect in 2023, addresses technologies like AI and their use more explicitly. Under the Act, consumers have the right to understand (and opt out of) automated decision-making technologies, which would include AI and machine learning.

 

And again, data from multiple sources that may be harmonized must be handled very carefully. For example, if two people with very similar personal information, coming from more than one system, are erroneously merged into one record, the consequences go beyond one person receiving offers they’re not interested in. It could be a legal violation, because the person receiving the offers never opted in. It may have been the other person who agreed to data collection and communications, but who, from the standpoint of the harmonized data, has ceased to exist.
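A minimal sketch of how this can happen, using hypothetical records and a deliberately naive matching rule: two distinct people are fused into one record, and the consent status of one silently overwrites the other’s.

```python
# Minimal sketch: a naive similarity-based merge conflates two distinct people,
# so the marketing consent of one is wrongly applied to the other.
# Records and matching rule are hypothetical.

crm_record = {"name": "J. Smith", "city": "Springfield", "marketing_consent": False}
shop_record = {"name": "J Smith", "city": "Springfield", "marketing_consent": True}

def naive_match(a, b):
    """Match on normalized name + city: too weak to tell two J. Smiths apart."""
    normalize = lambda s: s.replace(".", "").replace(" ", "").lower()
    return normalize(a["name"]) == normalize(b["name"]) and a["city"] == b["city"]

if naive_match(crm_record, shop_record):
    # The later-seen record "wins": the non-consenting person now appears to consent.
    merged = {**crm_record, **shop_record}
    print(merged)  # {'name': 'J Smith', ..., 'marketing_consent': True}
```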

 

Also, untangling these records is not a simple matter, and may require even more third parties to access the data — who were not part of the original processing plan — in order to resolve the issue.

 

Under the CPRA, the California Privacy Protection Agency (CPPA) is also being created, and part of its mandate is to issue “regulations governing access and opt-out rights with respect to businesses’ use of automated decision-making technology, including profiling and requiring businesses’ response to access requests to include meaningful information about the logic involved in such decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer”.

 

 

Brazil

 

Brazil’s Lei Geral de Proteção de Dados (LGPD) addresses AI use in Article 20, referring to data subjects’ “right to request a review of decisions taken solely on the basis of automated processing of personal data affecting his/her interests, including decisions aimed at defining his/her personal, professional, consumer and credit profile or personality aspects.” This is in line with the GDPR and previously proposed legislation in Canada.

 

It further requires controllers to provide, if requested, “clear and adequate information regarding the criteria and procedures used for the automated decision, in compliance with commercial and industrial secrets.” Where secrecy is invoked, the national data protection authority (ANPD) can perform an audit to verify if there are any discriminatory aspects of the automated data processing.

 

 

South Africa

 

Chapter 8 of South Africa’s Protection of Personal Information Act (POPIA) directly addresses the “Rights of Data Subjects Regarding Direct Marketing by Means of Unsolicited Electronic Communications, Directories and Automated Decision Making”. Within that chapter, Section 71 deals with automated decision-making.

 

With exceptions (outlined in subsections 2 and 3), under POPIA “a data subject may not be subject to a decision which results in legal consequences for him, her or it, or which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its credit worthiness, reliability, location, health, personal preferences or conduct.”

 

Section 57 may also be relevant, as it covers the requirement for prior authorization for processing. Specifically, prior authorization must be obtained before any data processing if the “responsible party” plans to process unique identifiers of data subjects, either for a purpose other than the one initially and specifically intended at collection, and/or with the plan to link those unique identifiers with other information the responsible party has processed.

 

In short, without obtaining new consent from data subjects for the specific updated processing purpose or for the linking of the data, this type of automated processing could well be noncompliant.

How artificial intelligence can solve privacy issues

It is clear that regulatory privacy compliance must be considered in the use of AI, but can this technology actually help protect customer privacy and data and contribute to compliance with privacy laws? Is there such a thing as responsible AI use? Potentially, yes. AI could well create good opportunities for organizations to mitigate the risks of data processing.

 

Large-scale data analysis, such as that done by AI systems, is good at predicting patterns, and not just those related to customer behaviors. AI could also be used to find patterns or trends relating to consent and its management, anomalies in user access to data, or data security issues from collection through processing and storage.

 

AI could also be used to consistently and thoroughly perform anonymization activities on data sets to ensure identifiers are removed from the aggregate sets and the data cannot be deanonymized.
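As an illustration of what such automated checks can look like, here is a minimal sketch, with hypothetical fields and data, that strips direct identifiers and then verifies k-anonymity, i.e. that every combination of remaining quasi-identifiers is shared by at least k records.

```python
# Minimal sketch: strip direct identifiers, then check k-anonymity on the
# remaining quasi-identifiers. Fields, data, and k=2 are hypothetical.
from collections import Counter

records = [
    {"name": "A", "age_band": "30-39", "postcode_prefix": "101", "spend": 120},
    {"name": "B", "age_band": "30-39", "postcode_prefix": "101", "spend": 95},
    {"name": "C", "age_band": "40-49", "postcode_prefix": "803", "spend": 300},
]

QUASI_IDENTIFIERS = ("age_band", "postcode_prefix")

# Remove direct identifiers before release or analysis.
released = [{k: v for k, v in r.items() if k != "name"} for r in records]

# k-anonymity: each quasi-identifier combination must occur at least k times.
def is_k_anonymous(rows, k):
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(released, k=2))  # False: the 40-49/803 record is unique
```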

What are the potential security threats with the use of artificial intelligence?

There are several security risks that companies should be aware of if using artificial intelligence or machine learning-based systems to enhance business operations. Third-party attackers can target AI systems.

 

Some attacks target the large amounts of data companies are processing (some of it personal and potentially sensitive), while others target the resulting analyses and predictions. Many AI models that companies use for specific analyses are “pretrained”, and how that training was done can be easily discovered by third parties, making those models potentially more vulnerable to attack or manipulation.

 

Companies without solid policies and processes around data management are also at risk of unauthorized access to or theft of the data. Even before the data is collected, there may be legal noncompliance if users are not accurately informed about how their data may be used, for example, how data sets could be combined, or what the potential results of the analysis might be.
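One practical safeguard is to make data pipelines verify recorded, purpose-specific consent before any AI processing takes place. The following is a minimal sketch; the consent store structure and purpose names are hypothetical, and a production system would use a proper consent management platform.

```python
# Minimal sketch: gate AI processing on recorded, purpose-specific consent.
# The consent store structure and purpose names are hypothetical.

consent_store = {
    "user_123": {"analytics": True, "ai_profiling": False},
    "user_456": {"analytics": True, "ai_profiling": True},
}

def has_consent(user_id, purpose):
    """Only process data for purposes the user explicitly agreed to."""
    return consent_store.get(user_id, {}).get(purpose, False)

def run_ai_profiling(user_id, data):
    if not has_consent(user_id, "ai_profiling"):
        raise PermissionError(f"No 'ai_profiling' consent recorded for {user_id}")
    # ... hand the data to the model only past this point ...
    return f"profile built for {user_id}"

print(run_ai_profiling("user_456", data={}))  # allowed
# run_ai_profiling("user_123", data={})       # would raise PermissionError
```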

What are privacy compliance best practices when using artificial intelligence?

In addition to having a robust overall privacy policy and privacy-first operational playbook, there are some specific best practices companies can employ when using AI.

 

Obviously it’s a good idea to stay abreast of changes in how governments plan to regulate AI. While many privacy regulations currently reference things like “automated decision-making”, as the CPRA in California has shown, the advancement of technologies can and will be addressed in more detail as laws evolve.

 

Additionally, the authorities, like the CPPA in California or ANPD in Brazil, are likely to have an eye on the evolution, use, and risks of these technologies, and to establish rules and recommendations relating to them.

 

As privacy laws most often apply based on the location of the data subjects and/or the processing, companies may need to be well versed in the regulations of jurisdictions where they have no physical presence.

 

If organizations use a third party for AI processing, all the same requirements apply as for any other third-party data access or processing. Additionally, though, organizations need to ensure the compliance and security of the data before, during, and after processing. As noted, this can include everything from robust anonymization through to secure algorithms and thorough deletion of data no longer in use.

 

Organizations need to map and classify their risks and do a privacy assessment specifically for AI, not just for privacy or data-related operations broadly. Consent management needs to specifically reflect AI-related responsibilities and outcomes. And these need to be maintained and updated as AI use grows and changes.
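Such mapping can start as simply as keeping a structured register entry per AI processing activity. The following is a minimal sketch; all field names are illustrative assumptions, not a legal or regulatory template.

```python
# Minimal sketch: a structured register entry for one AI processing activity.
# Field names are illustrative, not a legal or regulatory template.
from dataclasses import dataclass, field

@dataclass
class AIProcessingRecord:
    name: str
    purpose: str
    data_categories: list
    legal_basis: str
    automated_decisions: bool  # triggers e.g. GDPR Art. 22 considerations
    third_party_processors: list = field(default_factory=list)
    risk_level: str = "unassessed"
    last_reviewed: str = "never"

record = AIProcessingRecord(
    name="churn_prediction",
    purpose="predict customer churn for retention offers",
    data_categories=["purchase history", "support tickets"],
    legal_basis="consent",
    automated_decisions=True,
    third_party_processors=["example-ml-vendor"],
    risk_level="medium",
)
print(record)
```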

 

Organizations also need to make a concerted effort to be clear about how AI will process data, and for what purpose. This is as important for their own processes and operations as it is for communicating clearly with users and customers. If organizations don’t properly understand what their AI or machine learning systems are doing, or what their goals for them are, the risk of noncompliance is elevated, both in how the data is used and in how that use is communicated. The more complex the operations, the more skilled and experienced those responsible for them need to be.

Conclusion

The world continues to create and collect more and more data, much of which can be valuable to companies, either for the data itself or for the insights that analyzing it can bring. Given the volume, however, robust tools, including AI, are needed to collect, process, and store this data.

 

However, new tools bring new challenges and risks along with new opportunities, so organizations looking to harness AI must be prepared and responsible in adopting it, and transparent with those whose data they are using about what is being done with it, how, and for what purposes. The more technology evolves, the less likely the average person is to understand it deeply, or to know what it can do and how that can affect them. It is not their responsibility to be the experts. Those wanting to profit from their data are responsible for privacy compliance, data security, and clear communication.

 

Websites and apps can collect a lot of valuable data, and we can help ensure that you do so compliantly and in a user-friendly way. Contact one of our experts!
