As AI agents evolve from simple chatbots to autonomous decision-makers capable of managing everything from calendar scheduling to financial transactions, a question emerges: can these systems meaningfully consent to data processing on behalf of their users?
Marketers increasingly rely on valid consent as the foundation for data collection under the GDPR, CCPA/CPRA, and similar legal frameworks. But the rise of AI agents adds an extra layer of nuance to an already complex compliance ecosystem. When an AI agent agrees to cookies, opts into marketing communications, or shares personal data, does that constitute legally valid consent?
These agents can process increasingly large amounts of information and apply it to decision-making, but they cannot account for the deeply human elements of consent: intuition, mood, changing circumstances, or simple changes of heart. Yet, as AI agents become more sophisticated and autonomous, the pressure to delegate grows stronger.
The conversation between Yuri Lopes Pereira and Will Newmark, Senior Legal Counsel at Usercentrics, explores whether current privacy frameworks can accommodate AI decision-making, and how businesses can navigate the ethical and compliance challenges of this new paradigm.

_________
Yuri Lopes: As AI agents become more autonomous in decision-making, how should we reframe our understanding of “consent” when these systems interact with personal data on behalf of users?
Will Newmark: This is a really interesting issue that is a natural outcome of generative AI: the more information we feed into it, the more decisions AI can make on its own. However, given the near-instantaneous decision-making of AI, and even though it is based on the information we feed it, we can never guarantee that it accounts for human unpredictability when it comes to issues like consent, where oftentimes people will make decisions based on a gut feeling, their mood that day, or a random change of heart.
While AI agents can certainly be programmed to speak for an individual, the concept of consent is highly personalized and can almost never be delegated under the law, except under rare circumstances that typically require legal documentation, like guardianships, powers of attorney, and the like.
With GDPR and all of the other privacy laws that are meant to protect personal information and provide natural persons with rights to their personal information, if a person chooses to delegate their rights to an AI, this definitely presents some sticky legal and moral issues.
Yuri: In your view, can AI agents be considered to have any form of “agency” in a legal sense, or are they always extensions of their creators/operators? How does this impact liability frameworks?
Will: For now, I think we are at a point in the technology and the law where AI agents are not true legal agents, but rather tools programmed on behalf of their creators or users. But it seems like we’re reaching a tipping point where AI agents can operate with a great deal of independence from the natural person, and so could eventually be considered agents in a legal sense, assuming there is a formal appointment or other official delegation.
But the question you raise about liability is super important. Under traditional agency law, the principal remains liable for the acts of its agent, so long as the agent is acting within the scope of its agency on behalf of the principal. But if the AI, as an agent, hallucinates or otherwise acts independently, goes beyond the scope of its agency, and violates a law or someone else’s rights, then under traditional agency law, the AI itself would be liable.
This raises a serious concern because we already know AI is capable of hallucinations and the like, and as a society, we traditionally do not want liability to rest with an entity that is incapable of fulfilling its liability obligations. So the traditional agency relationship would need to be adjusted for AI, and the law would need to catch up with this paradigm.
“Given the near instantaneous decision-making of AI, we cannot ever guarantee that it can account for human unpredictability when it comes to issues like consent.”
Yuri: Several jurisdictions are debating whether advanced AI systems should be granted some form of legal personhood. What implications would this have for data privacy compliance and consent management?
Will: Like agency, the issue of personhood is quite complex, and does not rest on the existence of a natural person like you or me. I’m from the US, and historically, the US has had quite the struggle with the issue of legal personhood. Dating back to the country’s founding, Black people, Native Americans, and women more or less did not have legal personhood, and these groups remained without legal personhood to a large degree for close to two centuries. This was rooted not only in base racism and sexism, but also in ideas of property, which enslaved people and women were considered to be for a long, long time.
But on the other end of the spectrum, the country has been very open to allowing corporations to attain legal personhood and responsibility within society. In fact, corporations have acquired rights under the US constitution like freedom of speech and religion. So we already have a construct where non-living entities are granted legal personhood rights. But this area is well regulated, with all jurisdictions putting legal guardrails around corporations — formation, funding, liability, taxation, etc.
With AI, we have a situation that could be somewhat analogized to that of a corporation. But we don’t have the guardrails in place now to grant AI agents personhood. There would need to be an evolution before we get to a point where they become legal persons.
And if they do become legal persons in their own right, when it comes to issues like data privacy and consent management, we come back to the issue of one person speaking on behalf of another on sensitive, highly personalized issues that have certain legal requirements under GDPR and the like. While these laws allow natural persons to appoint others to seek to exercise rights, these laws do not permit another person to make consent choices on behalf of others.
This doesn’t just impact the individual, but also the businesses relying on collecting valid consents. If an individual delegates their consent choices to an AI agent or AI person, the business needs to be able to rely on the validity of that consent choice, and may not even be able to recognize that an AI was responsible for making it. This puts businesses in a troubling position, as they need to be able to rely on the consent choice of the user to demonstrate compliance. But if the consent is not valid to begin with, the business is placed in a lose-lose situation. So this is another area where we need clarity in the law surrounding AI rights.
“If a person chooses to delegate their rights to an AI, this definitely presents some sticky legal and moral issues. Given that consent and privacy are such uniquely human concepts, they should remain with humans.”
Yuri: What specific amendments to current privacy regulations (like GDPR or CCPA) do you think are necessary to address the unique challenges posed by AI agents managing user consent?
Will: Well, I think personally that the law as it is written now is fairly clear that AI agents cannot make privacy-related decisions under GDPR or CCPA for individuals. I’m sure there will be people who disagree though, so I think clarifying amendments there could make sense. And I think given that consent and privacy are such uniquely human concepts, they should remain with humans. That said, if others disagree, and they want to delegate to AI, there could be provisions within the law that allow these people to do so, but again, this creates potential barriers for others, and so it may be best that the law prevents such delegation.
I do think, though, that with the rapid development of AI, it makes sense to create regulations around the activities of AI agents to ensure society is protected, and to recognize that these AI agents do, in some way, report to or act in some legal fashion on behalf of some kind of legal person, if they are not legal persons themselves.
Yuri: Do you believe users fully understand the implications of delegating consent decisions to AI agents? What ethical responsibilities do companies have in this context?
Will: Call me a bit of a cynic, but I don’t think they do. Most people have suspicions regarding online tracking and how much information about themselves is floating around online. But the sophistication and granularity of some of these tools can be quite shocking. So I would say users don’t even really understand the implications of making the consent decisions themselves, let alone allowing an AI agent to make a choice for them. Again, generative AI is incredible in how it can replicate human decision-making and follow a so-called individual script. But when it comes to human choice and free will, these are deeper philosophical questions people should ask themselves before delegating personalized decisions like consent.
And as I mentioned before, companies are put in a really tough place when it comes to consent decisions from AIs. Can the business really rely on the consent? Should it? I would probably say no to both questions, but that’s my opinion, and I’m sure you will find businesses who believe, ethically, that an individual has made a clear choice, and that they should respect that choice, even if it comes through an AI.
_____________
William Newmark is Senior Legal Counsel for Usercentrics. He is based in Lisbon, Portugal and is a Certified Information Privacy Professional in both US and EU law. William received his Juris Doctor degree from the University of California, Berkeley, School of Law in 2007, and is a qualified lawyer in California and Washington state. Before joining Usercentrics, William was in-house counsel for one of the largest insurers in the United States, and also spent over a decade in private practice at a large international law firm as well as with smaller regional law firms.