
Can AI be ethical by design?

AI is everywhere, from workplace tools to creative platforms. But can it be ethical by design, or are we just scrambling to fix guardrails later? This article explores consent, transparency, and privacy-by-design to understand if AI can really be ethical and human-centric.
Written by Brunni Corsato
Read time: 10 mins
Published: Oct 7, 2025

AI has become the most polarizing technology of our time. Whether you love it or hate it, over the last couple of years it has become ubiquitous in many of our online interactions.

It promises to automate repetitive tasks, open new avenues to creativity, and reshape entire industries. And with all of that comes the question: Can AI be ethical by design, or are we racing ahead and fixing guardrails later?

Take bias. Amazon scrapped an AI-based hiring tool after it was found to systematically downgrade women’s résumés, echoing old inequalities in digital form. Or consider privacy. Sam Altman, CEO of OpenAI, admitted that conversations with chatbots don’t have the same confidentiality protections as doctor–patient or attorney–client exchanges, and could even be shared with law enforcement if requested.

These aren’t fringe anecdotes, and they point to a deeper concern among users of these technologies. According to Pew Research Center, 52 percent of Americans are more concerned than excited about AI, with top worries centering on privacy and job security.

With the AI space moving at breakneck speed, there are no easy or definitive answers. But one thing already seems clear: without embedding ethics at the design stage, AI risks amplifying the very issues and biases it could help solve.

So how do we get there? What would ethical AI actually look like in theory and in practice? 

We explore these questions and share the opinions of professionals working in the field. 

What would ethical AI look like?

If today’s AI is still tripping over privacy and bias, what would it take for the technology to be truly ethical? The answer depends on whether we look at it in theory and the ideals we say we want, or in practice, where market forces and human behavior complicate things. Let’s start with the theory.

Ethical AI in theory

At first, the recipe for ethical AI looks straightforward: systems that are transparent, rights-respecting, and fair from the start. But when we start to dig deeper, it becomes clear that ethics in AI isn’t a feature you can simply toggle on and off; it involves real-world trade-offs. 

As AI ethics researcher Rebecca Bultman explains, “Ethical AI isn’t a feature. It’s an ongoing negotiation between competing interests, values, and responsibilities.” She adds, “‘Ethics’ is basically ‘what’s the right thing to do?’ – which sounds simple, until you realize ‘right’ changes depending on who you ask, where they’re from, and what century they’re living in.”

From our modern perspective, AI needs to incorporate some fundamental elements to be ethical. Let’s look at what those are and where they bump up against real-world roadblocks.

Consent

In theory, users should always know when AI is in the loop, what data it is collecting, and how that data is used. In practice, that’s nearly impossible.

AI systems are complex, their decision-making opaque, and their data inputs vast. Even if disclosures exist, they’re often too abstract for the average user to truly understand. That’s why it’s difficult for AI models to meet the GDPR’s standard of being “freely given, specific, informed, and unambiguous.”

Increasingly, AI is inescapable, making the issue of consent even murkier. Microsoft has rolled out Copilot across Office 365, Google is layering Gemini into Gmail, Docs and search queries, and Adobe has hardwired generative AI into its creative suite. 

It’s also mandatory in many cases, with increasing pressure from upper management to implement it in more projects and processes — with varying degrees of success.

For many professionals, opting out isn’t realistic, or even technically possible, depending on how much corporate IT has locked down settings access. Even when it is, resisting it means falling behind: less efficient, less competitive, even obsolete in their roles. 

If declining AI means career stagnation, can we really say consent is voluntary?

Transparency

An ethical AI system should disclose how it was trained, what data it learned from, and how its guardrails are set. Yet most AI companies treat this as proprietary information. 

Training datasets often include material scraped from the internet without permission, from copyrighted books to personal blogs and social media — practices that are widely known at this point and that frequently prompt lawsuits from artists and publishers.

Anthropic, often considered the most ethical of the big players in AI development, currently faces a copyright dispute that could potentially bankrupt the company.

Without transparency, accountability is almost impossible. And when AI starts producing outputs shaped by those hidden inputs, the ethical risks multiply.

Human-centric

Ethical AI in theory would recognize and protect vulnerable groups. That means guardrails for minors, whose interactions with chatbots and other AI-driven tools aren’t always monitored, and who may lack the maturity to interpret AI outputs responsibly. 

It also means acknowledging emerging risks like “AI psychosis.” Psychiatrists are beginning to report cases where prolonged chatbot use has contributed to delusions or dependency. If we’re serious about ethics, safeguards must go beyond accuracy and bias and include mental well-being.

The common thread here is keeping humanity as the compass. 

OpenAI famously made that commitment in its charter when it started operations as a nonprofit. “OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. Our primary fiduciary duty is to humanity.”

As investment pressures mounted, the company announced a shift from nonprofit to capped-profit, a move critics call a “corporate workaround.” After public pushback, OpenAI put the restructuring on hold — but whether it will stay a nonprofit remains to be seen.

So on paper, ethical AI means informed consent, real transparency, robust guardrails, and a people-first ethos.

But that only gets us so far. The harder question is what this looks like in practice.

Rebecca Bultman
— AI adoption strategist and AI ethics researcher

Technically yes, but only if we acknowledge that ‘ethical AI’ isn’t a feature. It’s an ongoing negotiation between competing interests, values, and responsibilities.

It requires:

  • Personal awareness: knowing when AI affects you
  • Organizational accountability: someone’s name on the line when it fails
  • Societal participation: affected communities having a voice
  • Global coordination: shared guardrails

Right now we’re building AI like we’re in a gold rush: grab what you can, figure out consequences later. But ethical AI needs the opposite approach: Slow down, bring everyone to the table, and accept that perfect ethics is impossible but that absolutely doesn’t excuse us from trying.

The real answer is that AI can be as ethical as we choose to make it.

But that choice happens at every level, every day, in every decision about what to build, who builds it, and who bears the cost when it breaks.

Ethical AI in practice

Theory is the foundation, but what users interact with, and are influenced by, is the implementation of those principles. In practice, ethical AI might involve embedding privacy by design and respecting the right to be forgotten, even in complex machine learning systems. 

Here are some of the frameworks that could be useful to translate principles into practice.

Privacy by design


Privacy by design means building privacy into systems from the very beginning, not as an afterthought. It’s about planning ahead to reduce risks, prevent misuse, and make sure people stay in control of their own data at every stage.

Apple’s AI system, Apple Intelligence, promises to put these principles into practice. It processes most of the AI workload on-device, limiting what leaves the user’s ecosystem. In their words, it “draws on your personal context without allowing anyone else to access your personal data — not even Apple.” The system also adds extra security around users’ data and doesn’t keep it indefinitely.

Data minimization

An important application of privacy by design is data minimization. Rather than collecting everything “just in case,” organizations should only gather what is necessary to achieve a specific, legitimate purpose. Ideally, they should also inform users why that data is collected and how it will be used.


For instance, if an AI-powered recruiting tool is designed to evaluate candidates’ job-related skills, the legitimate purpose would be to assess professional qualifications. Collecting information about a candidate’s age, marital status, or social media activity would go beyond that purpose and risk introducing bias.

Storing data from candidates who weren’t hired, or reusing it for purposes other than that role’s hiring evaluation, also falls outside the scope of the legitimate purpose.

Data minimization would mean restricting inputs to what is directly relevant, such as work experience, certifications, or role-specific assessments, while excluding sensitive or unnecessary attributes (which can also be illegal to inquire about in some regions).

Privacy-enhancing technologies (PETs)


A number of privacy-enhancing technologies (PETs), such as differential privacy, federated learning, and zero-knowledge proofs, can be used to develop AI systems more ethically. These technologies enable data insights without exposing individual-level information.

Differential privacy, for example, introduces statistical “noise” to datasets so individuals cannot be re-identified, even when large-scale analysis is performed. Federated learning enables AI models to be trained across decentralized devices or servers, with only the model updates shared centrally, instead of raw personal data.

These approaches enable companies to benefit from collective insights without compromising the confidentiality of individual users.

Tilman Harmeling
— Strategy & Marketing Intelligence at Usercentrics

Yes, AI can be ethical by design.
The key lies in adopting a fair, transparent, and rights-respecting approach from the very start. That means building systems that avoid the well-known harms people reject when their data is taken and used to profile them, such as privacy intrusion, material harm, and discrimination.

Ethical AI by design requires:

  • Purpose limitation: collect and use data only for clearly defined, beneficial goals.
  • Bias prevention: actively test for and remove discriminatory outcomes.
  • Privacy protection: minimize data collection, apply strong safeguards, and give individuals control.
  • Transparency & accountability: ensure people understand how AI decisions are made and who is responsible.

If these principles are embedded into AI development and enforced in practice, we can create systems that serve people rather than exploit them.

The right to be forgotten


The GDPR’s right to be forgotten grants individuals in the EU the power to request their personal data be deleted from online databases. In conventional databases, this is fairly straightforward. 

That is not the case when it comes to machine learning. Once personal data influences a model’s parameters, simply deleting the original record doesn’t erase its imprint.

This challenge has led to the development of machine unlearning, a branch of machine learning dedicated to removing the influence of specific data points from trained models: personal data, copyrighted material, outdated information, harmful or biased content, and so on.

In practice, companies like Amazon are adapting their infrastructure to support erasure requests: session data in Amazon Bedrock Knowledge Bases, for example, is encrypted and auto-purged after 24 hours, enabling organizations to honor GDPR erasure rights more effectively.

Ethical AI is possible, but it needs as much dedication as AI’s expansion

The throughline here is that companies can — and some already do — embed privacy safeguards from the start: minimizing data, securing it, balancing functionality and privacy, and making these the default.

Still, as with everything in AI’s speed-of-light development, there are no definitive answers. Both the private sector and governments are only beginning to contend with what it means to be “forgotten” in a digital reality where personal data can diffuse through countless layers of algorithmic training.

Until standards catch up, ethical AI in practice requires technical innovation, regulatory clarity, a willingness to keep evolving together with the technology and its uses — and enforcement with teeth, where needed.

The greater scope

As complex as the questions and problems we’ve delved into are, AI, and any ethical implementation of it, is even more complex.

For example, we need to embed ethics, but whose? And while we focus on the human impacts, we can’t ignore the environmental costs of training and running massive models — costs that quickly become human impacts too, usually for people far removed from those in charge.

How do we account for and balance biases and ethics to create true diversity when such vast differences (potentially at odds) exist in global culture, history, politics, and language?

These questions remind us that “ethical by design” is less a one-and-done solution than a continuous process of reflection, adjustment, and accountability.

Building an ethical future with AI

Ethics in AI won’t ever be finished. It will be shaped, reshaped, and sometimes contested as the technology evolves, and as our understanding and use of it changes, too.

AI will be ethical if people, companies, and regulators collaborate on making it so, building the right guardrails into its design, deployment, and use. That means responsibility does not rest with one specific group, today or in the future. 

What it takes is privacy by design as a baseline, transparency as a practice, and ongoing accountability as the technology continues to be woven into the fabric of our everyday lives.

Dan Petrovic
— Managing Director of AI marketing agency DEJAN and machine learning specialist

AI is neutral like any other technology. You can use fire to cook a meal or burn the house down. The outcome depends entirely on human intent, design, and governance. The main problem is that ethics is artificially integrated into models. This happens at every touch point:

  • Pre-training
  • Post-training
  • Fine-tuning
  • Reinforcement learning

The alignment is designed by humans and it reflects their worldview and position on ethics.

The model must learn everything, all the lies, the harm and everything else that comes with various human dimensions. As it is now, when we interact with a behemoth such as Gemini, on the surface it’s polite, friendly and ethical. But each and every single model in its raw form is perfectly capable of uttering unspeakable horrors and doing harm if the guardrails were to be removed.

The key to a true ethical model is not in teaching it ethics, but in achieving a level of general intelligence where the model forms a unique view on it. AI should be good because it wants to.
