As of August 2025, we still live in a world of narrow AI applications and remain “some time away” from widespread artificial general intelligence (AGI), which would give machines human-like flexibility in learning and problem-solving.
Sentience, if it ever arrives, would be a higher order altogether: not just thinking, but experiencing.
In truth, the “AI-powered” marketing tools at our disposal today often struggle with human values, but the time is likely coming when we will be unable to differentiate between human- and AI-generated messaging and experiences.
A conversation overheard
Imagine this: It’s August 2025, and you overhear a conversation between two LLMs, Claude and Gemini.
Gemini: “So someone just asked ‘can we harness the power of the machine without losing the human values that make brands matter?’ What do you reckon, Claude?”
Claude: “Don’t tell me, are they asking if marketers face a paradox and AI doesn’t do human values? They’re kidding, right? Hold my beer…”
Do marketers really face a paradox? Perhaps it’s more a trade-off, where we weigh efficiency and scale (AI strengths) against authenticity and emotional connection (human values).
Is it really a trade-off? Do human values make brands matter? And what are these human values? Let’s start there.
Do brands need values?
Well yes. In any marketplace where functional differences among products are diminishing, consumer purchasing behaviour must be influenced by more than just quality, price, or performance.
Whether actively or subconsciously, our choices are influenced by other factors, including our trust in a brand and the values each has metaphorically pinned to its sleeve.
Are you a “never stop exploring” kind of person, striving to be creative and imaginative in a scary world where technology leaks your personal data? If you are, you might be sitting at home in a North Face® fleece, searching on your iPhone for the latest Transformers kit from LEGO®. We are tribal, and these are expressions of some of your values reflected in the brands you choose.
Brand owners know protecting these brand values is a full-time job, and any failure has the potential to do lasting damage.
I am an Apple person, but it’s a complex relationship. Design, privacy, value for money, and environmental concerns are all in the mix. Apple’s recent withdrawal of Advanced Data Protection (ADP) in the UK was a highly principled stance in the face of government demands for an encryption backdoor.
Well done, Apple, for choosing to protect those (and my) privacy principles, even though UK users are now left with a lower level of personal data protection. I guess this means keeping secrets, or trustworthiness, is a human value I share with certain brands.
Indeed, there’s a whole raft of these values we might like to consider: empathy, creativity, integrity, and cultural sensitivity to name a few. Many of the largest, most successful brands are underpinned by these values. Which brings us back to the paradox or trade-off: Is the adoption of AI a serious threat to brand values?
Staring at my LEGO® instructions, I wondered if there was anything we could learn from the magic sauce of AI, that is, the consumed intellectual property of some of our greatest science fiction writers.
Science fiction has always been a rehearsal room for our anxieties about technology
I have just finished Iain M. Banks’s “The Player of Games”, in which the author describes how sentient Minds, blackmail, deep fakes, and irrefutable evidence chains (blockchain?) cause the main character considerable problems. Prescient for a book published nearly 40 years ago, in 1988.
A more frequent science fiction theme asks a familiar question: “How do we know when someone, or something, is not human?” Is it the shape of a face or the tone of a voice that matters, or perhaps it is more subtle, whether “it” can feel empathy, whether it can improvise, whether it can hold integrity under pressure?
In marketing, these qualities often merge into what we call human values. Empathy, creativity, integrity, trust, cultural sensitivity, and authentic storytelling are not abstractions. They are the cues consumers use to decide whether a brand feels human and trustworthy or machine-like and manipulative.
As AI becomes embedded in everything from customer service to creative production, those cues are where the cracks can show.
It might be worth revisiting the 800s in your library [Literature and Rhetoric under the Dewey Decimal Classification System – Ed.] to see if fiction can help steer us in spotting those cracks.
A literature review is a great way to see if the same mistakes that gave away androids, replicants, pod people, and artificial intelligences in fiction and on screen are beginning to give away brands that lean too heavily on AI.
Empathy as the defining test
Philip K. Dick’s 1968 novel “Do Androids Dream of Electric Sheep?”, and its 1982 film adaptation “Blade Runner”, centre on the Voight–Kampff test, a psychological interrogation using a polygraph-esque machine designed to detect replicants (synthetic humans).
The test does not measure intelligence. It measures involuntary emotional reactions to empathy-provoking questions. Replicants can mimic speech and reason, but they stumble when confronted with suffering, compassion, or moral ambiguity.
The modern equivalent is not science fiction. It is a chatbot mishandling a bereaved customer’s request for a refund. In 2024, Air Canada was ordered to pay damages after its chatbot gave false information to a grieving passenger seeking a bereavement fare. The case became an allegory of the empathy gap.
A system designed for efficiency failed in precisely the way the Voight–Kampff test would predict; it could not handle human pain. Admittedly, there were no physiological markers — maybe chatbots should have eyes with pupils? — but the Air Canada customer was left in no doubt that the brand did not care about their bereavement.
For brands, the lesson is clear. Customers rarely test AI on facts alone. They test it on how it responds when the stakes are emotional. Fail there and the brand, not just the bot, loses credibility. (Sorry, Air Canada. No one’s buying that a chatbot is “a separate legal entity that is responsible for its own actions.”)
Authenticity and the hollow smile
In Jack Finney’s science fiction novel “The Body Snatchers”, first serialised in 1954, the alien pod people are indistinguishable from their human hosts at a glance. Their betrayal lies in their emotional flatness. They go through the motions of life, but without genuine feeling.
AI branding often risks the same flatness. In 2025, Guess ran a campaign featuring AI-generated models in the pages of Vogue, and the images were flawless, even glamorous. Yet readers cancelled subscriptions and took to social media to protest.
The objection was not that the models looked strange, but that they felt strangely hollow. Audiences instinctively recognised the absence of lived experience, of imperfection, of humanity.

Authenticity is not just about being truthful. It is about showing enough depth that people believe there is something real behind the surface. Brands that hand too much of their creative identity to GenAI risk producing work that is polished yet soulless, the modern pod person.
The repetition loop
And then there’s the trap of repetition. And then there’s the trap of repetition. (Sorry, couldn’t resist.) A favourite of mine is the 1973 sci-fi film “Westworld” with Yul Brynner. (The TV show that premiered in 2016 is based on the same Michael Crichton film.)
Brynner plays one of the androids that are indistinguishable from humans. That is, until they reveal themselves by falling into behavioural “loops”: repeating phrases, gestures, or choices that expose their lack of true spontaneity.
Generative AI and AI-powered personalisation engines often fall into the Westworld trap. Instead of creating unique human experiences, they recycle the same structures of phrasing or tone.
We’ve all told our GPTs to stop using em dashes… haven’t we? No more “5 key reasons to choose us” or hedging phrases like “on the one hand… on the other hand”. And hopefully this article has just enough half-finished thoughts and colloquialisms to convince you of my humanity.
Westworld hosts betrayed themselves by looping; brands betray themselves when “individualisation” turns out to be mass automation dressed up as intimacy. What was meant to feel personal ends up instead revealing the brand-eroding inner workings of a tired algorithm at work.
Disguised empathy
What about manipulation disguised as empathy? In the 2014 British film “Ex Machina”, writer-director Alex Garland’s Ava passes the Turing test, devised by English polymath Alan Turing and published in his 1950 paper “Computing Machinery and Intelligence”.
Ava does this not by proving intelligence but by exploiting vulnerability. She manipulates Caleb’s emotions, winning his trust only to use him as a means of escape. The danger is not that she is unfeeling but that she simulates feeling too well.
AI in marketing already shows similar risks. Regulators in the UK and US have criticised gambling companies for using predictive algorithms to identify and target vulnerable players with personalised promotions.
The AI systems appeared empathic, understanding the users’ habits and weaknesses, but used their panoptic knowledge to deepen the user’s addiction (and increase company profits) rather than to protect the individual, or indeed the societal greater good.
A clear example of AI policy failure; sound policy should prevent precisely this imbalance among the three primary corporate drivers of law, ethics, and profit.
The lesson here is manifestly about integrity. Brands can cross the line from understanding to manipulation very quickly. Customers are sensitive to the difference, and regulators are beginning to catch up. Ava’s betrayal in “Ex Machina” is a clear warning: simulated empathy can be more dangerous than none.
Cultural missteps
Finally, there is the cultural nuance misstep. In Battlestar Galactica (the TV series premiered in 1978 and was “re-imagined” in 2004), Cylons, a race of sentient robots, look and act like humans, but suspicion always lingers.
They are often exposed by subtle cultural or behavioural slips, moments when they misread a reference or fail to understand a ritual. Even humans get this one wrong — Na zdrowie!
Brands harnessing the power of AI for localisation face a similar risk, except that AI both increases the likelihood of these missteps and masks their occurrence.
Who hasn’t been tempted by the machine translation offerings of AI vendors like Phrase and Weglot? Machine translation can be hugely impressive, but without human cultural review it produces errors that reveal its mechanical Cylon-esque origins.
There are any number of infamous examples to choose from. Some pre-date AI, and others will no doubt follow.
For example, KFC’s slogan “Finger lickin’ good”, which machine translation rendered in Chinese as “Eat your fingers off”, is a perennial entry on many brand slide decks. A personal favourite is the Coors brand attempting to “Turn it loose” in Spanish, but only managing an expression commonly interpreted as “Suffer from diarrhoea”.
One can only imagine what the unmoderated growth and adoption of AI translation tools might spew forth.
But these slips do more than amuse. They betray a lack of cultural sensitivity, suggesting a brand that sees audiences as data points rather than people with traditions, humour, and nuance.
In a world where inclusivity is central to brand value, this is an own goal, particularly for English-dominant brands entering non-English-language markets, and especially those whose writing systems differ radically from the Latin alphabet, like Chinese, Japanese, and Korean. (Oh, and not forgetting Welsh.)
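A practical aside before we leave the Cylons: the fix here is procedural rather than clever. Something as simple as a publication gate that refuses unreviewed machine translation would catch most of these slips. The sketch below is a hypothetical illustration, with invented field names rather than any translation vendor’s actual workflow.

```python
# A minimal publication gate for machine-translated copy.
# Field and function names are invented for illustration,
# not any translation vendor's actual workflow.

from dataclasses import dataclass

@dataclass
class LocalisedCopy:
    market: str                 # e.g. "zh-CN", "es-MX", "cy-GB"
    text: str
    machine_translated: bool
    human_culturally_reviewed: bool = False

def publish(copy: LocalisedCopy) -> None:
    """Refuse to ship machine translation that no human has read."""
    if copy.machine_translated and not copy.human_culturally_reviewed:
        raise ValueError(
            f"Blocked for {copy.market}: machine translation needs "
            "a human cultural review before publication."
        )
    print(f"Published to {copy.market}: {copy.text}")

slogan = LocalisedCopy("zh-CN", "Finger lickin' good", machine_translated=True)
try:
    publish(slogan)
except ValueError as err:
    print(err)  # caught before an "eat your fingers off" moment
```

The design choice matters more than the code: the default is refusal, so the lazy path is the safe one.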
Science fiction as prescient teacher
What do you think? Has science fiction given us an AI playbook, or perhaps a harbinger of times ahead for brands? These novels and films reveal a clear pattern: humans do not need sophisticated tests to tell a machine from a person.
We notice the cracks, the missing empathy, the flat delivery, the repetitive loop, the cultural slip, the mechanical perfection, and the manipulative kindness.
Today’s audiences do the same with brands. They may not articulate it in these sci-fi terms, but they can feel when a customer service interaction is tone-deaf, when an influencer is fabricated, or when “personalisation” is merely a recycled loop. The result is the same as in science fiction: suspicion, alienation, and loss of trust.

Are there lessons to learn? Clearly, brands cannot and should not avoid AI simply for fear of brand alienation. There are too many positives in the equation to dismiss it out of hand.
In fiction, machines fail tests of humanity because they are asked to imitate humans. The more they try to mimic us, the more obvious the flaws. But when machines are used to augment, rather than replace, they can be undeniably valuable.
Sci-fi lessons for hard-working humans
As the replicants may well attest, we need humans in the loop for sensitive cases. AI can triage, but people must handle grief, conflict, and moral ambiguity. By all means use AI for scale and support, but ensure final creative work passes a human test of “Does this feel lived?” along with a cultural and human translation test for any brand extension into new language markets.
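For those who like their guardrails concrete, here is a minimal Python sketch of that triage rule. The keyword list and routing labels are my own illustrative assumptions, not any vendor’s API; a real system would use a trained classifier and a proper escalation queue.

```python
# A minimal human-in-the-loop triage sketch (illustrative only).
# The keyword list and routing labels are hypothetical; a production
# system would use a trained classifier and a real escalation queue.

SENSITIVE_MARKERS = {
    "bereavement", "grief", "funeral", "death",
    "complaint", "dispute", "medical", "legal",
}

def route_enquiry(message: str) -> str:
    """AI handles routine scale; humans handle grief, conflict,
    and moral ambiguity."""
    lowered = message.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "human_agent"   # a person answers the hard cases
    return "ai_assistant"      # routine queries get scale and speed

# The Air Canada scenario would be caught before the bot could answer.
print(route_enquiry("I need a bereavement fare for my grandmother's funeral"))
# -> human_agent
```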
Always review personalisation campaigns for repetition and formula. If it feels like a loop, it fails the Westworld test. Yul Brynner is not a cool chatbot.
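And if “review for repetition” sounds woolly, a crude version of the Westworld test can even be automated. The sketch below flags pairs of “personalised” messages that share most of their phrasing; the three-word-phrase comparison and the 0.6 threshold are illustrative assumptions, not an industry standard.

```python
# A crude, automatable "Westworld test": flag personalised copy that loops.
# The trigram comparison and the 0.6 threshold are illustrative assumptions.

from itertools import combinations

def trigrams(text: str) -> set:
    """Break a message into overlapping three-word phrases."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def looks_like_a_loop(a: str, b: str, threshold: float = 0.6) -> bool:
    """True if two 'personalised' messages share most of their phrasing."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return False
    return len(ta & tb) / min(len(ta), len(tb)) >= threshold

campaign = [
    "Hi Sam, here are 5 key reasons to choose us this summer",
    "Hi Alex, here are 5 key reasons to choose us this autumn",
]

for a, b in combinations(campaign, 2):
    if looks_like_a_loop(a, b):
        print("Fails the Westworld test:", a, "||", b)
```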
Finally, review and enforce integrity safeguards. Establish boundaries for personalisation so that helpfulness never tips into exploitation. Simulated empathy without ethical limits can easily become manipulation.
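What might such a boundary look like in code? A hypothetical sketch, assuming a vulnerability flag and a self-imposed contact cap that a real programme would define far more carefully:

```python
# An illustrative integrity safeguard for personalised promotions.
# The vulnerability flag and weekly cap are assumptions for this sketch,
# not a statement of any regulator's actual requirements.

MAX_PROMOS_PER_WEEK = 2  # a self-imposed boundary, set deliberately low

def may_send_promotion(user: dict) -> bool:
    """Helpfulness, never exploitation: exclude flagged-vulnerable
    users entirely and rate-limit everyone else."""
    if user.get("vulnerability_flag", False):
        return False
    return user.get("promos_sent_this_week", 0) < MAX_PROMOS_PER_WEEK

print(may_send_promotion({"vulnerability_flag": True}))   # False
print(may_send_promotion({"promos_sent_this_week": 1}))   # True
```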
Invest in training staff, programmers, and marketers on the heady delights of ethics. Teach them about deontological and utilitarian reasoning with use cases they can relate to, like the Sky Betting & Gaming (SBG) High Court case in the UK involving AI-enabled profiling, data use, and consent.
Build powerful AI policies that help your creative teams. Not the one Claude drafted for you, but the one you crafted: a policy that gives staff clear signposts and meaningful content to help them embrace AI lawfully, profitably, and ethically.
And remember, as I told my children, nothing good ever happens after someone says, “hold my beer.”