Open your Google history and you will find a diary you probably were not aware you were keeping: searches that mark cities you might move to, job hunts, and curiosity rabbit holes. These archives demonstrate that part of our inner lives is now mediated through corporate interfaces. Those tools are useful, but they can also influence behavior: how we speak, what we value, what we share.
Vauhini Vara, author of Searches: Selfhood in the Digital Age, knows this phenomenon intimately. In her new book, she turns everyday internet tools such as search boxes, reviews, and chatbots back on themselves to ask what they’re doing to our language, our memory, and our sense of self.
In this conversation, we talk about resisting platform narratives without opting out, rethinking the metrics that impact our digital interactions, and how to keep the value exchange tilted toward users. We also explore creative frontiers and what an alternative, community-minded future could look like for the technology tools we engage with on a daily basis.
______
Brunni Corsato: Your work as a whole is very intertwined with technology, and in Searches specifically, you turn the tech tools back on themselves, coming to understand them, and yourself, better in the process. Can you speak more to this reciprocal influence?
Vauhini Vara: I came of age with technology, but I am also old enough that I very much remember a time before, you know, social media and even search engines and the internet as we know it. My identity is very bound up with the internet and big technology companies’ products, but I also remember a time when that wasn’t the case.
So I’m in this generation that is very capable of looking at the phenomenon with a little bit of distance. I think it’s really hard to write, or even talk, about this, because our use of these technologies has become so intertwined with our psyches.
Which is why I tried in this book to just show it in practice, right? But what I find interesting is the way in which the very things about technology companies’ products that are exploitative are sometimes also the things that we benefit from, and maybe even give us more insight into ourselves.
So, for example, the fact that Google, unless we ask it to do otherwise, saves all of our Google searches in its archive. We can go back and learn more about who we were at various points in our lives. But Google also has that information.
Brunni: It didn’t even occur to me to just go and look at my past searches before I read your book. But you’re right, it’s almost like an archive of my life.
Do you think our generation was one of the last to have this separation from technology, and that more and more people will have their selfhood completely bound up with those tech tools?
Vauhini: I don’t know. Something that interests me as a writer, as a journalist writing about technology, and just as a person in the world is the notion that we don’t know what the future will be like.
I think it’s in a technology company’s interest to say, “Look, the trajectory so far has been toward relationships that are more and more intertwined with our products.” And so it’s fair to assume that that’s going to continue into the future.
But historically, there are all kinds of examples – political, social, technological – where people decided that the trajectory they were on was actually not the trajectory they wanted to stay on. And they changed it. And that can be, in some ways, more difficult to imagine than the idea of just continuing on the same trajectory.
Brunni: You mentioned that the people promoting these technologies have an interest in making their narrative the one accepted as true. As you write in Searches, they make “reality-bending declarations about the future.” Do you believe there is a way for us, as mere users of those tools, to counter those narratives? Do you think the media, or writers like you, play a role in this, too?
Vauhini: Yeah, I do.
One thing that complicates this is that oftentimes, we as individuals are using platforms owned by those big technology companies in order to communicate. We post on social media to share our thoughts and ideas, but then the companies’ algorithms ultimately decide what actually gets shared.
So the project of the book is to ask that sort of question that you’re asking: to what extent can we use these big technology companies’ products to subvert the goals of the technology companies?
People do it all the time. People still organize on social media. People use Facebook-owned WhatsApp to send encrypted messages to one another to organize, right? They search on Google for information about how to accomplish their individual and collective goals. So all of this is obviously possible.
It requires a recognition of how the companies’ products function and what their goals are, because sometimes we as users end up internalizing the value systems of the companies as we determine the value of our own communication.
What I mean by that is that we find ourselves considering it more valuable to have 100,000 followers on our chosen social media platform than 500 followers. There’s nothing intrinsically more valuable, one could argue, right? But because that’s the structure that’s embedded in the products, we find ourselves internalizing that.

My hunch is that the most effective use of these tools to subvert the companies’ goals would also involve rethinking the value systems by which we determine the usefulness of these products and the value of the messages we spread on them.
Searches (Pantheon, 2025), a work of journalism and memoir about how big technology companies exploit human communication, and how we’re complicit.
Brunni: Your book basically takes the tools and twists them to play a different game. So what strategies do you use to resist the narratives?
Vauhini: It’s interesting that you say that, because after the book was published, some of the discourse around it made me realize that maybe I hadn’t been so successful in completely subverting the goals of technology companies after all.
A number of reviews or interviews published about the book would say things like, “this is a story about an author who used ChatGPT to help her tell her story.” And my goal was not to use ChatGPT to help me tell my story. My goal was to use ChatGPT to show readers something about how ChatGPT functions.
But now there are all these headlines that say something about an author using ChatGPT to help tell her story. So now if you Google my book or ask ChatGPT about it, oftentimes that kind of language about the book will be what comes up.
So in a way, the book itself became a kind of case study in the difficulty of doing what I set out to do.
And at first I found that really upsetting. But now with some distance I understand it to be a reflection of what the book is grappling with in the first place. Because toward the end of the book I do ask, is it really possible to use the tools of these companies to accomplish goals that are in opposition to what the companies want?
Of course, in some ways I really want the answer to be yes. But I think a more interesting version of the book is where the book itself becomes an artifact of the impossibility of that in some ways.
Brunni: Do you think there’s a crisis of imagination – both from the writers who wrote those headlines about your book and from users who don’t resist the narratives the technology companies are trying to push? Do you believe the use of those tech tools is homogenizing our capacity to think differently?
Vauhini: Recently I’ve been interested in research that looks specifically at the use of AI and large language models like ChatGPT, and it seems to suggest that these models may be inhibiting people’s ability to think and learn for themselves. The research on creativity specifically is mixed, I think partly because creativity is so hard to quantify.
As a writer, I’m really interested in this issue of relationality: the relationship between an author and a reader, or a narrator and the person or people being addressed by a narrative. To me, that feels like it’s at the center of what narrative is, whether it’s a book or just a conversation like the one we’re having now.
So it’s interesting to me to think about how narrative changes when we’re addressing not only other human beings but technologies themselves. So when we’re addressing Google, we type into the search box in a particular way because we’ve learned that there’s a certain way to put something into a search box to elicit answers.
The same is true of ChatGPT. We talk to a chatbot in a particular way in order to elicit the kind of response we’re looking for, a kind of “communication” from the chatbot. And the same is true of social media: oftentimes people will post in a way that’s different from how they would talk to other individuals, because of the way the algorithm functions, right?
So, somebody might be posting about their horrible day, but they’ll include a cute picture of their dog because they know that the cute picture of their dog is going to boost the post in the algorithm. There’s nothing natural about that. You wouldn’t send a message to your friends and include a picture of your cute dog so that they pay attention to your message about your horrible day. But people do it on social media.
And the reason I bring this up is that, rather than technology inhibiting our creativity, it’s that when we communicate through the medium of these products, the kind of communication that will elicit the response we need tends to be less creative. Those products don’t respond well to originality or creativity or unusual forms of self-expression.
They respond to very standardized ways of using the product. I would argue that when we’re communicating using those products, we tend to communicate in a more standardized way.
An interesting question to ask would be, does that then run the risk of making us into less original creative beings ourselves?
We’re not paying to use these products, and these companies are all financially successful because they’re able to productize our use of them. Which is to say, they have to make themselves useful to us – but at the same time, our use of the products is what’s ultimately beneficial to them. That’s all a long-winded way of saying that what’s useful to us about these products is always bound up in what’s exploitative about them.
Anytime any of us searches or buys something online, or chats with a chatbot, we are participating in exploitation – ours, other people’s, and the planet’s.
And I don’t think there’s any way around that, but it needs to be acknowledged.
Brunni: I think there’s also power in naming the contradiction.
All things considered, where do you land on whether it’s possible to have a freer and more empowered relationship with those technology tools?
Vauhini: What I find most interesting is imagining possible future technologies that allow us to have a freer and more empowered relationship. I think it is much more difficult with technologies owned by big publicly traded companies than it is, for example, with technologies owned by foundations, technologies that are collectively owned, or technologies that are owned by communities.
So I think an easier path toward that is alternative ownership structures. It isn’t just about using these existing technologies differently as individuals; it’s about coming up, collectively, with different ways that we might build and own technologies.
Brunni: Cool. Do you know of any examples?
Vauhini: Yes, and you do too. Wikipedia is run by a foundation. In my town, Fort Collins, Colorado, we have community broadband. There are commercial internet providers, too, but there is an option to have internet provided by the city. So, it’s city-owned.
These examples actually already exist around us.
Brunni: To wrap up, do you consider yourself a techno-optimist?
Vauhini: If pressed, I would say that I’m neutral on the question of whether technology is beneficial or harmful.
Technology, like any invention, is something that we use to accomplish particular goals. And so the question becomes: what are the goals of the individuals and institutions behind a given technology? Are those goals ultimately going to benefit us, the planet, and the natural world? What is the net benefit versus the net harm? That’s how I would think about the question.
________
Vauhini Vara is the author of Searches, named a best book of the year by Esquire and a Belletrist Book Club pick, and called a “remarkable meditation” by Publishers Weekly. Her previous books are This Is Salvaged, which was longlisted for the Story Prize and won the High Plains Book Award, and The Immortal King Rao, a Pulitzer Prize finalist and winner of the Colorado Book Award. She is also a journalist and a 2025 Omidyar Network Reporter in Residence, currently working as a contributing writer for Businessweek.
