Vauhini Vara’s Searches: Selfhood in the Digital Age, which came out earlier this year, isn’t a book that yells at you to delete your apps, cancel your Amazon Prime, or throw your phone into a lake. Instead, it quietly shows you just how deep in we already are. Take the chapter that casually lists her years of Amazon purchases: garlic sauce, a 10-pack of socks, a world atlas. It’s not a dramatic protest. It’s a confession, one that says, “Yes, I see the problem, and I still clicked ‘Buy Now’.”
And that’s the point. What starts with garlic sauce quickly becomes something bigger: a meditation on the slippery nature of convenience. First, we gave up bookstores, then conversation, then maybe a bit of privacy. Now, with AI, we’re handing over thought itself—offloading our curiosity, creativity, even grief to machines designed to help but optimised to extract. Vara never says stop. She says look. This is what it feels like to live inside the problem.
A former tech reporter at The Wall Street Journal, Vara studied at Stanford (where Sam Altman was her junior) and became a Pulitzer finalist for her debut novel, The Immortal King Rao, a sweeping exploration of surveillance capitalism and family legacy. With her sophomore book, Searches, she turns inward, but only slightly. Across its 16 essays, she traces how digital tools have begun to blur the line between the self we live with and the ones we scroll through, search for, and simulate.
One of the most talked-about essays in the book is ‘Ghosts’, which she co-wrote in 2021 with GPT-3, a precursor to ChatGPT. The piece is a poignant reflection on the untimely death of Vara’s sister, who passed away from Ewing sarcoma. In every sense, it is profound. “Some of the best parts of the essay were actually written by the AI,” she tells me over a Zoom call. “People didn’t believe me, but it’s true.” But the same experiment might fall flat now, she says. Newer versions of the AI are trained to be more polished and more boring. “They tend to be much more conventional, much less surprising,” Vara explains. “They have qualities that are not associated with creativity.”
AI has been rattling writers for a while now. Amazon is cluttered with AI-generated books that are cheap and easy to produce, titles that drive attention and sales away from authors who actually deserve our money. The problem has spread to the fan-fiction community as well. With AI able to churn out alternative plots from a few quick prompts, communities that once thrived on original work and deep fan engagement are now grappling with a flood of low-effort content.
That blandness isn’t just a software design choice. It’s a business model. “Literature is not a particularly large market,” she adds. “So, companies are more likely to invest in tools that produce consistent language because that has broader commercial value.” In other words, originality doesn’t scale. Beige sells better. Still, Vara doesn’t think artists need to panic. At least, not yet. “In my reporting, it became clear that readers are currently much more interested in writing by human writers than AI slop,” she admits.
Still, Searches nudges us to rethink what counts as authorship. Vara mentions coming across AI enthusiasts arguing that calling machine-generated creativity invalid was ableist. It was, after all, helping people write who never had access before. One mother, for instance, used AI to create a custom bedtime story when nothing on the bookstore shelves felt right for her son. Vara doesn’t dismiss these arguments, but she does complicate them. “I think the conversation needs to be broader than ‘Can ChatGPT generate a story that my child is going to find interesting?’,” she says. “I think the question has to be, like: Even if there is some nominal value or superficial value in what is being generated, what cost goes along with that?”
The writer doesn’t downplay the very real dangers of AI bias either: language models that reinforce stereotypes, misgender users, or disappear LGBTQ identities entirely. At one point, she asks ChatGPT for a history of art and, surprise, surprise, it gives her a lineup of white male artists. Push it for more diverse names, and the bot confidently spits out facts that are totally wrong. It’s both hilarious and horrifying. For Vara, it’s not just a tech glitch. It’s a warning about how easily AI can fumble or, worse still, recycle the same old hierarchies we’ve spent decades trying to dismantle. And when this is the default tool for a whole generation of AI natives, it raises the stakes.
And if bias doesn’t get you, carbon might. Vara is not the first to flag the growing environmental cost of AI. While it’s tough to pin down exact numbers, since usage varies across models and tasks, research shows that generative AI draws massive amounts of electricity and water to train, fine-tune, and deploy. Even Sam Altman has admitted that a single ChatGPT query consumes roughly 0.34 watt-hours of electricity and about one-fifteenth of a teaspoon of water. Sounds tiny…until you multiply that by billions of queries a day. “Anytime any one of us uses AI, we are contributing to that,” she says. Vara doesn’t pretend to live above all this: she has stepped back from the major social media platforms but still uses digital tools. Nor does she wave flashing red flags to warn us about tech’s dark side.
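For a rough sense of that scale, here is a back-of-envelope sketch in Python using the per-query figures Altman has cited; the one-billion-queries-a-day volume is an illustrative assumption, not a number from the book or the interview.

```python
# Back-of-envelope scale check using the per-query figures quoted above.
# The daily query volume is an illustrative assumption, not a reported figure.
WH_PER_QUERY = 0.34              # watt-hours of electricity per query (Altman's figure)
TSP_PER_QUERY = 1 / 15           # teaspoons of water per query (Altman's figure)
QUERIES_PER_DAY = 1_000_000_000  # assumed: one billion queries a day

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000    # watt-hours -> megawatt-hours
daily_litres = TSP_PER_QUERY * QUERIES_PER_DAY * 0.00493  # 1 US teaspoon ≈ 4.93 ml

print(f"~{daily_mwh:,.0f} MWh of electricity and ~{daily_litres:,.0f} litres of water per day")
# At that assumed volume: roughly 340 MWh and about 330,000 litres, every single day.
```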
Instead, she illustrates it with a personal story: the time she and her friend Sophie met a boy from an AOL chatroom, who turned out to be younger, awkward, and definitely not the 18-year-old heartthrob they were expecting. Things got weird fast. There was a sketchy corner-of-the-theatre moment, a very unwelcome request, and an escape plan that involved hiding in the girls’ bathroom until he left. A tidy conclusion would be: they learned their lesson and never talked to strangers online again. But that’s not how it goes. They went right back to the chatrooms. Because that’s what we do. We override caution with curiosity. We say yes to “accept all cookies”, give our apps access to everything short of our bloodstream, and then act surprised when it gets weird. Vara quietly reminds us that we’ve always been this way. And odds are, we’re not about to stop now.
If the internet once offered access and attention, today’s AI tools offer something even more intimate: an instant reader, writer, friend, therapist, ghost. “If people are spending time interacting with a corporate-owned chatbot,” Vara asks, “we have to ask what information it is getting from them. And how is that being used?”
When asked what she’d say to the tech bros building these systems, Vara doesn’t hesitate. “Humility,” she says. “There’s this assumption that financial success means broad expertise. But that doesn’t seem to be the case.”
It’s that clear-eyed, good-humoured realism that holds Searches together. It’s not always a breezy read. Half the book is made up of AI responses to Vara’s own essays, which, by design, get a bit repetitive. This feedback-loop format may be exactly her point: how the voice of the machine can start to mimic, flatten, and eventually dull our own.
But there’s also real wit in the way she builds it. Some chapters take the form of search histories; others display AI-generated images. One, as I mentioned before, is just a long list of Amazon buys, gently mocking herself (and us) for the ways we surrender to frictionless convenience. It’s a structural choice that feels like storytelling through clicks, prompts, loops, receipts. Vara insists Searches isn’t meant to resolve anything. “I don’t have answers,” she says. “I just want to open up the questions.”
Searches: Selfhood in the Digital Age is out on harpercollins.co.in; ₹599