The Cost of Taste
[Sept. 9th, 2025]
There’s a certain anonymity in my writing of late.
In high school, I could easily identify a paragraph as mine: a rhythm to the pacing or a particular pairing of words – perhaps only I could see it, but I could see it clearly. Lately, my writing has been laced with the brutality of efficiency and the laziness of an LLM era. I can’t recognize my own voice.
I find that difficult to accept, as the way one writes has always been unmistakably personal to me. Growing up, I could often tell which of my friends had written which essay without ever glancing at the name on the page. One friend filled every paragraph with bursts of imagery while another peppered her prose with short, choppy, three-word sentences. A third promiscuously scattered semicolons. Regardless of how much each person even cared about writing, we would all inevitably press our fingerprints onto the page.
In college, writing felt like a waste of time. Every humanities class was a checkbox against the HASS requirement, and every essay was a race to get back to an ever-so-important algorithms problem set. And of course, as LLMs became very capable assistants, I could no longer tell the difference between friends’ essays, at least in a room full of CS majors at MIT. We all had the same ghost writer after all.
A couple of weeks ago, my friend Julia and I went gallery hopping in Chelsea one evening. As we strolled through the hallways and cosplayed as art people, she asked, “What do you think are the most important traits for success in the AI era?” We pondered the question for a few minutes, joking that maybe one day all human hires will be personality hires, so we will have no problem at all. She later sent me a video which approached the topic a bit more seriously: mathematician Po-Shen Loh argued that the first thing AI steals is taste.
In the days since, I’ve spent some time pondering – what exactly is taste? Colloquially, we often use the word to classify our preferences, as in food or music. My music taste includes Daft Punk but not MCR, yes to Sabrina Carpenter but no to Olivia Rodrigo. But what AI steals isn’t just a crude tally of likes and dislikes. AI erodes our awareness of why something resonates, and perhaps more importantly, the ability to consciously or subconsciously exercise that awareness – to let it guide the words you put down on a page, the jokes you make in conversation, or even the questions you decide are worth asking.
In a tweet from several months ago, Professor Patrick Hsu suggested that “developing taste requires large-scale pre-training and extensive test-time compute.” Pre-training is the exposure phase. In the context of writing, it’s reading Donne, Dickinson, Didion, and dozens of others until an internal compass begins to form.
But pre-training alone isn’t enough. Taste emerges at inference time, in the process of putting words on the page, making choices, rejecting some of them, and trying again until you can read it the next day and feel something. In high school, I would happily spend significant compute budget on a single essay, generating at least ten times more tokens than the final output.
Now with an LLM on-hand, my budget per token has collapsed. I jot a handful of bullets, the model expands them into fluent prose, and I skim every other sentence to make sure it’s coherent. The ratio flipped toward more output tokens than thoughtful ones of my own.
While the effect is painfully obvious in writing, the cultivation and exercise of taste goes far beyond this. In the way that Patrick Hsu likely meant it, taste is supremely important in research, something I have been thinking about a lot as a soon-to-be PhD student. A good researcher must have some underlying intuition that guides their decisions through the inherently open-ended pursuit of expanding human knowledge. Reflecting on my undergraduate research, I see that the turning point of a project was often catalyzed by an intuitive guess from me or a mentor, whether in the choice of method or in the choice of application.
In a recent fascination with how we create knowledge, I read The Beginning of Infinity, where David Deutsch argues that the engine of human progress is our capacity to propose good explanations. Beyond the requirement that such explanations be testable, what struck me was Deutsch’s emphasis on their being inherently creative. They require a bold, and fundamentally stochastic, leap.
When the space of possible explanations is effectively infinite, what guides us toward making a bold leap in one direction over another? Why does a researcher propose this conjecture, pursue this method, or choose this application instead of that one? While some of these decisions are ultimately resolved by experiment, there must first be an internal compass that determines which ideas even surface in our minds and which feel worth pursuing. Progress, then, is fundamentally about the continual exercise of taste. The cost of losing this, as a human race, is immeasurable.
I believe that in the AI era, one’s ability to deliberately cultivate and exercise taste is a crucial trait for success. In a world where generation is cheap and infinite, what matters is having opinions about what is worth pursuing, along with clarity on how and why to pursue it.
To me, this is a harder challenge than it seems at first glance. It took writing this blog post on a 6-hour cross-country flight with no WiFi, cut off from all LLMs and even the thesaurus, to force me to massage the words on my own. And while I don’t rely on LLMs for everything, I do catch the reflex to bounce an idea off of GPT after thinking for just 30 seconds, short-circuiting any nascent originality.
As I start my PhD, I hope to reverse that reflex. I want to increase my inference-time compute budget, spending more tokens of thought even if it feels slow or inefficient. It’s a privilege to build a voice and vision I recognize as my own.