Why Taste Matters More Than Speed in the AI Era
Lena Vollmer
There's a conversation happening in tech right now that may be the most important of the decade. It's about taste.
Not taste in the fashion sense or the food sense, but taste as Steve Jobs meant it: the ability to discern what's good from what's merely functional, what's elegant from what's just correct. The instinct for quality that no amount of technical skill can replace.
For decades, the tech industry rewarded speed above everything. Ship fast. Type fast. Code fast. The fastest engineer was the best engineer. The bottleneck was always execution: how quickly could you turn an idea into working software?
AI has obliterated that bottleneck. And in doing so, it has revealed something that was always true but easy to ignore: the quality of what you build was never limited by how fast you could type. It was limited by how well you could think.
The great equalizer and the great revealer
Here's the paradox of AI tools. They democratize execution while simultaneously exposing the gap between people who have taste and people who don't.
When everyone has access to GPT-4, Claude, Cursor, Copilot, and a dozen other AI-powered tools, the playing field for raw output flattens. A junior developer can scaffold an entire application in an afternoon. A marketing manager can generate fifty ad variations before lunch. A designer can produce a hundred logo concepts in the time it used to take to sketch three.
But look at what's actually being produced. Most of it is mediocre. Not broken, not ugly, not wrong. Just... mid. The AI did exactly what it was asked to do. The problem is that what it was asked to do wasn't very interesting.
This is the taste gap. And it's widening every day.
The people who are producing genuinely remarkable work with AI aren't the ones with the fastest workflows or the most keyboard shortcuts. They're the ones who know what good looks like. They can articulate the difference between a landing page that converts and one that just exists. Between code that's architecturally sound and code that merely passes tests. Between writing that moves people and writing that fills space.
Taste is the new leverage. And unlike coding speed, you can't automate it.
Why most AI output is mediocre
Let's be honest about something. The average AI-generated output is not impressive. We've all seen the blog posts that read like they were written by a committee of middle managers. The code that works but that no senior engineer would actually ship. The designs that look like everything and nothing at the same time.
The instinct is to blame the models. "AI isn't good enough yet." But that's increasingly wrong. The models are extraordinary. The problem is upstream.
The problem is the prompts.
Most people interact with AI the same way they write text messages: abbreviated, context-free, and vague. "Write me a landing page." "Build a login flow." "Make this better." These prompts produce exactly the kind of output you'd expect from instructions that thin.
I've watched developers spend thirty seconds typing a prompt, get back something generic, spend another thirty seconds tweaking the prompt, get back something slightly less generic, and repeat this cycle for twenty minutes. They would have gotten a better result in two minutes if they'd just taken the time to articulate what they actually wanted upfront.
The irony is severe. We have the most powerful creative tools in human history, and we're bottlenecking them with terse, low-context input because we can't be bothered to explain what we're actually thinking.
This isn't laziness, though. It's a design problem.
The input quality problem
Here's something that doesn't get discussed enough: the medium through which you communicate with AI fundamentally shapes the quality of that communication.
Most people interact with AI through a keyboard. And keyboards are slow. The average person types around 40 words per minute. Skilled typists hit 70 or 80. But even the fastest typists are operating at a fraction of the speed of human thought.
So what happens? You abbreviate. You take the rich, nuanced idea in your head and you compress it down to whatever you can be bothered to type. The detailed architectural vision becomes "build me an API." The careful design sensibility becomes "make it look modern." The specific creative direction becomes "write something engaging."
Every time you compress an idea to fit the speed of your fingers, you lose information. Context, nuance, preference, constraint, intention. All of it gets stripped away in the translation from thought to typed text.
This is the input quality problem. The bottleneck isn't the AI. It isn't your taste. It isn't even your ability to think clearly. It's the narrow pipe between your brain and the machine.
Think about it this way. Imagine you hired a brilliant architect to design your house. But instead of sitting down for a two-hour conversation about how you live, what you value, and how light moves through your current space in the morning, you handed them a Post-it note that said "modern, 3 bed, nice kitchen." You'd get a perfectly competent house that has nothing to do with you.
That's what most people are doing with AI. Not because they lack taste, but because the input channel is too slow to express it.
Speaking is thinking at full bandwidth
The average person speaks at about 150 to 170 words per minute. That's roughly four times faster than typing. But the difference isn't just speed. It's qualitative.
When you speak, you think differently. You don't edit yourself into brevity before you've finished the thought. You let ideas develop. You add the "oh, and another thing" that you would have deleted from a typed prompt because it felt like too much effort to type out. You include the context, the reasoning, the "what I'm really going for here is..." that transforms a generic prompt into a specific one.
I've seen this play out hundreds of times. A developer types: "Refactor this function to be more efficient." They get back a generic optimization. The same developer, speaking, says: "So this function is called about ten thousand times per second in our hot path, and right now it's allocating a new array every time. I want to refactor it to use an object pool, but I need to be careful about thread safety because we're running this across multiple workers. Also, the function signature can't change because there are about forty call sites." The AI produces something genuinely useful, because it was given enough to work with.
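The object-pool pattern that developer described is a real technique, and it's worth seeing why the extra context matters. Here's a minimal sketch in Python of what the AI might produce from the detailed prompt; the `BufferPool` name and the sizes are illustrative, not from any actual codebase:

```python
import queue


class BufferPool:
    """Reusable buffers for a hot path, so each call doesn't
    allocate a fresh list. queue.Queue does its own locking,
    which keeps the pool safe across multiple workers."""

    def __init__(self, pool_size: int, buffer_len: int):
        self._buffer_len = buffer_len
        self._pool: "queue.Queue[list[int]]" = queue.Queue()
        for _ in range(pool_size):
            self._pool.put([0] * buffer_len)

    def acquire(self) -> list[int]:
        try:
            return self._pool.get_nowait()
        except queue.Empty:
            # Pool exhausted: fall back to allocating rather
            # than blocking the hot path.
            return [0] * self._buffer_len

    def release(self, buf: list[int]) -> None:
        # Reset before reuse so stale data never leaks.
        for i in range(len(buf)):
            buf[i] = 0
        self._pool.put(buf)


# Call sites keep the same acquire/use/release shape,
# so the existing function signatures don't have to change.
pool = BufferPool(pool_size=4, buffer_len=8)
buf = pool.acquire()
buf[0] = 42
pool.release(buf)
```

Notice that every design choice in the sketch (the thread-safe queue, the non-blocking fallback, the unchanged call-site shape) traces directly back to a sentence in the spoken prompt. None of it is recoverable from "refactor this function to be more efficient."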
That second prompt isn't better because the developer thought harder. It's better because they had the bandwidth to express what they were already thinking.
This is the part that surprises people. You don't need to become a better thinker to get better AI output. You need a wider pipe between your brain and the prompt. Most of your taste is already there. It's just getting compressed into nothing by the time it reaches the AI.
The developers who get this
The best AI-augmented developers I know have shifted their entire workflow around this insight. They don't type faster. They communicate more.
One engineer I know narrates his entire coding context before asking the AI for anything. He'll spend sixty seconds explaining the codebase architecture, the current problem, the approaches he's already considered and rejected, and what "good" looks like for this specific situation. His AI-generated code rarely needs more than minor edits. His colleagues, typing terse prompts into the same tools, spend more time fixing AI output than they saved by generating it.
Another developer treats every AI interaction like a design review. She doesn't say "add error handling." She describes the failure modes she's worried about, the user experience she wants when things go wrong, the logging requirements for debugging in production, and the specific edge cases she's seen in similar systems. The result isn't just error handling. It's error handling that reflects her judgment.
These people aren't prompt engineers in the "magic words" sense. They aren't using special formatting or secret tokens. They're simply expressing their full thinking instead of an abbreviated version of it. They have taste, and they've found a way to actually transmit that taste to the AI.
The pattern is consistent: more context in, better output out. Not more prompts. Not more iteration. More context per prompt.
Thinking out loud as a creative advantage
There's another dimension here that goes beyond efficiency. Speaking your ideas out loud changes the ideas themselves.
Psychologists have studied this for decades. Verbal externalization of thought, literally saying what you're thinking, activates different cognitive processes than silent internal deliberation. When you explain your reasoning out loud, you catch gaps you wouldn't have noticed. You make connections between ideas that were sitting in separate mental compartments. You refine your thinking in real time.
This is why rubber duck debugging works. It's why the best brainstorming happens in conversation, not in isolation. It's why therapists ask you to talk through your problems instead of just thinking about them.
Applied to AI interaction, this means that speaking your prompts doesn't just increase the quantity of information you transmit. It increases the quality of your thinking. The prompt gets better not just because you said more words, but because the act of speaking forced you to clarify what you actually wanted.
I've experienced this myself. I'll start speaking a prompt with a vague idea of what I want, and by the time I've finished talking, I've refined the idea into something much more specific than what I would have typed. The speaking is the thinking. The prompt is a byproduct of the thought process, not a translation of a pre-formed thought.
This is a genuine creative advantage. The people who think out loud while prompting AI are doing two things simultaneously: communicating with the AI and refining their own ideas. The people who type are doing one thing at low bandwidth and skipping the other entirely.
The real workflow shift
All of this points to a fundamental change in what it means to be productive in the AI era.
The old model: think, then type fast, then iterate on output. The new model: think out loud, transmit your full context, get it right the first time.
The first model optimizes for speed of input. The second optimizes for quality of input. And when AI is your execution layer, the quality of your input is almost everything.
This is where tools start to matter, but not in the way most people think. The important tool isn't the AI model. It's whatever sits between your brain and the model. The input layer.
I use Aqua Voice for this. It's designed specifically for this kind of high-bandwidth interaction with AI tools, not transcription in the traditional sense, but voice as a thinking and communication medium. It reads what's on your screen, so when I'm speaking a prompt while looking at code or a design, it already knows what I'm referencing. I say a variable name and it comes out spelled correctly because it can see my editor. I don't have to describe the context that's literally right in front of me.
But the specific tool matters less than the principle: find a way to transmit your full thinking to the AI, not an abbreviated version of it. If you're compressing your ideas to fit the speed of your keyboard, you're leaving your best work on the table.
Taste can't be automated, but it can be expressed
The conversation about taste in tech is ultimately a conversation about what humans are still for. If AI can execute, what's left for us?
The answer is judgment. Direction. Knowing what good looks like and being able to articulate it clearly enough that a machine can build it.
But here's the thing about articulation: it depends on bandwidth. A film director with extraordinary visual taste is useless if they can only communicate through handwritten letters to their cinematographer. A product leader with perfect intuition for user experience is hamstrung if they can only express it in bullet points on a Jira ticket.
The people who will thrive in the AI era are the ones who solve both sides of the equation. They develop taste, yes. They study good design, read widely, build intuition through experience. But they also solve the transmission problem. They find ways to get their full, uncompressed thinking into the AI's context window.
Speed still matters, of course. But the speed that matters has changed. It's not how fast you can type code. It's how fast you can transmit a complete, nuanced, high-context idea to an AI that can execute it.
The developers writing the best AI-assisted code aren't the fastest typists. They're the ones who've figured out how to express the full texture of their judgment: the constraints, the preferences, the tradeoffs, the "I know this sounds picky but..." details that separate great software from adequate software.
Taste was always the differentiator. We just couldn't see it when execution speed was the bottleneck. Now that AI has removed that bottleneck, taste is all that's left. The question is whether you can express yours.
Try Aqua Voice free and see what happens when you stop compressing your ideas.