After reading yet another AI hype article--this one by Kevin Roose at the New York Times (gift link here)--I feel compelled to respond.
Let's look at the title:
We could easily swap out "Not a Coder" with just about any profession. Not a writer? Not a doctor? Not a lawyer? Not an engineer?
The idea that you can wholesale replace an entire field or area of expertise is wrong on a number of levels. So here we go.
1. Current AI Is Not Intelligence
All the evidence we have suggests that current AI does not amount to actual intelligence. See Gary Marcus's prescient 2022 take, Deep Learning Is Hitting a Wall, for more on this. What has happened is that existing LLMs turn out to do really well given huge amounts of data and compute power.
2. AI vs LLMs
It's also important to note that AI and LLMs are not synonymous, even though it is common to see journalists and others in popular media conflate the two terms. "AI" is a very large field and applies to much more than just Large Language Models. For example, DeepMind's AlphaFold used deep learning to predict protein structures (see the Nature article) and this resulted in the 2024 Nobel Prize in Chemistry for members of the team.
But that is not what LLMs do. Rather, they ingest large amounts of data--think downloading most of the internet such as Reddit and Wikipedia, not to mention copyrighted books--run them through algorithms, do fine-tuning, and the result is a probabilistic mathematical function that predicts the next word in a sequence. In other words, LLMs try to predict the next response to any user input. There is not currently any actual "intelligence" there.
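To make "predict the next word" concrete, here is a deliberately tiny sketch; the two-word contexts and probabilities are made up for illustration, and a real model computes these distributions with billions of learned parameters rather than a hand-written table.

```python
import random

# Toy "language model": for a given context, a probability distribution
# over possible next words. Real LLMs learn this from enormous amounts of text.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(context, steps):
    words = list(context)
    for _ in range(steps):
        dist = next_word_probs.get(tuple(words[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        # Sample the next word in proportion to its probability.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"], steps=3))  # e.g. "the cat sat on the"
```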
However, modern LLMs are powerful enough that they can seem intelligent and indeed do well on standard measures of intelligence, such as taking the SAT or answering math questions. Yet if I preloaded every SAT question ever written into a computer and it then answered them all correctly, is that intelligence or just memorization? It's not really intelligence.
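To put the memorization point in code form, here is a toy sketch (the answer key and questions are invented for illustration): a program that aces a test by looking up stored answers has recalled something, not reasoned about it.

```python
# A "test taker" that scores perfectly through recall alone.
answer_key = {
    "Question 1": "C",
    "Question 2": "A",
    "Question 3": "D",
}

def take_test(questions):
    # Perfect on anything in the key, clueless on anything outside it.
    return [answer_key.get(q, "no idea") for q in questions]

print(take_test(["Question 2", "Question 3", "A question it has never seen"]))
# ['A', 'D', 'no idea']
```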
Similarly, LLMs are a long way away from actual AGI (Artificial General Intelligence), a theoretical form of AI that matches or exceeds human reasoning.
So AI is a real thing, a deep field with lots of applications, of which LLMs are just a subset, albeit the most popular one amongst the general public at the moment. Take a lot of data and run it through algorithms, and the result is something that tries to predict the right response.
Andrej Karpathy's Intro to Large Language Models video is one of the best resources out there to actually explain how LLMs work, if you want to dive deeper.
3. Vibe Coding
Vibe coding means using an LLM in a modern text editor to generate all your code for you. Rather than actually code, you say things like, "Build me a website that helps choose lunch." And away the LLM will go, churning out code for you. From there, you can ask it to refine and use specific technologies or features.
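To give a flavor of what comes back from a prompt like that, here is roughly the kind of code an LLM tends to churn out. This particular sketch assumes Flask and a hard-coded list of lunch options, both of which are my own illustrative choices, not anything from the original article.

```python
from random import choice
from flask import Flask

app = Flask(__name__)

# Hypothetical lunch options; a first vibe-coded pass usually hard-codes
# something like this until you ask the LLM to add a database.
LUNCH_OPTIONS = ["tacos", "ramen", "falafel", "pizza", "salad"]

@app.route("/")
def pick_lunch():
    return f"<h1>Today you should eat: {choice(LUNCH_OPTIONS)}</h1>"

if __name__ == "__main__":
    app.run(debug=True)
```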
It is possible in some contexts to have a working website this way, provided it is doing routine tasks and you don't have to really trust it. Why? Because the LLM has been trained on billions of lines of code by human programmers, so it is quite good at trying to predict how to do that. However, two major problems emerge.
First, what happens when you want to do something that needs to be 100% correct? For example, building a website that handles payments in some way? Or manages user information that must be private? LLMs still wildly hallucinate, and if you are not knowledgeable yourself, there is no way to tell when your code has deep-seated errors. Second, if you've used an LLM to write hundreds of lines of code you don't understand, there is no way to go in and meaningfully change things yourself. An LLM will want to rewrite and re-predict most of it, not laser-focus on a particular area. And if you don't know what the code is doing, how can you even tell the LLM where to start?
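As one hypothetical illustration of a deep-seated error that is easy to miss if you can't read the code: generated payment logic that does currency math with floats looks perfectly plausible and still drifts by a cent.

```python
# Looks reasonable, and an LLM will happily generate it, but binary floats
# can't represent amounts like 19.99 or 0.0825 exactly, so totals drift.
subtotal = 19.99
tax = subtotal * 0.0825
print(subtotal + tax)  # prints a long float like 21.639175..., not a clean 21.64

# The boring, correct version uses exact decimal arithmetic and explicit rounding.
from decimal import Decimal, ROUND_HALF_UP

subtotal = Decimal("19.99")
tax = (subtotal * Decimal("0.0825")).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(subtotal + tax)  # 21.64
```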
This speaks to the larger truth that most of programming is reading and understanding existing code. It is not starting with a blank canvas and creating things from scratch. Programmers typically work in teams on complex codebases with millions of lines of code. If just one person puts in a serious mistake, it can cause the whole artifice to stop functioning correctly or, worse, to have hard-to-find bugs that result in big problems.
If I were to ask 10 experienced developers today to build the same membership website that handles user accounts and payments, they would all do it slightly differently. Which way is right or wrong? If you have no programming knowledge yourself, it is hard to weigh in on those decisions, let alone evaluate any of them. And if this is hard to do even with experts, how can you expect a statistical function, whose output you lack the capacity to evaluate, to do better on its own? That's a long way of saying that someone, somewhere, needs to actually understand--or have the ability to understand--what is actually happening.
4. Using LLMs Today
I've spent most of this post talking against LLMs, but it's important to note that I personally use them daily, not as a replacement for my thinking or writing or programming, but as a drunken partner. By that, I mean LLMs are fantastic for generating ideas based on a topic. For example, "I want to build a membership website with payments and user accounts, what are 5 different ways I could architect it." This is useful and something I have done recently, because it provides an overview of existing approaches. From there I can dig deeper, asking questions about design decisions, and even asking for specific pieces of code. But the "drunken" part of my description is important: I don't trust the LLM. And I shouldn't. It is a guessing partner that can help me be creative, sometimes can find bugs or help with things, but it all has to be run through a human filter that has some domain knowledge.
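For what it's worth, that brainstorming workflow needs nothing fancy. Here is a rough sketch assuming the OpenAI Python client; the model name is a placeholder, and the key point is that the reply is treated as ideas to filter, not answers to trust.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "I want to build a membership website with payments and user "
                "accounts. What are 5 different ways I could architect it?"
            ),
        }
    ],
)

# Brainstorming output for a human with domain knowledge to evaluate,
# not a trusted answer.
print(response.choices[0].message.content)
```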
In this way, I think of LLMs as similar to Steve Jobs's description of computers as "a bicycle for your mind." With them you can travel much further, but they don't replace your mind. And I suspect they never will.
Final Thoughts
I'll end this piece with something I've been thinking a lot about recently. Let's assume an Artificial General Intelligence (AGI) is actually created that far exceeds human capabilities. How would we even know that? Even if it came to seemingly correct decisions, if we do not understand its reasoning--and with current LLM approaches it is impossible to derive how, exactly, the models reasoned about something--then what value does it have?
We are already in a world awash with more facts and knowledge than ever before, and yet our everyday and political discourse has never been dumber. I highly doubt there is an actual AGI out there. Instead I think there are increasingly powerful tools that enable educated humans to be better (and learn more) about what they are trying to master.
Final, Final Thoughts
As a fun exercise, try using an LLM to write an article like the one mentioned at the top of this post. Prompt it with, "Write a New York Times article in the style of Kevin Roose that explores vibe coding and the idea that you don't have to be a programmer but can instead just use artificial intelligence." The result won't be as good as what he wrote, but it will be pretty close. And more importantly, how many readers will really be able to tell the difference between the two? Especially if they just copy and paste the article into an LLM and ask for a short summary of it rather than read the whole thing themselves. 😛
Top comments (5)
This hit home for me. Honestly, I use LLMs all the time but never trust them fully. Do you think there's any way to actually know when we can trust the outputs, or is it always gonna need a real person to check it?
That's the big question. We can trust that the code passes tests, which the AI might write as well. Probably there is a way to measure and track performance, too.
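As a rough sketch of what "trust that the code passes tests" can look like in practice (the apply_discount function here is hypothetical), the human writes, or at least reviews, the assertions, even if the AI drafted the implementation:

```python
# test_discounts.py (run with pytest)
from discounts import apply_discount  # hypothetical AI-generated function under test

def test_ten_percent_discount():
    assert apply_discount(price=100.0, percent=10) == 90.0

def test_zero_discount_is_a_no_op():
    assert apply_discount(price=49.99, percent=0) == 49.99

def test_discount_cannot_exceed_full_price():
    # The kind of edge case a human reviewer insists on, whoever wrote the code.
    assert apply_discount(price=20.0, percent=150) == 0.0
```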
But ultimately, LLMs are not intelligent, they are doing statistical guessing. So "at some point," someone with expertise will have to step in and sort stuff out.
Put it this way: I just came back from DjangoCon Europe, and the consultancies were reporting that clients used AI to get further along in building prototypes or features, but it always ended up as spaghetti code that cost even more for the consultants to fix later.
The bigger the chunk of code the AI writes, the higher the probability that it will hallucinate some bugs into it. All code written by AI must be checked by humans.
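Just to put a rough number on that intuition (the per-line bug rate is invented purely for illustration): if each generated line has a small independent chance of containing a bug, the odds that a large block is bug-free shrink fast.

```python
# Illustrative only: assume each AI-generated line has a 0.5% chance of a bug,
# independently. P(at least one bug in n lines) = 1 - (1 - p)^n.
p = 0.005
for n in (10, 100, 500, 1000):
    print(n, round(1 - (1 - p) ** n, 3))
# roughly: 10 lines -> 0.049, 100 -> 0.394, 500 -> 0.918, 1000 -> 0.993
```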
Thanks for writing this post.
"Vibe Coding" Isn't Right: The phrase "Vibe Coding" doesn't really make sense grammatically. It sounds like the person is coding the vibe, but that's not what's happening. The AI is actually turning the vibe or intent into code, not the person.
A better name would be "Code Vibing": it suggests the user is vibing with the machine, which fits better and is more grammatically sound.
Yeah, today's AI isn't really intelligent in the way we think of human intelligence. But it does help us work faster. For example, we used to write regex manually, now we just vibe what we want, and boom, it's done. That saves time and kills off boring, repetitive tasks. At the same time, AI still struggles with complex regex.
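A concrete (hypothetical) example of that regex workflow: you describe the pattern in plain English, the LLM hands back something like the pattern below, and you still sanity-check it against real inputs before trusting it.

```python
import re

# Prompt: "give me a regex that pulls ISO dates (YYYY-MM-DD) out of a log line"
# The kind of pattern an LLM typically suggests:
iso_date = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

line = "2025-03-14 12:01:07 payment failed, retry scheduled for 2025-03-15"
print(iso_date.findall(line))  # ['2025-03-14', '2025-03-15']

# Still worth checking the edges yourself: it happily "matches" impossible dates.
print(iso_date.findall("9999-99-99"))  # ['9999-99-99']
```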
Sure, LLMs still have a long way to go, and they're NOT getting better as fast as the hype suggests. For instance, OpenAI hasn't made huge leaps since 2022, and a lot of the recent focus has been on multimodal abilities and trying to stay ahead in the AI race. Honestly, some of that feels more like leaderboard chasing than real progress.
But still, it's pushing us to be more competitive, and that's a good thing.
True, LLMs don't think, they predict what text should come next based on patterns. Some newer ones use more advanced techniques like text diffusion (kind of like Stable Diffusion but for language), which is cool, but it's still just math. No true "intelligence" there.
They're like calculators on steroids, super powerful, but they don't know what they're doing.
Yup, LLMs are amazing for brainstorming ideas, drafting content, or exploring possibilities. But when it comes to writing reliable, production-ready code? Not so much. You still need an experienced human to refine, test, and ship.
A lot of people confuse AGI (Artificial General Intelligence) with ASI (Artificial Superintelligence). AGI just means an AI that can perform any intellectual task a human can. ASI, on the other hand, would be something beyond human, smarter, faster, more capable, better at everything. That's still science fiction... for now.
That said, we're working on AGI at Kevin RS. It's early, but we're on the path.
Final Thought
LLM Use Is Kinda Like Gambling Now: Let's be honest, using LLMs feels like gambling sometimes. You ask a question, get a half-useful answer, then rephrase and try again until it finally clicks. And you're paying for each attempt. That's not efficient, it's luck-based. It's like we're stuck in a productivity casino. The line between work and trial-and-error is blurring fast.
Creative Work Can't Be Fully Automated: LLMs still can't create real, deep creative work. Try giving any of my GitHub projects to an LLM and asking it to recreate them; even with a thousand prompts, it won't come close. Without deep context, domain knowledge, and design intuition, the results fall flat. Creativity isn't just pattern-matching, it's context, taste, and originality.
LLMs Are Power Tools, Not Replacements: What LLMs are really doing is amplifying us. They take care of the tedious stuff, like what unpaid interns usually do, and free us up to focus on the business logic, the architecture, the big picture. They make us more human, not less. We're the ones lighting up the world with ideas, the AI is just the flashlight.
AI can't handle ambiguity, it needs clear instructions. Humans can succeed in the gray areas.
LLMs don't have goals, they don't want anything. You do.
They hallucinate, confidently generating wrong answers. That's not intelligence, that's a flaw.
They don't understand consequences, they generate output without caring what happens next.
Using LLMs is like unlocking a new human superpower. They don't replace us, they boost us. But only if we know how to use them right.
You're not being replaced by AI. You're being replaced by someone who knows how to use AI better than you.
Till next time 👋!
What a wonderful comment! I basically agree with you. There is something there, for sure, but it is not a replacement, instead it's a whole new toolset.
As you said, "You're not being replaced by AI. You're being replaced by someone who knows how to use AI better than you." Yup. That sums it up I think!