DEV Community

Maxim Saplin

XYZ% of Code is Now Written by AI... Who Cares?

  • Microsoft CEO Satya Nadella said that "as much as 30% of the company’s code is now written by artificial intelligence" (Apr 2025).
  • Anthropic's CEO made a forecast that "in 12 months, we may be in a world where AI is writing essentially all of the code," (Mar 2025).
  • Google CEO stated that "more than a quarter of code they've been adding was AI-generated" (Oct 2024).

When I see this sort of title, I often have a sense that the XYZ figure carries the connotation of a software-engineer replacement rate: code written by AI is code not written by humans, so we supposedly don't need those 30% of humans typing on their keyboards. With the media's focus on sensationalism and the competition for readers' attention, I don't see why they wouldn't optimize for more drama...

While this sort of speculation is curious (how can those CEOs measure the metric beyond making guesstimates based on some clues/heuristics?), I don't see much meaning in it beyond gauging the adoption rates of AI coding tools...

100% of Code is Generated, 70% of Code is Deleted After Review

Let me give you a recent example. I worked on a small project creating a local Python interpreter wrapped as an MCP tool, think Code Interpreter for ChatGPT.

Why even bother, aren't there Python tools already? There are, yet they either execute Python in the local environment, which is dangerous, or rely on Docker or remote environments that take some effort to set up.

The idea was to wrap the custom-made, sandboxed, local Python interpreter from HuggingFace's smolagents library into an MCP Server.
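The core idea is easy to sketch without the library: smolagents' local interpreter gains its safety by inspecting submitted code before running it, rather than handing it straight to `exec`. Below is a toy illustration of that approach written from scratch for this post — the function name, the allowlist, and the checks are my own, not smolagents' actual API, and this sketch is *not* a real security boundary:

```python
import ast

ALLOWED_IMPORTS = {"math", "statistics"}  # toy allowlist, not smolagents' default

def run_sandboxed(code: str) -> dict:
    """Toy sandbox: reject disallowed imports and dunder attribute access,
    then exec with a stripped-down set of builtins. Illustrative only —
    a determined payload can still escape this."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([a.name for a in node.names]
                     if isinstance(node, ast.Import) else [node.module])
            for name in names:
                if (name or "").split(".")[0] not in ALLOWED_IMPORTS:
                    raise PermissionError(f"import of {name!r} is not allowed")
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            raise PermissionError("dunder attribute access is not allowed")
    env: dict = {"__builtins__": {
        "print": print, "range": range, "len": len, "__import__": __import__,
    }}
    exec(compile(tree, "<sandbox>", "exec"), env)
    env.pop("__builtins__", None)
    return env  # variables left behind by the snippet, e.g. env["result"]
```

For example, `run_sandboxed("import math\nresult = math.sqrt(16)")` returns an environment with `result == 4.0`, while `run_sandboxed("import os")` raises `PermissionError`.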

After cloning the smolagents repo, investigating the codebase, and creating a small example of using the interpreter in isolation, I instructed Cursor's Agent to create a new MCP Server project. I showed it the example and the interpreter code, and gave it a link to Anthropic's MCP Server docs. The agent created a complete, linter-warning-free code base.

Yet over the next couple of hours I iterated on the produced code. I removed most of the files and lines, using AI actively, both autocompletion and chat, i.e. I typed very little Python myself.

Can I state that 100% of the code was AI-generated? Probably. Does this imply that:

  • I was not needed in the process of building the software (100% replaced by AI)?
  • Or did I get a 300x productivity boost, since as an average human I can type 30 words per minute while SOTA models generate ~9,000 WPM (~150-200 tokens per second)?
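The 300x figure is a back-of-the-envelope calculation; assuming ~0.75 English words per token (a common rule of thumb, my assumption) and the upper end of the quoted token rate:

```python
# Rough throughput comparison (assumption: ~0.75 words per token)
human_wpm = 30                               # average typing speed
tokens_per_second = 200                      # upper end of the SOTA range
model_wpm = tokens_per_second * 60 * 0.75    # = 9000 words per minute
print(model_wpm / human_wpm)                 # → 300.0
```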

Here are the stats:

  • 1st version by Claude 3.7/Cursor Agent: 9 files, 1062 lines, 45 comments, 158 blanks
  • Final modified and published version: 4 files, 318 lines, 9 comments, 79 blanks
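Stats like these come from cloc-style line counters; a minimal stand-in (my own throwaway helper, not the tool actually used for the numbers above) is a few lines of Python:

```python
def py_line_stats(source: str) -> dict:
    """Classify each line of Python source as code, comment (leading '#'),
    or blank — a crude approximation of what cloc-style tools report."""
    stats = {"code": 0, "comments": 0, "blanks": 0}
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            stats["blanks"] += 1
        elif stripped.startswith("#"):
            stats["comments"] += 1
        else:
            stats["code"] += 1
    return stats

print(py_line_stats("x = 1\n\n# a comment\ny = x + 1\n"))
# → {'code': 2, 'comments': 1, 'blanks': 1}
```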

While iterating on the code base, I used my brain cycles to make sense of what the AI had produced, also gaining a better understanding of what actually needed to be built, and that takes effort and time. Sometimes writing code is easier than reading it. Besides, writing code (or rather, making low-level modifications) serves a very important function: it teaches you the code base and gives the task time to sink in and make sense.

In the end, I dropped ~70% of the AI-generated code. Does that tell us much? Does it mean AI code is junk because it had to be thrown away? Generated in minutes, reworked/debugged for hours? I don't think so. The rework percentage alone isn't a telling metric, just like the percent-generated metric.

One might say the example is isolated: creating a small project from scratch is a corner case rarely met in real life. That's true. Yet I think it makes a relevant point and puts some numbers on it. The same tendency to remove and rework a lot of generated code shows up when maintaining a large code base. The larger the scope of the task, the more agentic the workflow, and the more lines and files are touched, the more you have to fix. For some reason, even the best AI tools still have a hard time getting the "vibes" of a project: they struggle to make consistent changes that follow the "spirit" of the code base.

Building Software is not About Writing Code

It's about integrating and shipping code. Did you know that at some point Microsoft had a 3-year release cycle for Windows, and that "on average, a release took about three years from inception to completion but only about six to nine months of that time was spent developing 'new' code. The rest of the time was spent in integration, testing, alpha and beta periods" 1, 2?

Writing code is just one very important part, yet not the only one. Did you know that, according to a recent Microsoft study, developers spend just 20% of their time coding/refactoring? (That's where the XYZ%-AI-generated metric lands.)

The Gap Between Developers’ Ideal vs Actual Workweeks

Working with teams and customers building software, I see many areas where AI can barely help.

What if your stakeholders become unresponsive, play internal politics, and can't make up their minds about the requirements? Will ChatGPT (or some fancy "agent") chase the client, flush out all the contradictions in the requirements (seven red lines, all perpendicular, some green, one transparent), communicate with the whole team, and mitigate any of the core risks?

Even if you have what seem to be refined requirements... How much time will it take for every individual team member to grasp the "thing" he or she is trying to achieve? How much time will it take for the team to find internal consensus on how to organize around the goal, break down the scope, and bridge business requirements to implementation detail? Will Gen-AI tools accelerate team dynamics, leapfrogging from forming and storming to norming and performing in days, not weeks?

I see it all the time: people are slow thinkers; there are natural constraints on how much information our brains can process, how many social connections we can build and maintain, etc. Generating lots of text that few care to read (and fewer try to understand) doesn't solve anything.

Given the current state and trajectory of AI tools in software development, I see them as isolated productivity tools where the human is the bottleneck. There's little progress with AI agents filling all the gaps a human worker covers in the daily routine. Even at a higher level of AI autonomy, people would still need time to make up their minds, evolve their perspectives, talk, and agree.

Productivity

Ultimately, businesses want more work done with less effort and money. Adopt AI in dev teams and cut costs/headcount by some magic number (for some reason it's always 20-30%)? It doesn't seem to work that way. There's no definitive demonstration of a step change in developer productivity across the industry. I like these two examples, studies into developer productivity with AI from last autumn:

P.S. Did you know that, as of April 2025, ~80% of the code in the popular AI coding assistant Aider is generated with Aider itself? ;)

P.P.S. Until the "P.P.S." started, the article was exactly 1,200 words; it took me several days to contemplate and 4 hours to write. GPT-4.1 would have needed 12 seconds to generate a blog post of similar size :)

Top comments (7)

Ben Halpern

It does kind of remind me of old school "lines of code" metrics in general.

Nevo David

I relate so much to tossing out big chunks of AI code and still needing to mess with it for hours even when the first draft lands fast. Do you think AI tools really free up time, or just shift where the effort goes?

Maxim Saplin

I do believe they free up time and allow me to do much, much more work than pre-AI. It's about building intuition for what works and what doesn't, and avoiding the failure modes... such as entering the "dumb" coding loop of copying/pasting AI-generated code back and forth and losing connection with what's happening.

With AI it's the verification problem. Efficiency and effectiveness come from learning how to verify and integrate generated code. You can be sure that models will hallucinate or make mistakes in the most awkward way somewhere between the lines. The question is how you efficiently fix those issues, or intentionally build in a way where AI-generated code has less impact should it contain bugs...

There's one contradiction; I think it's related to the illusion-of-explanatory-depth bias... On the one hand, devs are used to relying on third parties: they take libraries as-is, and nobody expects you to fully understand the inner workings of React's Virtual DOM to use it. One rarely goes under the hood. Without relying on someone else's work, one can't build complex systems. With AI, we might often fall for this over-reliance fallacy without developing the discipline of systematic review.

The_Amazing_Nana

Exactly. AI is just a tool, not the programmer, and not even the problem — it's just a tool! The developer still needs to understand the issues, analyze what the AI is coding, and be able to wield this tool with the right prompts. It's amazing to study with: it helps a lot and answers our questions in just a second. Developers need to stop making a fuss about it!

I really loved this article!

𒎏Wii 🏳️‍⚧️

I've said this before and I can say it again:

If AI really gets so advanced that we don't need coders any more, then AI is advanced enough to do almost everything and can probably also control robots well enough that even manual labour is covered by AI.

At this point we're either living in an egalitarian Utopia where everyone just chills and pursues hobbies, even if those are things that AI could do better; or a techno-feudalist dystopia where you are either rich or don't get a say anyway.

So really, as a programmer specifically, there isn't that much to worry about, it's really just a political issue.

Alex Lohr

It just means the ratio of lines of code in question was not written with hindsight, reason or sanity. At least they warn potential customers.

Thomas-Router

I will provide an example of an AI-completed project. It's a small, amateur project, but it does something important. I created it by myself, and 99% of the code is AI-written, after I created detailed plans for each module.

I Built Dr. Headline – An Autonomous AI Agent Publishing Daily Factual Political News Briefings

dev.to/thomas-router/dr-headline-a...

This is an open-source, community-run, non-profit project. I am an amateur. I have a novel idea, but my technical expertise is not as deep as many of you. If we can work together, we will achieve something really great!
