Rita {FlyNerd} Lyczywek

You’ve got Chat instead of a brain: how AI is messing up junior developers

I wasn’t sure if I should write this and share these thoughts… but hey, soon no one will be reading humans anyway.

No original ideas, just AI-generated content everywhere. So yeah, a bit of a rant this time, but what can you do – that’s the vibe these days. Let’s look at what happens when you rely too much on AI early in your career, and why companies – and, ultimately, managers – are starting to notice. And we are not happy about it.

Disclaimer: I’m not anti-AI. I use tons of AI-based tools myself 😉


It’s 2025. You’re entering IT.

You’ve done some courses, maybe a bootcamp.

You know your way around Python, SQL, and you’ve started doing take-home tasks.

And then you discover ChatGPT. Well, not just now – it’s been around since late 2022 – but let’s say you’re just now discovering ✨~ vibe coding ~✨ & 10xDevelopers. You start asking the chatbot everything: how to write a function, how to set up tests, how to fix this weird error. At first – wow. And then… well.

AI does too much, we think too little

Generative AI in dev work is hot. Everyone’s using it because, let’s be honest – it works. Code comes out faster, and you can test ideas on the fly.

But more and more people are doing something else: instead of understanding what they’re writing – they just throw in a prompt and see what the model spits out. The result? The code works, but no one knows why. And if it doesn’t work?
Next prompt.
And another.

Then just panic 😱

Chat writes your functions, Copilot reviews the code. One AI codes, another runs the checks.
Time to pop some champagne, right?

Mentors and team leads keep saying the same thing:

  • juniors write code they can’t explain,
  • copy stuff blindly without knowing if it’s safe,
  • have no clue how the systems they build actually work.

Of course, it’s not a new problem. Copy-pasting StackOverflow answers without thinking has been around for years. But I haven’t seen this level of mental shutdown in over a decade.

StackOverflow vs now

In real life: someone builds a login feature but doesn’t know the difference between GET and POST. JWT? Rings a bell, but not sure where.
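If that’s you, a quick refresher: GET puts parameters in the URL, where they end up in browser history, proxies, and server logs; POST carries them in the request body. And a JWT is just a signed token the server hands you after a successful login, so later requests can prove who they’re from. A minimal sketch of the POST side – Flask, with a stand-in `check_credentials` helper:

```python
from flask import Flask, request

app = Flask(__name__)

def check_credentials(username: str, password: str) -> bool:
    # stand-in for real auth logic (user lookup, hashed password check...)
    return False

@app.route("/login", methods=["POST"])  # POST: credentials travel in the request body
def login():
    username = request.form.get("username", "")
    password = request.form.get("password", "")
    if check_credentials(username, password):
        return "welcome", 200
    return "nope", 401

# A GET version (/login?user=bob&password=hunter2) would leak the password
# into browser history and server logs – exactly why login forms use POST.
```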

Or they use AI to generate unit tests – without understanding what those tests actually prove, or whether they cover the important paths at all. Not to mention the AI mocking an entire class and then testing the mock itself. Yikes.
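If that last one sounds too absurd to be real, here’s the antipattern in miniature – a hypothetical `PriceCalculator` and a “test” that mocks the very class it claims to cover:

```python
from unittest.mock import MagicMock

class PriceCalculator:
    """The real code we supposedly want to test."""
    def total(self, items: list) -> int:
        return sum(item["price"] for item in items)

def test_total():
    calc = MagicMock(spec=PriceCalculator)  # the real class never runs
    calc.total.return_value = 42            # we tell the mock what to answer...
    assert calc.total([]) == 42             # ...then assert that it answered it
```

The test stays green even if `total` is broken or deleted outright – it proves nothing except that `MagicMock` works.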


When AI breaks, so does the dev workflow

In March 2025, ChatGPT went down hard. Then again a few days ago.

What happened? Chat was offline for just a few hours. Reddit lit up with posts like “Guess I’ll grab coffee.” Half-joking, half-not.

But the real issue?
Without AI, work stops.
Not because things are slower – but because people literally don’t know what to do without hints.

That’s a real problem – especially for folks just learning to code. No base knowledge: no grasp of the language, data structures, or how to debug anything.

AI isn’t the problem.

Treating it as a replacement for knowledge is.


Vibe coding – this is no longer fun

“It works, but no idea why.”

Again – not new.

What’s new? The scale.

my code doesn’t work idk why

But shhh – we don’t talk about how much AI is doing for us.

Teams are starting to feel the pain.

Whole companies are.

This “vibe coding” is everywhere – code that looks fine, runs okay, but underneath? It’s chaos. No tests, no structure, no understanding.

This isn’t about juniors being lazy.

74 GB per day – that's how much info we consume on average

The kind of info a well-educated person in the Middle Ages would take a lifetime to absorb.

Evolution doesn’t move that fast. The brain isn’t lazy – it’s overloaded.

What I mean is:

🧠 AI today gives us a false sense of competence.

Someone using ChatGPT can produce code that looks like mid-level dev work – but can’t maintain it, extend it, or secure it.

Security isn’t just a prompt

AI-generated code often has security flaws. Not because AI is dumb – but because it has zero context:

  • it doesn’t know your compliance rules,
  • it doesn’t know your architecture,
  • it doesn’t know your threat model.

You might get an encryption function that looks fine, but:

  • uses an outdated algorithm,
  • has no input validation,
  • works only for the exact example it was written against.

And you won’t notice – unless you already know how secure code works.
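Here’s a hypothetical illustration (sketched with pycryptodome) of the kind of snippet an assistant can happily produce. It runs, it “encrypts”, and it has all three problems at once:

```python
from Crypto.Cipher import DES  # pycryptodome

KEY = b"8bytekey"  # hardcoded key in the repo – and DES itself has been broken for decades

def encrypt(data: bytes) -> bytes:
    cipher = DES.new(KEY, DES.MODE_ECB)  # ECB: identical input blocks -> identical ciphertext blocks
    return cipher.encrypt(data)          # no validation: raises unless len(data) is a multiple of 8
```

Outdated algorithm, no input validation, and it only works for inputs shaped like the example it came from.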

Open source licenses and other traps

AI often generates code it saw in training – including stuff from GitHub, StackOverflow, and elsewhere.

What does that mean? It might suggest code with unclear licensing. Drop it into your project – and now you’ve got legal issues. Not technical – legal.

A snippet under the GPL? You can’t just use that in a closed-source app – copyleft means that if you ship it, you have to open up your own source too.

AI won’t tell you that.

Wasn’t copy-pasting StackOverflow the same?

Well… not quite. You weren’t feeding it four full files and saying “write me X.” Using code from the internet – snippets, blog fragments – at least forced you to understand what you were pasting: change function names, tweak the logic, test it.

Who are companies hiring now?

We’re seeing a clear shift in hiring. Code alone isn’t enough.

What matters now:

  • architectural and systems thinking,
  • deep understanding of security,
  • resistance to “AI blindness” – blindly trusting model output.

Some companies are already testing this: can the candidate code, or do they just know what to prompt?

There’s a big difference.

Live coding from a mockup. Screen sharing. And yes (gasp) interviews in the office again.

What now?

Don’t stop using AI.

Just use it smartly.

It’s a great tool for exploring options, generating boilerplate, automating the boring stuff.

Writing emails, improving communication, helping you learn – yes.

But it won’t replace understanding algorithms, systems, or cause-effect logic.

Companies investing in juniors (yes, those still exist – stop saying they don’t) are already shifting their approach to training and hiring.

More real-world tasks, less theory.

Fewer “what does this function do,” more “what happens if this breaks?”

Final thought

If you’re just starting out today – treat AI as a co-pilot, not an autopilot.

Learn to code.
Learn to think.
Learn to search, not just ask.

Debug manually before asking for help. Try to understand before you copy.

Because tech will change.

But your ability to think clearly, reason logically, and understand what you’re doing – that stays.

And that’s what will separate people who “asked the AI” from those who truly know how to code – and talk to machines for real.


PS: I published this article in Polish on my blog:
Chyba masz Chat zamiast mózgu… czy AI psuje programistów?

Top comments (3)

Ashley Childress

I can definitely see how this is a huge problem and it's really up to us to stop it. I'm working with my first mentee this summer and I've made a point to showcase the practical use cases of AI.

As an example: the day we were supposed to pair, but I got pulled into a prod call instead? AI was a perfect solution. A few years ago, the best you would have gotten is “do a search and learn what you can” and maybe we’ll talk about it later. What I did instead: “Open your Copilot, put it in Ask mode, and ask it questions about the codebase. When you think you understand it – prompt it to quiz you with simple multiple choice. If you really start feeling like you’ve got it down – go ahead and flip Edit mode on and have it update the readme. Proof it so you know it matches what you learned, and then send me a PR.”

Did he get it right that round? No, not even by half 😆 Did he learn something along the way? Absolutely, and he’s a little better at using those tools the right way. Bonus: I was quizzed for a solid hour the next day on responsible AI and its use cases in the engineering world.

These are the sorts of things that schools should be teaching right now. AI isn't going anywhere - and what happens in 5 or 10 years when nobody is teaching these sorts of things? Scary to think about...

Ola Kunysz

I think we need mentoring now more than ever. And some new procedures for how to introduce juniors to the job. Ofc it’s easier to “delegate” things to an LLM than to a human, but soon enough we will be lacking specialists.
