Originally published at codeboosted.com
The rise of AI coding tools has fundamentally changed how I approach software development. I've been thinking a lot about when I should and shouldn't use AI to generate code. Here's what I've figured out so far...
Three key questions
Before reaching for AI to generate code, I ask myself: will it save me time? When AI generates code, I still need to review it, tweak anything that's wrong, test it, and debug any issues that come up. Plus, there's the ongoing maintenance to consider. All of this factors into the time equation.
Do I think AI will nail the task (or get close)? If not, I might spend a lot of time debugging and adjusting the code. AI can often make the adjustments itself when it's told what the problem is, but sometimes it goes in circles and never quite gets there.
What's the risk profile for the code? What could happen if the generated code has security vulnerabilities, performance issues, or bugs in mission-critical areas?
Where AI code generation seems to work well
From my experience, AI code generation has been pretty reliable for these types of tasks:
- Boilerplate / repetitive code - like managing field values and errors in a form
- Refactoring code - such as extracting code from large files
- Implementing tests - generating test cases with mock data
- Documentation - like Docusaurus or Storybook docs for React components. AI seems particularly good at explaining components and their props
- Comments - especially JSDoc comments for reusable React components & hooks
When I start writing boilerplate or repetitive code, I find Cursor reads my mind and offers the autocomplete. I still find this freaky!
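To make those first few bullets concrete, here's a minimal sketch of the kind of code I'm happy to hand to AI: form field/error boilerplate with JSDoc comments, plus the sort of test case with mock data I'd ask it to generate. The names (formReducer, FieldState) and the Vitest setup are hypothetical, not lifted from a real project.

```ts
// A hedged sketch, not production code: a form-state reducer with the kind
// of JSDoc comments and generated test I'd typically delegate to AI.
import { describe, expect, it } from "vitest";

/** A single form field: its current value plus any validation error. */
type FieldState = { value: string; error: string | null };

/** Form state keyed by field name, e.g. { email: { value: "", error: null } }. */
type FormState = Record<string, FieldState>;

type FormAction =
  | { type: "change"; field: string; value: string }
  | { type: "setError"; field: string; error: string | null };

/**
 * Keeps field values and errors in sync.
 * Typing a new value clears that field's previous error.
 */
function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case "change":
      return { ...state, [action.field]: { value: action.value, error: null } };
    case "setError":
      return {
        ...state,
        [action.field]: { ...state[action.field], error: action.error },
      };
  }
}

// The kind of test case with mock data I'd ask AI to generate.
describe("formReducer", () => {
  it("clears a stale error when the field value changes", () => {
    const initial: FormState = { email: { value: "", error: "Required" } };
    const next = formReducer(initial, {
      type: "change",
      field: "email",
      value: "ada@example.com",
    });
    expect(next.email).toEqual({ value: "ada@example.com", error: null });
  });
});
```

In a React app something like this would sit behind useReducer, with AI filling in the repetitive change handlers and the rest of the test cases.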
Using AI for features I'm confident about implementing
This might sound counterintuitive, but I've found a sweet spot in using AI to generate code I'm very confident about writing myself. This works well because I can write a solid prompt that spells out what to include and what to avoid. I'm also confident that I can quickly review the generated code, since I've already visualized it in my head. AI can generate the code much faster than I can type it.
Using AI for features I don't know how to implement
Let's face it - I don't know everything! Sometimes I'm uncertain about how to implement a feature, or I'm not sure where to start.
I use AI to help remove that uncertainty. It's become an efficient way for me to learn new things.
If I'm still uncertain, I'll ask a human. I won't implement a feature without being certain about the implementation - whether it's AI-generated or not.
Once I've removed the uncertainty about the implementation, I'll use AI to generate the code.
Using AI with extra care
These days I use AI heavily, even in sensitive areas. However, I've developed some additional safeguards and validation processes.
Security-critical code
For non-trivial features, I usually create a task list that I'll eventually give to an AI tool. When security is critical, I ask AI about security concerns and best practices for the feature and include these in the task list. I take the time to understand what needs to be done from a security perspective, asking AI more questions if needed. I also ask a security expert to review the tasks to spot any flaws or suggest better approaches.
With a comprehensive task list, I let AI generate the code, task by task.
As usual, I review the code and manually test it. I also get a security expert to review and test it. We still do penetration testing as well.
Other critical code
Code can be critical in other ways. For example, a feature's performance might be critical, or it might be a mission-critical feature in a complex part of an app.
I follow the same process as for security-critical code:
- Use AI to form a comprehensive task list that I understand
- Let AI generate the code, task by task
- Thoroughly review and test the code, bringing in an expert for additional review and tests
When AI generates code I don't understand
When reviewing AI-generated code, sometimes I don't fully understand it. In these cases, I ask AI to explain the code (often it does anyway). I keep asking questions until I fully understand - it's a great learning opportunity.
When I disagree with the implementation, I ask AI "What about X? Do you think Y would be better?" This often triggers AI to suggest a slightly different implementation, or it might reveal a misunderstanding on my part. Either way, I need to be comfortable with the implementation.
AI struggles with using new frameworks and libraries
I've noticed that AI tools currently struggle with Tailwind 4. They seem to default to Tailwind 3, and even if you include v4 in the prompt, they sometimes get confused and generate a mixture of v3 and v4 code. I often need to step in, which slows down development. I am, of course, adding rules/knowledge to the AI tools to mitigate this, but I've also found myself holding off on upgrading to the latest major versions of frameworks and libraries, waiting a few months for AI to gain the necessary knowledge.
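To show the kind of mixture I mean, here's a hedged sketch from a hypothetical project: the v3-style tailwind.config.ts that AI tools often produce even when asked for v4, with comments noting where Tailwind 4 expects things to live instead. The paths and brand color are invented for illustration.

```ts
// tailwind.config.ts - v3-style config that AI tools often reach for,
// even when the prompt says Tailwind 4. (Hypothetical project.)
import type { Config } from "tailwindcss";

const config: Config = {
  // v3 needs this content list; v4 detects source files automatically.
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      // In Tailwind 4 this would normally live in CSS as a design token
      // inside an @theme block, rather than in a JS/TS config file.
      colors: { brand: "#0f766e" },
    },
  },
};

export default config;
```

Seeing a file like this next to v4-style CSS (an @import "tailwindcss" line instead of the old @tailwind directives) is usually my cue that the model has blended the two versions.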
I'm also leaning toward popular frameworks rather than newer ones, even when I think the newer option might be technically better - for example, I'm choosing Next.js over TanStack Start until TanStack Start gets more traction (TanStack Start is great, by the way!). The popularity of a library or framework has always been important to me, but it's even more crucial now.
Skill & experience considerations
As an experienced engineer, I use AI a lot to generate code. However, I think it's great for less experienced engineers too - as long as they take the time to ask AI questions about the generated code so they understand it and ultimately own it. It's a really efficient way to learn. The learning process means less experienced engineers will be slower than experienced ones, but they'll level up quickly.
I think it's trickier for non-professional engineers who aren't motivated to understand the code. The "risk" question is really important here. Personally, I wouldn't want to build apps for other people to use if there are security issues that could negatively impact them.
The bottom line
AI does write most of my code now, with well-thought-through tasks and guard rails applied to riskier code. I think the key is knowing when to slow down and apply those guard rails.
I'm surprised by how much I'm enjoying this approach, given that I've loved coding from a young age. I'm finding it frees me up to focus on the creative, complex, and critical aspects of software development that require human expertise.
Also, things will probably continue to change rapidly - my thoughts and approach might be completely different next week!