
✨ Working with AI: The Stuff That Actually Matters


A practical playbook for shipping real work with AI, covering tool choice, linting, planning, reviews, and tests.


TL;DR

Pick tools that let you swap models. Turn linting up to strict. Skip dashboards, keep your infra in code. Write a real agents file. Plan hard before you let anything generate code. Run specialized review passes for security, quality, and consistency. Wire those reviews into CI. Write a lot of tests. Push static analysis into the IDE. Ship in small batches. Treat AI as a collaborator, not an autopilot.


I'm not going to tell you which AI tool to use. By the time you finish this post, three new ones will have launched, two will have pivoted to blockchain, and one will be acquired by a company that makes dental software. The tools move faster than JavaScript frameworks, and that's saying something.

What I can share is what has stuck for me after a lot of hours in the trenches: a handful of patterns that keep paying off regardless of which model is hot this week. Will they hold forever? No idea. But they've held up for a while, and they seem to matter whether you're reaching for Claude, GPT, or whatever ships next Tuesday.

1. Choose Tools That Let You Switch Models

This one is non-negotiable for me. Models have personalities. One will write Python like it's trying to impress Guido van Rossum. Another will actually read that crusty legacy codebase from 2007 and make sense of it. A third will quietly be the best at tests.

You want to be able to say "okay, this model is better for this job" and just do it. Getting locked into a single model is how you end up trying to hammer screws, and nobody wants to watch that.
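
If your tooling doesn't give you that flexibility for free, a thin abstraction does. Here's a minimal sketch in TypeScript — every name below is hypothetical, not any vendor's actual SDK:

    // One interface, many vendors. Call sites never hard-code a model.
    interface ModelClient {
      complete(prompt: string): Promise<string>;
    }

    // One wrapper per vendor; each hides that vendor's SDK behind the same method.
    class ExampleVendorClient implements ModelClient {
      async complete(prompt: string): Promise<string> {
        // ...call the vendor's SDK here...
        return "stubbed response";
      }
    }

    // Route jobs to whichever model is currently best at them.
    const modelForJob: Record<string, ModelClient> = {
      "legacy-archaeology": new ExampleVendorClient(),
      "test-writing": new ExampleVendorClient(), // swap when something better ships
    };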

2. Set Up Strict Linting (Seriously)

Without strict linting, AI writes code in whatever style it feels like that morning. Monday it's tabs, Tuesday it's spaces, Wednesday it's invented an indentation scheme that defies the laws of physics.

Strict rules force the model onto YOUR rails. Your codebase stops looking like five different people wrote it with five different opinions about brace placement. Your future self, reading this code at 11pm on a Thursday, will be grateful.
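
"Strict" here means reaching for the strictest presets your linter offers, not the defaults. As one example, assuming ESLint 9+ with typescript-eslint, a flat config can be this small:

    // eslint.config.mjs — a minimal sketch, assuming ESLint 9+ and typescript-eslint
    import tseslint from "typescript-eslint";

    export default tseslint.config(
      // Type-aware rules that catch real bugs, not just style nits
      ...tseslint.configs.strictTypeChecked,
      // Stylistic rules so the model has exactly one style to follow
      ...tseslint.configs.stylisticTypeChecked,
      {
        languageOptions: {
          parserOptions: { projectService: true },
        },
      },
    );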

3. Avoid Dashboards Like They're Selling Timeshares

Hot take: dashboards are where productivity goes to die.

Write scripts. Use infrastructure as code. Automate with real, version-controlled, reviewable code. AI can read your Terraform files, understand your setup scripts, and work inside your repo. AI cannot click its way through seventeen nested menus in some web UI to flip one toggle.

And when things break at 2am (and they will), which would you rather have: a script you can run, or a memory of which submenu of which submenu holds the thing you need?
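
What "infra in code" looks like depends on your stack. Terraform's HCL counts; so does something like Pulumi, which keeps it in TypeScript. A tiny sketch, with resource names purely illustrative:

    // index.ts — a tiny Pulumi sketch; "app-assets" is an illustrative name
    import * as aws from "@pulumi/aws";

    // The bucket exists because this file says so: reviewable, diffable, revertable.
    const assets = new aws.s3.Bucket("app-assets", {
      acl: "private",
    });

    // Export the name so scripts (and AI agents) can find it without a dashboard.
    export const bucketName = assets.id;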

4. Write a Proper agents.md File

Think of this as the constitution for how you work with AI. It's where you spell out the rules of the house.

"We use TypeScript strict mode."

"All error handling logs."

"No, we're not switching to that new framework you just read about."

Models will actually read this and follow it, most of the time. Without one, you're hoping the AI will psychically infer your team's conventions. It won't.
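
There's no single blessed format. Here's a sketch of how one might start — the specific rules and commands are illustrative, not a standard:

    # agents.md (illustrative)

    ## House rules
    - TypeScript strict mode everywhere; no `any` without a comment explaining why.
    - All errors get logged; never swallow them silently.
    - Follow the existing patterns in src/ — do not introduce new frameworks.

    ## Workflow
    - Run `npm run lint` and `npm test` before declaring anything done.
    - Keep changes small; one concern per commit.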

5. Plan Hard Before You Let AI Touch Anything

I know, planning isn't fun. You want to jump in and start generating code like you're speedrunning a hackathon.

Don't.

Sketch the architecture. Write down the requirements. Walk through the edge cases. Figure out what you actually need before asking AI to build it. Because here's the catch: AI is very good at building what you ask for, even when what you ask for is completely wrong.

Garbage in, garbage out, except now the garbage comes with perfect syntax highlighting and a confident comment block explaining why it doesn't work.

6. Use Different Review Agents for Different Problems

Not all reviews are the same. You wouldn't ask your security expert to critique your CSS animations. (Although, honestly, some CSS is a genuine threat to my mental health.)

Run separate review passes:

  • Security reviews catch the scary stuff (SQL injection, XSS, that API key you almost committed)
  • Quality reviews find code smells, complexity nightmares, and functions that should have been split up three refactors ago
  • Consistency reviews keep everything aligned with the same patterns
  • Performance reviews find bottlenecks before your users do

Each pass has a job. It's like having a team of specialists who never get tired, never need coffee, and never passive-aggressively comment "interesting approach 🤔" on your PRs.
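
Concretely, the separation can be as simple as one narrow brief per pass. A sketch, with the wording entirely illustrative:

    // Each pass gets a focused brief, so nothing gets reviewed "in general."
    const reviewPasses: Record<string, string> = {
      security:
        "Review this diff ONLY for vulnerabilities: injection, XSS, leaked secrets, missing auth checks.",
      quality:
        "Review this diff ONLY for code smells: deep nesting, duplication, functions doing too much.",
      consistency:
        "Review this diff ONLY for deviations from the conventions in agents.md.",
      performance:
        "Review this diff ONLY for bottlenecks: N+1 queries, sync I/O on hot paths, accidental O(n^2) loops.",
    };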

7. Put Those Reviews in CI/CD

Automate this. If it's not automated, it's not happening consistently.

Wire your pipeline so every push runs through the review agents. Catch issues before they reach production. Get feedback during development instead of finding out three sprints later that nobody has been checking for security holes.
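
With GitHub Actions, for example, the wiring might look like this — the review script is a hypothetical wrapper around whatever agent you use:

    # .github/workflows/review.yml — a sketch, assuming GitHub Actions
    name: review-agents
    on: [pull_request]

    jobs:
      security:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/review.sh security   # hypothetical wrapper script

      quality:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/review.sh quality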

Your future self, debugging a production incident at midnight, will have strong opinions about whether you did this or not.

8. Write Tests Like Your Job Depends On It (Because It Does)

AI is going to change your code. Sometimes in the way you expected. Sometimes in a way that makes you wonder if it's secretly trying to get you fired.

Tests are the safety net. They're how you know AI didn't "improve" your authentication by quietly removing the password check. They're how you catch regressions before your users do.

Write unit tests, integration tests, regression tests, security tests. Test everything. The model doesn't know which part of your codebase is load-bearing and which part is decorative. It will confidently refactor both, and only your tests will save you.

Think of tests as a contract: "AI, you can change whatever you want, but these things MUST still hold." No tests, no contract. And suddenly your payment system is giving everyone a 100% discount.
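
Even a contract this small pays for itself. A sketch assuming Vitest, where applyDiscount() is a hypothetical function standing in for your real pricing logic:

    // pricing.test.ts — contract tests; applyDiscount() is hypothetical
    import { describe, expect, it } from "vitest";
    import { applyDiscount } from "./pricing";

    describe("pricing contract", () => {
      it("never produces a negative price", () => {
        expect(applyDiscount(100, 1.5)).toBeGreaterThanOrEqual(0);
      });

      it("charges full price when there is no discount", () => {
        expect(applyDiscount(100, 0)).toBe(100);
      });

      it("never gives everything away for free", () => {
        expect(applyDiscount(100, 0.99)).toBeGreaterThan(0);
      });
    });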

9. Put Static Analysis in Your IDE Right Now

Catch things immediately, right where you're working. Not after you commit. Not after you push. Not after the CI pipeline churns for seven minutes. Right now.

Your IDE should be the first line of defense, yelling at you (politely, through squiggly lines) the moment something goes sideways. Issues die before they ever touch your repo.

Static analysis won't catch everything. You still need dynamic tests for runtime behavior. But it catches a lot, and catching a lot early beats catching everything late, every time.
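
If you're in VS Code with the ESLint extension installed, two settings get you most of the way. A sketch — adapt to your own editor:

    // .vscode/settings.json — assuming the ESLint extension is installed
    {
      // Fix whatever is auto-fixable every time you save
      "editor.codeActionsOnSave": {
        "source.fixAll.eslint": "explicit"
      },
      // Use the workspace's TypeScript, so the IDE and CI agree on errors
      "typescript.tsdk": "node_modules/typescript/lib"
    }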

10. Ship in Small Batches

Do not ask AI to generate 2,000 lines of code and hope for the best. That's not development, that's prayer with extra steps.

Generate code incrementally. Add a little, test it, confirm it works, then add more. Small batches give you:

  • Easier issues to spot
  • Easier problems to fix
  • Easier changes to understand
  • Easier rollbacks when something's wrong

Big batches turn debugging into archaeology: digging for one bug through layers of other potential bugs. Not fun.

11. AI Is a Pair Programmer, Not a Replacement

Always review what AI generates. Always understand what it's doing. Always ask whether this is actually the right solution.

AI is very good at writing code. It's less good at knowing whether that code should exist in the first place. That part is still your job.

Think of AI as an extremely productive junior developer who has memorized every syntax rule but occasionally makes surprising architectural choices. You wouldn't merge a junior's PR without reading it. Same deal here.

The Bottom Line

Tools will change. Models will improve. New frameworks will launch. Fine.

But these principles? They've held up for me. They're about process, not tools. They're about working with AI, not just using whatever's newest.

Set up the environment right. Plan before you code. Review everything. Test relentlessly. Ship incrementally. And remember: AI is there to amplify your judgment, not replace it.

Now go build something cool. And for the love of all that is holy, set up the linting.