✨ Working with AI: The Stuff That Actually Matters
Practical playbook for collaborating with AI effectively—from tool choice and linting to planning, reviews, and testing.
TL;DR
Use tools that let you switch models. Set up strict linting. Avoid dashboards; use infrastructure as code. Write comprehensive agent guidelines. Plan extensively before execution. Use specialized review agents for security, quality, and consistency. Integrate reviews into CI/CD. Write tons of tests. Add static analysis to your IDE. Work in small batches. Treat AI as a collaborator, not autopilot.
Look, I'm not going to tell you which AI tool to use. By the time you finish reading this article, three new ones will have launched, two will have pivoted to blockchain, and one will be acquired by a company that makes dental software. The tools change faster than JavaScript frameworks (and that's saying something).
But here's what I've learned after countless hours of working with AI: there are patterns that have stayed consistent for me. Will they hold up forever? Who knows. But they've worked well for a while now, and they seem to matter regardless of whether you're using Claude, GPT, or whatever's hot next Tuesday.
1. Choose Tools That Let You Switch Models
This one's crucial. Some AI models are like that friend who's amazing at trivia but can't parallel park to save their life. They excel at specific tasks and fumble others spectacularly.
You need the flexibility to say "okay, Model A is writing Python like it's trying to impress Guido van Rossum himself, but Model B actually understands this weird legacy codebase from 2007." Being locked into one model is like only having a hammer when sometimes you really, really need a screwdriver.
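In practice, "flexibility to switch" just means keeping a thin routing layer between your code and any one provider. Here's a minimal sketch of that idea; the backend names and the `complete` signature are illustrative stand-ins, not any real vendor SDK:

```python
# A minimal sketch of model-agnostic routing: pick a backend per task
# instead of hardcoding one provider. All names here are hypothetical.
from typing import Callable, Dict

Backend = Callable[[str], str]  # each backend: prompt -> completion

def make_router(backends: Dict[str, Backend], default: str):
    """Return a complete(prompt, task) function that picks a backend per task."""
    def complete(prompt: str, task: str = "general") -> str:
        backend = backends.get(task, backends[default])
        return backend(prompt)
    return complete

# Stub backends standing in for real model clients.
backends = {
    "python": lambda p: f"[model-a] {p}",   # the one that impresses Guido
    "legacy": lambda p: f"[model-b] {p}",   # the one that gets the 2007 codebase
    "general": lambda p: f"[default] {p}",
}

complete = make_router(backends, default="general")
```

Swapping providers then means changing one entry in a dict, not hunting through every call site.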
2. Set Up Strict Linting (Seriously, Do It)
If you don't enforce linting rules, AI will write code in whatever style it feels like that day. Monday it's tabs, Tuesday it's spaces, Wednesday it's invented its own indentation system that defies the laws of physics.
Strict linting makes AI follow YOUR patterns. It keeps things consistent. Your codebase won't look like five different people wrote it while having completely different opinions about where braces should go. Trust me, future you will thank present you for this.
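For a Python project, that can be as small as a few lines in `pyproject.toml`. The specific rule selections below are just one example starting point, not a prescription:

```toml
# pyproject.toml — example starting point; tune the rules to your team
[tool.ruff]
line-length = 100

[tool.ruff.lint]
# E/F: pycodestyle + pyflakes, I: import ordering, B: common bug patterns
select = ["E", "F", "I", "B"]
```

The point is that the rules live in the repo, so the AI reads them and CI enforces them — no style debates on Monday, Tuesday, or Wednesday.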
3. Avoid Dashboards Like They're Selling Timeshares
Here's a hot take: dashboards are where productivity goes to die.
Create scripts. Use infrastructure as code. Automate everything you can with actual, version-controlled, reviewable code. AI can read your Terraform files, understand your setup scripts, and work within your codebase. AI cannot navigate some random UI dashboard that requires seventeen clicks through nested menus to change one setting.
Plus, when something breaks at 2 AM (and it will), would you rather have a script you can run or remember which submenu of a submenu has that one toggle you need?
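To make the contrast concrete: a toy Terraform fragment like this (resource names and values are placeholders) is something an AI agent can read, modify, and propose in a reviewable diff — which no dashboard toggle will ever be:

```hcl
# Toy example — bucket name and tags are placeholders, not real infra.
resource "aws_s3_bucket" "logs" {
  bucket = "example-app-logs"

  tags = {
    managed_by = "terraform"  # diffable, reviewable, AI-readable
  }
}
```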
4. Write a Proper agents.md File
Think of this as the constitution for your AI collaboration. It's where you lay down the law about how things should be done in your codebase.
"We use TypeScript strict mode."
"All error handling must include logging."
"No, we're not switching to that new framework you just read about."
AI models will read this and actually follow it (most of the time). Without it, you're basically hoping the AI psychically divines your team's conventions. Spoiler: it won't.
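There's no standard schema for this file; a plausible excerpt, adapted to the example rules above, might look like:

```markdown
# agents.md (excerpt — adapt the rules to your team)

## Conventions
- TypeScript strict mode is on; do not loosen `tsconfig.json`.
- Every error handler must log before rethrowing or swallowing.
- No new frameworks or dependencies without an issue and team sign-off.

## Workflow
- Run the linter and the test suite before proposing any change.
- Keep changes small; one concern per PR.
```

Short, declarative, and enforceable — the same qualities you'd want in guidance for a new human teammate.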
5. Do Extensive Planning Before You Let AI Touch Anything
I know, I know. Planning isn't sexy. You want to jump in and start generating code like you're speedrunning a hackathon.
Don't.
Map out your architecture. Write down your requirements. Think through the edge cases. Figure out what you actually need before asking AI to build it. Because here's the thing: AI is really good at building what you ask for, even if what you ask for is completely wrong.
Garbage in, garbage out. Except now the garbage comes with perfect syntax highlighting and comprehensive comments explaining why it doesn't work.
6. Use Different Review Agents for Different Problems
Not all code reviews are created equal. You wouldn't ask your security expert to review your CSS animations, right? (Actually, maybe you would. Some CSS is definitely a security threat to my sanity.)
Set up different review passes:
- Security reviews catch the scary stuff (SQL injections, XSS vulnerabilities, that API key you almost committed)
- Quality reviews find code smells, complexity nightmares, and functions that should've been split up three refactors ago
- Consistency reviews make sure everything follows the same patterns
- Performance reviews identify bottlenecks before your users do
Each review focuses on specific concerns. It's like having a team of specialists who never get tired, never need coffee, and never passive-aggressively comment "interesting approach 🤔" on your PRs.
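Structurally, this is just one narrow pass per concern instead of one vague "review this" pass. Here's a sketch of that shape; `run_agent` is a hypothetical placeholder for whatever review tool you actually call:

```python
# Sketch: separate, focused review passes over a diff.
# `run_agent` is a hypothetical stand-in, not a real API.

REVIEW_PASSES = {
    "security":    "Check only for injection, XSS, secrets, and authz gaps.",
    "quality":     "Check only for code smells, complexity, and oversized functions.",
    "consistency": "Check only that the diff follows existing project patterns.",
    "performance": "Check only for obvious bottlenecks and N+1 query patterns.",
}

def run_agent(instructions: str, diff: str) -> str:
    # Placeholder: a real implementation would invoke your review tool here.
    return f"pass complete ({len(diff)} chars scanned)"

def review(diff: str) -> dict:
    """Run one narrow pass per concern and collect the findings."""
    return {name: run_agent(prompt, diff) for name, prompt in REVIEW_PASSES.items()}

results = review("--- a/auth.py\n+++ b/auth.py\n...")
```

Narrow instructions per pass tend to beat one broad prompt because each pass has a single definition of "done."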
7. Integrate Reviews Into Your CI/CD Pipeline
Automate this stuff. Seriously. If it's not automated, it's not happening consistently.
Set up your pipeline so that every time code gets pushed, it runs through your review agents. Catch issues before they make it to production. Get instant feedback during development instead of finding out three sprints later that nobody's been checking for security vulnerabilities.
Your future self, dealing with a critical production bug at midnight, will have some choice words about whether you automated these checks or not.
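As a sketch, the pipeline can be a few lines of workflow config. The lint and test steps below are standard; the review step is a placeholder for whatever review tooling you've set up:

```yaml
# .github/workflows/checks.yml — example only; the review script is hypothetical
name: checks
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check .           # linting (section 2)
      - run: pytest                 # tests (section 8)
      - run: ./scripts/review.sh    # hypothetical: runs your review passes
```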
8. Write Tests Like Your Job Depends On It (Because It Does)
AI is going to change your code. Sometimes in ways you expect, sometimes in ways that make you wonder if it's secretly trying to get you fired.
Tests are your safety net. They're how you know AI didn't "improve" your authentication system by removing the password check. They're how you catch regressions before your users do.
Write unit tests, integration tests, regression tests, security tests. Test everything. Because the AI doesn't know which part of your codebase is the load-bearing wall and which part is decorative. It will confidently refactor both, and only your tests will save you.
Think of tests as a contract: "AI, you can change whatever you want, but these things MUST still work." No tests? No contract. And suddenly your payment processing system thinks everyone should get a 100% discount.
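Here's what that contract looks like in miniature. The password check is a deliberately toy illustration (real code should use bcrypt or argon2), but the shape of the tests is the point: the cases that must never break are pinned down explicitly, including the empty-password case:

```python
# A tiny contract test: whatever the AI refactors, these must still hold.
import hashlib
import hmac

def verify_password(password: str, stored_hash: str) -> bool:
    """Toy check for illustration — use bcrypt/argon2 in real code."""
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)

STORED = hashlib.sha256(b"correct horse").hexdigest()

# The contract: the right password passes, wrong and empty passwords fail.
assert verify_password("correct horse", STORED)
assert not verify_password("wrong", STORED)
assert not verify_password("", STORED)  # the check the AI must never "optimize away"
```

If a refactor removes the password check, the last assertion fails immediately — no midnight production surprise required.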
9. Add Static Analysis to Your IDE Right Now
Catch issues immediately, right where you're working. Not after you commit. Not after you push. Not after the CI/CD pipeline runs for seven minutes. Right now.
Your IDE should be the first line of defense. It should yell at you (politely, through squiggly lines) when something's wrong. This way you fix issues before they go anywhere near your repository.
Static analysis won't catch everything—you still need dynamic testing for the runtime stuff—but it'll catch a lot. And catching a lot early is way better than catching everything late.
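A concrete example of what "caught in the IDE" means, assuming a type checker like mypy is wired into your editor: with annotations in place, passing the wrong type gets a squiggly line at edit time instead of an exception at runtime:

```python
def apply_discount(price: float, percent: int) -> float:
    """Reduce price by percent. With mypy in the IDE, a bad call like
    apply_discount("19.99", 10) is flagged before the code ever runs."""
    return round(price * (1 - percent / 100), 2)

result = apply_discount(19.99, 10)
```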
10. Work in Small Batches
Don't ask AI to generate 2,000 lines of code and hope for the best. That's not development, that's prayer with extra steps.
Generate code incrementally. Add a little, test it, make sure it works, then add more. Small batches mean:
- Easier to spot issues
- Easier to fix problems
- Easier to understand what changed
- Easier to roll back if something's wrong
Large batches mean you're playing "find the bug" in a haystack made of other potential bugs. It's not fun.
11. AI Is Your Pair Programmer, Not Your Replacement
Always review what AI generates. Always understand what it's doing. Always question whether this is actually the right solution.
AI is really good at writing code. It's less good at knowing whether that code should exist in the first place. That's your job.
Think of AI as an extremely productive junior developer who knows every language's syntax but sometimes makes interesting architectural decisions. You wouldn't merge their PR without reviewing it, right? Same deal here.
The Bottom Line
Tools will change. Models will improve. New frameworks will launch. That's fine.
But these principles? They've stayed consistent for me so far. They're about process, not tools. They're about working with AI effectively, not just using whatever's newest.
Set up your environment right. Plan before you code. Review everything. Test relentlessly. Work incrementally. And remember: AI is here to make you more productive, not to replace your judgment.
Now go forth and build something cool. And for the love of all that is holy, set up that linting.