My AI coding workflow

3 min read · Aug 17, 2025

I’ve been AI coding for at least a year and a half and wanted to document the setup that’s been working well for me so far. It might help you get more out of these AI models.

  • I use Anthropic’s Claude Sonnet model (v4 at the time of this writing) for all coding. Every other family of models sucks for coding in comparison. I don’t use their more powerful Opus model since it hasn’t shown me a material difference in output. If I feel like an ask is complex, I’ll ask Sonnet to “think hard.”
  • For technical questions that don’t require codebase context (programming language or framework questions), I’ll use Gemini’s web UI (their 2.5 Flash model). This way, I don’t pollute Claude’s context window with tangential information that’s not directly relevant to my project.
  • Outside of work, I use the VSCode editor with the GitHub Copilot agent. For work, I’ll use VSCode with Claude Code. Claude Code is more delightful to use and is better at staying on track using todo lists. However, Copilot is much cheaper, and with spec-driven development both agents stay on track equally well. I don’t think Cursor’s custom autocomplete model is worth the higher price, and it doesn’t have anything else to differentiate it from VSCode + Copilot.
  • Spec-driven development all the way. It’s the best approach for AI coding today. I start every single feature this way and there’s no going back to primitive prompting. (There’s a sketch of what one of these specs can look like after this list.)
  • I’ll vibe code (i.e., largely accept AI-generated code without scrutiny) if I don’t know the shape of the solution that I want. As in, I don’t know the abstractions I’ll need (like when building my first AI agent), and I’m optimizing more for a working solution than a maintainable implementation. Once I have a better grasp on the problem and solution, I’ll make sure my plan/spec reflects the learnings and then start over in a fresh git branch with that plan. Vibe coding produces over-engineered code (features you don’t want, over-abstraction, code that’s unoptimized for your constraints, and complex control flow); if you don’t throw the code away, it’ll take you days to clean it up and simplify it.
  • In Claude Code, I use intentionally placed CLAUDE.md files at the root of directories that contain the business logic for major features; this keeps my context window lean and delays summarization. My repo’s root /CLAUDE.md only contains critical information that saves the model from having to reprocess the codebase to build up key context (folder structure, core abstractions, framework and language used, etc.). I keep testing guidelines in a tests/CLAUDE.md at the root of the folder containing my tests. This means I can be more verbose about testing guidelines, and they’ll only get pulled in when tests are being modified. Sadly, Copilot isn’t as good as Claude Code at juggling multiple copilot-instructions.md files. When I’m done implementing a feature, I’ll use the summarized spec in these separate CLAUDE.md files as additional historical context on the feature. (There’s a sketch of this layout after the list.)
  • When I’m not vibing, I review all of the code produced by the model and make sure I understand what’s happening. If I sense that something is off or that a key use case hasn’t been considered, I’ll ask the model to confirm whether my hunch is right (instead of making the change myself). I prefer this approach most of the time because I want to see if I’m missing some key detail. As in, I think I’m right, but I want the model to double-check me and make the fix if I’m correct.
  • When the model does something that I don’t like (like adding noisy comments or putting sleep() calls in my tests), I ask it to add a note about that to its memory (copilot-instructions.md or CLAUDE.md). I like having the model write the note itself because it’ll phrase the constraint in the language it responds to, adding annotations like “CRITICAL” to mark the important items. (There’s an example note after the list.)
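
To make the spec-driven bullet concrete, here’s a rough sketch of the kind of spec I mean. The feature, endpoint, and section names below are made up for illustration; the point is that the agent implements against a written plan instead of an open-ended prompt.

```markdown
# Spec: CSV export for reports (illustrative example)

## Goal
Let users download any report as a CSV from the report detail page.

## Requirements
- The export respects whatever filters are currently applied to the report.
- Exports over 10k rows run as a background job and email a download link.

## Non-goals
- No Excel (.xlsx) support in this iteration.

## Plan
1. Add a POST /reports/:id/export endpoint that enqueues a job and returns its id.
2. Add the background job that streams rows to storage and emails a signed link.
3. Wire an “Export CSV” button into the report header.

## Testing
- Unit tests for the CSV serializer (quoting, unicode, empty reports).
- An integration test for the async path using a fake mailer.
```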
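
And here’s roughly what the CLAUDE.md layout looks like. The folder names and file contents are illustrative rather than my actual repo; the structure is what matters: a terse root file, per-feature files next to the business logic, and a verbose testing file that only loads when tests change.

```text
repo/
├── CLAUDE.md            # root: folder structure, core abstractions, framework + language
├── src/
│   ├── billing/
│   │   └── CLAUDE.md    # business-logic notes + summarized spec for billing
│   └── search/
│       └── CLAUDE.md    # same, for search
└── tests/
    └── CLAUDE.md        # verbose testing guidelines, pulled in only when tests change
```

The root file stays short, something like:

```markdown
# Repo overview
- TypeScript + Express API; pnpm for everything.
- src/billing: invoicing and payments. src/search: query parsing and ranking.
- Core abstraction: repository classes own all DB access; route handlers never touch SQL.
- Run pnpm test before declaring a task done.
```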
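
Finally, the memory notes from the last bullet end up looking something like this; the exact wording is whatever the model chooses, which is the point.

```markdown
## Things to never do
- CRITICAL: Never use sleep()/setTimeout to wait in tests; use the test framework’s polling or fake-timer helpers instead.
- CRITICAL: Do not add narrating comments (e.g., “increment the counter”); only comment non-obvious decisions.
```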

Written by Joel Kemp

Senior Staff Software Engineer @Spotify.
