My AI Coding Workflow in 2026: What I Actually Use and What I Skip
I run more than 10 production apps as one developer. Here's how AI fits into my actual daily workflow: the tools, the patterns, what moved the needle, and what I stopped using.
A year ago I was skeptical of "AI coding tools will 10x your productivity" claims. Not because I thought the tools were bad (I'd been using them), but because productivity claims for developers usually ignore the part that's actually slow: thinking, not typing.
I was wrong, but not in the way I expected.
The gain isn't typing less. It's that certain categories of cognitive work, the boring-but-careful kind, got dramatically cheaper. And for a solo developer running more than 10 production apps, that's what actually mattered.
Here's what my actual workflow looks like now, what changed, and what I still don't use AI for.
The Setup
I run everything through Claude Code in the terminal, integrated into my editor. I also use the Claude API directly in several of my products. That's basically it: no AI-powered IDE plugin, no Copilot.
I'm not anti-Copilot. I tried it. For me, the inline autocomplete model creates a particular kind of friction: accepting suggestions at the character level slows me down more than it helps. Claude Code in agentic mode (where I describe a task and it operates across files) is a different category of tool.
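The API side of the setup is thin by design. For concreteness, here's a minimal sketch of a direct call with the official TypeScript SDK; the model ID and prompt are placeholders, not what any of my products actually send:

```ts
import Anthropic from "@anthropic-ai/sdk";

// Reads ANTHROPIC_API_KEY from the environment by default.
const anthropic = new Anthropic();

const message = await anthropic.messages.create({
  // Placeholder model ID; pin whichever current model you use.
  model: "claude-sonnet-4-5",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Summarize this changelog: ..." }],
});

console.log(message.content);
```

Everything product-specific lives above a call like this one: prompt construction, retries, validating the response before it touches anything real.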
What I Actually Use It For
Boilerplate and scaffolding I'd write from memory anyway
If I'm adding a new Supabase table, I know the pattern: migration file, RLS policies, TypeScript types, server action, maybe a client hook. I've written this dozens of times. It's not hard; it's just repetitive and error-prone at the edges (forgetting to add a policy for the service role, an off-by-one in a policy condition, etc.).
Now I describe the table and the access rules once, Claude writes the full scaffold, and I audit it. The audit is the important part; I still read everything. But writing it myself offered no signal; I already knew what to write. The AI handles the transcription, I handle the judgment.
This is probably where I save the most time across 10+ apps.
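To make that concrete, here's a sketch of what the server-action slice of one of those scaffolds looks like, assuming supabase-js v2 and a hypothetical projects table. The real output also includes the migration, the RLS policies, and the generated types; the audit pass is mostly checking that all of those agree with each other:

```ts
"use server";

import { createClient } from "@supabase/supabase-js";

// Service-role client: it bypasses RLS, which is exactly the kind
// of edge the audit is for. Server-only, never shipped to clients.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Hypothetical table and shape; illustrative, not from a real app.
export async function createProject(name: string) {
  const { data, error } = await supabase
    .from("projects")
    .insert({ name })
    .select()
    .single();
  if (error) throw new Error(error.message);
  return data;
}
```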
Cross-file refactors
"Rename this type everywhere, update all the callsites, fix the affected tests" is the kind of task that used to take 30 minutes of careful grep-and-replace. Now it takes two minutes plus a diff review.
The risk here is real: a refactor that touches 20 files can introduce subtle bugs. So I always review the diff carefully and run tests before accepting. But the cognitive load of doing the actual work dropped to nearly zero.
Writing tests for code I've already written
I'm not disciplined enough to write tests first. I write the code, it works, and then I consider whether the logic is complex enough to warrant a test. When the answer is yes, AI is excellent at this: give it the function, describe the edge cases you care about, and it writes the test suite.
The output isn't always perfect. It sometimes over-tests obvious cases or misses what I actually care about. But it's faster to edit a test suite than to write one from scratch, and the framework is usually right.
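The shape of the output, roughly. A sketch assuming Vitest, with a hypothetical slugify function standing in for whatever I'm actually testing:

```ts
import { describe, it, expect } from "vitest";
import { slugify } from "./slugify"; // hypothetical module under test

describe("slugify", () => {
  // The edge cases I named in the prompt.
  it("lowercases and hyphenates plain text", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("replaces characters that are unsafe in URLs", () => {
    expect(slugify("a/b?c")).toBe("a-b-c");
  });

  // The kind of obvious case it adds unprompted; cheap to delete.
  it("returns an empty string unchanged", () => {
    expect(slugify("")).toBe("");
  });
});
```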
Explaining code I'm about to delete
I have a lot of code I wrote 18 months ago that I no longer remember clearly. Before I touch it or delete it, I ask Claude to explain what it does and why. This sounds trivial, but it's genuinely useful: it's cheaper than re-reading carefully, and it sometimes surfaces edge cases I'd forgotten about.
First drafts of SQL migrations
Writing migrations is high-stakes and tedious. I describe what I want (add a column, create an index, backfill data), Claude writes the SQL, I check it against what I know about the data, then run it on staging first. This hasn't bitten me yet, but I still treat the review step as non-negotiable.
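For a sense of scale, these requests are usually small. A sketch of the kind of first draft I ask for, in SQL since that's the artifact; the table and column names are made up:

```sql
-- Add a nullable column and index it. The backfill is a
-- separate, separately reviewed step.
alter table projects
  add column archived_at timestamptz;

create index projects_archived_at_idx
  on projects (archived_at);
```

The review step is checking this against the things the draft can't see: the real data, the real traffic, whether that index will actually get used.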
What Moved the Needle Most
The honest answer is the boring middle-of-the-implementation work.
Not the architecture decisions. Not the tricky business logic. Not debugging weird production errors. The part that takes time but doesn't require judgment: converting an interface to a new shape, updating every page that imports a component, writing 15 nearly identical test cases.
For a solo developer managing 10+ products, the constraint isn't ideas or architecture. It's throughput on implementation work that doesn't require my specific attention. AI compressed that category significantly.
What I Don't Use It For
Debugging production issues
When something breaks in production, I want to read the logs, trace the execution, and understand what actually happened, not have an AI guess. I've tried feeding error logs and context to Claude and getting an explanation. It's sometimes useful as a second opinion, but I don't rely on it. Production bugs usually involve state and timing that the AI doesn't have access to.
Architecture decisions
I talk through architecture with Claude sometimes. It's useful for pressure-testing an idea or listing tradeoffs I haven't considered. But I don't let it make the call. Architecture decisions have long tails; a wrong choice now costs months later. I want to own those.
Anything involving real user data or credentials
This should be obvious, but I don't paste customer data, API keys, or environment variables into any AI tool. I also don't let AI-generated code touch anything production-sensitive without me reading every line.
Performance optimization
"Why is this query slow?" is usually a question about your data distribution, your indexes, your specific query plan, not something an AI can reliably answer without running EXPLAIN ANALYZE on the actual database. I've seen AI-suggested indexes that would have made things worse. Benchmarks and query plans, not vibes.
The Honest Productivity Number
I've thought about how to quantify this and I'm not going to try. "10x" is a marketing claim. "Measurably faster at shipping" is accurate.
What I can say: I shipped more features across my products in the last 12 months than in the 24 months before that, with roughly the same working hours. Some of that is experience. Some is better tooling overall. Some is AI. I can't isolate the AI contribution cleanly, and anyone who claims they can is probably rounding in their favor.
What I do know: if you took away my current workflow, I wouldn't go back to the old one. That's a meaningful signal.
The Part Nobody Talks About
The biggest change isn't speed. It's that the cost of starting something dropped.
When writing a new migration, a new server action, or a new API route has low enough friction, I just do it. I don't batch tasks or wait until I have a bigger block of time. The activation energy for small-but-correct changes got low enough that I make them when I notice them.
That's the compounding effect. Not that any single task is much faster, but that the threshold for doing the right thing at all got lower. Over 10+ apps across 12 months, that adds up.
What I'd Tell Someone Starting
Don't evaluate AI coding tools on autocomplete. Evaluate them on agentic tasks: things that span multiple files, require reading context across a codebase, and produce a diff you can review. That's where the productivity gain is real.
Keep the review step. Not because AI makes obvious mistakes, but because you need to understand what's in your codebase. The developer who lets AI write a feature and never reads the output is setting themselves up for a bad debugging session.
Use it on the boring middle. Architecture is yours. Debugging is yours. The 45 minutes of careful-but-mechanical implementation work between those two things: that's where AI earns its keep.