My current vibe coding workflow

business, productivity, AI

At last something that works for me

Author: Norman Simon Rodriguez
Published: 10 January 2026

After a fair bit of trial and error (something I’ve written about previously here), I’ve finally landed on a vibe coding workflow that feels stable. It’s doing wonders for my productivity: it clears out the mental clutter and gives me back the sense of control I’d lost to more haphazard AI setups.

The core philosophy is pretty straightforward: I’ve made a point of separating the ‘thinking’ from the ‘doing’ by using different tools for each stage of the process.

I treat ChatGPT as my senior advisor, which is where all the high-level heavy lifting happens. It’s essentially like having a seasoned lead engineer to bounce ideas off, so I use it to poke at requirements, weigh up architectural options, and stress-test my design decisions before a single line of code is written. I’m not looking for production snippets here; I’m looking for a solid plan of attack so I can decide exactly what needs building without the AI trying to ‘helpfully’ guess my intentions halfway through.

The handover

Once the intent’s crystal clear, I get ChatGPT to help me distill that plan into a sharp, narrow prompt for Gemini CLI, which I treat as my execution engine. Gemini’s the one that actually writes the code, handles the refactors, and manages the git operations. This separation is vital because it stops that awkward back-and-forth improvisation that usually leads to a messy codebase.
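
To give a sense of what ‘sharp and narrow’ means in practice, here’s a minimal sketch of the shape those prompts tend to take. The helper and field names are purely illustrative (they’re not part of ChatGPT or Gemini CLI), but the ingredients are always the same: the task, the files in scope, and what ‘done’ looks like.

```python
# Illustrative sketch only: the structure of a handover prompt, not a real API.
def build_handover_prompt(task: str, files_in_scope: list[str], acceptance: list[str]) -> str:
    """Assemble a narrow, unambiguous prompt for the execution engine."""
    lines = ["Task:", task, "", "Only touch these files:"]
    lines += [f"- {path}" for path in files_in_scope]
    lines += ["", "Done when:"]
    lines += [f"- {criterion}" for criterion in acceptance]
    lines += ["", "Do not refactor anything outside the files listed above."]
    return "\n".join(lines)

print(build_handover_prompt(
    task="Add a --dry-run flag to the export command that prints the plan without writing files.",
    files_in_scope=["cli/export.py", "tests/test_export.py"],
    acceptance=["Existing tests still pass", "A new test covers the dry-run path"],
))
```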

The workflow follows a tight, rhythmic loop:

  1. I solve the problem conceptually with ChatGPT.

  2. I refine the solution until there’s zero ambiguity.

  3. I generate a Gemini CLI prompt and let it loose.

  4. I review the local changes, commit, and go again.

A Mini-Moog. It makes great loops, provided there’s a human in the loop. Source: Wolfgang Stief.

Small steps, fewer headaches

A massive part of this is scope control, so I’ve become quite disciplined about committing small, discrete changes rather than asking for sweeping updates. It’s much harder for a model to go rogue or break the world when it’s only working on a tiny, well-defined task, plus it makes my life significantly easier during reviews. Frequent commits aren’t just a side effect here; they’re the glue holding the whole thing together.
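
If you want to picture the rhythm, the review step boils down to something like the sketch below. It isn’t a script I run verbatim, and the line-count threshold is arbitrary; the point is simply that a change big enough to trip it should have been several smaller tasks.

```python
# Sketch of the review-then-commit step using plain git via subprocess.
# The threshold is arbitrary; it just encodes the "keep changes small" rule.
import subprocess

def review_and_commit(message: str, max_changed_lines: int = 200) -> None:
    # Show what the execution engine actually changed.
    subprocess.run(["git", "diff", "--stat"], check=True)
    # Count added plus deleted lines as a crude size guard.
    numstat = subprocess.run(["git", "diff", "--numstat"],
                             check=True, capture_output=True, text=True).stdout
    changed = 0
    for line in numstat.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added.isdigit() and deleted.isdigit():  # binary files report "-"
            changed += int(added) + int(deleted)
    if changed > max_changed_lines:
        raise SystemExit(f"{changed} changed lines: split this into smaller tasks.")
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", message], check=True)
```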

I also have a specific rule for when I need a piece of code explained: I never ask ChatGPT. I want to keep its context window pristine and focused on the big picture, so I’ll just pop open a Gemini or Claude tab in Chrome to use as a local explainer. It’s a bit like having a specialist consultant you call in for a five-minute chat so you don’t distract the lead architect with questions about why a particular regex looks like a cat walked across the keyboard.

Managing the memory

Since Gemini CLI is stateless between sessions, I manage project context through a project_overview.md file that gets updated with every single commit. It’s a concise bird’s-eye view of the project’s features, constraints, and quirks, and I make sure Gemini reads it, along with the folder tree, every time I start it up. It forces a certain level of documentation, because if a detail isn’t in that file, then as far as Gemini CLI is concerned, it doesn’t exist.
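
Assembling that startup context is deliberately boring. In practice I just tell Gemini CLI to read the file and the tree itself, but the sketch below shows roughly what that context contains; the skip-list and formatting are illustrative, not anything the tool requires.

```python
# Sketch of the kickoff context: project_overview.md plus a folder tree.
# Gemini CLI does the actual reading; this just shows what the context holds.
from pathlib import Path

SKIP = {".git", ".venv", "node_modules", "__pycache__"}  # illustrative skip-list

def build_startup_context(root: str = ".") -> str:
    base = Path(root)
    overview = (base / "project_overview.md").read_text(encoding="utf-8")
    tree = []
    for path in sorted(base.rglob("*")):
        rel = path.relative_to(base)
        if any(part in SKIP for part in rel.parts):
            continue
        indent = "  " * (len(rel.parts) - 1)
        tree.append(f"{indent}{path.name}{'/' if path.is_dir() else ''}")
    return overview + "\n\nFolder tree:\n" + "\n".join(tree)

print(build_startup_context())
```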

Human in the loop

It’s worth noting that this isn’t a ‘get out of jail free’ card for people who can’t code, as human judgement is still the star of the show. These tools are fantastic for amplifying competence, but you still need the expertise to spot a dodgy design or a security flaw.

When the system’s humming, it gives me exactly what I’m after—clarity, control, and decent results. I’m still waiting for a single tool that combines all of this into one interface, but until that unicorn arrives, this partitioned approach is the most reliable way I’ve found to work.