TL;DR:

You try an AI coding assistant, get bad results, and swear the tool is garbage.

Meanwhile, your competitor quietly figures out how to use it well, ships faster, and eats your lunch.


Everyday Reality Checks

| Bad Outcome | Actual Cause | Why Your Complaint Is Silly |
| --- | --- | --- |
| Burnt toast | The dial was on 9 | The toaster obeyed orders. |
| Shrunken sweaters | Dryer set on “high heat” | Settings, not sorcery. |

If you can grasp that burnt toast is on you, you can grasp that bad AI output is, too.


We’ve Been Here Before

CloudFormation → CDK: The Same Song, Different Verse

For years, I wrote AWS CloudFormation YAML templates. Trial and error, reading docs, finding wrong info in the docs, debugging mysterious stack failures. People whined: “CloudFormation is hard and isn’t intuitive!”

I did the work anyway. Got it working. Could spin up entire systems with a git push.

Then CDK arrived: a better way to write infrastructure as code, but one that forced you to think differently than you did with CloudFormation.

Cue the predictable chorus:

  • “Engineers JUST learned CloudFormation, now you want them to learn a NEW tool?!”
  • Engineers writing CDK like CloudFormation (skipping L2/L3 constructs) and complaining “CDK doesn’t save me any time!”
  • The classic: “I can already do everything in CloudFormation! Why learn ANOTHER tool?”

Reality check: By doing the work to understand CDK, I built and deployed scalable, durable AWS infrastructure faster and with fewer lines of code than CloudFormation ever allowed.
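
To make that concrete, here’s a minimal sketch of what leaning on L2 constructs looks like. The stack and construct IDs are illustrative, not lifted from a real project; the point is that a couple of constructs stand in for the bucket settings, public-access block, and CloudFront distribution you’d otherwise spell out by hand in YAML.

```typescript
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';

// Illustrative static-site stack: two L2 constructs cover what would be
// many resources and properties in hand-written CloudFormation YAML.
export class StaticSiteStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Private, encrypted bucket; the sensible defaults come from the L2 construct.
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // CloudFront distribution in front of the bucket.
    new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
    });
  }
}
```

Run cdk synth on that and compare the generated template to what you’d maintain by hand; that gap is where the time comes back.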

Customers didn’t care about my tool preferences. They wanted reliable systems shipped sooner.

Same pattern. Same choice. Blame the tool, or do the work and ship.


The AI coding assistant “disaster” myth

  • “Cursor was a disaster.”
  • “It wrote messy code.”
  • “It ignored our logging utility.”

Translation: I fired a vague, kitchen-sink prompt with zero context or tests and got spaghetti.

Garbage in, garbage out.


A workflow that actually works

  1. Plan first: Provide context and motivation. Ask the assistant to produce a plan in Markdown with a task list.
  2. One task at a time: Laser-focused prompts tied to a single acceptance test.
  3. Define success: Unit tests or explicit pass/fail checks (see the example after this list).
  4. Rules file: Document recurring mistakes; watch them vanish.
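
To make step 3 concrete, here’s the kind of narrow acceptance test I pin a single task to, written here with Vitest, though any runner works. The validateUpload helper, its path, and the size limit are made-up stand-ins for whatever your task actually needs; what matters is that one task maps to one explicit pass/fail check.

```typescript
import { describe, it, expect } from 'vitest';
// Hypothetical helper this task is supposed to produce; the name and
// location are placeholders, not an existing module.
import { validateUpload } from '../src/uploads/validateUpload';

// One task, one acceptance check: the prompt for "task 2" points the
// assistant at this spec and nothing else.
describe('task 2: reject oversized image uploads', () => {
  it('accepts a 2 MB JPEG', () => {
    const result = validateUpload({ sizeBytes: 2_000_000, mimeType: 'image/jpeg' });
    expect(result.ok).toBe(true);
  });

  it('rejects anything over 10 MB', () => {
    const result = validateUpload({ sizeBytes: 12_000_000, mimeType: 'image/jpeg' });
    expect(result.ok).toBe(false);
    expect(result.reason).toBe('file too large');
  });
});
```

The prompt for that task then says, in effect: make these assertions pass and touch nothing else.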

Results: cleaner commits, fewer rewrites, faster throughput.

Same AI engine, different user.


“But Cursor/Copilot/Claude Code should be smart enough to do what I want it to!” - The Worst Excuse

End users can whine about UX. You’re an engineer paid to ship.

Every minute you spend moaning about “AI tools that don’t work” is a minute a competitor spends mastering them.


Vague instructions, vague results (manager analogy)

Bad Manager: “Just make the dashboard better. Users hate it. Fix it by Friday.”

Team scrambles, guesses at requirements, builds three different versions. None hit the mark. Manager explodes: “This team is incompetent!”

Good Manager: “Users can’t find the export button. It’s buried in a dropdown. We need it prominent on the main toolbar. Here’s the design mockup and user research. Questions?”

Team ships exactly what’s needed. On time. First try.

We universally understand this: The bad manager owns the failure. Vague instructions → missed targets → thrashing. Give context, motivation, and success criteria → team delivers.

Now apply identical logic to AI coding assistants:

  • Vague prompt: “Add a new page for a user to upload an image for analysis”
  • AI delivers: Generic, unusable code that handles nothing specific.
  • Your response: “AI is trash!”

vs.

  • Clear plan: “Users want to upload an image and get back a detailed analysis: the image’s metadata, a caption describing what is seen in the image, and object detection. Use similar patterns to the other pages under client/src/pages when creating this new page. Create new routes under server/routes/image that handle the image upload. Use exiftool for metadata, Florence 2 for image captioning, and YOLOv8 for object detection. Create a plan in Markdown with the motivation and a task list that includes specific success criteria.”
  • Step-by-step execution: “Execute task 1… I see task 1 is completed, now execute task 2…”
  • AI delivers: Exactly what you asked for.

Same dynamic. Same ownership. Blame the tool, or take responsibility for the input.


From blame to ownership

  • Assume your input is wrong first.
  • Instrument early: logs, tests, telemetry.
  • Iterate prompts: treat AI like any other configurable tool.
  • Document learnings: update rules, share patterns.

Do the work once; future you will thank you.


Do This Today

  1. Try my (constantly updating) Cursor workflow.
  2. Build one feature with the step-by-step workflow.
  3. Measure throughput vs. your current baseline.
  4. Decide based on data, not anecdotes.
  5. Take ownership. Ship for customers. Stop blaming the tools.