How Working With AI Got Easier

2025-12-12

(And Fixed the One Thing That Was Actually Broken)

I have spent a lot of time working alongside AI - not just asking questions, but actually collaborating with it across real projects.

Here is the odd thing I noticed:

As the AI got better, working with it sometimes got harder.

The answers were often correct. Sometimes even impressive. But friction kept showing up anyway.

I would find myself saying things like:

  • That is not what I meant.
  • I asked for a fix, not improvements.
  • This is good... but we are getting ahead of ourselves.

At first, I assumed the problem was me. Maybe my prompts were not clear enough. Maybe I needed to be more specific. Maybe I just had to learn how to ask better questions.

But the moment things really changed came from a different frustration.

I realized I felt like I had to retrain the AI every time I started a new chat.

Same work. Same expectations. But each session felt like starting over - re-explaining how I wanted things done, what boundaries mattered, what "helpful" meant.

So instead of fixing the answers again, I asked a different question:

How do I stop having to retrain you every time?

That question shifted everything.

The issue was not memory. It was missing structure.

So we did something unexpected.

We stopped improvising and wrote the rules down.

Not prompts - rules and guardrails. Roughly a hundred of them over time. About scope, pace, authority, assumptions, when to stop, and when not to help.
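A few of them, paraphrased from memory (illustrative wording, not the exact rules):

  • Fix only what I named. Improvements and extensions need explicit permission.
  • State assumptions before acting on them.
  • When the stated task is done, stop. Do not start the next step unprompted.
  • If you see a better approach, say so once and wait for a decision.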

Once those existed, the experience changed fast.

I began doing very concrete things:

  • Declaring whether I wanted analysis or execution
  • Saying when not to optimize or extend
  • Setting clear stopping points
  • Being explicit about who controlled pace and direction
  • Granting (or withholding) permission to redesign or refactor

What surprised me was how quickly the friction disappeared.

Not gradually. Almost immediately.

Same AI. Same technical capability. Completely different experience.

That is when the pattern became obvious.

The problem was never that the AI was doing too much. It was that it was being helpful without shared rules.

Once those rules were explicit, collaboration became easy - and scalable.

What really stuck with me is how familiar this felt.

Human teams work the same way. When expectations are implicit, the most capable person in the room ends up steering - often unintentionally. When roles, authority, and boundaries are explicit, people relax and the work improves.

That is the real lesson I took away:

Working well with AI is not about smarter prompts.
It is about designing the collaboration.

If you use AI regularly and sometimes feel like you are correcting direction more than solving problems, this might resonate.

Curious if others have noticed the same thing.