Designing the Collaboration, Not the Rules
I have actually tried taking the rules to another AI.
A while back, I asked for a clean list specifically so I could upload it into a different system at work. It did not recreate this partnership, but it did remove several of the same pain points, especially during code review. For example, it started defaulting to a much stricter mode instead of broad, helpful expansion.
But the bigger unlock was not the rules themselves.
It was how they were created.
All of this started when I said something like:
“I feel like I have to retrain you every time I start a new chat.”
That reframed the problem. Instead of fixing answers, we zoomed out and treated the collaboration itself as the thing to design.
From there, a loop emerged:
- I would describe a friction point I kept running into
- I would ask the AI, “How do we prevent this from happening again?”
- It would propose a candidate rule
- I would tweak it to match how I actually work
- It would restate the rule clearly
- Once it proved useful, I would say, “Save it”
That loop repeated over time and eventually produced a fairly large ruleset.
One important caveat, though: even with the same rules, different AIs will not behave identically. Each system keeps making micro-adjustments based on your interaction history, how you enforce the rules, and the feedback you give. That is why dumping my rules wholesale into another AI helped, but did not recreate the same dynamic.
Which is also why I think the better approach is to collaborate with your AI to build your rules, based on the friction you are actually seeing.
I have thought about listing my rules at some point, but if I do, it will be as examples, not as a template to copy. The real value turned out to be the process of turning repeated friction into explicit guardrails, one at a time.
That shift — from correcting outputs to designing how you work together — is what made the difference.