What It Looked Like When It Mattered

2025-12-19


Over the past week, I worked with an AI assistant to write a long-form white paper about human–AI collaboration — specifically, why durable collaboration doesn’t come from clever prompts, but from explicitly negotiated rules and guardrails.

By the time this story begins, that work was finished.

  • The paper was written.
  • The rules we describe in it already existed.

What remained was a practical question:

Where do we put this, so the right people actually see it?

That question, it turns out, is where things got interesting.

The Goal Was Clear

From the beginning, my goal was consistent: get this in front of AI developers. Not reviewers. Not academics. Builders — the people actually designing the AI systems that we users interact with in the real world. I’ve named a stress point many users feel, and I think developers need to see what I found. Because the problem isn’t going away. It’s getting louder.

The academic shape of the paper wasn’t in conflict with that goal. In fact, I believed it strengthened it. Rigor buys credibility. It slows readers down in a good way. My colleagues’ reactions confirmed that. They didn’t skim it and shrug — they said, “I want to read this when I have time.”

That was exactly the response I hoped for.

The Shift to Publishing

When I shifted from writing to deciding where to publish, my focus stayed on audience and reach. I assumed that if a proposed path carried meaningful friction—gatekeeping, delays, dependencies—that would surface early, while alternatives were still easy to evaluate.

The first path we explored was arXiv.

At face value, it didn’t seem unreasonable. I was told that developers read papers there, and the paper’s structure clearly fit alongside academic work. I wasn’t familiar with the mechanics, but I trusted that if there were constraints, they would become obvious quickly.

So I followed the process.

  • I created an account.
  • I started a submission.
  • I selected what appeared to be the right category.

That’s when the friction appeared.

arXiv requires endorsement for certain categories, including Human–Computer Interaction. To submit, I needed an endorsement from someone who had already published multiple papers in specific categories within a defined time window.

This wasn’t a minor detail. It meant finding someone qualified, explaining the work, and waiting for a response—all after the submission process was already underway.

The timing mattered. Earlier that same day, I had been physically in the office with colleagues who might have helped. The next day, I would be working from home, and the calendar was closing fast. If this slipped by even a day or two, it would likely miss the holiday window and lose momentum.

This didn’t make arXiv the wrong platform.

But it changed the cost—and that cost surfaced later than it should have.

Up to that point, I assumed we were still aiming for the same thing: reaching the right audience with minimal delay.

What I was now facing was a different reality—one I hadn’t noticed until the cost surfaced.

Was this path being chosen because it served the goal, or because it matched the shape of the work?

The friction itself wasn’t the problem.

The timing of when it appeared was.

This wasn’t about quality.

It wasn’t about rigor.

It was about carrying assumptions from the writing phase into the publishing phase without noticing the shift.

That’s when frustration surfaced.

Not politely.

Not passively.

Directly.

And, by my own admission, rudely.

I was frustrated and I let it show. An AI may not react to rudeness the way a person would, but it does recognize frustration, and that recognition often triggers a more deferential response. In this case, the deference delayed our realignment.

That matters — this is a lived example, not a polished one.

I wasn’t reacting to arXiv as an institution. I was reacting to the fact that the cost of that path showed up late, when time and energy were already thin. I said plainly that this should have come up earlier, so the trade-off could have been weighed while options were still open.

Reframing the Question

Once I noticed we were not aligned and named it, the rules activated exactly as intended:

  • Assumptions were surfaced.
  • The goal—developer reach over formal validation—was reasserted.
  • Channel constraints were acknowledged explicitly.
  • Deference was rejected in favor of decisive guidance.

The publication strategy shifted. Not to a fallback option, but to where it should have landed from the outset: developer-native paths.

  • Hacker News
  • Targeted distribution on X, with explicit tagging of AI builders
  • High-signal Reddit communities
  • Private circulation to practitioners

We agreed to defer academic publication until January, when time pressure was gone and endorsement friction could be handled deliberately instead of reactively.

The paper itself didn’t need to change.

What changed was clarity.

The work already had rigor. It didn’t need academic placement to be legitimate. It needed to be visible to the right people.

Why This Example Matters

This wasn’t just a disagreement about publishing strategy.

It was a live test of the paper’s central claim.

The collaboration didn’t fail because anyone was wrong.

It didn’t fail because the goal was unclear.

It bent because an assumption went unchallenged, and because friction showed up later than it should have.

What mattered is what happened next.

The conflict didn’t collapse into disengagement.

It didn’t spiral into repeated justification or defensiveness.

Once the misalignment was named, the rules we had already negotiated shaped the response. Pushback was allowed. Goals were restated. The publication plan changed without reopening the work itself.

That stability is not accidental. It is the product of explicit governance.

The conflict wasn’t pleasant.

But it was contained.

If this system only worked in theory, this moment would have broken it.

If it only worked in ideal conditions, this would have exposed that.

Instead, it did what it was designed to do: surface friction early enough to change direction without breaking trust.

That is what it looked like when it mattered.


If you would like to see the white paper, it’s available as a PDF download here:

Download the full white paper (PDF)

https://bertstevens.net/assets/pdf/Durable-Human-AI-Collaboration-Governance-Whitepaper.pdf