AI Governance Field Notes - Issue #1 - Why This Exists
Welcome to AI Governance Field Notes.
This newsletter exists because too many important conversations about AI are happening too loudly, too technically, or too late.
In my work with professional service firms, I keep encountering the same quiet tension:
AI is already affecting how work is produced, reviewed, and delivered—but the way firms think about AI hasn’t caught up to the way it’s actually being used. Not because people are careless or uninformed, but because the landscape is moving unevenly, guidance is fragmented, and responsibility still rests squarely on human judgment.
Field Notes is where I slow that conversation down.
Here, I’ll be sharing observations from the work itself—patterns I’m seeing across firms, questions that surface repeatedly in private conversations, and moments where guidance, expectations, and reality don’t quite line up. Sometimes that will come from regulatory shifts or industry recommendations. Sometimes it will come from a single sentence a managing partner says that perfectly captures the problem.
This isn’t a training series, a product feed, or a commentary on tools. It’s a place to think clearly about governance, responsibility, and decision-making in a world where AI capabilities are changing faster than firm policies ever have.
The schedule is intentionally irregular. These notes are published when there’s something worth pausing for.
I genuinely welcome suggestions, questions, and topics you're wrestling with. Many of the most useful conversations in this space start with someone saying, "I don't even know how to frame this yet." If that's you, you're in exactly the right place.
I’m glad you’re here.
Talk soon,
Rex
Rex C. Anderson, Principal
Desert Sage AI
Scottsdale, Arizona