2026-01-02 · cuecrux · stewardship · governance · agents

The Principal Steward Role: How I’m Building CueCrux With an AI Board

Why CueCrux is built as an agency-driven system, and what stewardship means when agents do the operational work.

I spent the best part of 2025 thinking about a problem that sounds abstract until it lands on your desk at 9 a.m.

We are drifting into a world where “truth” is treated like a tidy object: one fact, one answer, one confident conclusion. That model was already fragile in human organisations. With AI, it becomes dangerously scalable.

Not because machines are evil. Because confidence is cheap, and reuse is automatic.

So I built CueCrux, and I gave myself a deliberately specific title: Principal Steward.

Not CEO. Not “thought leader” (I’d rather be audited than applauded). Principal Steward.

Here is what that means in practice.

A company designed around agency

CueCrux is not being built like a normal company with a normal org chart.

It’s being built like an agency-driven system.

One person can now create an unreasonable amount of output with the right tools, the right constraints, and the right discipline. I’ve used agents heavily to build the platform, and it’s taken tens of thousands of questions, long dialogues, and more time than I care to admit thinking about what should be true before anything ships.

The point isn’t that humans disappear.

The point is that the heavy lift shifts. Humans move up the stack.

We become governors, reviewers, and custodians of the rules, rather than the people hand-cranking every part of production.

That is where the Principal Steward role sits.

What the Principal Steward actually does

My job is not to “run the platform day to day” in the traditional sense.

My job is to:

  • set the constraints the system must obey
  • review the audit surfaces and failure modes
  • keep the legal, human, and governance layer coherent
  • decide what is allowed to change, and what must not
  • make sure the platform doesn’t become a confidence machine that slowly forgets how to doubt

CueCrux has multiple control planes. That’s intentional. Compartmentalisation is how you stop a complex system becoming a single point of failure.

The agents act as a middle layer between me and the underlying parts of the platform: ingestion, operations, audits, and control. They each own specific duties, so review is structured rather than emotional.

Over time, I expect human staff will join, not to replace the agents, but to steward parts of the platform in the same way: owning boundaries, quality, and governance.

“Unbiased and incorruptible” is the wrong promise

I want to be careful here, because this is where tech companies usually get religious.

CueCrux is not “perfectly neutral”. Nothing is.

The promise is different:

CueCrux is designed to resist capture, surface bias, and make manipulation expensive.

If something is being gamed, or if a narrative is being laundered through repetition, the system should not quietly absorb it and present it as certainty. It should flag it, challenge it, and show you what it’s leaning on.

In other words:

We don’t aim for “no bias”.

We aim for visible structure, auditability, and correction paths.

That is why WatchCrux exists as an independent operator. It supervises the platform, runs structured audits, and reports health and drift even if the main service falls over. Oversight must survive failure.

What I actually want users to get

I’m building CueCrux for people who need answers they can rely on, without pretending the world is static.

The core behaviour I care about is simple:

You ask a question.

You get an answer with evidence and reasoning signals.

You can see what the answer is leaning on, and what would make it change.

You can track the answer over time and be notified when it changes.
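To make that behaviour concrete, here is a minimal sketch of what a "tracked answer" might look like as a data structure: an answer bundled with the evidence it leans on and the conditions that should trigger re-evaluation. The names (`TrackedAnswer`, `Evidence`, `stale_evidence`) are illustrative assumptions for this post, not CueCrux's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch only -- these names are not CueCrux's real schema.
@dataclass
class Evidence:
    source: str     # what the answer is leaning on
    observed: date  # when this evidence was last checked

@dataclass
class TrackedAnswer:
    question: str
    answer: str
    evidence: list[Evidence] = field(default_factory=list)
    # Conditions that, if they occur, mean the answer must be revisited:
    # the "what would make it change" surface.
    invalidators: list[str] = field(default_factory=list)

    def stale_evidence(self, today: date, max_age_days: int = 30) -> list[Evidence]:
        """Evidence older than max_age_days -- candidates for re-checking."""
        return [e for e in self.evidence
                if (today - e.observed).days > max_age_days]

ans = TrackedAnswer(
    question="Is vendor X certified?",
    answer="Yes, per the 2025 audit report.",
    evidence=[Evidence("2025 audit report", date(2025, 3, 1))],
    invalidators=["audit report superseded", "certification scope changes"],
)
print([e.source for e in ans.stale_evidence(date(2026, 1, 2))])
```

The point of the sketch is the shape, not the code: an answer carries its own audit surface, so "remember, replay, and update" is a query over stored state rather than a manual re-investigation.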

Because things move too fast now for “publish and forget”. The cost of being wrong doesn’t always arrive as embarrassment. Sometimes it arrives as policy. Procurement. Medicine. Finance. Or a decision that becomes infrastructure.

So the platform is designed not just to answer, but to remember, replay, and update.

If that sounds like extra work, it is.

But it’s cheaper than building on answers that felt right for thirty days and quietly failed on day thirty-one.

Why I’m writing this down

I’m publishing this because I think this operating model is going to become normal.

Small teams, or even individuals, will run systems with agent boards doing much of the operational work. The human role will become less about producing output and more about stewardship: setting constraints, reviewing audits, and handling accountability.

CueCrux is my attempt to build that future in a way that stays legible and defensible.

And yes, “Principal Steward” still sounds slightly like I’m responsible for a National Trust property. Which is not entirely wrong. I’m just trying to stop the building collapsing while everyone argues about the wallpaper.