About
I work on AI, systems integration, and automation, with a background in design automation and engineering.
In engineering, answers are not accepted because they sound convincing. They are accepted because they survive testing, make their assumptions explicit, and fail in ways that can be understood in advance. That way of thinking shapes everything I build.
I’m interested in systems that remain reliable when they are reused, scaled, automated, and exposed to conditions their creators did not anticipate.
Context
I’m Myles Bryning, based in Cambridge, UK.
My early training was in engineering disciplines where confidence without verification is treated as a liability. You learn quickly that neat conclusions are often the most dangerous ones, because they hide where the load is actually carried.
Over time, that perspective carried across into software, automation, and AI systems.
The materials changed, but the failure modes didn’t.
What I care about
I’m consistently drawn to systems that:
- Show their working rather than asking for trust
- Expose uncertainty instead of smoothing it away
- Degrade safely when conditions change
- Resist confidence inflation as they scale and spread
These are not abstract preferences. They are responses to how complex systems actually fail in practice: quietly, politely, and long after the original decision has been forgotten.
CueCrux
I started CueCrux after watching increasingly confident answers circulate without any way to inspect what they depended on.
Modern AI systems are persuasive by default. They are fast, fluent, and often reassuring. When they are correct, they feel authoritative. When they are wrong, they tend to fail without friction and at scale.
From 2025 onward, this problem accelerated as fabricated but plausible content became easier to produce and harder to distinguish from material that had been tested, reviewed, or earned its confidence over time.
At that point, treating answers as untested artefacts stopped being a theoretical precaution and became a practical necessity.
CueCrux exists to apply an engineering discipline to answers: traceability, explicit assumptions, visible uncertainty, and the ability to understand what would cause a conclusion to stop holding.
My role is not to defend answers, but to ensure the system behaves responsibly as confidence accumulates.
Closing
I write to think clearly about these problems.
CueCrux exists to test whether those ideas can survive contact with reality.