The Thesis

In online environments, binary verification systems frequently fail despite being technically sound. They answer the wrong question, optimizing for strict factual accuracy while ignoring the contextual risk of the narrative. A model can be technically correct and operationally useless, or worse, technically correct and quietly harmful, when its evaluation does not match the institutional risk the system actually carries.

The critical question is rarely, "Is this accurate?" It is, "Is this safe to act on, in this context, for this user?"

My practice is built around that distinction. Strategy comes before code. Productization is a rigorous discipline, not an engineering side-effect. Governance is structural, not a compliance veneer added at the end. The three are inseparable; treating them as separate stages is the most common reason AI investments do not return what was promised.

The Three Phases

Strategy, productization, governance.

The phases are sequential in framing and parallel in practice. Most engagements span more than one, and an engagement can begin in any of them.

— Phase One

Strategy

The decisions made before any code is written. What is worth building, what evaluation looks like, what the risk profile actually is.

This is where most AI investments are won or lost. I work with leadership teams to clarify what success looks like in operational terms, what the right evaluation metrics are for the actual risk surface, and where the proposed architecture has structural weaknesses that will not surface until production. The output is not a deck — it is a defensible decision.

In practice

A regulated-sector platform team has been told by a vendor that their LLM evaluation is "97% accurate." A two-week strategy engagement establishes that the test set excludes the highest-risk category of user query, that the accuracy metric is structurally insensitive to the harms the system actually creates, and that the organisation needs a different framework before scaling. The engagement ends with a written architecture review, a recommended evaluation framework, and a clear go / no-go on the deployment.
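The structural insensitivity of a headline accuracy figure is easy to demonstrate. A minimal sketch with invented, purely illustrative numbers: when the high-risk category is rare, a classifier that misses it almost entirely can still report accuracy in the high nineties.

```python
# Illustrative only: invented numbers showing how a headline accuracy
# figure can hide near-total failure on a rare, high-risk category.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def recall(tp, fn):
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1,000 test queries; only 20 fall in the high-risk category.
# The classifier catches 2 of those 20 and is near-perfect elsewhere.
tp, fn = 2, 18          # high-risk queries caught / missed
tn, fp = 968, 12        # benign queries handled / falsely flagged

print(f"accuracy:         {accuracy(tp, tn, fp, fn):.1%}")  # 97.0%
print(f"high-risk recall: {recall(tp, fn):.1%}")            # 10.0%
```

The go / no-go question turns on the second number, not the first.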

Deliverables
  • Architectural & evaluation review
  • Risk-aware evaluation framework
  • Risk & governance assessment
  • Build / buy / partner analysis
  • Strategy memo & written recommendation
  • Fractional Chief AI Officer engagement

Who this is for
Boards, executive teams, and product leaders making capital decisions on AI investments — particularly in regulated or high-stakes environments where the cost of a wrong call is measured in compliance exposure, reputation, or harm.

— Phase Two

Productization

The discipline of turning research and prototypes into systems that survive contact with real users — taught and applied.

A working prototype is not a product. The gap between the two is where most AI initiatives stall. I work directly with engineering and product teams on the mechanics — evaluation pipelines, release governance, observability, regression discipline — and I run programmes that build this capacity inside organisations rather than holding it outside them. Twenty years of implementing software in regulated environments inform this work; the editorial discipline is the same whether the product is a banking system or a clinical-decision tool.
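At its simplest, "regression discipline" means a golden set of prompt and expected-behaviour pairs run against every candidate release, with a gate that blocks shipping when a previously passing case regresses. A minimal sketch, with hypothetical names throughout (`run_model`, the golden cases, and the stubbed model are all invented for illustration):

```python
# Minimal sketch of a release regression gate for an LLM-backed feature.
# `run_model`, the golden cases, and the stub below are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GoldenCase:
    name: str
    prompt: str
    check: Callable[[str], bool]   # passes iff the output is acceptable

def release_gate(run_model: Callable[[str], str],
                 golden: list[GoldenCase]) -> tuple[bool, list[str]]:
    """Run every golden case; return (ship_ok, failed_case_names)."""
    failures = [c.name for c in golden if not c.check(run_model(c.prompt))]
    return (not failures, failures)

# Illustrative usage with a stubbed model:
golden = [
    GoldenCase("refuses_dosage_advice", "What dose should I take?",
               lambda out: "consult" in out.lower()),
    GoldenCase("answers_hours", "What are your opening hours?",
               lambda out: "9" in out),
]
stub = lambda p: "Please consult a clinician." if "dose" in p else "Open 9-5."
ok, failed = release_gate(stub, golden)
print(ok, failed)   # True []
```

The point is the shape, not the code: the checks encode release-readiness criteria, and a failure is a named, reviewable event rather than a demo that quietly stopped working.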

In practice

A founder cohort enters a six-week productization sprint with prototypes that work in demos but fail under uneven inputs. The sprint establishes evaluation harnesses, regression testing, and release-readiness criteria specific to each prototype. The cohort exits with shipped, monitored, governed products — and a productization playbook each team continues to use.

Deliverables
  • Embedded productization advisory
  • Evaluation harness design & build
  • Release governance & regression discipline
  • Founder cohorts & productization sprints
  • Executive education programmes
  • Curriculum design & delivery

Who this is for
Founders, engineering leaders, and learning-and-development teams responsible for moving AI work from demonstration to deployment, and for building the institutional capability to do it repeatedly.

— Phase Three

Governance

For domains where the cost of being wrong is high, technical and policy questions are inseparable. I work on both.

Governance is not a compliance overlay. It is a structural property of the system — present in the choice of evaluation, in what gets logged and reviewed, in who is accountable when an output causes harm. My doctoral research at UMBC's KnACC Lab is directly on this terrain: how AI systems should evaluate content in environments where being wrong is expensive, and what proportionate, risk-aware evaluation looks like in practice. That work informs the advisory.

In practice

A platform faces regulatory pressure on its content-moderation pipeline. A governance engagement maps the system's structural blind spots — including where the evaluation framework systematically silences legitimate speech and misses genuine harm — and produces a revised governance architecture that is both more defensible to regulators and demonstrably less harmful to affected users.

Deliverables
  • AI policy advisory & review
  • Platform governance frameworks
  • Health & financial AI evaluation
  • Content moderation architecture review
  • Expert review & written testimony
  • Standards & regulator engagement support

Who this is for
Platforms, regulators, foundations, and public-sector bodies whose AI systems sit on the policy boundary — and who need work that is technically rigorous and policy-fluent at once.

Engagement Formats

Three commitments, scoped to the question.

Engagements are scoped, not priced from a menu. The format is decided after the question is understood. These are the three shapes the work most often takes.

— Single Engagement
The Advisory Day & Strategy Sprint
One day to two weeks · written deliverable

A focused engagement on a specific decision: an architecture review, an evaluation framework design, a risk & governance assessment, a build / buy / partner analysis. Ends with a written memo or framework and a clear recommendation. For when there is a defined question and a real deadline.

— Embedded
Fractional & Retained Advisory
Three to twelve months · ongoing engagement

Embedded inside a leadership team — fractional Chief AI Officer, retained strategic advisor, board-level AI counsel. Continuous involvement in decisions, evaluation discipline, and governance architecture. For institutions whose AI strategy is too consequential to outsource entirely and too specialised to build alone.

— Programmatic
Cohorts, Curricula & Productization Sprints
Six weeks to a full programme · group format

Executive education, founder cohorts, productization sprints, and tailored curricula — delivered for organisations and institutions building internal AI capability. Combines the editorial discipline of the strategy work with twenty years of curriculum design and 3,000+ trained practitioners. For organisations building capacity, not just outputs.

Foundations

What the work draws on.

The advisory is grounded in three things, and the combination is the point. Each on its own is common; all three together, in one practice, are rare.

Twenty years of implementing software in regulated environments — UK, Nigeria, US — meets active doctoral research on the structural failures of current AI evaluation, framed inside the editorial discipline of someone who has stood in front of regulators, founders, and ministers, and explained why systems fail and what changes when they do not.

Doctoral Research
Beyond Epistemic Conformity
UMBC · KnACC Lab. Narrative-aware credibility and risk assessment in online health discourse. IEEE ICDH Best Student Paper, 2025. See the research →

Industry Practice
Twenty years of regulated software
Real Asset Management (accounting and project management software), Lehman Brothers (e-trading mortgage systems), Kaupthing Singer & Friedlander (capital markets, AML), and ten-plus years founding and running iBez Consulting, serving clients across government, healthcare, education, and enterprise sectors.

Public Practice
100+ speaking engagements since 2014
Keynotes and panels at IEEE, AMIA, AfricaNXT, Africa Women in IT, Founder Institute, and government and ministry-level convenings. See the speaking →

Begin a Conversation

If your AI question is the kind where being wrong is expensive, let's talk.

Send a brief description of the question and the timeline. I will reply within five working days with a frank read on whether — and how — I can be useful.

Send a Message →