
Your AI Is Guessing. Here Is How to Make It Know.

Jonny Bowker

Most enterprise AI failures are not model failures. They are specification failures.


Many experienced consultants and client services directors know the pattern: a forty-page requirements document, expanded from a statement of work that took months to negotiate, is outdated before the first sprint starts and ambiguous enough to generate multiple interpretations across multiple teams.

Specifications are supposed to be the contract between what stakeholders want and what gets delivered. In practice, they often become the most fragile artefact in the delivery lifecycle: the document everyone references but nobody fully trusts.

Most organisations that have adopted AI in the past three years have now run into the same failure mode. The tools work. ChatGPT, Claude, Gemini, and Copilot are genuinely capable systems. Teams use them, get useful outputs, demonstrate value, and expand adoption. Then the move from pilot to production exposes the weakness. Outputs become inconsistent. Governance questions become uncomfortable. Teams that were saving time are suddenly spending it reviewing outputs that cannot be trusted for anything that matters.

The problem is not the technology. The problem is the brief.

The sixty-year lesson nobody applied to AI

In the 1960s, NASA engineers faced a version of this problem with sharper consequences.

How do you ensure that a system built by hundreds of engineers, across multiple organisations, under extreme time pressure, produces exactly the right output under all conditions?

The wrong answer meant mission failure. In some cases, it meant lives lost.

Their answer was formal specification. Before a single component was built, they defined precisely what the system must do, under what conditions, and how success would be verified. The specification was not a planning document. It was a governing contract. If the implementation deviated from the specification, the deviation was a defect. If the specification was ambiguous, the ambiguity was corrected before building began.

That discipline, refined over decades and later formalised academically as spec-driven development, became the benchmark in safety-critical engineering. Teams that adopted it reduced defects, shortened delivery timelines, and cut rework. The specification was the difference between a rocket that worked and one that did not.

Now it is the difference between an agent that works and one that does not.

For decades, this discipline stayed inside software because only compilers and interpreters could execute a specification. You could not hand a formal specification to a machine and ask it to produce a marketing campaign, a research report, or a strategic analysis.

Until now.

What changed

AI agents are universal executors. Give them a strong enough specification and they can produce research findings, strategic analyses, compliance assessments, marketing campaigns, governance documents, legal reviews, training programmes, and software.

The mechanical limit that kept specification discipline inside software has gone.

Most organisations, however, still use AI as a prompting tool. They describe what they want, review what they get, iterate conversationally, and call the result done. That can work for simple tasks. For anything that requires consistency, governance, domain specificity, or institutional knowledge, it fails in predictable ways.

Without a specification, the AI is always guessing at intent. It produces outputs that are statistically reasonable for the general case. It does not know what reasonable means for your specific organisation, your clients, your constraints, or your definition of done.

The result is familiar:

  • constant supervision
  • inconsistent quality across teams and sessions
  • weak auditability
  • limited trust for high-value work

The promise of AI-enabled scale remains unrealised not because the models are inadequate, but because the briefing discipline is absent.

Insight

The model is the executor. The specification is the intelligence.

The architecture problem

There is a structural reason this keeps happening. Most organisations are deploying horizontal AI: general-purpose models designed to serve any user in any domain with any request. These systems are optimised for breadth. They are not optimised for your business.

What enterprise AI actually requires is vertical AI: systems built for a specific business, in a specific domain, governed by that organisation’s knowledge, policies, decisions, and constraints.

Vertical AI does not guess at intent. It operates from specification. It knows your business because it was built from your business.

Horizontal AI is built for everyone. That means it is optimised for no one.

The gap between horizontal and vertical AI is not a technology gap. It is a methodology gap. ChatGPT, Claude, Gemini, and Copilot are not the enemy of vertical AI. They are the raw material. Specification discipline is what turns general capability into specific, governed intelligence.

At Advanced Analytica, that is the point of the method: use general-purpose AI to build specific AI. Use horizontal tools to construct vertical operating models. The specification is the instrument of that transformation.

The spec-driven methodology

The Spec-driven Methodology extends proven engineering discipline into the wider set of professional domains where AI agents can now execute work. The principles are unchanged. What has changed is the executor and the scope of application.

In this model, every piece of AI-executed work begins with a specification that defines precisely:

  • what is to be produced
  • who it is for
  • what constraints apply
  • how success will be judged

The specification is written before execution begins. It is validated by the people who will live with the output before the AI is instructed to produce anything. Everything the AI produces is then verifiable against the specification. Iteration improves the specification, not just the output.
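To make the idea concrete, here is a minimal sketch of what a machine-checkable specification along these lines might look like. All of the names here (`Specification`, `verify`, the example criteria) are illustrative assumptions, not part of the IBOM® method itself:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Specification:
    """Hypothetical sketch of a spec covering the four questions above."""
    deliverable: str                     # what is to be produced
    audience: str                        # who it is for
    constraints: list[str]               # what constraints apply
    # How success will be judged: named, executable acceptance checks.
    acceptance_criteria: dict[str, Callable[[str], bool]] = field(default_factory=dict)

    def verify(self, output: str) -> dict[str, bool]:
        """Check an AI-produced output against every acceptance criterion."""
        return {name: check(output) for name, check in self.acceptance_criteria.items()}

spec = Specification(
    deliverable="Quarterly compliance summary",
    audience="Client services directors",
    constraints=["UK English", "no unverified claims"],
    acceptance_criteria={
        "mentions_review_period": lambda text: "Q3" in text,
        "within_word_limit": lambda text: len(text.split()) <= 500,
    },
)

results = spec.verify("Q3 summary: all controls operated as designed.")
```

The design point is that the acceptance criteria are executable, so "verifiable against the specification" becomes a check you can run on every output, not a judgement call made after the fact.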

This matters wherever the briefing problem exists:

  • research commissions that answer the wrong question
  • marketing briefs that generate conflicting interpretations
  • strategy documents that become obsolete before delivery begins
  • legal reviews that miss edge cases nobody specified

In each case, the failure is not in execution. It is in the brief.

The IBOM approach

The practical organisational expression of this methodology is the Intelligent Business Operating Model (IBOM®): the structured, machine-readable representation of how a business thinks, decides, acts, and governs itself.

It is not a chatbot. It is not a workflow automation tool. It is not a platform category shortcut.

It is the governing intelligence layer that sits between a business and its AI.

IBOM® contains the organisation’s domain knowledge, governance rules, workflow logic, integration architecture, and the specifications of the digital workers deployed within it.

Those digital workers are what we call Digividuals: intelligent agents built for specific roles in specific domains, governed by IBOM® and operating from specification rather than prompt.

A Digividual does not guess at your intent. It operates from it.

The five-stage discipline

The construction of an IBOM® follows a disciplined sequence:

  1. Deconstruct the client’s problem and existing knowledge.
  2. Specify what the system must know, do, and never do.
  3. Validate the specifications with the client before build begins.
  4. Develop the system under expert supervision.
  5. Operate the deployed system with governance and controlled extension.

Nothing is built before the specifications are signed off.

This is not a stylistic preference. It is the same engineering principle that governed spacecraft and later safety-critical software: ambiguity in the specification becomes a defect in the output.

Eliminate the ambiguity first.
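The gate in the sequence above can be sketched as a simple state machine. This is an illustrative assumption about how such a control might be encoded, not an implementation of the IBOM® process itself:

```python
from enum import Enum, auto

class Stage(Enum):
    DECONSTRUCT = auto()
    SPECIFY = auto()
    VALIDATE = auto()
    DEVELOP = auto()
    OPERATE = auto()

class Engagement:
    """Illustrative gate: the develop stage is unreachable until sign-off."""
    def __init__(self) -> None:
        self.stage = Stage.DECONSTRUCT
        self.spec_signed_off = False

    def sign_off(self) -> None:
        # Client validation happens here, before any build work starts.
        self.spec_signed_off = True
        self.stage = Stage.VALIDATE

    def develop(self) -> None:
        if not self.spec_signed_off:
            raise RuntimeError("Nothing is built before the specifications are signed off.")
        self.stage = Stage.DEVELOP

engagement = Engagement()
try:
    engagement.develop()   # premature build attempt is rejected
except RuntimeError:
    pass
engagement.sign_off()
engagement.develop()       # now permitted
```

Encoding the gate as code rather than convention is the point: a premature build is not a process violation to be caught in review, it is an error the system refuses to execute.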

What this means in practice

For consultants and professional services firms, the implication is immediate. The briefing problem is not just a technology problem. It is a delivery model problem. Organisations that master specification discipline in their AI programmes will operate at a different leverage ratio: faster delivery, higher consistency, less rework, and AI that can be trusted with work that actually matters.

For client services directors, the governance dimension is even more pressing. Horizontal AI deployed without specification discipline is ungovernable by definition. It produces different outputs from the same input on different days. It has no reliable audit trail. It cannot demonstrate conformance to an agreed standard.

For regulated industries, that is not a risk to be managed. It is a risk to be eliminated.

For enterprise buyers evaluating AI programmes, the diagnostic question is simple:

What is the specification?

If the answer is a prompt, a system message, or a set of guidelines sitting in a document somewhere, the implementation is still a vibe-coding exercise at enterprise scale. The outputs may be plausible. They will not be governed.

The question is not whether your AI is capable.

The question is whether it knows your business.

A final thought

Spec-driven development is not a new idea. It is a sixty-year-old idea that AI agents have made universally applicable for the first time.

The NASA engineers who inspired it, the software teams who refined it, and the safety-critical industries that depended on it were solving the same problem enterprise AI programmes face today: how do you ensure that a complex system produces the right output, consistently, under all conditions, in a way that can be verified and governed?

Their answer was the specification.

It remains the answer.

NASA would not launch without a specification.

Neither should your AI.

Read the white paper

If you want the longer argument in white paper form, read From NASA to AI Agents.

About the author

Jonny Bowker is the founder of Advanced Analytica Ltd, an AI strategy and architecture consultancy specialising in Intelligent Business Operating Models, spec-driven AI development, and the design and operation of intelligent digital workers. Advanced Analytica works with enterprise clients across professional services, financial services, and regulated industries.
