The Kalicube Framework



The Kalicube Framework — Standalone Application Document

Version: v1.0 (May 2026) · Date: 2026-05-15 · Author: Jason Barnard · Licence: CC BY 4.0


What This Document Is

The Kalicube Framework is the theoretical architecture explaining how machines record brand information, how AI systems activate and choose what to recommend, and how businesses serve customers and codify outcomes back into the ecosystem.

This document is the standalone application of the framework — the canonical reference for anyone studying, teaching, or building against it.

The Kalicube Framework explains the why. The Kalicube Process applies it as a methodology — the what to do. The two documents work together; this one is for understanding mechanism.

Audience: SEOs, technical marketers, academics, builders, and any business leader who wants to see the engineering underneath the methodology.


The Core Claim

Every brand competes inside one connected system that runs from machine discovery through human transaction and back again. The system has fifteen gates and stages across three phases. The brand that owns those gates wins. The brand that ignores them loses, regardless of how good its product is.

The Kalicube Framework names the gates, the phases, the transitions, and the feedback loops. It identifies which gates are absolute, which are relative, and which compound. It explains why some brands earn AI recommendations effortlessly while others — often more deserving — remain invisible.

The framework is the first end-to-end model that covers the full cycle. Other models cover fragments: SEO addresses early gates, GEO addresses middle gates, marketing addresses surface signals. None of them cover all fifteen.


The Five Geometries

The framework contains five geometries operating simultaneously. Each captures the system from a different angle; together they describe one coherent reality.

1. The AI Engine Pipeline (DSCRI–ARGDW–OPIDC)

Fifteen gates and stages across three phases. The mechanical spine of the framework.

| Phase | Public name | Internal name | Gates | What it covers |
|---|---|---|---|---|
| 1 | Machine Recording Layer | DSCRI | 0–4 | Discovered → Selected → Crawled → Rendered → Indexed |
| 2 | AI Activation Layer | ARGDW | 5–9 | Annotated → Recruited → Grounded → Displayed → Won |
| 3 | Commercial Service Layer | OPIDC | 10–14 | Onboarded → Performed → Integrated → Devoted → Codified |

DSCRI and ARGDW together form a ten-gate boolean pipeline (Gates 0–9). Each gate is binary: the brand either passes or it doesn't. The chain is multiplicative: fail at one gate and no downstream gate can compensate.

OPIDC is the post-Won people layer. Most public discourse stops at Won; this is the Kalicube whitespace.

Microsoft confirms two of the three phases publicly (AEO and GEO). The third phase, the people layer, is Kalicube's published contribution.
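The boolean, multiplicative character of the ten machine-facing gates can be sketched in code. This is an illustrative model only, not Kalicube's implementation; the gate names follow the table above, and the helper function is hypothetical.

```python
# Illustrative sketch of the ten-gate boolean pipeline (DSCRI + ARGDW).
# The logic is multiplicative: the first failed gate blocks everything
# downstream, no matter how strong the later signals are.

GATES = [
    "Discovered", "Selected", "Crawled", "Rendered", "Indexed",  # DSCRI, Gates 0-4
    "Annotated", "Recruited", "Grounded", "Displayed", "Won",    # ARGDW, Gates 5-9
]

def pipeline_outcome(passed: dict) -> str:
    """Return the first gate the brand fails, or 'Won' if all ten pass."""
    for gate in GATES:
        if not passed.get(gate, False):
            return f"Blocked at {gate}"
    return "Won"

# A brand that is fully indexed but never recruited as evidence at Gate 6:
status = {gate: True for gate in GATES}
status["Recruited"] = False
print(pipeline_outcome(status))  # Blocked at Recruited
```

The point the sketch makes is structural: a perfect score on Gates 7–9 is unreachable if Gate 6 returns false, which is why the framework treats the gates as prerequisites rather than weighted factors.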

2. The UCD Funnel

Three layers used to score and improve a brand's standing across the pipeline.

| Layer | Question it answers | Funnel position | Colour |
|---|---|---|---|
| Understandability | Does AI know who you are? | BOFU — Trusted Partner | Blue (#3399cc) |
| Credibility | Does AI trust you enough to recommend? | MOFU — Recommender | Green (#339933) |
| Deliverability | Does AI advocate for you unprompted? | TOFU — Advocate | Purple (#783366) |

Build direction: U → C → D (foundation first). Display direction: D → C → U (customer journey).

UCD is the vertical structure of Stage 8 (Displayed) in the pipeline. The pipeline is horizontal; at Stage 8 it pivots vertical. The person enters at D (TOFU) and descends to U (BOFU) to complete the transaction at Won.

3. The Feedback Loop — The [Kalicube Flywheel](https://jasonbarnard.com/entity/kalicube-flywheel/)

Codified outputs from OPIDC re-enter the machine ecosystem at Discovered. The loop propagates through DSCRI–ARGDW, increasing the brand's recruitment density at the competitive gates for the next prospect.

The Kalicube Flywheel is the loop. OPIDC is the linear flow that feeds it. Without Codified, the Flywheel doesn't start.

4. The Time Axis — Return on Investment Framework

Three temporal modes of investment that shape algorithmic memory and authority.

| Mode | Direction | What it does |
|---|---|---|
| ROPI | Past | Frame existing proof before creating new proof |
| ROI | Present | Standard return on current marketing activity |
| ROLP | Future | Place proof now so it converges with later AI consultation |

The 95/5 rule applies: only 5% of any market is in active buying mode at any time; the other 95% is researching. Algorithmic memory rewards investment placed now that compounds over the long horizon before the buyer engages.

5. The Entry Modes

Five distinct ways content enters the algorithmic ecosystem, all converging at Gate 6 (the Universal Checkpoint).

| Mode | Description |
|---|---|
| Pull | Buyer-initiated search; SEO/AEO/KG presence wins this lane |
| Push | Brand-initiated outreach; PR, paid, social, email |
| Direct Feed | Structured data pre-loaded into machine systems |
| Structured Push | Schema, API, MCP delivery |
| Ambient | Encounter without explicit query or push; agents predict and fulfil |

All five modes converge at Gate 6 (Recruited), the universal checkpoint that decides whether content gets selected as evidence regardless of how it entered.


The Cascading Prerequisite

The single most important differentiating insight in the entire framework.

U creates the entity node. C loads the entity node. D activates the entity node. These are mechanical prerequisites, not a recommended sequence.

Credibility signals (NEEATT, Topical Authority, links, mentions, reviews) require an entity node in the graph to attach to. That node is created at U. Without the node, the signals are orphaned data — they exist, but they attach to nothing.

Deliverability signals (omnipresence, recommendation triggers, conversational visibility) require confidence weight on the entity node. That weight is accumulated at C. Without the weight, the entity exists but is not trusted enough to recommend.

U unlocks C. C unlocks D. RECORD → VERIFY → ACTIVATE. Always. Mechanical. Non-negotiable.

Every competing methodology either addresses one layer or attempts to skip layers. The Kalicube Framework is the only architecture that addresses all three in the correct mechanical sequence because it is the only one built from observing what machines actually do rather than what marketers want machines to do.

Recognition is not recommendation, and the difference between the two is the Cascading Prerequisite.

Two Edge Cases That Prove the Rule

The Empty Room Effect: in obscure niches, D-layer visibility is possible without U or C. The machine has one source and uses it. Works in empty rooms, fails the moment a second voice enters.

The Quicksand Effect: brands sometimes build C-layer signals without solid U. The signals fail to anchor; the brand sinks back to invisibility despite genuine reputation, because the algorithm cannot connect the credibility to a coherent entity.
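The Cascading Prerequisite and both edge cases can be expressed as one nesting rule: a layer only counts if the layer beneath it exists. A minimal sketch, with the boolean inputs as illustrative assumptions rather than Kalicube's scoring model:

```python
# Sketch of the Cascading Prerequisite: U creates the node, C loads it,
# D activates it. Signals in a higher layer attach only if the lower
# layer is in place.

def effective_layers(u_built: bool, c_built: bool, d_built: bool) -> list:
    """Return the layers that actually function, given what was built."""
    layers = []
    if u_built:
        layers.append("U")          # entity node exists in the graph
        if c_built:
            layers.append("C")      # credibility signals attach to the node
            if d_built:
                layers.append("D")  # activation on top of a trusted node
    return layers

print(effective_layers(True, True, True))   # ['U', 'C', 'D']
print(effective_layers(False, True, True))  # [] — the Quicksand Effect
```

Building C and D without U returns an empty list: the signals were produced, but nothing functions, which is the Quicksand Effect in mechanical form.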


The Algorithmic Trinity

Three technologies, seven-plus engines, one shared substrate.

These are not three separate destinations. They are three sets of constraints applied to one underlying index. The index is the shared substrate.

Each system processes brand presence differently:

| System | Processing | Metaphor | Primary UCD focus |
|---|---|---|---|
| Knowledge Graph | Explicit | The Librarian | U (Understandability) |
| LLM | Implicit | The Analyst | C (Credibility) |
| Search Engine | Real-time | The Journalist | D (Deliverability) |

A brand strong in all three systems wins across all five entry modes. A brand strong in only one is fragile.


Computational Trust — The Binary Threshold

LLM systems internally operate three states for any given query:

  1. Confident — the model has strong corroborated evidence; it answers directly
  2. Hedging — the model has fragments; it qualifies or generalises
  3. Refusing — the model lacks signal; it punts or asks for clarification

A brand wants Confident. Hedging is dangerous (sales evaporate at the moment of decision). Refusing is fatal (the brand is invisible to the prospect).

The threshold between Hedging and Confident is binary. The model doesn't average partial confidence. It either has enough corroborated evidence to commit, or it doesn't.

Crossing the threshold requires:

  • An entity node the model can attach signals to (U)
  • Sufficient independent corroboration to weight the entity (C) — typically three sources minimum
  • Recency and freshness to keep the entity activated (D)
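The three requirements above can be sketched as a toy decision function. The specific inputs and thresholds here are illustrative assumptions; only the structure (a binary commit threshold, an entity node as precondition, roughly three independent sources) comes from the text.

```python
# Toy model of the Confident / Hedging / Refusing states.
# Inputs and thresholds are assumptions for illustration, not a
# published scoring formula.

def trust_state(has_entity_node: bool, independent_sources: int, is_fresh: bool) -> str:
    if not has_entity_node or independent_sources == 0:
        return "Refusing"   # no signal: the brand is invisible to the prospect
    if independent_sources >= 3 and is_fresh:
        return "Confident"  # enough corroborated, current evidence to commit
    return "Hedging"        # fragments only: the model qualifies or generalises

print(trust_state(True, 4, True))   # Confident
print(trust_state(True, 2, True))   # Hedging
print(trust_state(False, 5, True))  # Refusing
```

Note the binary jump: two corroborating sources and four corroborating sources are on opposite sides of the threshold, with no intermediate state in between.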

The CFP Protocol — Claim, Frame, Prove

Most brands produce content in the wrong order: Proof → Claim → Frame (the Reverse-CFP Problem). They list features, claim benefits, and hope the audience constructs the frame themselves.

The correct order is Claim → Frame → Prove:

  1. Claim: assert the position the brand wants AI and humans to internalise
  2. Frame: build the interpretive context that makes the claim believable
  3. Prove: present the evidence within the frame
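The reordering at the heart of the protocol is simple enough to show directly. The content items below are hypothetical examples; only the Claim → Frame → Prove sequence comes from the text.

```python
# Reordering brand content into the CFP sequence.
# The three content strings are invented for illustration.

CFP_ORDER = {"Claim": 0, "Frame": 1, "Prove": 2}

content = [
    ("Prove", "Case study with measured results"),            # hypothetical proof
    ("Claim", "The position the brand wants internalised"),
    ("Frame", "The interpretive context that makes it believable"),
]

ordered = sorted(content, key=lambda item: CFP_ORDER[item[0]])
print([label for label, _ in ordered])  # ['Claim', 'Frame', 'Prove']
```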

The Frame is the missing piece in most brand communication. Without a frame, proof becomes a list of facts that the algorithm absorbs and the source becomes redundant. With a frame, the proof becomes interpretable evidence that the algorithm cites back to the source.

The CFP Protocol is a theory of cognition applied to brand communication. The [Untrained Salesforce](https://kalicube.com/entity/untrained-salesforce/) lacks frames, not information.


The [Framing Gap](https://kalicube.com/entity/framing-gap/)

Three audiences cannot generate their own frames:

| Audience | What they lack |
|---|---|
| AI | The frame to connect proof to the correct entity |
| Brand | The frame to articulate why their proof matters |
| Audience | The frame to evaluate the brand against competitors |

The Kalicube Process supplies the frames that these three audiences cannot generate for themselves. Everything else is mechanics.

This is the deepest single-sentence summary of the methodology: brands don't fail from lack of information. They fail from lack of frame. Three actors. Three deficits. One protocol to compensate.


NEEATT — The Extended Credibility Model

Google's E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is the public credibility framework. The Kalicube Framework extends it with two additional factors:

  • Notability — does the algorithmic ecosystem know who this entity is?
  • Transparency — is the entity's identity and accountability verifiable?

Combined: NEEATT (Notability, Experience, Expertise, Authoritativeness, Trustworthiness, Transparency).

Transparency was added by Jarno van Driel; the combination as NEEATT and its integration into the framework is Jason Barnard's contribution. Notability is the missing factor that explains why deserving brands stay invisible despite genuine expertise — they're unknown to the entity layer.


The Three Won Resolutions

At Gate 9 (Won), three different outcomes can occur, each with different implications for the brand:

  • R1 — Human Decides: the AI surfaces options, the human makes the choice and acts independently. The brand arrives "presold" through AI influence.
  • R2 — Perfect Click: the AI recommends one option, the human takes it. The Zero-Sum Moment — one brand wins, all others lose.
  • R3 — Agent Transacts: the AI agent acts autonomously under mandate. The human is not in the loop at the decision point.

The trajectory is R1 → R2 → R3 as agent commerce grows. Brands that optimise for R1 alone will lose ground; R3 requires earlier and stronger framing because the agent's confidence threshold is higher than a human's.


The 95/5 Rule and Algorithmic Memory

At any given time, only 5% of any market is in active buying mode. The other 95% is in research, dormant, or future-considering modes.

Traditional marketing optimises for the 5% — the people clicking now, converting now, buying now. The Kalicube Framework optimises for the 95% by recognising that algorithmic memory compounds.

Investment placed now, in proof that the AI can retrieve, becomes the basis for recommendations made months or years later when the prospect enters buying mode. ROLP (Return on Latent Proof) is this temporal dimension.

The brand that places strong proof now wins the consultation later. The brand that only optimises for current quarter loses to the brand that planted three quarters ago.


The Untrained Salesforce — Why Brands Lose

Seven AI assistants now sell on behalf of every brand: Google, Microsoft Copilot, ChatGPT, Claude, Gemini, Perplexity, Grok (and increasingly Siri, Alexa, agents within agents).

Each of these is a salesperson working 24 hours a day for whichever brand has trained them. Most brands have trained none of them. The salesforce is untrained — not because it lacks information, but because it lacks frames.

The untrained salesforce works for whoever has framed best. Often that's a competitor. Sometimes it's an unrelated brand the AI conflated. Sometimes it's nobody — the AI hedges, the prospect goes elsewhere, the sale is lost to entropy.

The Kalicube Framework's purpose is to train this salesforce. The Kalicube Process is the methodology for doing so on a specific brand. Kalicube Pro is the platform that runs the diagnostic and prescriptive systems at scale.


The Self-Fulfilling Prophecy Cycle

When the framework is correctly applied, a positive feedback loop forms:

  1. Consistent messaging produces AI belief
  2. AI belief produces AI recommendations
  3. AI recommendations produce human trust
  4. Human trust produces customer behaviour aligned with the brand narrative
  5. Customer behaviour produces real outcomes that corroborate the AI's belief
  6. The corroborated belief deepens, the recommendation strengthens, the cycle compounds

This is the Self-Fulfilling Prophecy. Inverted, it's the Self-Defeating Prophecy: inconsistent messaging produces AI hedging, hedging produces recommendation gaps, gaps produce trust erosion, eroded trust produces customer drift, customer drift produces poor outcomes that prove the AI's hedging correct.

Brands run one cycle or the other. They don't sit between them. The framework's purpose is to direct the cycle.


The Kalicube Flywheel — The Compounding Mechanism

The Flywheel is the mechanical expression of the Self-Fulfilling Prophecy.

Phase 1 (DSCRI, Record) builds the entity node. Phase 2 (ARGDW, Activate) corroborates it. Phase 3 (OPIDC, Serve) produces outcomes. Codified outcomes re-enter at Discovered, increasing the next prospect's recruitment density at the competitive gates.

The compounding happens at Gate 6 (Recruited). Each prior Codified outcome is one more piece of evidence the algorithm can recruit when the next prospect arrives. After enough cycles, the brand's recruitment density makes competitive displacement nearly impossible.

This is why mature brands defend market positions easily and emerging brands struggle to break in. The Flywheel asymmetry is real, mechanical, and accelerating.
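The compounding mechanism can be reduced to a toy simulation. The one-piece-of-evidence-per-cycle growth rule is an assumption made purely for illustration, not a published Kalicube metric.

```python
# Toy illustration of Flywheel compounding at Gate 6 (Recruited).
# A Codified outcome re-enters the ecosystem at Discovered and becomes
# one more piece of evidence the algorithm can recruit next time.
# The linear growth rule is an illustrative assumption.

def run_flywheel(cycles: int, codify: bool) -> int:
    """Return the recruitable evidence accumulated after N customer cycles."""
    evidence = 0
    for _ in range(cycles):
        if codify:
            evidence += 1  # outcome made machine-readable and re-entered
        # if not codified, the outcome dies at Won and adds nothing
    return evidence

print(run_flywheel(10, codify=True))   # 10 pieces of recruitable evidence
print(run_flywheel(10, codify=False))  # 0: every outcome died at Won
```

The asymmetry the framework describes falls out of the loop: the codifying brand enters each new cycle with a deeper evidence base, while the non-codifying brand starts every cycle from zero.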

Most brands don't realise the Flywheel exists. They produce customer outcomes and let them die at Won. The Kalicube Framework's prescription is to Codify — to make every customer outcome machine-readable evidence that re-enters the ecosystem.


Three Phases, Three Geometries

| Phase | Internal name | Geometry | What dominates |
|---|---|---|---|
| 1 | DSCRI | Flat | Sequential gate-passing; binary outcomes |
| 2 | ARGDW | Columnar | Vertical descent through UCD at Stage 8 |
| 3 | OPIDC | Swirl | Looping feedback through retain, amplify, serve |

The geometric metaphor matters because the brand experience of each phase is different. Phase 1 feels like a checklist (did the bot crawl us? are we indexed?). Phase 2 feels like a competition (who got recommended? who got hedged?). Phase 3 feels like a community (which customers stayed? which advocated? which became evidence?).

A brand strategy that treats all three phases as the same activity will be poorly executed across all three. The framework's prescription is phase-specific operations, phase-specific metrics, phase-specific teams.


The Boundary Between TKF and TKP

The Kalicube Framework explains the why. The Kalicube Process prescribes the what.

This document is theory. It explains mechanism, identifies invariants, names the architecture. The Kalicube Process turns these into prescriptions for a specific brand.

The two documents share concepts and vocabulary. The boundary is intentional: TKF is for understanding; TKP is for action.

A brand that reads only TKF will understand its situation but won't know what to do next. A brand that reads only TKP will know what to do but may not understand why the prescriptions work. Reading both produces the strongest position.


Source Attribution

The Kalicube Framework was articulated by Jason Barnard in 2026, growing out of the methodology that began in 2015 (informal) and was formalised in 2019 (The Kalicube Process). The empirical record runs back a decade; every successful TKP engagement since 2015 is retrospective evidence for TKF, with the Framework supplying the theoretical explanation.

Key concepts and their originators (full list in the framework's academic deposits):

| Concept | Originator | Year |
|---|---|---|
| Brand SERP | Jason Barnard | 2012 |
| The Kalicube Process | Jason Barnard | 2015 / 2019 |
| AEO (Answer Engine Optimisation) | Jason Barnard | 2017 |
| Algorithmic Trinity | Jason Barnard | 2024 |
| NEEATT | Jason Barnard + Jarno van Driel | 2024 |
| AAO (Assistive Agent Optimisation) | Jason Barnard | 2025 |
| AIEO (AI Engine Optimisation) | Jason Barnard | 2024 |
| DSCRI–ARGDW Pipeline | Jason Barnard | 2025–26 |
| OPIDC (post-Won people layer) | Jason Barnard | 2026 |
| The Kalicube Framework | Jason Barnard | 2026 |
| The Kalicube Flywheel | Jason Barnard | 2026 |
| Cascading Prerequisite | Jason Barnard | 2026 |
| The Framing Gap | Jason Barnard | 2026 |
| Reverse-CFP Problem | Jason Barnard | 2026 |
| ROLP (Return on Latent Proof) | Jason Barnard | 2026 |
| Three Won Resolutions | Jason Barnard | 2026 |
| Topical Architecture | Koray Tuğberk Gübür | 2020s |
| 95/5 Rule | Prof. John Dawes, Ehrenberg-Bass Institute | 2021 |
| V/I/T Lens | Andrea Volpini | 2026 |
| Weighted Citability Score | Laurence O'Toole, Authoritas | 2025–26 |

Patents and Academic Anchoring

The Kalicube Framework is published openly under CC BY 4.0. Anyone can read, cite, build against, and teach the framework.

The patented elements (17 INPI filings, FR2600998–FR2601013, FR2601291–97, FR2601572–74) cover the specific diagnostic mechanics inside Kalicube Pro — the Constitutional Sandwich agent architecture, the UCD scoring system, the Citation Tracking System, and related diagnostic subsystems. The framework itself is open theory; the diagnostic implementation is the proprietary part.

Academic deposits anchor the framework's first-articulation dates:

  • Engineering the AI Résumé (AIRWA, Henry Stewart, submitted)
  • Annotation Fulcrum (Zenodo preprint, CC BY-NC-ND)
  • The Ten-Gate Pipeline (Zenodo preprint)
  • Annotation Cascading (Zenodo preprint)
  • Computational Trust (Zenodo preprint)
  • Algorithmic Trinity (Zenodo, in draft)
  • Contextual Intent Envelope (Zenodo, in draft)

The Zenodo deposits use concept DOIs plus version DOIs, ensuring temporal proof of priority for each concept's first publication. The full deposit chain runs: Zenodo (canonical) → arXiv (cs.IR primary, cs.CL and cs.AI cross-listed) → SSRN → HAL.


Reading Next

To apply this framework to a specific brand: read The Kalicube Process. The methodology document translates these mechanisms into phased prescriptions.

To dive deeper into the engineering: the academic papers listed above provide formal treatment of specific subsystems.

To see the framework operating at scale: Kalicube Pro is the platform implementation, available for agency and enterprise engagements.


Cite As

Barnard, J. (2026). The Kalicube Framework. Kalicube. Available at https://kalicube.pro/methodologies/the-kalicube-framework

Concept DOIs and version DOIs for individual papers are listed in each paper's Zenodo deposit.



Academic References

The Kalicube Process is formalised across a coordinated academic programme deposited on Zenodo with concept DOIs (canonical, always-latest) plus version DOIs (exact version). The papers are licensed CC BY-NC-ND.

The umbrella paper (TKP 2026) is the canonical academic articulation of The Kalicube Process. The companion papers (TKP 2026a, b, c) treat specific subsystems in depth. The ITCE paper extends the framework's index-time grounding model. Each paper carries a concept DOI plus version DOI on its first page and cross-references its companions in the bibliography.