The deep brand context layer for AI agents. Across every surface where a machine speaks for your brand.
Brands are losing control of their voice. Not at the edges. Everywhere. Every AI agent generating a sales reply, a campaign asset, a support ticket, a developer doc, an investor update — every one of them is reaching for context the brand never provided. The output is plausible. It is also generic, off-tone, and increasingly the front door of the company.
The advertising industry is solving its slice of this through the Ad Context Protocol (AdCP): a public file at /.well-known/brand.json that gives ad-tech agents logos, colors, and a thin tone object. It is a real piece of infrastructure. It is also, by design, not enough.
Encoded Brands operates the layer underneath that. A hosted, authenticated, paid vault of deep brand context — voice modulated by audience, claims with verification, anti-AI patterns, narrative posture, surface-specific guidance — that any agent on any surface can pull from. We are compatible with AdCP. We are not bound by it. We work wherever a machine speaks for a brand.
The product is four parts: the Encoder (extracts brand DNA into structured context, available self-serve at $99 to $499 and enterprise at $25K and up), the Vault (hosted, gated, MCP-served), the Monitor (drift detection across live agent output), and the Encoded Brain (the corpus and intelligence that gets sharper every session, separated into three tiers so customer content never leaks across engagements). The business runs on a self-serve funnel into enterprise consulting fees, recurring vault MRR, and usage-based monitoring. The defensible asset is the accumulated pattern intelligence, not the file format.
Brand has always been tacit. That worked when humans were the ones writing the next email, the next deck, the next reply. It stops working the moment they aren't.
A brand is a set of decisions a person makes when nobody is watching. Use this word, not that one. Lead with this proof point, not that promise. Speak this way to a customer in week one, that way to a customer in year three. Those decisions live in the heads of people who have been at the company long enough to absorb them. They have never been written down in a way a machine could obey.
For most of the last forty years, that was fine. Brand guidelines lived in PDFs. Tone of voice lived in 90-minute onboarding workshops. The actual carriers of the brand were people, and people could be trained.
That model is collapsing in real time. Three things happened at once:
We call the result brand drift. Every output is a little off. The aggregate effect, six months in, is a company whose voice has been flattened toward the mean of the internet. Their differentiation, the actual thing they spent twenty years building, is being averaged out one agent call at a time.
"AI agents are generating content for your brand right now. Creative tools are picking logo variants, choosing color palettes, and writing copy in what they think is your voice. They pull this from wherever they can find it. You have no control over what they find. You have no way to tell them what not to do."
— Ad Context Protocol, brand protocol overview, May 2026
The quote above is from the ad-tech industry's own protocol body. They are correct about the problem. Their solution covers about a tenth of it.
There is one credible piece of infrastructure being built for this. It is built for paid media. It is real. It is also limited.
The Ad Context Protocol is an open standard published by AgenticAdvertising.org. It is at version 3.0. It is backed by Apache 2.0 licensing, monthly working groups, an industry org with paying members, SDKs in three languages, a certification program, and a launch partner (Zefr) that publicly shipped the first AdCP-based YouTube buying integration on April 30 of this year.
Mechanically, AdCP defines a file called brand.json hosted at /.well-known/brand.json on a brand's domain. Any well-behaved agent in the advertising ecosystem reads it. The file contains:
logos and colors
a thin tone object: voice, attributes, dos, donts
brand_agent — an MCP server the brand can host for richer, dynamic context
The brand_agent field is the important one. It is AdCP's own admission that a static file cannot carry the depth a real brand needs. They built the slot. They have not filled it.
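To make the shape concrete, here is a minimal sketch of what a /.well-known/brand.json could look like. The tone fields (voice, attributes, dos, donts) and the brand_agent pointer are the AdCP fields described above; every concrete value, and the helper function, is invented for illustration.

```python
import json

# Hypothetical brand.json payload. Field names follow the AdCP description
# above; the values and the ExampleCo brand are made up.
brand_json = {
    "name": "ExampleCo",
    "logos": [{"url": "https://example.com/logo.svg", "usage": "primary"}],
    "colors": {"primary": "#0A2540", "accent": "#FF5A36"},
    "tone": {
        "voice": "plainspoken, direct",
        "attributes": "confident, precise, unhurried",
        "dos": "lead with proof points",
        "donts": "no hype, no superlatives",
    },
    # The slot AdCP built for depth: a pointer to a hosted MCP endpoint
    # that authorized agents can query for richer, dynamic context.
    "brand_agent": "https://vault.example.com/mcp",
}

def has_depth_pointer(doc: dict) -> bool:
    """True when the static file delegates depth to a brand_agent."""
    return isinstance(doc.get("brand_agent"), str) and bool(doc["brand_agent"])
```

The static file stays small and public; everything below the brand_agent pointer is where the depth lives.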
AdCP is, in its own words, "an open agentic advertising standard." The scope is paid media: media buying, creative production, signals, governance, and brand identity as consumed by ad-tech agents. The file is technically readable by any agent, but the entire ecosystem — working group, registry, certification, SDKs, partner roster — is built for the advertising stack.
That leaves a long list of surfaces where AI agents are speaking for the brand today, with no protocol, no canonical source, and no oversight:
Customer support. Every response your chatbot writes. Every off-tone apology. The single highest-volume surface where brand voice fails in real time.
Sales. AI-generated proposals, decks, follow-ups, account research. The voice that reaches your highest-intent prospects.
Developer docs. AI-authored API references, tutorials, error messages. The first thing a buyer's engineer sees of you.
Landing pages. AI-generated campaign pages, A/B variants, regional adaptations. Where paid traffic actually lands.
Internal comms. AI-summarized memos, all-hands content, manager comms. The voice your culture absorbs.
Investor and regulatory. AI-drafted earnings narrative, IR replies, policy submissions, government affairs. The voice that costs money when it's wrong.
AI search. How ChatGPT and Perplexity describe you when nobody is looking. The framing you do not get to write.
Internal AI tools. Every custom GPT, every Claude project, every Copilot inside the company. Each one needs the same brand truth.
Even within paid media, the depth gap is real. AdCP's tone object is four flat strings. It cannot express how a brand sounds to a prospect versus a customer versus a regulator. It cannot pair a claim with evidence. It cannot encode the patterns the brand refuses to sound like. The depth has to live somewhere. AdCP put a door there. We are walking through it.
"When a brand has an agent, the agent is the authoritative source for brand identity data."
— AdCP brand.json spec, v3.0
The line above is the strategic permission slip. AdCP defers to whoever operates the brand agent. Today, that is nobody. The registry shows 2,763 brands listed and zero — zero — with a published brand.json at their own domain. The community-contributed stubs are AI-scraped placeholders. The brand-side adoption curve has not started.
Which means the role is open.
We are not a standards body. We are not a competitor to AdCP. We are the depth operator that the standard, by design, leaves room for.
Encoded Brands operates the brand context vault. A hosted, authenticated, MCP-served body of deep brand intelligence that AI agents query when they need to speak as the brand. We sell to the brand. We are read by everyone the brand authorizes.
We plug in as the brand_agent on AdCP's spec. Any ad-tech agent that already reads brand.json can follow the pointer to us, authenticate, and pull richer context. We participate. We do not fight.

AdCP is the skin. Logos, colors, the four-line tone object, the public file every agent can read. We are the soul. The narrative, the modulation, the claims, the patterns the brand refuses to be. Skin is published. Soul is hosted, authenticated, and paid for.
Four products. One business. Each one feeds the next.
The Encoder. A proprietary strategic compiler. Takes messy inputs — brand decks, founder interviews, sales calls, ten-year-old guidelines — and extracts the brand's actual DNA into a structured, machine-readable corpus. This is where the consulting practice meets the software product. Half methodology, half compiler.
The Vault. A hosted MCP server. AdCP-compatible brand_agent for paid media. Direct MCP endpoint for everything else. The public baseline returns the basics. Authenticated callers get the deep layer: audience modulation, claims, anti-patterns. One service, many brands, multi-tenant.
The Monitor. Drift detection. Watches the actual outputs of authorized agents — ad creative, support replies, sales drafts, anywhere our context is being consumed — and reports back where the live output diverges from the vault. Closes the loop. Becomes the upsell.
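A deliberately simple sketch of the drift check: flag agent output that contains phrases the vault marks as anti-patterns. A production monitor would score tone, structure, and claims rather than string-match, and the phrase list here is invented.

```python
# Hypothetical anti-patterns pulled from a brand's vault.
ANTI_PATTERNS = [
    "unlock the power of",
    "in today's fast-paced world",
    "game-changing",
]

def drift_flags(agent_output: str) -> list[str]:
    """Return the anti-patterns the output violates, in vault order."""
    text = agent_output.lower()
    return [phrase for phrase in ANTI_PATTERNS if phrase in text]
```

Even this naive version closes the loop: each flag is a concrete, reportable divergence between what an agent said and what the vault prescribes.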
The Encoded Brain. The corpus that makes the Encoder smarter every session. Three logically separate tiers, each with different rules: Public (our frameworks and methodology), Patterns (anonymized cross-client learning, the long-term moat), and Client Vaults (each brand's content, isolated, never read into another session). The bright line between Patterns and Vaults is the architecture that protects every customer's trust.
The product surfaces are linked. The Encoder feeds the Vault. The Vault feeds the agents. The agents feed the Monitor. The Monitor feeds the Brain. The Brain feeds the next Encoder session. We sell three of them as products. The fourth (the Brain) is the engine of all of them.
A full picture of how context moves from a founder's head to a live agent and back again.
How a brand becomes encoded, step by step. The first session takes weeks. The hundredth will take days.
We collect every artifact the brand has produced about itself: founder decks, board decks, brand guidelines, the last twelve months of sales calls, the website, the last three campaigns, the help center, support transcripts, the last earnings call, the last all-hands. The Encoder reads all of it. The point is not to summarize. The point is to surface where the brand sounds distinctively itself and where it sounds like everyone else.
A structured, one-question-at-a-time pushback session with the founder and core team. The Encoder rejects vague platitudes. "Innovation" is not a value. "Authentic" is not a voice descriptor. We push until we get operational specificity: "We sound impatient with received wisdom but never with the customer." "We make concessions to clarity but never to brevity." "We can use jargon with developers and never with buyers."
The Encoder compiles the intake and interview output into the vault's structured corpus. This includes the AdCP-conformant brand.json public layer (logos, colors, basic tone, the standard fields) and the deep layer behind authentication: audience-modulated voice rules, claims with verification triggers, anti-AI patterns, surface-specific guidance, representation framing for AI search.
Every output is scored against a 25-point integrity rubric. If the brand name were removed, could the document only belong to one company? Are the dos and donts deep enough to bind an agent's behavior? Is every claim paired with verifiable evidence? Below 15 of 25, we reject as template slop. The session restarts. This is the bar.
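The gate described above can be expressed as a trivial function. The 25-item rubric and the 15-point floor come from the process as stated; the pass/fail scoring granularity per item is an assumption for illustration.

```python
RUBRIC_SIZE = 25  # items in the integrity rubric
THRESHOLD = 15    # below this, the session restarts

def integrity_gate(item_scores: list[int]) -> tuple[int, bool]:
    """Score a compiled brand document against the rubric.

    Each item is scored 0 (fail) or 1 (pass) — an assumed granularity.
    Returns (total, passed); a failing total means the output is rejected
    as template slop and the encoding session restarts.
    """
    if len(item_scores) != RUBRIC_SIZE:
        raise ValueError(f"expected {RUBRIC_SIZE} rubric items")
    total = sum(item_scores)
    return total, total >= THRESHOLD
```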
We host the vault. We publish the brand's brand.json at their domain (with a pointer to our brand_agent) or accept their existing one. We hand over MCP credentials for the agents the brand wants to authorize. The Monitor goes live in shadow mode and starts watching.
Quarterly review. The Monitor's drift reports drive the cadence. Where is the brand actually slipping? Which agents are pulling context inconsistently? Where is the corpus thin? The Encoder returns. The Brain absorbs the patterns. The next brand benefits.
A reasonable customer will ask, before signing anything: "You're using our work to train your system. What stops our content from showing up in another brand's session?" The answer matters. We built the architecture around it.
The Encoded Brain is three logically separate stores. They have different rules for who writes to them and who reads from them.
The bright line is between Tier 3 and Tier 2. It runs in one direction only. Tier 3 content does not flow into Tier 2 automatically. It flows through human review, with anonymization rules that have to actually pass, and only after a pattern has been observed often enough to no longer identify any single source. If a CMO says "no patterns, period" — fine. We carve out Tier 2 contribution in the contract. The encoding still happens. The Brain just doesn't learn from that engagement. That is the customer's right.
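The promotion rule above reduces to a one-way eligibility check. This sketch is illustrative: the field names and the minimum-sources threshold are assumptions, not documented numbers, but the direction of flow and the opt-out veto are exactly as described.

```python
MIN_DISTINCT_SOURCES = 5  # assumed: enough sources that no one is identifiable

def eligible_for_tier2(pattern: dict, opted_out_clients: set[str]) -> bool:
    """One-way gate from Client Vaults (Tier 3) into Patterns (Tier 2).

    A candidate pattern is promoted only after human review, after
    anonymization actually passes, after it has been observed across
    enough distinct clients, and never if any contributing client
    opted out of Tier 2 contribution.
    """
    sources = set(pattern["source_clients"])
    return (
        pattern["human_reviewed"]
        and pattern["anonymization_passed"]
        and len(sources) >= MIN_DISTINCT_SOURCES
        and not (sources & opted_out_clients)
    )
```

Note that nothing flows the other way: Tier 2 never writes into a client vault, and a failed check simply leaves the pattern where it started.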
For the self-serve Encoder (the $99 and $499 tiers we will offer publicly), Tier 2 is not in the access path at all. Self-serve customers get Tier 1 (methodology, rubric, frameworks) and their own Tier 3 (their content, their vault). The pattern intelligence is reserved for enterprise engagements where the contract, the legal coverage, and the strategist relationship are all real. This is also how we make the enterprise price defensible: the methodology is in the box at $499. The accumulated cross-brand intelligence is what you buy at $25K and up.
The defensibility is not the file. It's not the methodology. It's the corpus that gets sharper every time we use it.
Each Encoder session does three things. It compiles a brand. It scores itself against the rubric. It writes back into the Encoded Brain whatever pattern produced the score. After a hundred sessions, the Encoder knows how a fintech brand fails differently from a CPG brand, how a B2B SaaS brand modulates voice between sales and onboarding, which X-not-Y rules produce drift in support and which ones hold up.
This is the part of the business that does not exist on day one. It accrues with each session. By session 50, the Encoder pushes back on patterns we have seen fail before. By session 200, no human strategist alive has the same pattern library. That is the long-term moat. The Vault is the recurring revenue. The Brain is the company.
Five revenue lines, arranged so each step is roughly 5-10x the previous one. The funnel actually compounds when the step gaps stay reasonable.
The standard Vault is a brand_agent MCP server on our infrastructure. Multi-tenant isolation. Capped call volume (5K reads/mo at entry, scales up). Standard schema. Self-service onboarding, community support, no SLAs. It is the on-ramp from a one-time Encoder purchase to recurring revenue. Many customers will live here forever, and that is fine — they are cheap to serve and they are public brand_agents on real domains.

The steps are roughly: $99-$499 one-time → $49-$199/month → $25K-$100K one-time + $2K-$5K/month. Each jump is 5-10x, not 50x. A self-serve customer that buys the Encoder and stays on the standard Vault for two years has paid us $1,500 to $5,000 cumulative. That is real money at volume, and it is also the customer who eventually upgrades to enterprise when their team grows or their first AI customer support agent goes off-brand in a memorable way.
The tier separation also explains the price gaps. At $499 you get the methodology applied to your brand. At $25K and up you get the methodology plus the accumulated pattern intelligence of every previous enterprise engagement, plus a strategist accountable for the result. Same product surface, different intelligence behind it. The methodology is in the box. The Brain is what you pay enterprise for.
The Vault is high-margin at scale and infrastructure-heavy to get there. A hosted multi-tenant MCP server with authentication, Tier 3 namespace isolation, audit logs, and uptime guarantees is real software that needs a real engineer to build and a real bill to pay. Realistic ramp: one contract engineer in month one to ship v0 (brand_agent + auth + Tier 3 isolation), a second engineer when we cross 20 paying Vault customers, the Monitor and audit logging as v1 work after Cannes. At $2K-$5K per enterprise Vault customer, 10-15 of them covers a full-time engineer plus infrastructure. The math works at scale. The runway to get there is a real number we have to fund.
We are not betting on becoming a public registry. We are not betting on running a foundation. We are not betting on Zefr or any single ad-tech partner. We are betting on being the deep brand context operator that fifty brands have hired by Cannes 2027 and three hundred have hired by Cannes 2028. Operators win this category. Standards bodies get acquired by operators.
Five weeks to Cannes. Sequenced honestly. Built around what we can actually defend in a press conversation and what we can actually ship with the team we have.
The engineering hire. brand_agent + auth + Tier 3 namespace isolation first; Monitor, audit logs, and custom schema extensions are v1 work after Cannes. A named individual signed by end of week. Without this hire, "stand up the Vault" is fiction.

The spec. The brand_agent implementation spec, end-to-end. Done as of this memo. The MCP tools we need to implement are get_adcp_capabilities, get_brand_identity, and (optionally) get_rights. The depth fields beyond AdCP's schema are ours to define.

The build. The brand_agent MCP server. Multi-tenant. Authentication and Tier 3 isolation on day one. Two pilot brands deployed by end of month.

The deployments. brand_agents deployed on real domains. Pilot customers from Nick's and James's networks. Even three deployments makes us the largest depth-layer deployer in the world — AdCP's registry has zero real deployments today.

By Cannes. brand_agents already deployed on real domains. Self-serve product in beta. Talking with the AdCP working group about depth extensions for late 2026.
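The three MCP tools named in the plan can be stubbed as a plain dispatch table. This is not the MCP SDK and not the shipped server — only the tool names (get_adcp_capabilities, get_brand_identity, get_rights) come from the plan; all return payloads are illustrative.

```python
def get_adcp_capabilities() -> dict:
    """Advertise which tools this brand_agent serves."""
    return {
        "adcp_version": "3.0",
        "tools": ["get_brand_identity", "get_rights"],
    }

def get_brand_identity(brand_id: str, authenticated: bool = False) -> dict:
    """Public basics for everyone; depth fields only when authenticated.

    The depth fields beyond AdCP's schema are ours to define — the names
    used here are placeholders.
    """
    identity = {"brand_id": brand_id, "tone": {"voice": "plainspoken"}}
    if authenticated:
        identity["audience_modulation"] = {"prospect": "confident"}
    return identity

def get_rights(brand_id: str) -> dict:
    """Optional per the spec notes; empty until rights data is encoded."""
    return {"brand_id": brand_id, "licensed_assets": []}

# Dispatch table standing in for MCP tool registration.
TOOLS = {
    "get_adcp_capabilities": get_adcp_capabilities,
    "get_brand_identity": get_brand_identity,
    "get_rights": get_rights,
}
```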