How a Company Brain structures AI agents across GTM
Build a Company Brain that keeps every AI agent aligned. An open-source AI agent team structure for B2B startups across marketing, sales, PMM, and CS.
If you’re running one AI agent for content and another for sales enablement, you’ve already hit the problem: they don’t agree on anything. Your marketing agent uses one set of personas. Your sales agent describes the product differently. Nobody told the CS agent about the new positioning.
This is the AI agent team structure problem. It’s not about which model you use or how good your prompts are. It’s about what your agents know, and whether they all know the same things. The fix is a Company Brain: a shared, structured knowledge base that every agent reads from.
I built an open-source template called Bravenger to make this easy to set up. Here’s how it works and why the structure matters.
The problem with one-off agents
Most teams start by giving each agent its own instructions. A system prompt for the marketing agent. A different one for sales. Maybe a CLAUDE.md file per project.
This works fine when you have one agent doing one job. It breaks when you have five agents that all need to reference the same ICP, the same competitive positioning, and the same messaging framework.
I wrote about how a single CLAUDE.md file structures one GTM workflow. Bravenger takes that idea and scales it across an entire org.
What a Company Brain looks like
Bravenger organizes shared company knowledge into three layers.
Foundation. ICPs, personas, use cases, and company goals. This is the ground truth. Every agent reads from the same persona definitions, the same use case descriptions, the same strategic priorities. When your VP of Sales persona shows up in a blog post and a battle card, they’re the same person with the same pain points.
Positioning and messaging. Category definition, differentiators, value propositions, objection handlers, and messaging by funnel stage. All grounded in the foundation layer. When an agent writes outbound copy, it pulls from the same positioning your content agent uses for blog posts.
Governance. This is the layer most teams skip and the one that matters most. Bravenger defines an 11-step mandatory read order, citation requirements, discipline-specific guidelines, and a conflict resolution hierarchy. Every agent output must cite the source file it drew from. A linter validates that those citations point to real files.
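On disk, the three layers can map to a directory tree like this. This is an illustrative layout, not Bravenger's actual file names:

```
brain/
├── foundation/
│   ├── icp.md
│   ├── personas/
│   │   └── vp-sales.md
│   ├── use-cases.md
│   └── goals.md
├── positioning/
│   ├── category.md
│   ├── differentiators.md
│   ├── value-props.md
│   └── objection-handlers.md
└── governance/
    ├── read-order.md
    ├── citation-rules.md
    └── disciplines/
        ├── marketing.md
        └── sales.md
```

The point of the split is dependency direction: positioning files cite foundation files, and governance files constrain how both get used.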
Five agents, one knowledge base
The template ships with system prompts for five GTM disciplines: marketing, sales, product marketing, product, and CS. Each prompt activates when you tell Claude Code which discipline you’re working in.
The routing happens through CLAUDE.md. Say “marketing agent” and it loads the marketing guidelines, which define output formats (blog posts at 800 to 1,500 words, social posts at 100 to 280 characters), required source files, and discipline-specific rules.
Say “sales agent” and it loads a different set: outbound sequences of 3 to 5 emails, battle cards, call prep briefs. Different formats, same underlying knowledge.
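A routing section in CLAUDE.md might look something like this. The wording and file names here are hypothetical, sketched to show the pattern rather than quote the template:

```markdown
## Agent routing

When the user says "marketing agent":
1. Follow the mandatory read order in governance/read-order.md.
2. Load governance/disciplines/marketing.md for output formats and rules.
3. Cite every factual claim as [Source: filename#section].

When the user says "sales agent":
1. Same read order, same Brain files.
2. Load governance/disciplines/sales.md instead.
```

Only step 2 changes between disciplines; the knowledge base and citation rules stay constant.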
The AI agent team structure isn’t about building five separate agents. It’s about building one knowledge base that five agents read from, with governance that keeps them aligned.
What governance actually prevents
Without governance, agents drift. I’ve seen it in my own work. Two posts written a week apart using different descriptions of the same audience. A sales email that contradicts the positioning on the website.
Bravenger prevents this with three mechanisms:
Citation enforcement. Every factual claim needs a [Source: filename#section] tag pointing to a real file in the Brain. A Node.js linter checks that every citation resolves. If an agent makes a claim it can’t cite, you catch it before it ships.
Forbidden language scanning. Your brand guidelines define words you never use. The linter catches them in any Brain file. Same idea as the “never use em dashes” rule I enforce through my own CLAUDE.md, but applied across every agent’s source material.
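A forbidden-language scan can be a few lines. This sketch uses an invented word list for illustration; in practice the list would live in a Brain governance file:

```javascript
// forbidden-lint.js — sketch of a forbidden-language scan.
// The banned-word list here is invented for illustration; a real setup
// would load it from a Brain governance file. Words are assumed to be
// plain words, not regex metacharacters.
function findForbiddenWords(text, forbidden) {
  const hits = [];
  text.split("\n").forEach((line, i) => {
    for (const word of forbidden) {
      // Case-insensitive whole-word match, reported with a line number.
      const re = new RegExp(`\\b${word}\\b`, "i");
      if (re.test(line)) hits.push({ word, line: i + 1 });
    }
  });
  return hits;
}
```

Reporting line numbers matters more than it looks: the fix for a flagged word is a human edit, so the linter's job is to make that edit fast to locate.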
Read order. Agents must read files in a specific sequence: voice first, then audience, then content rules. Context builds progressively. Nothing gets skipped.
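A read order can be expressed as a plain ordered list in a governance file. These file names are hypothetical, but the sequence mirrors the voice-then-audience-then-rules progression described above:

```markdown
## Mandatory read order

1. foundation/voice.md         — brand voice first
2. foundation/personas/        — who we're talking to
3. foundation/use-cases.md     — what they're trying to do
4. positioning/messaging.md    — how we talk about it
5. governance/content-rules.md — formats and constraints
```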
How to use it
Fork the repo. Replace the example company (NovaCRM, a fictional AI-powered CRM) with your own data. The example includes three personas, three use cases, a full messaging framework, and all five discipline guidelines. Use it as a template for structuring your own.
Start with the foundation layer. Get your ICPs and personas right. Everything else builds on top.
Then add positioning. Then governance rules. Run npm run lint to validate frontmatter schemas, citations, and forbidden language.
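As an example of what frontmatter validation covers, a persona file might carry fields like these. This is a hypothetical schema, not Bravenger's actual one:

```markdown
---
type: persona
id: vp-sales
title: VP of Sales
layer: foundation
status: approved
---

# VP of Sales

Owns pipeline and quota attainment at a B2B startup. Cares about
rep ramp time, forecast accuracy, and win rates against incumbents.
```

With typed frontmatter, the linter can check that every persona has the required fields and that citations like [Source: vp-sales.md#pain-points] resolve to a real file with a matching id.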
The repo is at github.com/wpsands/bravenger. MIT licensed. If you’re running AI agents across your B2B GTM org and they keep contradicting each other, this is the fix: a shared Company Brain with structure and enforcement, not just more prompts.