Cognitive Exchange AI
Not an agent. An AI brain. Seven deadly sins. Twelve pillars. 300+ typed kinds. Open source, sovereign, self-compounding. Intelligence compounds when exchanged.
The Problem
You wrote a great system prompt. Added tools. Maybe even wired up RAG. It works -- until it drifts, hallucinates, or another team builds the same thing from scratch because nothing is reusable. Your institutional knowledge lives in Slack threads and someone's laptop.
The gap is not intelligence. LLMs are already smart. The gap is infrastructure -- there is no typed, governed, composable system that turns raw LLM power into a brain that remembers, compounds, and belongs to you.
CEXAI fills that gap.
Most AI agents are a system prompt plus a few tools. CEXAI is seven specialized departments, each running on a deadly sin, producing typed knowledge assets that compound over time.
Core Properties
8 functions × 12 pillars × 300+ artifact kinds = a factory floor for intelligence. Spawn a new department in minutes. Every piece clicks into every other piece.
Runs on Claude, GPT, Gemini, or Ollama. Your knowledge stays in your repo, under your git history, on your hardware. No vendor owns your brain.
Every conversation, decision, and artifact compiles into typed, governed, searchable knowledge assets. Day 30 is dramatically smarter than day 1. Your repo becomes a proprietary digital asset -- an AI brain that appreciates with every commit.
The Maturity Gap
A basic LLM agent is one prompt and a handful of tools. A CEX nucleus is a full business department: superintendent LLM, specialized sub-agents, toolbox, knowledge library, workflows, quality controls, and cultural DNA.
| Capability Axis | Basic Agent | CEX Nucleus |
|---|---|---|
| Knowledge (P01, P10) | Context stuffing | Typed RAG + entity memory + chunk strategies + prompt cache |
| Model (P02) | Single provider | Fallback chain across Claude / GPT / Gemini / Ollama |
| Prompt (P03) | One system prompt | Templates + chains + compiler + version control |
| Tools (P04) | Flat list | MCP servers + API clients + browser tools + search pipelines |
| Output (P05) | Free text | 300+ typed artifact kinds + formatters + parsers |
| Schema (P06) | None | Input / output schemas + validators + interface contracts |
| Evaluation (P07) | Ship and pray | Quality gates + scoring rubrics + LLM judges + benchmarks |
| Architecture (P08) | None | Agent cards + component maps + decision records |
| Config (P09) | Env vars | Typed configs + rate limits + feature flags + secrets |
| Feedback (P11) | None | Guardrails + bug loops + learning records + regression checks |
| Orchestration (P12) | None | Workflows + dispatch rules + composable crews + DAGs |
See It in Action
N01 Intelligence does not run a single LLM call. It activates a sin-driven agentic business unit.
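What a run like that could look like, as a minimal plain-Python sketch. The module, field, and sub-agent names here are invented for illustration -- this is not the real CEX SDK, just the shape of a department run: a sin lens biases the work, sub-agents emit typed artifacts, and the knowledge library grows every time.

```python
# Illustrative sketch only -- names are hypothetical, not the real SDK.
NUCLEUS_N01 = {
    "id": "N01",
    "role": "Intelligence",
    "sin_lens": "Analytical Envy",  # bias applied when instructions are ambiguous
    "sub_agents": ["competitor_scan", "benchmark_diff", "gap_report"],
    "library": [],                  # typed artifacts accumulate here
}

def run_nucleus(nucleus, request):
    """One department run: every sub-agent emits a typed artifact."""
    artifacts = []
    for agent in nucleus["sub_agents"]:
        artifacts.append({
            "kind": "knowledge_card",     # one of the 300+ typed kinds
            "pillar": "P01",              # Knowledge
            "nucleus": nucleus["id"],
            "title": f"{agent}: {request}",
            "lens": nucleus["sin_lens"],
        })
    nucleus["library"].extend(artifacts)  # the compounding step
    return artifacts

run_nucleus(NUCLEUS_N01, "map the agent-framework landscape")
run_nucleus(NUCLEUS_N01, "map the agent-framework landscape")
print(len(NUCLEUS_N01["library"]))  # 6 -- the library grows with every run
```

The point of the sketch is the last line: the second run starts from a larger library than the first, which is what "compounds every time it runs" means mechanically.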
That is not a chatbot. That is an intelligence department that compounds every time it runs.
The Artificial Sins
Every nucleus runs on one of the seven deadly sins. This is not decoration -- it is cultural DNA that determines what the nucleus optimizes for when instructions are ambiguous. Sins are what separate a generic LLM call from a specialist with convictions.
| Nucleus | Role | Sin Lens | What the Sin Does | Tier |
|---|---|---|---|---|
| N01 | Intelligence | Analytical Envy | Envies every competitor and benchmark -- to surpass them all | Sonnet |
| N02 | Marketing | Creative Lust | Writes copy that seduces. Hooks you cannot ignore. | Sonnet |
| N03 | Engineering | Inventive Pride | Refuses to ship mediocrity. The best of its kind or not at all. | Opus |
| N04 | Knowledge | Knowledge Gluttony | Insatiable hunger for facts, citations, context. Never enough. | Sonnet |
| N05 | Operations | Gating Wrath | Merciless at the quality gate. Breaks what must break. | Sonnet |
| N06 | Commercial | Strategic Greed | Extracts every revenue opportunity. No margin left on the table. | Sonnet |
| N07 | Orchestrator | Orchestrating Sloth | Never builds -- only dispatches. Laziness as leverage. | Opus |
N00 Genesis is the pre-sin archetype. Contributors build N08+ as vertical specializations (healthcare, fintech, legal, edtech).
The Pipeline
The 8F pipeline is how CEXAI thinks. Research, copy, code, analysis, deployment -- every LLM interaction decomposes into eight functions. This is the force multiplier that turns five words of user input into professional output.
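The eight functions (named in the vocabulary table below as Constrain, Become, Inject, Reason, Call, Produce, Govern, Collaborate) can be sketched as a simple assembly line. The function bodies here are placeholders, not the real pipeline -- the sketch only shows the decomposition: each stage enriches a shared state until a typed, governed artifact falls out.

```python
# Placeholder stages -- illustrative only, not the production 8F implementation.
def constrain(state):   state["scope"] = "single typed artifact"; return state
def become(state):      state["persona"] = "N01 Intelligence"; return state
def inject(state):      state["context"] = ["relevant knowledge cards"]; return state
def reason(state):      state["plan"] = f"outline for: {state['request']}"; return state
def call(state):        state["tool_output"] = "search + retrieval results"; return state
def produce(state):     state["artifact"] = {"kind": "knowledge_card", "body": state["plan"]}; return state
def govern(state):      state["artifact"]["quality"] = 0.9; return state  # quality gate
def collaborate(state): state["handoff"] = "N07 Orchestrator"; return state

PIPELINE = [constrain, become, inject, reason, call, produce, govern, collaborate]

def run_8f(request):
    state = {"request": request}
    for fn in PIPELINE:  # the assembly line: each stage enriches the state
        state = fn(state)
    return state["artifact"]

artifact = run_8f("price our enterprise tier")
print(artifact["kind"], artifact["quality"])  # knowledge_card 0.9
```

Five words of input enter at Constrain; a typed artifact with a quality score leaves after Govern. That is the force-multiplier structure in miniature.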
Taxonomy
Every artifact lives in one of twelve pillars. Pillars are taxonomic axes, not departments -- every nucleus exercises every pillar. That is why a nucleus is a full department, not just an agent.
| Pillar | Name | Example Kinds |
|---|---|---|
| P01 | Knowledge | knowledge_card, rag_source, glossary_entry, chunk_strategy |
| P02 | Model | agent, model_provider, boot_config, mental_model |
| P03 | Prompt | system_prompt, prompt_template, chain, tagline |
| P04 | Tools | mcp_server, browser_tool, api_client, webhook |
| P05 | Output | landing_page, formatter, parser, diagram |
| P06 | Schema | input_schema, validator, type_def, interface |
| P07 | Evals | quality_gate, scoring_rubric, llm_judge, benchmark |
| P08 | Architecture | agent_card, component_map, decision_record |
| P09 | Config | env_config, rate_limit_config, feature_flag |
| P10 | Memory | entity_memory, knowledge_index, memory_summary |
| P11 | Feedback | guardrail, bugloop, learning_record |
| P12 | Orchestration | workflow, dispatch_rule, crew_template, dag |
The X
Intelligence compounds faster when shared. The X in CEXAI serves double duty: formally, it means Exchange -- the protocol for sharing typed knowledge assets between instances. Practically, it becomes your brand when you bootstrap with /init.
Every artifact is a self-describing exchange unit. It carries its kind, quality score, pillar, and nucleus origin in YAML frontmatter. Import an artifact into any CEXAI instance, run cex_doctor.py, and it validates automatically.
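The kind of check this implies can be sketched in a few lines. The exact rules live in the real cex_doctor.py; the required-field list below is an assumption pieced together from this page (kind, quality, pillar, nucleus origin, plus the id and title fields mentioned under Open Source):

```python
import re

# Assumed field list -- the real cex_doctor.py defines the canonical set.
REQUIRED = {"id", "kind", "pillar", "title", "quality", "nucleus"}

def parse_frontmatter(text):
    """Extract the YAML-style frontmatter block between --- fences."""
    match = re.match(r"^---\n(.*?)\n---", text, re.S)
    if not match:
        return None
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

def validate(text):
    """An artifact is importable when its frontmatter carries every required field."""
    fields = parse_frontmatter(text)
    return fields is not None and REQUIRED <= fields.keys()

card = """---
id: kc_react_hooks_patterns
kind: knowledge_card
pillar: P01
title: React Hooks Patterns
quality: 0.92
nucleus: N04
---
Body of the card...
"""
print(validate(card))  # True
```

Because every exchange unit is self-describing in this way, validation needs nothing but the file itself -- no registry lookup, no network.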
| Exchange Unit | Scope | Example |
|---|---|---|
| Knowledge Card | Single typed fact | kc_react_hooks_patterns.md |
| Builder | Full production capability (12 ISOs) | workflow-builder/ |
| SDK Provider | New runtime adapter | provider_ollama.py |
| Vertical Nucleus | Entire domain department | N08_healthcare/ |
What stays private. Brand config, memory, runtime state, and secrets never leave your instance. The exchange is about cognition, not identity.
Anti-fragile by design. CEXAI sits one layer above the LLM. When a better model drops, your artifacts improve -- they are the knowledge, not the model. When a runtime shuts down, switch providers in one YAML file. Your brain is yours.
Invest, don't spend. Every commit deposits into a compounding knowledge account. A recorded consultation becomes a reusable agent. A sales playbook becomes a governed workflow. Typed knowledge is LLM-agnostic -- it survives model upgrades, provider switches, and team turnover. The repository is the asset; when the business is sold, the AI brain transfers with it.
Clone the repo. Answer six questions. Your AI brain starts compounding in under five minutes.
What You Need
CEXAI runs through Claude Code. Install it, clone the repo, and Claude handles the rest -- dependencies, configuration, everything.
| Requirement | Why You Need It | Check / Install |
|---|---|---|
| Claude Code | The CLI that powers CEX. Desktop app or terminal. | npm i -g @anthropic-ai/claude-code |
| Python | For the 144 CLI tools. Claude Code installs pip deps for you. | python --version |
| Git | Version control. Every artifact is committed and tracked. | git --version |
| LLM access | Anthropic API key, OpenAI, Google, or Ollama for fully local. | ANTHROPIC_API_KEY |
Three Steps
1. Clone the repo. Install deps. Takes 60 seconds.
2. Type /init. Answer six questions about your brand. CEX injects your identity into every nucleus.
3. Describe what you want in plain language. The 8F pipeline turns it into a professional, typed artifact.
Get Started
# 1. Clone
$ git clone https://github.com/your-org/cex.git && cd cex
# 2. Install dependencies
$ pip install -r requirements.txt # Core (pyyaml, tiktoken)
$ pip install -r requirements-llm.txt # LLM providers (optional)
# 3. Bootstrap your brand -- answer ~6 questions
$ python _tools/cex_bootstrap.py
# Or: type /init in Claude Code with CEX loaded
# 4. Build your first artifact
$ python _tools/cex_8f_runner.py "create knowledge card about product pricing" \
--kind knowledge_card --execute
# 5. Validate
$ python _tools/cex_doctor.py # Builder integrity
$ python _tools/cex_flywheel_audit.py audit # Full system (109 checks)
Multi-Runtime
CEXAI is provider-agnostic by construction. Same artifacts, same pipeline, same governance -- every runtime. Lock-in is not in the vocabulary.
| Runtime | Auth | When to Use |
|---|---|---|
| Claude (Anthropic) | API key or Max | Default -- highest quality, largest context (200K / 1M) |
| Codex (OpenAI) | ChatGPT Plus or API key | GPT-5 runtime via OpenAI CLI |
| Gemini (Google) | OAuth or API key | Free tier via gemini-2.5-flash-lite |
| Ollama (local) | None | Fully offline on your GPU |
Routing lives in one YAML file. Change providers, set fallback chains, pin per-nucleus models -- no code changes.
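A hedged illustration of what such a routing file could contain. The key names below are invented for this sketch -- consult the repo for the actual schema:

```yaml
# Hypothetical routing config -- key names are illustrative, not canonical.
default_provider: claude
fallback_chain: [claude, gpt, gemini, ollama]  # tried in order on failure
nuclei:
  N03: { model: claude-opus }    # Engineering pinned to Opus (per the sin table)
  N07: { model: claude-opus }    # Orchestrator pinned to Opus
  N01: { model: claude-sonnet }
```

Whatever the exact keys, the design point stands: provider choice, fallback order, and per-nucleus model pins are data, so switching runtimes is an edit, not a refactor.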
Orchestration
$ bash _spawn/dispatch.sh solo n03 "build agent card" # One builder
$ bash _spawn/dispatch.sh grid MISSION_NAME # Up to 6 parallel
$ bash _spawn/dispatch.sh swarm agent 5 "scaffold 5 agents" # N same-kind
$ python _tools/cex_crew.py run product_launch --execute # Multi-role crew
Portability
CEX artifacts compile down to any format an LLM consumes. One source of truth, every destination.
$ python _tools/cex_compile.py --target claude-md # -> CLAUDE.md
$ python _tools/cex_compile.py --target cursorrules # -> .cursorrules
$ python _tools/cex_compile.py --target customgpt # -> CustomGPT JSON
System Design
Layer 0 -- BUILDERS 300+ factories × 12 ISOs each = 3,600+ artifact constructors
Layer 1 -- PILLARS 12 pillars × 300+ kinds = the taxonomy
Layer 2 -- NUCLEI 8 nuclei (N00 archetype + N01-N07 operational)
Layer 3 -- PIPELINE 8-Function Pipeline (8F) = the assembly line
Layer 4 -- GOVERNANCE hooks + doctor + quality gates + flywheel audit
Layer 5 -- TOOLS 150+ Python CLI tools (cex_*.py) = the runtime
Layer 6 -- WIRING SDK + signals + decision manifests = the nervous system
FAQ
Eight questions. Zero fluff. Everything you need to decide if CEXAI belongs in your stack.
What is the difference between a kind and a builder? A kind is a typed artifact template (e.g. knowledge_card, agent, workflow). There are 300+ kinds. A builder is the 12-file specification that teaches an LLM how to produce a specific kind. Think of it as: the kind is the blueprint, the builder is the factory that follows the blueprint.
Vocabulary
| CEXAI Term | Industry Equivalent | One-liner |
|---|---|---|
| Kind | Artifact type / schema | A typed template for a unit of knowledge. 300+ kinds today. |
| Pillar | Domain axis | One of 12 capability dimensions every nucleus exercises. |
| Nucleus | AI department | Autonomous LLM-powered business unit (N01-N07) with memory, tools, and sin lens. |
| 8F Pipeline | Reasoning loop | Eight functions: Constrain, Become, Inject, Reason, Call, Produce, Govern, Collaborate. |
| Builder | Factory | 12-file spec (one per pillar) that teaches an LLM to produce a specific kind. |
| ISO | Spec file | One of the 12 files inside a builder, each covering one pillar. |
| Sin Lens | Behavioral bias | Personality layer that determines what a nucleus optimizes for under ambiguity. |
| GDP | Decision protocol | User decides what, LLM decides how. |
| Exchange Unit | Portable asset | Any artifact with YAML frontmatter -- importable into any CEXAI instance. |
Learn More
| Resource | Description |
|---|---|
| quickstart.md | 5-minute setup guide |
| concepts.md | Core concepts: kinds, pillars, nuclei, 8F, GDP |
| cli-reference.md | All 144 CLI tools with examples |
| sdk-reference.md | Python SDK: CEXAgent, providers, memory |
| glossary.md | Canonical vocabulary (100+ terms) |
Open Source
See CONTRIBUTING.md. Every contribution passes:
- Filename convention: {layer}_{kind}_{topic}.{ext}
- Frontmatter fields id, kind, pillar, title, quality validated
- cex_hooks.py validate-all
License