
CEXAI

Cognitive Exchange AI

Not an agent. An AI brain. Seven deadly sins. Twelve pillars. 300+ typed kinds. Open source, sovereign, self-compounding. Intelligence compounds when exchanged.

300+ kinds · 300+ builders · 3,600+ ISOs · 150+ tools · 12 pillars · 8 nuclei

Every AI agent you build forgets you by morning

You wrote a great system prompt. Added tools. Maybe even wired up RAG. It works -- until it drifts, hallucinates, or another team builds the same thing from scratch because nothing is reusable. Your institutional knowledge lives in Slack threads and someone's laptop.

The gap is not intelligence. LLMs are already smart. The gap is infrastructure -- there is no typed, governed, composable system that turns raw LLM power into a brain that remembers, compounds, and belongs to you.

CEXAI fills that gap.

Most AI agents are a system prompt plus a few tools. CEXAI is seven specialized departments, each running on a deadly sin, producing typed knowledge assets that compound over time.

What makes this different

Composable

8 functions × 12 pillars × 300+ artifact kinds = a factory floor for intelligence. Spawn a new department in minutes. Every piece clicks into every other piece.

Sovereign

Runs on Claude, GPT, Gemini, or Ollama. Your knowledge stays in your repo, under your git history, on your hardware. No vendor owns your brain.

Self-compounding

Every conversation, decision, and artifact compiles into typed, governed, searchable knowledge assets. Day 30 is dramatically smarter than day 1. Your repo becomes a proprietary digital asset -- an AI brain that appreciates with every commit.

A basic agent vs. a CEX nucleus

A basic LLM agent is one prompt and a handful of tools. A CEX nucleus is a full business department: superintendent LLM, specialized sub-agents, toolbox, knowledge library, workflows, quality controls, and cultural DNA.
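
As a rough mental model in code, the sketch below bundles those pieces into one structure. It is illustrative only -- the class name, field names, and values are assumptions made for this page, not the actual CEX data model.

from dataclasses import dataclass

@dataclass
class Nucleus:
    superintendent: str            # the department's top-level LLM
    sub_agents: list[str]          # specialized roles (researcher, analyst, fact-checker, ...)
    toolbox: list[str]             # MCP servers, API clients, browser tools
    knowledge_library: list[str]   # typed knowledge cards and indexes
    workflows: list[str]           # governed, repeatable procedures
    quality_gates: list[str]       # hard gates every artifact must pass
    sin_lens: str                  # cultural DNA: what it optimizes for under ambiguity

n01 = Nucleus(
    superintendent="claude",
    sub_agents=["researcher", "analyst", "fact-checker"],
    toolbox=["mcp_server", "browser_tool", "api_client"],
    knowledge_library=["kc_react_hooks_patterns.md"],
    workflows=["competitive_research"],
    quality_gates=["quality_gate", "llm_judge"],
    sin_lens="Analytical Envy",
)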

Capability Axis | Basic Agent | CEX Nucleus
Knowledge (P01, P10) | Context stuffing | Typed RAG + entity memory + chunk strategies + prompt cache
Model (P02) | Single provider | Fallback chain across Claude / GPT / Gemini / Ollama
Prompt (P03) | One system prompt | Templates + chains + compiler + version control
Tools (P04) | Flat list | MCP servers + API clients + browser tools + search pipelines
Output (P05) | Free text | 300+ typed artifact kinds + formatters + parsers
Schema (P06) | None | Input / output schemas + validators + interface contracts
Evaluation (P07) | Ship and pray | Quality gates + scoring rubrics + LLM judges + benchmarks
Architecture (P08) | None | Agent cards + component maps + decision records
Config (P09) | Env vars | Typed configs + rate limits + feature flags + secrets
Feedback (P11) | None | Guardrails + bug loops + learning records + regression checks
Orchestration (P12) | None | Workflows + dispatch rules + composable crews + DAGs

You say "research my competitors." Here is what happens.

N01 Intelligence does not run a single LLM call. It activates a sin-driven agentic business unit:

  • Analytical Envy (its sin lens) drives it to surpass every public source -- not match them, surpass them.
  • MCP servers + browser tools + API clients (P04) scrape, fetch, and cross-reference live data.
  • A crew of specialized sub-agents -- researcher, analyst, fact-checker -- divides the work in parallel.
  • The 8-Function Pipeline (F1 through F8) drives every artifact to a 9.0+ quality floor before it touches your repo.
  • Entity memory + knowledge index (P10) capture what it learned, so the next run starts smarter than the last one ended.
  • Peer-reviewed quality gates (P07, P11) block anything subpar from merging.

That is not a chatbot. That is an intelligence department that compounds every time it runs.

Seven departments. Seven deadly sins.

Every nucleus runs on one of the seven deadly sins. This is not decoration -- it is cultural DNA that determines what the nucleus optimizes for when instructions are ambiguous. Sins are what separate a generic LLM call from a specialist with convictions.

Nucleus | Role | Sin Lens | What the Sin Does | Tier
N01 | Intelligence | Analytical Envy | Envies every competitor and benchmark -- to surpass them all | Sonnet
N02 | Marketing | Creative Lust | Writes copy that seduces. Hooks you cannot ignore. | Sonnet
N03 | Engineering | Inventive Pride | Refuses to ship mediocrity. The best of its kind or not at all. | Opus
N04 | Knowledge | Knowledge Gluttony | Insatiable hunger for facts, citations, context. Never enough. | Sonnet
N05 | Operations | Gating Wrath | Merciless at the quality gate. Breaks what must break. | Sonnet
N06 | Commercial | Strategic Greed | Extracts every revenue opportunity. No margin left on the table. | Sonnet
N07 | Orchestrator | Orchestrating Sloth | Never builds -- only dispatches. Laziness as leverage. | Opus

N00 Genesis is the pre-sin archetype. Contributors build N08+ as vertical specializations (healthcare, fintech, legal, edtech).

Eight functions. Every task. No exceptions.

The 8F pipeline is how CEXAI thinks. Research, copy, code, analysis, deployment -- every LLM interaction decomposes into eight functions. This is the force multiplier that turns five words of user input into professional output.

F1 | Constrain | Resolve kind, load schema, set limits
F2 | Become | Load builder identity (12 ISOs)
F3 | Inject | Inject KCs, memory, brand, context
F4 | Reason | Plan approach, resolve via GDP
F5 | Call | Discover tools, cross-reference
F6 | Produce | Generate the artifact
F7 | Govern | Validate against hard gates
F8 | Collaborate | Save, compile, commit, signal
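
To make the decomposition concrete, here is a toy sketch of eight functions chained into one run. The Run structure, function signatures, and stub bodies are assumptions invented for this page; they do not reflect the internals of cex_8f_runner.py.

from dataclasses import dataclass, field

@dataclass
class Run:
    request: str                         # raw user input
    kind: str = ""                       # resolved artifact kind
    context: dict = field(default_factory=dict)
    draft: str = ""
    passed_gates: bool = False

def f1_constrain(r):                     # resolve kind, load schema, set limits
    r.kind = "knowledge_card"
    return r

def f2_become(r):                        # load builder identity (12 ISOs)
    r.context["builder"] = f"{r.kind}-builder"
    return r

def f3_inject(r):                        # inject KCs, memory, brand, context
    r.context["memory"] = ["prior research notes"]
    return r

def f4_reason(r):                        # plan approach (GDP: user decides what, LLM decides how)
    r.context["plan"] = "outline -> draft -> cite"
    return r

def f5_call(r):                          # discover tools, cross-reference
    r.context["tools"] = ["search", "browser"]
    return r

def f6_produce(r):                       # generate the artifact (the LLM call lives here)
    r.draft = f"# {r.kind}: {r.request}"
    return r

def f7_govern(r):                        # validate against hard gates
    r.passed_gates = bool(r.draft)
    return r

def f8_collaborate(r):                   # save, compile, commit, signal
    print("saved and signalled" if r.passed_gates else "blocked at the gate")
    return r

run = Run(request="create knowledge card about product pricing")
for step in (f1_constrain, f2_become, f3_inject, f4_reason,
             f5_call, f6_produce, f7_govern, f8_collaborate):
    run = step(run)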

The 12 Pillars

Every artifact lives in one of twelve pillars. Pillars are taxonomic axes, not departments -- every nucleus exercises every pillar. That is why a nucleus is a full department, not just an agent.

Pillar | Name | Example Kinds
P01 | Knowledge | knowledge_card, rag_source, glossary_entry, chunk_strategy
P02 | Model | agent, model_provider, boot_config, mental_model
P03 | Prompt | system_prompt, prompt_template, chain, tagline
P04 | Tools | mcp_server, browser_tool, api_client, webhook
P05 | Output | landing_page, formatter, parser, diagram
P06 | Schema | input_schema, validator, type_def, interface
P07 | Evals | quality_gate, scoring_rubric, llm_judge, benchmark
P08 | Architecture | agent_card, component_map, decision_record
P09 | Config | env_config, rate_limit_config, feature_flag
P10 | Memory | entity_memory, knowledge_index, memory_summary
P11 | Feedback | guardrail, bugloop, learning_record
P12 | Orchestration | workflow, dispatch_rule, crew_template, dag

The X stands for Exchange

Intelligence compounds faster when shared. The X in CEXAI serves double duty: formally, it means Exchange -- the protocol for sharing typed knowledge assets between instances. Practically, it becomes your brand when you bootstrap with /init.

Every artifact is a self-describing exchange unit. It carries its kind, quality score, pillar, and nucleus origin in YAML frontmatter. Import an artifact into any CEXAI instance, run cex_doctor.py, and it validates automatically.
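
As a concrete illustration, the snippet below reads the frontmatter of a hypothetical exchange unit and checks its self-describing fields. Only the general shape (YAML frontmatter carrying id, kind, pillar, quality, and nucleus origin) comes from the description above; the exact field names and what cex_doctor.py actually checks are assumptions.

import yaml  # pyyaml, part of the core requirements

SAMPLE = """\
---
id: kc_react_hooks_patterns
kind: knowledge_card
pillar: P01
nucleus: N01
quality: 9.2
---
React hooks patterns worth remembering...
"""

def read_frontmatter(text: str) -> dict:
    _, header, _body = text.split("---", 2)   # frontmatter sits between the first two --- markers
    return yaml.safe_load(header)

meta = read_frontmatter(SAMPLE)
missing = {"id", "kind", "pillar", "nucleus", "quality"} - meta.keys()
print("valid exchange unit" if not missing else f"missing fields: {missing}")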

Exchange Unit | Scope | Example
Knowledge Card | Single typed fact | kc_react_hooks_patterns.md
Builder | Full production capability (12 ISOs) | workflow-builder/
SDK Provider | New runtime adapter | provider_ollama.py
Vertical Nucleus | Entire domain department | N08_healthcare/

What stays private. Brand config, memory, runtime state, and secrets never leave your instance. The exchange is about cognition, not identity.

Anti-fragile by design. CEXAI sits one layer above the LLM. When a better model drops, your artifacts improve -- they are the knowledge, not the model. When a runtime shuts down, switch providers in one YAML file. Your brain is yours.

Invest, don't spend. Every commit deposits into a compounding knowledge account. A recorded consultation becomes a reusable agent. A sales playbook becomes a governed workflow. Typed knowledge is LLM-agnostic -- it survives model upgrades, provider switches, and team turnover. The repository is the asset; when the business is sold, the AI brain transfers with it.

Ready to stop building throwaway agents?

Clone the repo. Answer six questions. Your AI brain starts compounding in under five minutes.

Get Started Now

Prerequisites

CEXAI runs through Claude Code. Install it, clone the repo, and Claude handles the rest -- dependencies, configuration, everything.

Claude Code

The CLI that powers CEX. Desktop app or terminal.

npm i -g @anthropic-ai/claude-code

Python 3.10+

For the 144 CLI tools. Claude Code installs pip deps for you.

python --version

Git

Version control. Every artifact is committed and tracked.

git --version

Any LLM Provider

Anthropic API key, OpenAI, Google, or Ollama for fully local.

ANTHROPIC_API_KEY

How it works

1. Clone -- Clone the repo. Install deps. Takes 60 seconds.

2. Bootstrap -- Type /init. Answer 6 questions about your brand. CEX injects your identity into every nucleus.

3. Build -- Describe what you want in plain language. The 8F pipeline turns it into a professional, typed artifact.

Five minutes to your first artifact

# 1. Clone
$ git clone https://github.com/your-org/cex.git && cd cex

# 2. Install dependencies
$ pip install -r requirements.txt             # Core (pyyaml, tiktoken)
$ pip install -r requirements-llm.txt         # LLM providers (optional)

# 3. Bootstrap your brand -- answer ~6 questions
$ python _tools/cex_bootstrap.py
# Or: type /init in Claude Code with CEX loaded

# 4. Build your first artifact
$ python _tools/cex_8f_runner.py "create knowledge card about product pricing" \
    --kind knowledge_card --execute

# 5. Validate
$ python _tools/cex_doctor.py                 # Builder integrity
$ python _tools/cex_flywheel_audit.py audit   # Full system (109 checks)

Your infrastructure. Your rules.

CEXAI is provider-agnostic by construction. Same artifacts, same pipeline, same governance -- every runtime. Lock-in is not in the vocabulary.

Runtime | Auth | When to Use
Claude (Anthropic) | API key or Max | Default -- highest quality, largest context (200K / 1M)
Codex (OpenAI) | ChatGPT Plus or API | GPT-5 runtime via OpenAI CLI
Gemini (Google) | OAuth or API key | Free tier via gemini-2.5-flash-lite
Ollama (local) | None | Fully offline on your GPU

Routing lives in one YAML file. Change providers, set fallback chains, pin per-nucleus models -- no code changes.
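
A minimal sketch of what that could look like in practice. The file contents, keys, and fallback logic below are assumptions for illustration, not the actual CEX routing schema.

import yaml

ROUTING = yaml.safe_load("""
default_provider: claude
fallbacks:
  claude: [codex, gemini, ollama]
nuclei:
  N03: claude        # pin the Opus-tier nuclei
  N07: claude
""")

def resolve(nucleus: str, available: set[str]) -> str:
    # Pick the pinned (or default) provider, then walk its fallback chain.
    primary = ROUTING["nuclei"].get(nucleus, ROUTING["default_provider"])
    for provider in [primary, *ROUTING["fallbacks"].get(primary, [])]:
        if provider in available:
            return provider
    raise RuntimeError("no runtime available")

print(resolve("N03", available={"gemini", "ollama"}))   # falls back past claude and codex to gemini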

Four dispatch modes

$ bash _spawn/dispatch.sh solo n03 "build agent card"              # One builder
$ bash _spawn/dispatch.sh grid MISSION_NAME                        # Up to 6 parallel
$ bash _spawn/dispatch.sh swarm agent 5 "scaffold 5 agents"        # N same-kind
$ python _tools/cex_crew.py run product_launch --execute            # Multi-role crew

Write once. Deploy to any LLM tool.

CEX artifacts compile down to any format an LLM consumes. One source of truth, every destination.

$ python _tools/cex_compile.py --target claude-md     # -> CLAUDE.md
$ python _tools/cex_compile.py --target cursorrules   # -> .cursorrules
$ python _tools/cex_compile.py --target customgpt     # -> CustomGPT JSON

Architecture

Layer 0 -- BUILDERS       300+ factories x 12 ISOs each = 3,600+ artifact constructors
Layer 1 -- PILLARS        12 pillars x 300+ kinds = the taxonomy
Layer 2 -- NUCLEI         8 nuclei (N00 archetype + N01-N07 operational)
Layer 3 -- PIPELINE       8-Function Pipeline (8F) = the assembly line
Layer 4 -- GOVERNANCE     hooks + doctor + quality gates + flywheel audit
Layer 5 -- TOOLS          150+ Python CLI tools (cex_*.py) = the runtime
Layer 6 -- WIRING         SDK + signals + decision manifests = the nervous system

You were going to ask anyway

Eight questions. Zero fluff. Everything you need to decide if CEXAI belongs in your stack.

Is CEXAI an agent framework?
No. CEXAI is a typed knowledge system. Agent frameworks give you tools to build one agent. CEXAI gives you the infrastructure to build an entire AI-powered organization: seven departments, each with memory, tools, sub-agents, quality controls, and cultural DNA. Think of it as the operating system, not the app.
Do I need to know Python?
No. CEX is designed for Claude Code -- you describe what you want in plain language, and the 8F pipeline handles the rest. The Python tools run automatically in the background. If you want to customize or extend, Python knowledge helps, but it is not required to get value on day one.
Can I use this with GPT, Gemini, or local models?
Yes. CEXAI is provider-agnostic. The default is Claude (highest quality), but every nucleus has a fallback chain configured in one YAML file. Swap to GPT-5 via Codex, Gemini via Google CLI, or run fully offline with Ollama. Your artifacts are the same regardless of which model produces them.
What is the difference between a "kind" and a "builder"?
A kind is a typed artifact template (e.g., knowledge_card, agent, workflow). There are 300+ kinds. A builder is the 12-file specification that teaches an LLM how to produce a specific kind. Think of it as: the kind is the blueprint, the builder is the factory that follows the blueprint.
Is my data private?
Completely. CEXAI runs in your repo, on your machine. Brand config, memory, runtime state, and secrets never leave your instance. The "exchange" protocol is opt-in: you choose what to share, and what stays private.
How is this different from using Claude Code alone?
Claude Code is the runtime. CEXAI is the knowledge layer on top of it. Without CEX, Claude Code starts every session from zero. With CEX, it loads 300+ typed builders, 12 pillars of accumulated knowledge, quality gates, and your brand context. Every session compounds on the last one.
Can I contribute a new kind or nucleus?
Yes. New kinds go through the builder pipeline (12 ISOs per kind). New nuclei (N08+) follow the bootstrap protocol: pick a domain, choose a sin lens, create 9 required assets, and validate. Vertical examples: healthcare, fintech, legal, edtech.
What does "intelligence compounds when exchanged" mean?
Every artifact you create is a typed, portable knowledge asset. Share a builder with another instance, and both get smarter. Import a vertical nucleus someone built for healthcare, and your brain gains an entire medical department. The more the ecosystem exchanges, the faster everyone compounds.

Glossary

CEXAI Term | Industry Equivalent | One-liner
Kind | Artifact type / schema | A typed template for a unit of knowledge. 300+ kinds today.
Pillar | Domain axis | One of 12 capability dimensions every nucleus exercises.
Nucleus | AI department | Autonomous LLM-powered business unit (N01-N07) with memory, tools, and sin lens.
8F Pipeline | Reasoning loop | Eight functions: Constrain, Become, Inject, Reason, Call, Produce, Govern, Collaborate.
Builder | Factory | 12-file spec (one per pillar) that teaches an LLM to produce a specific kind.
ISO | Spec file | One of the 12 files inside a builder, each covering one pillar.
Sin Lens | Behavioral bias | Personality layer that determines what a nucleus optimizes for under ambiguity.
GDP | Decision protocol | User decides what, LLM decides how.
Exchange Unit | Portable asset | Any artifact with YAML frontmatter -- importable into any CEXAI instance.

Documentation

Resource | Description
quickstart.md | 5-minute setup guide
concepts.md | Core concepts: kinds, pillars, nuclei, 8F, GDP
cli-reference.md | All 144 CLI tools with examples
sdk-reference.md | Python SDK: CEXAgent, providers, memory
glossary.md | Canonical vocabulary (100+ terms)

Contributing

See CONTRIBUTING.md. Every contribution passes:

  • Naming: {layer}_{kind}_{topic}.{ext}
  • Frontmatter: id, kind, pillar, title, quality validated
  • Quality gate: peer-reviewed score ≥ 8.0
  • Pre-commit hooks: cex_hooks.py validate-all
  • 8F trace: every artifact shows F1→F8

MIT