How the Blaze Platform delivers persistent institutional knowledge across AI sessions, tenants, and solutions — turning every project into a smarter platform.
Every AI coding session starts from scratch. Decisions are lost, mistakes are repeated, context evaporates.
🧠 No Institutional Memory
Architecture decisions, failed approaches, and learned patterns vanish when a session ends. New sessions re-explore dead ends.
🔄 Repeated Mistakes
Without a guard against previously-failed approaches, AI agents propose solutions that were already tested and proven wrong.
🔍 Context Fragmentation
Knowledge is scattered across chat logs, commit messages, and developers' heads. There is no single source of truth for what happened and why.
Enterprise Impact: In autonomous SDLC operations, memory loss means compliance gaps, duplicated work, and architecture drift. A platform that forgets its own decisions cannot be trusted with enterprise software delivery.
The Solution: Persistent Knowledge Architecture
A multi-layered, tool-agnostic knowledge system that learns, remembers, and gets smarter with every project.
8 structured knowledge files · 3 knowledge tiers · 6 integration layers · 24 tested search functions
✓ Always Available
Zero-infrastructure flat files in git. Works offline, works locally, works with any AI tool. No database required for core memory.
✓ Deep When Needed
Neo4j knowledge graph for compliance chains, evidence traversals, and cross-entity pattern detection. Available when deployed.
✓ Compounds Over Time
Every session, every tenant, every project contributes knowledge. The platform gets smarter with every deployment.
Architecture: 6 Integration Layers
Knowledge flows through hooks, plugins, libraries, agents, rules, and graph databases.
Storage: memory-bank/ (canonical) · blaze/state/session/ · Neo4j Knowledge Graph
↓
Plugin: context-intelligence.ts · blaze-memory-search · blaze-context-status
↓
Library: memory-search.ts (24 tests)
↓
Hooks: session-start (<8KB context load) · session-end (auto-rotation)
↓
Agents: hypothesis-reasoning · critical-thinking · sdlc-orchestrator · compliance-manager
↓
Rules: memory-operations · session-lifecycle
↓
MCP Servers: Neo4j Knowledge Graph · Claude Context (embeddings) · Context7 (library docs)
The Knowledge Bank: 8 Structured Files
Platform-agnostic, version-controlled, team-shareable. Lives at memory-bank/ in the repo root.
| File | Purpose | Injected Into Context | Frequency |
| --- | --- | --- | --- |
| projectContext.md | Tech stack, architecture, integrations | Full content, every session | When project changes |
| decisionLog.md | Architecture Decision Records | Last 5 entries, every session | When decisions made |
| debunkedHypotheses.md | Failed approaches — prevents re-exploration | ALL entries, every session | When hypotheses falsified |
| lessonsLearned.md | Post-incident learnings with root cause | Last 3 entries, every session | After significant events |
| patterns.md | Reusable code and workflow patterns | On-demand search | When patterns emerge |
| activeContext.md | Session handoffs, work-in-progress | Last 3 lines at start | Every session |
| platformState.md | Agent counts, infrastructure state | On-demand search | After deployments |
| CHANGELOG.md | Knowledge bank version history | On-demand search | On structural changes |
Key Design: The highest-value knowledge (debunked hypotheses, recent decisions, recent lessons) is automatically injected into every AI session. No agent action required — it's always in context.
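The injection policy described above can be sketched in TypeScript. The `KnowledgeBank` shape and the `buildSessionPreamble` helper are hypothetical; only the per-file rules (full content, last 5 decisions, ALL debunked hypotheses, last 3 lessons) come from the table.

```typescript
// Illustrative sketch: assemble the per-session knowledge preamble.
// File names mirror the knowledge bank; the assembly logic is an assumption.
interface KnowledgeBank {
  projectContext: string;       // projectContext.md, injected in full
  decisionLog: string[];        // decisionLog.md entries, newest first
  debunkedHypotheses: string[]; // debunkedHypotheses.md entries
  lessonsLearned: string[];     // lessonsLearned.md entries, newest first
}

function buildSessionPreamble(bank: KnowledgeBank): string {
  const parts = [
    "## Project Context",
    bank.projectContext,                         // full content, every session
    "## Recent Decisions",
    ...bank.decisionLog.slice(0, 5),             // last 5 entries
    "## Debunked Hypotheses (do NOT re-explore)",
    ...bank.debunkedHypotheses,                  // ALL entries, every session
    "## Recent Lessons",
    ...bank.lessonsLearned.slice(0, 3),          // last 3 entries
  ];
  return parts.join("\n");
}
```

Because the debunked list is injected in full while decisions and lessons are windowed, the highest-risk knowledge always survives the size budget.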
Session Lifecycle: Seamless Handoffs
Session Start
Hook: session-start
Lightweight Context Load (<8KB)
Branch, work item, version, uncommitted count, last active context
Plugin: context-intelligence
Knowledge Injection
Project context + last 5 decisions + ALL debunked hypotheses + last 3 lessons injected into system prompt
Agent: hypothesis-reasoning
Debunked Hypothesis Check
MUST verify proposed approaches against debunked list before proceeding
During + Session End
Plugin: every 30 seconds
Context State Persistence
Debounced writes of tool count, token estimates, session metrics
Plugin: on compaction
Pre-Compaction Snapshot
Full state preserved before context window compression
Hook: session-end
Rotated Session Log
Appends session summary with auto-rotation (max 10, older pruned)
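The auto-rotation rule at session end can be sketched as a pure function. This is a minimal illustration assuming session summaries are stored as an array of strings; the real log format is not specified here.

```typescript
// Sketch of the session-end rotation rule: keep at most 10 summaries,
// pruning the oldest. The entry shape is hypothetical.
const MAX_SESSION_LOGS = 10;

function rotateSessionLog(log: string[], newSummary: string): string[] {
  const updated = [...log, newSummary];     // append the newest summary
  return updated.slice(-MAX_SESSION_LOGS);  // keep only the last 10
}
```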
When the platform discovers that an approach fails (e.g., "Fargate can't run hard anti-affinity"), every tenant benefits within seconds. No tenant re-discovers the same failure.
Bottom-Up Promotion
Confidence threshold: > 0.8 required
Tenant threshold: Seen in 3+ tenants independently
Anonymization: Required — strip all tenant identifiers
Review: Human approval required
When 3+ tenants independently discover the same pattern, it's promoted to platform knowledge. Every future tenant starts with that intelligence on day one.
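The promotion gate described above can be expressed as a simple predicate. The `CandidatePattern` shape is an assumption; the four criteria (confidence > 0.8, 3+ independent tenants, anonymization, human review) come from the text.

```typescript
// Illustrative bottom-up promotion check, not the platform's actual API.
interface CandidatePattern {
  confidence: number;
  tenantIds: string[];   // distinct tenants that discovered the pattern
  anonymized: boolean;   // tenant identifiers stripped
  humanApproved: boolean;
}

function eligibleForPlatformPromotion(p: CandidatePattern): boolean {
  return (
    p.confidence > 0.8 &&
    new Set(p.tenantIds).size >= 3 &&  // 3+ independent tenants
    p.anonymized &&
    p.humanApproved
  );
}
```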
| Feature Flag | Tier | Default | Purpose |
| --- | --- | --- | --- |
| knowledge.inherit_platform_patterns | Org-lockable | true | Receive platform patterns |
| knowledge.contribute_patterns_upstream | Org-lockable | false | Opt-in to cross-tenant learning |
| knowledge.debunked_cascade | Immutable | true | Safety mechanism — always on |
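A sketch of how the Tier column might be enforced at flag-resolution time; `resolveFlag` and the `FlagDef` shape are illustrative, not the platform's actual API.

```typescript
// Illustrative tiered flag resolution: an immutable flag always keeps its
// platform default; an org-lockable flag may be overridden per tenant
// unless the org has locked it.
type Tier = "immutable" | "org-lockable";

interface FlagDef {
  tier: Tier;
  default: boolean;
}

function resolveFlag(
  def: FlagDef,
  orgLocked: boolean,
  tenantOverride?: boolean
): boolean {
  if (def.tier === "immutable") return def.default;  // safety: cannot change
  if (orgLocked || tenantOverride === undefined) return def.default;
  return tenantOverride;
}
```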
The Debunked Hypothesis Guard
A safety mechanism no other AI coding platform provides. Prevents agents from re-exploring failed approaches.
How It Works
Agent proposes a solution
hypothesis-reasoning or critical-thinking agent begins analysis
Debunked file loaded in context
ALL debunked hypotheses are injected into every session's system prompt
Agent checks before proposing
Agent MUST grep debunkedHypotheses.md for keywords before recommending
Match found → STOP
Agent alerts the user and provides the correct approach from the debunked entry
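The check step can be sketched as a naive keyword scan over the debunked entries. The real agents grep debunkedHypotheses.md; this matching heuristic is purely illustrative.

```typescript
// Illustrative guard: scan debunked entries for keywords from the proposed
// approach. Returns the matching entry (so the agent can surface the correct
// approach) or null if nothing matches.
function findDebunkedMatch(
  debunkedEntries: string[],
  proposal: string
): string | null {
  // Keep only words long enough to be meaningful keywords.
  const words = proposal.toLowerCase().split(/\s+/).filter(w => w.length > 3);
  for (const entry of debunkedEntries) {
    const haystack = entry.toLowerCase();
    if (words.some(w => haystack.includes(w))) return entry; // match → STOP
  }
  return null;
}
```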
Real Examples
DEBUNKED: az boards CLI with Service Principal
Tested and proven incompatible. Correct approach: ADO REST API with Bearer tokens.
DEBUNKED: Fargate with hard pod anti-affinity
Pods enter permanent Pending state. Correct approach: keep on EC2 or use preferredDuringScheduling.
DEBUNKED: MCP server for ADO integration
No production-quality server supports SP auth. Correct approach: direct REST API.
Cascade: Debunked hypotheses cascade to ALL tenants immediately. No opt-out. No tenant wastes time re-discovering a known failure.
Neo4j Knowledge Graph
Flat files for speed. Graph database for depth. Compliance chains, evidence traversals, and pattern detection.
Impact Analysis
Downstream analysis across dependency graphs (5-depth)
Pattern Detection
Aggregate occurrences across 90-day windows, confidence scoring
Evidence Relationships
Variable-depth traversal from any evidence node
Knowledge Tier Nodes
| Node Type | Scope |
| --- | --- |
| :PlatformPattern | Cross-tenant validated patterns |
| :PlatformDecision | Platform-level ADRs |
| :PlatformDebunked | Failed approaches (cascaded) |
| :TenantPattern | Per-tenant discoveries |
| :SolutionPattern | Per-solution domain patterns |
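For illustration, a pattern-detection query against these tiers might look like the following. The node label matches the table above, while the property names (`observedAt`, `name`) and the query-builder helper are assumptions.

```typescript
// Illustrative Cypher for the pattern-detection capability: aggregate
// :TenantPattern occurrences over a rolling window (90 days by default).
function patternDetectionQuery(windowDays: number = 90): string {
  return [
    "MATCH (p:TenantPattern)",
    `WHERE p.observedAt >= datetime() - duration({days: ${windowDays}})`,
    "RETURN p.name AS pattern, count(p) AS occurrences",
    "ORDER BY occurrences DESC",
  ].join("\n");
}
```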
Two-Store Architecture
⚡ Fast Layer: Flat Files
Git-tracked markdown in memory-bank/. Zero infrastructure. Instant reads. Injected into every session prompt. Works offline.
🕸 Deep Layer: Neo4j
Graph database for multi-hop traversals, compliance chains, and cross-entity pattern detection. Available when infrastructure is deployed.
Graceful Degradation: The platform works fully without Neo4j. Graph capabilities activate when infrastructure is available, unlocking compliance and pattern intelligence.
Blaze vs Native AI Tool Memory
Claude Code, Cursor, and Copilot provide basic memory. Blaze provides institutional intelligence.
| Capability | Native AI Tools | Blaze Knowledge System | Advantage |
| --- | --- | --- | --- |
| Project instructions | CLAUDE.md, .cursorrules | Uses native mechanisms | Native |
| Auto-learning from corrections | MEMORY.md (auto-written) | Complementary | Native |
| Structured Architecture Decision Records | None | decisionLog.md with bounded validity | Blaze |
| Lessons learned with root cause | None | lessonsLearned.md with prevention steps | Blaze |
| Debunked hypothesis guard | None | Agents MUST check before proposing | Blaze |
| Reusable pattern catalog | None | patterns.md with trigger/steps/outcome | Blaze |
| Structured session handoffs | --resume only | Handoff protocol with next steps | Blaze |
| Team-shareable memory | Machine-local only | Git-tracked, auditable | Blaze |
| Work-item-linked context | None | All entries linked to AB#/AS#/JIRA | Blaze |
| Knowledge graph traversals | None | Neo4j compliance chains, impact analysis | Blaze |
| Cross-tenant learning | None | Pattern aggregation + promotion pipeline | Blaze |
| Category-based retention | Unbounded growth | 7d context to permanent decisions | Blaze |
Enterprise Differentiators
Capabilities that set Blaze apart from any native AI tool memory.
🛡 Debunked Hypothesis Guard
Agents MUST check debunkedHypotheses.md before proposing solutions. Cascades to all tenants immediately. No other platform has this.
📊 Category-Based Retention
Context: 7 days. Patterns: 30 days. Decisions: permanent. Evidence: 365 days. Different knowledge, different lifespans.
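The retention rules above translate naturally into a lookup table. The category keys mirror the text; `isExpired` is an illustrative helper rather than platform code.

```typescript
// Sketch of category-based retention: different knowledge categories get
// different lifespans (null means kept permanently).
const RETENTION_DAYS: Record<string, number | null> = {
  context: 7,
  pattern: 30,
  decision: null,  // permanent
  evidence: 365,
};

function isExpired(category: string, ageDays: number): boolean {
  const limit = RETENTION_DAYS[category];
  if (limit === null || limit === undefined) return false; // keep forever
  return ageDays > limit;
}
```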
📂 Team-Shareable Memory
Git-tracked, version-controlled, auditable. Every team member and every AI tool sees the same institutional knowledge.
📝 Work-Item Traceability
Every decision, lesson, and pattern linked to PM work items. Full traceability from memory to deliverable.
⚙ Tool-Agnostic Architecture
Canonical memory-bank/ at repo root works with OpenCode, Claude Code, Cursor, or any future AI tool. Zero vendor lock-in.
Continuous Platform Intelligence
The more projects Blaze runs, the smarter it gets. Network effects create a competitive moat.
The Learning Loop
Solution discovers a pattern
e.g., "Redis connection pooling reduces latency 40% for session-heavy workloads"
Pattern reaches confidence > 0.8
Detected 3+ times across sessions with consistent outcomes
Promoted to tenant scope
Tenant admin reviews and approves. Available to all solutions for that tenant.
Seen in 3+ tenants independently
Same pattern discovered by unrelated tenants. Anonymized and promoted to platform.
Cascades to ALL tenants
Every new tenant starts with this validated intelligence on day one.
Network Effects
Tenant 1
Discovers "Fargate can't handle hard anti-affinity." Debunked hypothesis recorded.
Tenant 2, 3, 4...
Never encounter this issue. The debunked hypothesis cascaded before they could try it.
Tenant 50
Starts with 200+ validated patterns, 50+ debunked hypotheses, and dozens of ADRs. The platform is 50x smarter than when Tenant 1 started.
The Moat: Every deployment makes the next one better. Competitors start from zero every time.
Comprehensive Test Coverage: 94%
TDD, BDD, E2E, and CDD — 405 tests across 10 suites plus 16 BDD Gherkin scenarios, all passing.
405 tests passing · 94% code path coverage · 16 BDD Gherkin scenarios · 6/6 business outcomes verified
Coverage by Source File
| Source File | Paths | Covered | Coverage |
| --- | --- | --- | --- |
| memory-search.ts | 29 | 27 | 93% |
| context-intelligence.ts | 63 | 57 | 90% |
| context-intelligence-logic.ts | 40 | 40 | 100% |
| load-memory-bank-light.sh | 14 | 12 | 86% |
| session-end.sh | 15 | 15 | 100% |
| Total | 161 | 151 | 94% |
Coverage by Test Type
| Type | Tests | Coverage |
| --- | --- | --- |
| TDD (unit tests) | 214 | ~95% |
| BDD (Gherkin scenarios) | 16 | 94% |
| E2E (business outcomes) | 36 | 100% |
| Contract (file structure) | 10 | 100% |
| CDD (compliance evidence) | 5 | 85% |
Zero external dependencies. All tests run locally with no database, no network, and no MCP servers required.
Business Outcome Verification
Every business outcome has concrete, passing tests that verify the system delivers its intended value. Tested against the real codebase — no mocks.
1. Session Continuity
Every AI session receives project context, recent decisions, all debunked hypotheses, and recent lessons — automatically.
6 tests · PASS
2. Failure Prevention
Debunked hypotheses are searchable by keyword. "az boards", "Fargate anti-affinity", and "memory-bank-mcp" all return matches with correct approaches.
6 tests · PASS
3. Knowledge Accumulation
Decisions contain Context/Decision/Rationale. Lessons have root cause analysis. Patterns have trigger/steps/outcome. Session log rotates at 10 entries.
5 tests · PASS
4. Search and Retrieval
Agents search by category or across all files. Results truncate at 300 chars, cap at 10. Missing files handled gracefully. Non-existent terms return empty.
7 tests · PASS
5. Platform Architecture
Symlink resolves correctly. Both paths list same files. Knowledge tiers YAML validates. Neo4j schema has PlatformPattern constraints. Works without infrastructure.
7 tests · PASS
6. Compliance Evidence
Evidence directories exist. ADR files present. ADR-013 and ADR-014 recorded in decision log and cross-referenced with debunked hypotheses.
5 tests · PASS
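The retrieval limits verified in outcome 4 (truncate each result at 300 characters, cap at 10 results, empty result for unknown terms) can be sketched as follows; the filter itself is a simplified stand-in for the real search.

```typescript
// Illustrative search with the documented retrieval limits.
function searchMemory(entries: string[], term: string): string[] {
  if (!term) return [];
  const t = term.toLowerCase();
  return entries
    .filter(e => e.toLowerCase().includes(t))          // case-insensitive match
    .slice(0, 10)                                      // cap at 10 results
    .map(e => (e.length > 300 ? e.slice(0, 300) : e)); // truncate at 300 chars
}
```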
Trust Through Verification
The platform governs itself with the same rigor it applies to your code. Every claim is backed by evidence, not assertions.
Self-Governance in Action
SDLC on itself: 4-phase process used to build & package this system
TDD enforced: 16 tests written before 1 line of implementation
555:1 test ratio: 1 line of config → 555 lines of test protection
blaze-knowledge-intelligence.zip: self-contained
./setup.sh: scaffolds into any repo
Zero npm dependencies for core system
Neo4j optional (flat files work standalone)
Knowledge Intelligence at Every Level
Blaze doesn't just remember. It learns, inherits, compounds, and protects.