User Input
User picks an issue from the table UI and selects a repository folder. The issue can be written in any language — vague, blunt, or broken English. That's fine.
REAL EXAMPLE ISSUE
"target is wrong, fix it" — repo: /projects/backend
↓
TRIGGERS PIPELINE
STAGE 01
UNDERSTAND
1
Issue Intake & Normalisation
Receive the raw issue text + selected repo path. Strip noise, detect language, normalise to English if needed. Tag it with a unique job ID.
What happens here
Raw issue text is cleaned and passed to the Understanding Agent with metadata: repo path, job ID, timestamp, and user-set priority.
LLM: Qwen 7B Local Python
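The intake step above can be sketched roughly as follows; the `IssueJob` name and its field set are illustrative assumptions, not the project's actual schema:

```python
import re
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IssueJob:
    raw_text: str
    repo_path: str
    priority: str = "normal"
    job_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def normalise_issue(raw: str, repo_path: str, priority: str = "normal") -> IssueJob:
    """Strip noise and wrap the issue with metadata for the Understanding Agent."""
    cleaned = re.sub(r"\s+", " ", raw).strip()  # collapse whitespace noise
    # Language detection and translation to English would call the local
    # Qwen 7B model here; this sketch just passes the cleaned text through.
    return IssueJob(raw_text=cleaned, repo_path=repo_path, priority=priority)

job = normalise_issue("  target is wrong,\n fix it  ", "/projects/backend")
print(job.raw_text)  # target is wrong, fix it
```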
2
Codebase 3-Layer Scan
The agent doesn't read the whole codebase. It queries a pre-built semantic index, follows import chains, and reads only relevant chunks. Like a senior dev who knows where to look.
The 3-layer funnel (see Codebase tab)
Layer 1: semantic search on index → candidate files
Layer 2: explore import/call relationships → trace the thread
Layer 3: read only the relevant 200–400 line chunks
ChromaDB Tree-sitter AST LLM: Qwen 7B Local
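The funnel above can be expressed as one function; `semantic_search`, `import_neighbours`, and `read_chunk` are stand-ins for the real ChromaDB queries and Tree-sitter AST walks, not actual APIs:

```python
def three_layer_scan(query, semantic_search, import_neighbours, read_chunk,
                     top_k=5, max_chunk_lines=400):
    # Layer 1: semantic search over the pre-built index -> candidate files
    candidates = semantic_search(query, top_k=top_k)
    # Layer 2: follow import/call relationships out from each candidate
    related = set(candidates)
    for path in candidates:
        related.update(import_neighbours(path))
    # Layer 3: read only the relevant 200-400 line chunks, never whole files
    return {path: read_chunk(path, max_lines=max_chunk_lines)
            for path in sorted(related)}

# Demo with toy stand-ins for the index and import graph:
index = {"render target": ["shader.go"]}
graph = {"shader.go": ["renderer.go"]}
chunks = three_layer_scan(
    "render target",
    semantic_search=lambda q, top_k=5: index.get(q, []),
    import_neighbours=lambda p: graph.get(p, []),
    read_chunk=lambda p, max_lines=400: f"<relevant chunk of {p}>",
)
# chunks covers shader.go plus its import neighbour renderer.go
```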
3
Web Context Fetch
If the issue references a library, error message, or concept the agent isn't sure about — it searches the web and reads the relevant docs or Stack Overflow thread before forming its understanding.
Triggers a web search when
— Issue mentions an external library or version
— Error message looks like a known bug
— Confidence in understanding is below threshold
Web Search Tool Official Docs Scraper
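The three triggers can be sketched as simple heuristics; the regexes and the 0.7 threshold are assumptions mirroring the conditions above, not the agent's actual rules:

```python
import re

LIBRARY_VERSION = re.compile(r"\b\w+ v?\d+\.\d+(\.\d+)?\b")  # e.g. "numpy 1.26"
ERROR_SIGNATURE = re.compile(r"(Error|Exception|panic:|Traceback)", re.I)

def needs_web_search(issue_text: str, confidence: float,
                     threshold: float = 0.7) -> bool:
    if LIBRARY_VERSION.search(issue_text):   # mentions a library + version
        return True
    if ERROR_SIGNATURE.search(issue_text):   # looks like a known bug report
        return True
    return confidence < threshold            # agent is unsure of its reading

print(needs_web_search("crashes on numpy 1.26", confidence=0.9))  # True
```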
4
Issue Understanding Report
Agent produces a structured understanding of what the issue actually means — in precise technical language. "target is wrong" becomes a specific, actionable problem statement.
Output of this step
{
  problem: "The render target in shader.go line 142 is using the wrong framebuffer ID, causing visual corruption on resize",
  affected_files: [...],
  confidence: 87
}
Confidence
> 70%?
✓ YES → continue
✗ NO → ask user 1 question
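The gate above can be sketched in a few lines; `ask_user` stands in for the single clarifying question sent back through the table UI, and the confidence bump after an answer is an assumption:

```python
def confidence_gate(report: dict, ask_user, threshold: int = 70) -> dict:
    """Continue if confidence > threshold; otherwise ask exactly one question."""
    if report["confidence"] > threshold:
        return report
    answer = ask_user("Which target do you mean? (file / function)")
    report["clarification"] = answer
    # Assumed: the answer is folded back in and confidence is re-scored;
    # a flat bump is used here purely for illustration.
    report["confidence"] = min(report["confidence"] + 20, 100)
    return report
```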
↓
UNDERSTANDING CONFIRMED
STAGE 02
PLAN
5
Solution Research (Web + Docs)
Before planning, the Planner agent searches for the best known solution approach — official docs, changelog notes, relevant GitHub issues in the library's own repo.
Why this matters
A 7B model doesn't know about APIs released last month. But if we fetch the docs and put them in context — it does. This is how a small model thinks like a pro.
Web Search Docs Fetcher LLM: 32B Server
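The docs-in-context trick can be sketched like this; `fetch_docs` and `llm_complete` are stand-ins for the real Docs Fetcher and the 32B server call, and the prompt wording is illustrative:

```python
def plan_with_fresh_docs(problem: str, sources: list,
                         fetch_docs, llm_complete) -> str:
    # Pull current documentation and prepend it to the prompt, so the model
    # plans against APIs it was never trained on.
    context = "\n\n".join(fetch_docs(url) for url in sources)
    prompt = (
        "Use ONLY the documentation below; it may be newer than your "
        "training data.\n"
        f"--- DOCS ---\n{context}\n--- END DOCS ---\n"
        f"Problem: {problem}\nProduce a step-by-step fix plan."
    )
    return llm_complete(prompt)
```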
6
Execution Plan Generation
Planner generates a precise, step-by-step execution plan. Not vague instructions — exact files, exact functions, exact changes required, in the order they need to happen.
Plan structure
Step 1: Edit shader.go line 142 — change framebuffer ID from X to Y
Step 2: Update renderer_test.go — add resize test case
Step 3: Check if resize handler in main.go needs update
Dependency ordered Atomic steps
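One possible shape for an atomic, dependency-ordered step; the `PlanStep` name and fields are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    order: int
    file: str
    action: str             # the exact change required
    depends_on: tuple = ()  # orders of steps that must run first

plan = [
    PlanStep(1, "shader.go", "line 142: change framebuffer ID from X to Y"),
    PlanStep(2, "renderer_test.go", "add resize test case", depends_on=(1,)),
    PlanStep(3, "main.go", "check if resize handler needs update",
             depends_on=(1,)),
]

# A valid plan never depends on a later step.
assert all(d < s.order for s in plan for d in s.depends_on)
```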
7
Risk Assessment
Before handing off, the planner identifies what could go wrong. Which other parts of the codebase might be affected. What tests exist. What tests need to be written.
Risk flags example
⚠ framebuffer change may affect 3 other modules that reference it
⚠ No existing resize tests — Tester agent must write from scratch
✓ Change is isolated to rendering layer — low blast radius
↓
PLAN COMPLETE — HANDOFF
STAGE 03
HANDOFF
8
AgentPayload Assembly
Everything from Stage 1 and Stage 2 is packaged into a single structured AgentPayload object. This is the complete context the Coder agent needs — and nothing more.
AgentPayload contains
— Original issue + normalised problem statement
— Affected files with relevant code chunks (not full files)
— Step-by-step execution plan (dependency ordered)
— Risk flags + test requirements
— Web context fetched (docs, Stack Overflow snippets)
— Job ID, repo path, priority, timestamp
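The list above maps naturally onto a single structured object; the field names below mirror it but are illustrative, not the project's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPayload:
    issue: str                # original issue text
    problem: str              # normalised problem statement
    file_chunks: dict         # path -> relevant 200-400 line chunk
    plan: list                # dependency-ordered atomic steps
    risk_flags: list
    test_requirements: list
    web_context: list         # summarised docs / Stack Overflow snippets
    job_id: str
    repo_path: str
    priority: str = "normal"
    timestamp: str = ""

payload = AgentPayload(
    issue="target is wrong, fix it",
    problem="Wrong framebuffer ID in shader.go line 142",
    file_chunks={"shader.go": "<chunk>"},
    plan=["step 1", "step 2"],
    risk_flags=["may affect 3 modules"],
    test_requirements=["resize test"],
    web_context=[],
    job_id="abc123",
    repo_path="/projects/backend",
)
```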
9
Queue & Fire Coder Agent
AgentPayload is pushed to the task queue; the Coder Agent picks it up immediately and begins execution. From this point on, the Understand/Plan pipeline's job is complete.
Why we separate these concerns
The Understand + Plan pipeline is expensive (web search, indexing, 32B model). It runs once. The Coder is fast and focused — it only needs the payload, not the history of how it was built.
SQLite queue
Async handoff
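The SQLite-backed handoff can be sketched with the standard library; the table and column names are assumptions, not the project's actual schema:

```python
import json
import sqlite3

def make_queue(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS tasks ("
               "id INTEGER PRIMARY KEY, payload TEXT, "
               "status TEXT DEFAULT 'queued')")
    return db

def push(db, payload: dict):
    # Producer side: the Understand/Plan pipeline enqueues and moves on.
    db.execute("INSERT INTO tasks (payload) VALUES (?)",
               (json.dumps(payload),))
    db.commit()

def pop(db):
    # Consumer side: the Coder Agent claims the oldest queued task.
    row = db.execute(
        "SELECT id, payload FROM tasks WHERE status='queued' "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
    db.commit()
    return json.loads(row[1])

db = make_queue()
push(db, {"job_id": "abc123", "repo_path": "/projects/backend"})
print(pop(db))  # {'job_id': 'abc123', 'repo_path': '/projects/backend'}
```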
↓
📦 AgentPayload — handed to Coder Agent
PROBLEM
Precise technical description of the actual issue
FILES + CHUNKS
Only the relevant 200–400 lines, not whole files
PLAN
Atomic steps in dependency order
WEB CONTEXT
Docs + Stack Overflow fetched and summarised
RISK FLAGS
What might break, what tests are needed
JOB META
Job ID, repo, priority, timestamp