LLM Council is a three-stage deliberation system created by Andrej Karpathy in which multiple LLMs collaboratively answer your questions. Instead of relying on a single AI model, you can assemble a “council” of top LLMs that discuss, peer-review, and synthesize the best possible answer. The key innovation is anonymized peer review: in Stage 2, models evaluate each other’s responses without knowing which model produced them, which reduces bias and keeps the rankings objective.

How It Works

When you submit a question, LLM Council runs through three stages (sketched in code after this list):
  1. Stage 1: Individual Responses - Your query is sent to all council members (e.g., GPT-5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4) in parallel. Each model provides its own answer.
  2. Stage 2: Peer Review - Each model reviews and ranks all responses, which are anonymized as “Response A”, “Response B”, and so on. Since reviewers cannot tell which model wrote which answer, they cannot play favorites.
  3. Stage 3: Final Synthesis - A designated “Chairman” model takes all responses and peer rankings to produce a final, synthesized answer that incorporates the best insights from the council.
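
The flow below is a minimal sketch of those three stages, not the project’s actual code: query_model is a hypothetical helper that sends one prompt to one model, and the ranking and synthesis prompts are heavily abbreviated.

import asyncio

async def deliberate(question, council, chairman, query_model):
    # Stage 1: fan the question out to every council member in parallel
    answers = await asyncio.gather(*(query_model(m, question) for m in council))

    # Stage 2: anonymize answers as "Response A", "Response B", ... and have
    # every member rank them (a reviewer cannot tell which answer it wrote)
    labeled = "\n\n".join(
        f"Response {chr(65 + i)}:\n{a}" for i, a in enumerate(answers)
    )
    rank_prompt = f"Rank these answers to the question: {question}\n\n{labeled}"
    rankings = await asyncio.gather(*(query_model(m, rank_prompt) for m in council))

    # Stage 3: the Chairman sees everything and writes the final answer
    all_rankings = "\n".join(rankings)
    synth_prompt = (
        f"Question: {question}\n\nAnswers:\n{labeled}\n\n"
        f"Peer rankings:\n{all_rankings}\n\nSynthesize the best answer."
    )
    return await query_model(chairman, synth_prompt)

In the real project the prompts are far richer and Stage 2 output is parsed into structured rankings (see Architecture below), but the shape of the pipeline is the same.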

Prerequisites

Before setting up LLM Council, ensure you have:
  • Unbound Application: A configured application in your Unbound Security dashboard
  • Unbound API Key: Generated from your application settings
  • Node.js: Version 18+ for the frontend
  • Python: Version 3.10+ for the backend
  • uv: Python package manager (recommended) or pip
If you haven’t created an Unbound application yet, follow the Create Application guide first to get your API key.

Installation

Step 1: Clone the Repository

git clone https://github.com/websentry-ai/llm-council.git
cd llm-council

Step 2: Install Backend Dependencies

Using uv (recommended):
uv sync
Or using pip:
pip install -r requirements.txt

Step 3: Install Frontend Dependencies

cd frontend
npm install
cd ..

Step 4: Configure Environment Variables

Create a .env file in the project root:
API_PROVIDER=unbound
UNBOUND_API_KEY=your-unbound-api-key
To get your API key:
  1. Go to the Unbound Gateway Portal
  2. Navigate to your application
  3. Copy the API Key from your application settings
  4. Paste it in place of your-unbound-api-key above
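
As a quick sanity check, you can confirm the backend will see these values; this is a hedged sketch assuming python-dotenv, not necessarily how backend/main.py actually loads its settings:

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the project root

API_PROVIDER = os.getenv("API_PROVIDER", "unbound")
UNBOUND_API_KEY = os.environ["UNBOUND_API_KEY"]  # raises KeyError if unset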

Step 5: Configure Council Models (Optional)

Edit backend/config.py to customize which models sit on your council:
# Council members - model identifiers
COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "google/gemini-3-pro-preview",
    "anthropic/claude-sonnet-4.5",
    "x-ai/grok-4",
]

# Chairman model - synthesizes final response
CHAIRMAN_MODEL = "google/gemini-3-pro-preview"
You can use any models available through your Unbound application. The Chairman can be one of the council members or a different model entirely.
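
For instance, a smaller council in which the Chairman also sits as a member is perfectly valid (the model identifiers here are illustrative; use whatever your Unbound application exposes):

COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "anthropic/claude-sonnet-4.5",
]

# The Chairman may double as a council member
CHAIRMAN_MODEL = "anthropic/claude-sonnet-4.5"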

Running the Application

Option 1: Use the Start Script

./start.sh

Option 2: Run Manually

Terminal 1 (Backend):
uv run python -m backend.main
Terminal 2 (Frontend):
cd frontend
npm run dev
Then open http://localhost:5173 in your browser.

Usage

  1. Create a New Conversation - Click “+ New Conversation” to start
  2. Ask Your Question - Type any question in the input box
  3. Review Stage 1 - Click through tabs to see each model’s individual response
  4. Examine Peer Rankings - Stage 2 shows how each model ranked the responses, along with the computed aggregate ranking
  5. Read the Final Answer - Stage 3 presents the Chairman’s synthesized response

Architecture

User Query
    |
Stage 1: Parallel queries -> [individual responses]
    |
Stage 2: Anonymize -> Parallel ranking queries -> [evaluations + parsed rankings]
    |
Aggregate Rankings Calculation -> [sorted by avg position]
    |
Stage 3: Chairman synthesis with full context
    |
Return: {stage1, stage2, stage3, metadata}
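
The “Aggregate Rankings Calculation” step amounts to averaging each response’s position across all reviewers and sorting ascending. A minimal sketch, with hypothetical data shapes rather than the project’s actual parser:

from collections import defaultdict

def aggregate_rankings(parsed_rankings):
    # parsed_rankings: one ordered list of response labels per reviewer,
    # best first, e.g. [["B", "A", "C"], ["B", "C", "A"]]
    positions = defaultdict(list)
    for ranking in parsed_rankings:
        for pos, label in enumerate(ranking, start=1):
            positions[label].append(pos)
    # Lower average position = better; sort ascending
    return sorted(
        ((label, sum(p) / len(p)) for label, p in positions.items()),
        key=lambda item: item[1],
    )

print(aggregate_rankings([["B", "A", "C"], ["B", "C", "A"]]))
# [('B', 1.0), ('A', 2.5), ('C', 2.5)]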

Security Benefits

Using LLM Council with Unbound provides:
  • Multi-Model Consensus: Reduce single-model hallucinations by cross-validating answers
  • Request Monitoring: All AI requests are logged and monitored through Unbound
  • Cost Controls: Set budgets and limits on API usage across all council members
  • Compliance: Ensure AI interactions meet your organization’s standards
  • Audit Trail: Complete visibility into which models were consulted and how they ranked each other

Tech Stack

  • Backend: FastAPI (Python 3.10+), async httpx
  • Frontend: React + Vite, react-markdown for rendering
  • Storage: JSON files in data/conversations/
  • Package Management: uv for Python, npm for JavaScript
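
To make the async httpx item concrete, Stage 1’s parallel fan-out might look roughly like the sketch below. The endpoint URL and payload shape are assumptions patterned on OpenAI-compatible chat APIs, not confirmed details of the Unbound gateway; check your application settings for the real base URL.

import asyncio
import os

import httpx

# Placeholder endpoint; substitute the base URL from your Unbound application
BASE_URL = "https://YOUR-UNBOUND-GATEWAY/v1/chat/completions"

async def ask(client, model, question):
    resp = await client.post(
        BASE_URL,
        headers={"Authorization": f"Bearer {os.environ['UNBOUND_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": question}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def stage_one(models, question):
    # One concurrent request per council member
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(ask(client, m, question) for m in models))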