How It Works
When you submit a question, LLM Council runs through three stages:
- Stage 1: Individual Responses - Your query is sent to all council members (e.g., GPT-5.1, Claude Sonnet 4.5, Gemini 3 Pro, Grok 4) in parallel. Each model provides its own answer.
- Stage 2: Peer Review - Each model reviews and ranks all responses (anonymized as “Response A”, “Response B”, etc.). Anonymization keeps models from favoring their own answers and makes the evaluation more objective.
- Stage 3: Final Synthesis - A designated “Chairman” model takes all responses and peer rankings to produce a final, synthesized answer that incorporates the best insights from the council.
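A minimal sketch of this three-stage flow, assuming an OpenAI-compatible chat-completions endpoint exposed through the Unbound gateway; the base URL, model identifiers, and helper names below are illustrative assumptions, not the project's actual code:

```python
import asyncio
import httpx

# Assumed gateway endpoint and model IDs -- placeholders, not real values.
BASE_URL = "https://your-unbound-gateway.example.com/v1/chat/completions"
COUNCIL = ["gpt-5.1", "claude-sonnet-4.5", "gemini-3-pro", "grok-4"]
CHAIRMAN = "gpt-5.1"

async def ask(client: httpx.AsyncClient, model: str, prompt: str) -> str:
    # In the real app the key would come from the .env file described below.
    resp = await client.post(
        BASE_URL,
        headers={"Authorization": "Bearer YOUR_UNBOUND_API_KEY"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def run_council(question: str) -> str:
    async with httpx.AsyncClient(timeout=120) as client:
        # Stage 1: ask every council member in parallel.
        answers = await asyncio.gather(*(ask(client, m, question) for m in COUNCIL))

        # Stage 2: each member ranks the anonymized responses.
        labeled = "\n\n".join(
            f"Response {chr(65 + i)}:\n{a}" for i, a in enumerate(answers)
        )
        review_prompt = (
            f"Rank these responses to the question below, best first.\n\n"
            f"Question: {question}\n\n{labeled}"
        )
        rankings = await asyncio.gather(*(ask(client, m, review_prompt) for m in COUNCIL))

        # Stage 3: the Chairman synthesizes a final answer from answers + rankings.
        synthesis_prompt = (
            f"Question: {question}\n\nResponses:\n{labeled}\n\n"
            "Peer rankings:\n" + "\n".join(rankings) +
            "\n\nWrite the best possible final answer."
        )
        return await ask(client, CHAIRMAN, synthesis_prompt)
```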
Prerequisites
Before setting up LLM Council, ensure you have:
- Unbound Application: A configured application in your Unbound Security dashboard
- Unbound API Key: Generated from your application settings
- Node.js: Version 18+ for the frontend
- Python: Version 3.10+ for the backend
- uv: Python package manager (recommended) or pip
If you haven’t created an Unbound application yet, follow the Create Application guide first to get your API key.
Installation
Step 1: Clone the Repository
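The repository URL below is a placeholder; substitute the actual LLM Council repository you are using:

```bash
# Clone the project and enter it (URL is an assumption)
git clone https://github.com/your-org/llm-council.git
cd llm-council
```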
Step 2: Install Backend Dependencies
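A sketch of the likely install commands; the dependency file names and layout are assumptions, so adapt to whatever the repository actually ships:

```bash
# Run from the directory that contains the Python project manifest
uv sync                                # recommended: installs from pyproject.toml / uv.lock
# or, with plain pip:
# pip install -r requirements.txt
```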
uv is the recommended installer, as in the sketch above; plain pip works as well.
Step 3: Install Frontend Dependencies
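Assuming a standard npm/Vite setup (the frontend directory name is an assumption):

```bash
cd frontend
npm install
```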
Step 4: Configure Environment Variables
Create a .env file in the project root:
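A sketch of what the file likely contains; the variable name is an assumption, so check the project's README or backend/config.py for the exact name it expects:

```
UNBOUND_API_KEY=your-unbound-api-key
```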
- Go to the Unbound Gateway Portal
- Navigate to your application
- Copy the API Key from your application settings
- Paste it in place of your-unbound-api-key above
Step 5: Configure Council Models (Optional)
Edit backend/config.py to customize which models sit on your council:
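A hedged sketch of what such a config might look like; the variable names and model identifiers are illustrative assumptions, not the file's actual contents:

```python
# backend/config.py (illustrative sketch -- names are assumptions)

# Models that answer in Stage 1 and act as peer reviewers in Stage 2.
COUNCIL_MODELS = [
    "gpt-5.1",
    "claude-sonnet-4.5",
    "gemini-3-pro",
    "grok-4",
]

# Model that produces the Stage 3 synthesis.
CHAIRMAN_MODEL = "gpt-5.1"
```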
Running the Application
Option 1: Use the Start Script
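Assuming the repository ships a start script at the project root (the script name is an assumption), something like:

```bash
./start.sh
```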
Option 2: Run Manually
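A sketch of the likely manual commands, assuming a FastAPI app served by uvicorn and a Vite dev server; the module path, port, and directory names are assumptions:

```bash
# Terminal 1 (Backend)
cd backend
uvicorn main:app --reload --port 8000   # module path and port are assumptions

# Terminal 2 (Frontend)
cd frontend
npm run dev
```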
Run the backend in Terminal 1 and the frontend in Terminal 2, as sketched above.
Usage
- Create a New Conversation - Click “+ New Conversation” to start
- Ask Your Question - Type any question in the input box
- Review Stage 1 - Click through tabs to see each model’s individual response
- Examine Peer Rankings - Stage 2 shows how each model ranked the responses, with aggregate rankings calculated
- Read the Final Answer - Stage 3 presents the Chairman’s synthesized response
Architecture
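At a high level (a summary inferred from this guide, not an official diagram): the React + Vite frontend sends your question to the FastAPI backend, which calls each council model through the Unbound gateway with async httpx, runs the three stages described above, and stores conversations as JSON files under data/conversations/.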
Security Benefits
Using LLM Council with Unbound provides:
- Multi-Model Consensus: Reduce single-model hallucinations by cross-validating answers
- Request Monitoring: All AI requests are logged and monitored through Unbound
- Cost Controls: Set budgets and limits on API usage across all council members
- Compliance: Ensure AI interactions meet your organization’s standards
- Audit Trail: Complete visibility into which models were consulted and how they ranked each other
Tech Stack
- Backend: FastAPI (Python 3.10+), async httpx
- Frontend: React + Vite, react-markdown for rendering
- Storage: JSON files in data/conversations/
- Package Management: uv for Python, npm for JavaScript

