# DualAgentDebate

Execute `scripts/dual_agent_debate.py` to run the debate loop.
## Setup

Set the required environment variables:

```sh
export OPENBRAIN_MCP_URL="http://127.0.0.1:54321/mcp"
# optional if MCP is protected
export OPENBRAIN_MCP_TOKEN="..."
```

Optional: for direct OpenAI API mode, set:

```sh
export OPENAI_API_KEY="..."
```

If `OPENAI_API_KEY` is not set, the script uses the openclaw agent (OAuth-backed local setup) for debate responses.

Optional tool/model overrides (defaults shown):

```sh
export OPENBRAIN_CONTEXT_TOOL="search_docs"
export OPENBRAIN_SQL_TOOL="execute_sql"
export DEBATE_MODEL="gpt-4o-mini"
```
## Run

```sh
python3 skills/dual-agent-debate/scripts/dual_agent_debate.py \
  --query "Should I migrate this service to Supabase edge functions?"
```

Optional explicit thoughts:

```sh
python3 skills/dual-agent-debate/scripts/dual_agent_debate.py \
  --query "Should we launch this feature now?" \
  --thoughts "My concern is reliability and on-call burden." \
  --rounds 3 \
  --agreement-threshold 0.9
```
## Behavior

- Pull context from Open Brain MCP (`OPENBRAIN_CONTEXT_TOOL`).
- Pull related prior thoughts from `public.thoughts` via MCP `execute_sql`, unless `--thoughts` is provided.
- Ask ChatGPT to debate the query using the context and thoughts.
- Compute semantic similarity (OpenAI embeddings) between the ChatGPT reply and the thoughts.
- Repeat up to 3 rounds, or stop early when the agreement threshold is reached.
- Persist the full outcome into `public.memories` via MCP `execute_sql`.
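The agreement check in the loop above can be sketched as follows. This is a minimal illustration of cosine similarity over embedding vectors, not the script's actual implementation; the helper names are hypothetical, and in the real flow the vectors would come from the OpenAI embeddings API.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)


def agreed(reply_vec: list[float], thoughts_vec: list[float], threshold: float) -> bool:
    """Stop the debate loop early once the reply is close enough to the thoughts."""
    return cosine_similarity(reply_vec, thoughts_vec) >= threshold
```

With `--agreement-threshold 0.9`, the loop would call `agreed(reply_vec, thoughts_vec, 0.9)` after each round and stop as soon as it returns `True`.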
## Notes

- If your Open Brain MCP uses different tool names or argument schemas, set the tool env vars accordingly.
- The script uses the MCP `tools/call` JSON-RPC shape; point `OPENBRAIN_MCP_URL` at your MCP HTTP endpoint.
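For reference, a `tools/call` request envelope can be built like this. This is a sketch of the standard MCP JSON-RPC 2.0 shape, not code from the script; verify the argument schema (here assumed to be `{"query": ...}`) against your own MCP server.

```python
import json
import os


def build_tools_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }


# Example: query prior thoughts via the configured SQL tool.
payload = build_tools_call(
    os.environ.get("OPENBRAIN_SQL_TOOL", "execute_sql"),
    {"query": "SELECT content FROM public.thoughts LIMIT 5"},
)
body = json.dumps(payload)  # POST this to OPENBRAIN_MCP_URL
```

If `OPENBRAIN_MCP_TOKEN` is set, the script would also send it as a bearer token in the request headers.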