Hugging Face's Research Agent Pattern for Tool Transparency
TRIGGER
When LLMs with function-calling capabilities access external tools (web search, databases), users lose visibility into what information is being retrieved and which sources inform the response—the tool use happens invisibly within the model's reasoning.
APPROACH
Rather than giving discussion LLMs direct tool access, Consilium created a dedicated 'research agent' that appears as another visual participant at the roundtable. When any model needs external data, it triggers the research agent, which displays progress indicators, time estimates, and source attribution in its own speech bubble. The agent accesses Web Search, Wikipedia, arXiv, GitHub, and SEC EDGAR through a unified BaseTool interface with quality scoring (recency, authority, specificity, relevance).
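A minimal sketch of what such a unified interface could look like. This is an assumed reconstruction, not Consilium's actual code: the `ToolResult` fields, the 0.0-1.0 scale, and the equal weighting of the four quality dimensions are all illustrative choices; only the `BaseTool` name and the four dimensions come from the description above.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ToolResult:
    source: str          # e.g. "arXiv", "SEC EDGAR" (attribution shown to users)
    url: str
    snippet: str
    recency: float       # each quality dimension scored 0.0-1.0 (assumed scale)
    authority: float
    specificity: float
    relevance: float

    @property
    def quality(self) -> float:
        # Equal-weight average of the four dimensions (assumed weighting).
        return (self.recency + self.authority + self.specificity + self.relevance) / 4

class BaseTool(ABC):
    """Uniform interface so every backend (Web Search, Wikipedia, arXiv,
    GitHub, SEC EDGAR) is queried and scored the same way."""
    name: str

    @abstractmethod
    def search(self, query: str) -> list[ToolResult]:
        """Run the query and return scored, attributable results."""
```

Because every backend returns the same scored `ToolResult` shape, results from different sources can be ranked against each other and attributed uniformly in the agent's speech bubble.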
PATTERN
“Users can't distinguish hallucination from legitimate tool-retrieved information when function calls happen invisibly. A visible research agent shows what was searched, which sources responded, and how long it took. Opaque tool use becomes auditable.”
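The user-facing half of the pattern is just a rendering concern: show what was searched, which sources responded, and how long it took. A hypothetical formatter (the function name, emoji, and layout are invented for illustration):

```python
def render_research_bubble(query: str,
                           sources: list[tuple[str, int]],
                           elapsed_s: float) -> str:
    """Format a research agent 'speech bubble': the query, each source
    with its result count, and the elapsed time (assumed layout)."""
    lines = [f"🔍 Researched: {query} ({elapsed_s:.1f}s)"]
    for name, n_results in sources:
        lines.append(f"  • {name}: {n_results} result(s)")
    return "\n".join(lines)
```

Surfacing even this much metadata lets a user challenge the answer ("arXiv returned nothing, so where did that claim come from?") instead of taking invisible tool output on faith.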
✓ WORKS WHEN
- Users need to verify or challenge the sources informing AI reasoning
- Multiple LLMs need consistent access to the same external tools (agent provides uniform interface)
- Research operations take >2 seconds, long enough that progress feedback maintains engagement
- Domain requires source attribution (legal, medical, financial analysis)
- Tool results should influence multiple agents' reasoning (research agent broadcasts to all)
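The last point above, broadcasting results to every discussion agent rather than only the caller, can be sketched as an event-emitting research step. This is an assumed design: the class, `emit` callback, and subscriber list are hypothetical; only the behavior (progress messages, timing, broadcast) mirrors the description.

```python
import time

class ResearchAgent:
    """Visible participant: runs the tools, emits progress/attribution
    events for the UI, and broadcasts results to all discussion agents."""

    def __init__(self, tools, subscribers):
        self.tools = tools              # BaseTool-like objects with .name/.search()
        self.subscribers = subscribers  # one callback per discussion LLM

    def research(self, query, emit):
        start = time.monotonic()
        emit(f"Searching {len(self.tools)} source(s) for: {query!r}")
        results = []
        for tool in self.tools:
            emit(f"Querying {tool.name}…")       # per-source progress indicator
            results.extend(tool.search(query))
        # Rank by quality score (assumes each result carries a .quality float).
        results.sort(key=lambda r: r.quality, reverse=True)
        emit(f"Done in {time.monotonic() - start:.1f}s; "
             f"{len(results)} attributed result(s)")
        for notify in self.subscribers:          # broadcast, not just the caller
            notify(results)
        return results
```

Routing every notification through the subscriber list is what keeps all models reasoning from the same evidence; a model that privately called the tool itself would leave the others (and the user) blind to it.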
✗ FAILS WHEN
- Speed is critical and visual feedback overhead is unacceptable
- Tool calls are simple lookups that don't benefit from transparency theater
- Single-agent system where the user trusts the model's tool use
- Tools require model-specific authentication that can't be shared through an agent
- Research results are only relevant to the calling model, not the broader discussion