1 change: 1 addition & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -72,6 +72,7 @@ This repository contains a collection of tutorials, sample code, and guidelines
- [Composio Newsletter Summarizer Agent](/tutorials/07-agents/composio-newsletter-summarizer-agent): Build an agent that summarizes newsletters using Composio and Groq.
- [aiXplain Agents](/tutorials/07-agents/aiXplain-agents): Build intelligent agents using aiXplain's platform with Groq.
- [Minions with Groq](/tutorials/07-agents/minions-groq): Create lightweight agent workers (minions) for distributed tasks.
- [AG2 Multi-Agent Research](/tutorials/07-agents/ag2-multi-agent-research): Build a multi-agent research assistant using AG2 with Groq and live web search.

### 08. Integrations & Frameworks
- [Gradio with Groq](/tutorials/08-integrations/groq-gradio): Learn how to build a full-stack application with Gradio powered by Groq.
48 changes: 48 additions & 0 deletions tutorials/07-agents/ag2-multi-agent-research/README.md
@@ -0,0 +1,48 @@
## AG2 Multi-Agent Research Assistant with Groq

[AG2](https://ag2.ai) (formerly AutoGen) is an open-source framework for building multi-agent AI applications. This tutorial demonstrates two AG2 agents collaborating on a web research task using Groq's fast inference and DuckDuckGo search.

### Overview

A **Researcher** agent uses DuckDuckGo to search the web for information, while a **Reviewer** agent evaluates the research quality and asks follow-up questions. The two agents chat back and forth until the Reviewer is satisfied with the findings.

This showcases:
- AG2's `ConversableAgent` with Groq via `api_type='groq'`
- Built-in `DuckDuckGoSearchTool` for live web search (no API key needed)
- Multi-agent conversation using `initiate_chat`
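
The stop condition behind "until the Reviewer is satisfied" is a plain predicate over each incoming message. The notebook passes it inline as a lambda; extracted as a standalone function, it is just:

```python
def is_termination_msg(msg: dict) -> bool:
    # AG2 delivers each turn as a message dict; "content" can be None
    # (e.g. for pure tool-call messages), so guard before the substring test.
    return "TERMINATE" in (msg.get("content") or "")
```

The Reviewer's system prompt instructs it to reply with `TERMINATE` once the research is thorough, so attaching this predicate to the Researcher ends the chat at that point.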

### Setup

Create a virtual environment and install dependencies:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Set your Groq API key:

```bash
export GROQ_API_KEY=gsk_...
```
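
Before launching the notebook, you can sanity-check that the key is actually visible to Python. A minimal helper (this only checks presence, not validity — it makes no call to Groq):

```python
import os

def groq_key_present() -> bool:
    """Return True if GROQ_API_KEY is set and non-empty in this process."""
    return bool(os.environ.get("GROQ_API_KEY"))

if not groq_key_present():
    print("GROQ_API_KEY is not set -- run the export command above first.")
```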

### Run

Open the notebook:

```bash
jupyter notebook ag2-multi-agent-research.ipynb
```

### Requirements

- Python 3.10+
- A [Groq API key](https://console.groq.com/keys)
- No additional API keys required (DuckDuckGo search is free)

### References

- [AG2 Documentation](https://docs.ag2.ai)
- [Groq Console](https://console.groq.com)
- [AG2 Tool Registration](https://docs.ag2.ai/docs/tutorial/tool-use)
@@ -0,0 +1,113 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ah9uws246hs",
"source": "# AG2 Multi-Agent Research Assistant with Groq\n\n[AG2](https://ag2.ai) (formerly AutoGen) is an open-source framework for building multi-agent AI applications. In this tutorial, two agents collaborate on a research task — one searches the web using DuckDuckGo, the other reviews the findings and asks follow-up questions.\n\n**What you'll learn:**\n- How to create AG2 agents powered by Groq's fast inference\n- How to use AG2's built-in `DuckDuckGoSearchTool` for live web search\n- How to orchestrate a multi-agent conversation with `initiate_chat`",
"metadata": {}
},
{
"cell_type": "markdown",
"id": "zc2ksm9dfg9",
"source": "## Install Dependencies",
"metadata": {}
},
{
"cell_type": "code",
"id": "el2cx8gqqg",
"source": "!pip install -q \"ag2[groq,duckduckgo]>=0.11.0\"",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "12dzoe3xsold",
"source": "## Configure Groq\n\nSet your Groq API key. You can get one from [console.groq.com/keys](https://console.groq.com/keys).",
"metadata": {}
},
{
"cell_type": "code",
"id": "oe8n4m51zl8",
"source": "import os\n\nfrom autogen import ConversableAgent, LLMConfig\n\nllm_config = LLMConfig(\n {\n \"api_type\": \"groq\",\n # Llama 4 Scout handles tool calling reliably on Groq.\n # You can also try \"qwen/qwen3-32b\" or \"llama-3.3-70b-versatile\".\n \"model\": \"meta-llama/llama-4-scout-17b-16e-instruct\",\n \"api_key\": os.environ.get(\"GROQ_API_KEY\"),\n }\n)",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "acesbna1xye",
"source": "## Set Up Web Search\n\nAG2 includes a built-in `DuckDuckGoSearchTool` that wraps the [duckduckgo-search](https://pypi.org/project/duckduckgo-search/) package. It requires no API key and returns structured results (title, link, snippet).\n\nThe tool uses AG2's **registration pattern**: we register it with the Researcher agent for LLM use (so the Researcher decides when to search) and with the Reviewer agent for execution (so the Reviewer runs the actual search call).",
"metadata": {}
},
{
"cell_type": "code",
"id": "j2zdzr0uoq",
"source": "from autogen.tools.experimental import DuckDuckGoSearchTool\n\nsearch_tool = DuckDuckGoSearchTool()",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "jygx8trvqg",
"source": "## Create Agents\n\nWe create two agents that will collaborate:\n\n- **Researcher**: A research specialist that uses the DuckDuckGo search tool to gather information, then synthesizes findings into a structured summary.\n- **Reviewer**: A critical reviewer that evaluates research quality. It asks follow-up questions if the research is incomplete, and responds with `TERMINATE` when satisfied.",
"metadata": {}
},
{
"cell_type": "code",
"id": "k2mtb04auq",
"source": "researcher = ConversableAgent(\n name=\"Researcher\",\n system_message=(\n \"You are a research specialist. Your job is to use the duckduckgo_search \"\n \"tool to find information on the topic you are given. Search multiple times \"\n \"if needed to cover different angles. After gathering enough information, \"\n \"synthesize your findings into a clear, structured summary with key takeaways.\"\n ),\n llm_config=llm_config,\n is_termination_msg=lambda msg: \"TERMINATE\" in (msg.get(\"content\") or \"\"),\n human_input_mode=\"NEVER\",\n)\n\nreviewer = ConversableAgent(\n name=\"Reviewer\",\n system_message=(\n \"You are a critical research reviewer. Evaluate the research provided to you \"\n \"for completeness, accuracy, and depth. If the research is missing important \"\n \"perspectives or needs more detail, ask specific follow-up questions or request \"\n \"additional searches. When you are satisfied that the research is thorough and \"\n \"well-supported, respond with TERMINATE.\"\n ),\n llm_config=llm_config,\n human_input_mode=\"NEVER\",\n)\n\n# Researcher decides when to call the search tool\nsearch_tool.register_for_llm(researcher)\n\n# Reviewer executes the actual search\nsearch_tool.register_for_execution(reviewer)",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "54rux0y2we8",
"source": "## Run the Research Conversation\n\nThe Reviewer kicks off the conversation by sending a research request to the Researcher. The two agents then chat back and forth — the Researcher searches the web and synthesizes findings, while the Reviewer evaluates and asks follow-ups.\n\n`max_turns=6` prevents runaway conversations.",
"metadata": {}
},
{
"cell_type": "code",
"id": "qt4rkmt3o9l",
"source": "result = reviewer.initiate_chat(\n researcher,\n message=(\n \"Research the current state of AI inference hardware. \"\n \"Compare the approaches of at least two companies. \"\n \"Provide a summary with key findings.\"\n ),\n max_turns=6,\n)",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "65w91dajd93",
"source": "## Review the Results\n\nThe `ChatResult` object contains the full conversation history and a summary. Let's inspect what happened.",
"metadata": {}
},
{
"cell_type": "code",
"id": "w0w1fss75e",
"source": "print(\"=== Conversation Summary ===\\n\")\nprint(result.summary)\n\nprint(\"\\n\\n=== Full Chat History ===\\n\")\nfor msg in result.chat_history:\n speaker = msg.get(\"name\", msg[\"role\"])\n content = msg.get(\"content\", \"\")\n if content:\n print(f\"--- {speaker} ---\")\n print(content[:500])\n if len(content) > 500:\n print(\"... (truncated)\")\n print()",
"metadata": {},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"id": "4rpt67q30vv",
"source": "## Learn More\n\n- [AG2 Documentation](https://docs.ag2.ai) — guides, API reference, and examples\n- [AG2 Tool Use Tutorial](https://docs.ag2.ai/docs/tutorial/tool-use) — more on registering tools with agents\n- [Groq API Documentation](https://console.groq.com/docs) — models, rate limits, and API reference",
"metadata": {}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
2 changes: 2 additions & 0 deletions tutorials/07-agents/ag2-multi-agent-research/requirements.txt
@@ -0,0 +1,2 @@
ag2[groq,duckduckgo]>=0.11.0
jupyter