
# Beyond Tools: Resources and Prompts

So far in this workshop, we've focused on Tools — the most common MCP capability. But MCP servers can expose two more: Resources and Prompts. Together, these three capabilities cover the full range of how an AI app interacts with external systems.


## Why Not Just Tools for Everything?

Tools are great for actions: searching, fetching, creating. But not everything is an action:

- Sometimes the AI needs background context before making decisions (e.g., "what topics have been searched before?"). You don't want the LLM calling a tool for this — the app should just read it.
- Sometimes you want to give users pre-built workflows (e.g., "summarize this topic for a beginner"). These are recipes, not tools.

That's where Resources and Prompts come in.


## Resources: Read-Only Data

Resources are data the server exposes for reading — similar to GET endpoints in a REST API. The application decides when to read them (not the LLM).

### When to Use Resources vs Tools

| Scenario | Use |
| --- | --- |
| Fetch data based on user input | Tool (LLM decides what to search) |
| Provide background context | Resource (app reads it proactively) |
| Data changes rarely | Resource |
| Action with side effects | Tool |
| Read-only, no parameters needed | Resource |

### How Resources Work

Each resource has a URI (like `wiki://topics` or `wiki://python`) and returns content when read. Resources can be static (always the same URI) or templated (the URI has a variable part).

### Example from Our Wikipedia Server

Open `servers/wikipedia-server-full-stdio.py` and look at the resource definitions:

```python
@mcp.resource("wiki://topics")
def get_available_topics() -> str:
    """List all available topic folders."""
    # Reads the wiki_articles directory
    # Returns a markdown list of topics
    ...

@mcp.resource("wiki://{topic}")
def get_topic_articles(topic: str) -> str:
    """Get articles for a specific topic."""
    # Reads cached article data
    # Returns formatted markdown
    ...
```

The first resource (`wiki://topics`) is static — it always has the same URI. The second (`wiki://{topic}`) is templated — the `{topic}` part is a variable.
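To build intuition for templated URIs, here is a simplified sketch of the matching a framework like FastMCP performs for you. The function and its regex-based approach are illustrative, not the SDK's actual implementation; it also assumes template variables contain no `/`:

```python
import re
from typing import Optional

def match_template(template: str, uri: str) -> Optional[dict]:
    """Return the template variables if `uri` matches `template`, else None.

    A {name} placeholder becomes a named regex group matching one path segment.
    Static templates (no placeholders) simply match themselves.
    """
    pattern = "^" + re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template) + "$"
    m = re.match(pattern, uri)
    return m.groupdict() if m else None

# match_template("wiki://{topic}", "wiki://python") -> {"topic": "python"}
# match_template("wiki://{topic}", "files://python") -> None
```

Note that `wiki://topics` would also match the `wiki://{topic}` template, so a real router checks static URIs before templated ones.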

### How a Client Reads Resources

```python
# List what resources are available
resources = await session.list_resources()
# Returns: [wiki://topics, ...]

# Read a specific resource
content = await session.read_resource("wiki://topics")
# Returns: markdown list of all topics

# Read a templated resource
content = await session.read_resource("wiki://python")
# Returns: articles about Python
```

### When Resources Shine

Imagine your DevTools Assistant starts up. Before the user even asks a question, the app reads `wiki://topics` to know what data is already cached. Now when the user asks "What do we know about Python?", the app already has context — it can answer immediately from the resource instead of calling a tool.


## Prompts: Pre-Built Workflows

Prompts are templates that server authors provide for common tasks. The user (not the LLM, not the app) selects which prompt to use.

### When to Use Prompts

- Your server has domain expertise that maps to specific workflows
- You want to guide users toward effective ways to use the server's tools
- You want to share "recipes" that combine multiple tool calls

### Example from Our Wikipedia Server

```python
@mcp.prompt()
def generate_wiki_game_prompt(topic: str, difficulty: str = "medium") -> str:
    """Generate a prompt for creating a word game from Wikipedia articles."""
    return f"""Create an interactive "Hidden Word Wiki" game about '{topic}'.

    1. Use search_articles(topic='{topic}', max_results=3)
    2. Use get_article_content() for the most relevant result
    3. Extract 5-8 key terms (difficulty: {difficulty})
    4. Build an interactive game...
    """
```

This prompt doesn't just ask a question — it orchestrates a multi-step workflow using the server's tools. The server author knows the best way to combine their tools, and packages that knowledge as a prompt.

### How a Client Uses Prompts

```python
# List available prompts
prompts = await session.list_prompts()
# Returns: [generate_wiki_game_prompt, ...]

# Get a specific prompt with arguments
prompt = await session.get_prompt(
    "generate_wiki_game_prompt",
    arguments={"topic": "Python", "difficulty": "hard"}
)
# Returns: the full prompt text, ready to send to the LLM
```

## Building a Full Server: Tools + Resources + Prompts

Let's trace through the full Wikipedia server to see all three capabilities together.

### Step 1: Start with Tools (what you already know)

```python
from typing import List

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Wikipedia MCP")

@mcp.tool()
def search_articles(topic: str, max_results: int = 5) -> List[str]:
    """Search Wikipedia for articles on a topic."""
    ...

@mcp.tool()
def get_article_content(article_title: str) -> str:
    """Get the full content of a Wikipedia article."""
    ...
```

### Step 2: Add Resources (background context)

```python
import os

# WIKI_DIR is the article cache directory, defined near the top of the server

@mcp.resource("wiki://topics")
def get_available_topics() -> str:
    """List all cached topics — no search needed."""
    topics = []
    if os.path.exists(WIKI_DIR):
        for topic_dir in os.listdir(WIKI_DIR):
            if os.path.isdir(os.path.join(WIKI_DIR, topic_dir)):
                topics.append(topic_dir)
    return "\n".join(topics) if topics else "No topics cached yet."

@mcp.resource("wiki://{topic}")
def get_topic_articles(topic: str) -> str:
    """Read cached articles for a specific topic."""
    # Read from local JSON files — no API calls
    ...
```
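One possible shape for the elided `get_topic_articles` body is sketched below. This is an assumption about the cache layout, not the server's actual code: it supposes each article is stored as a JSON file with `title` and `summary` keys under `<cache dir>/<topic>/`:

```python
import json
import os

def read_topic_articles(topic: str, base_dir: str = "wiki_articles") -> str:
    """Return cached articles for `topic` as markdown; no network calls."""
    topic_dir = os.path.join(base_dir, topic)
    if not os.path.isdir(topic_dir):
        return f"No cached articles for '{topic}'."
    sections = []
    for name in sorted(os.listdir(topic_dir)):
        if name.endswith(".json"):
            with open(os.path.join(topic_dir, name)) as f:
                article = json.load(f)
            # Render each cached article as a markdown section
            sections.append(
                f"## {article.get('title', name)}\n\n{article.get('summary', '')}"
            )
    return "\n\n".join(sections) if sections else f"No cached articles for '{topic}'."
```

Inside the server you would call this from the decorated resource function, keeping the handler itself a thin wrapper.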

### Step 3: Add Prompts (expert workflows)

```python
@mcp.prompt()
def generate_wiki_game_prompt(topic: str, difficulty: str = "medium") -> str:
    """Create a word game from Wikipedia content."""
    return f"""Use search_articles and get_article_content to build
    a "{difficulty}" difficulty word game about "{topic}"..."""
```

### Step 4: Run it

```python
if __name__ == "__main__":
    mcp.run(transport='stdio')
```

That's it. The complete server in `servers/wikipedia-server-full-stdio.py` follows exactly this pattern. The "full" variants (`*-full-stdio.py`, `*-full-sse.py`, `*-full-http.py`) are identical in logic — they differ only in the transport line.


## Hands-On: Try It Yourself

1. Start the full Wikipedia server:

   ```sh
   make run-server-wiki-full-stdio
   ```

2. In another terminal, test it with the MCP Inspector or connect step 07's client to it.

3. Try adding your own resource or prompt to the server from step 06 (`06-build-mcp-server.py`):

   - Add a resource that lists all cached search results
   - Add a prompt that generates a "compare two articles" workflow
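As a starting point for the second exercise, here is one possible shape for a "compare two articles" prompt. The function name and the exact steps are suggestions, not part of the workshop code; shown as a plain function, you would register it on your server with `@mcp.prompt()`:

```python
def compare_articles_prompt(title_a: str, title_b: str) -> str:
    """A prompt template that walks the LLM through a two-article comparison."""
    return f"""Compare the Wikipedia articles "{title_a}" and "{title_b}".

1. Call get_article_content(article_title='{title_a}')
2. Call get_article_content(article_title='{title_b}')
3. Summarize each article in 2-3 sentences
4. List three key similarities and three key differences
"""
```

Like `generate_wiki_game_prompt`, it encodes a multi-step workflow over the server's existing tools rather than asking a single question.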

## Summary

| Capability | Decorator | Who controls it | Purpose |
| --- | --- | --- | --- |
| Tools | `@mcp.tool()` | LLM | Actions (search, fetch, create) |
| Resources | `@mcp.resource("uri")` | Application | Read-only context data |
| Prompts | `@mcp.prompt()` | User | Pre-built workflows |

All three use the same FastMCP server. All three are auto-discovered by MCP clients. And all three work across any transport protocol.


## Next: Multi-Server Architecture

In step 10, we'll connect to multiple MCP servers simultaneously. You'll see how adding a new data source is just one line in a config file — the client auto-discovers all tools from all servers.