
Eric Fitzgerald edited this page Apr 8, 2026 · 4 revisions

MCP Integration

This page documents TMI's planned integration with the Model Context Protocol (MCP).

Coming Soon

MCP integration is planned for a future release of TMI. This integration will enable AI assistants and LLM-powered tools to interact with TMI threat models through a standardized protocol. Whether TMI should also support invoking external MCP servers is under consideration.

Implementation Status

Current Status: Not yet implemented

Planned Timeline: TBD

Tracking:

Learn More About MCP

The Model Context Protocol is an open standard for connecting AI assistants to data sources and tools.
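To make this concrete: MCP messages are framed as JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. The sketch below builds such a request for a hypothetical TMI tool named `list_threats`; the tool name and its arguments are illustrative only, not a committed TMI API.

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (MCP messages use JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool_name,
            "arguments": arguments,
        },
    }

# Hypothetical tool a future TMI MCP server might expose
request = build_tool_call(1, "list_threats", {"threat_model_id": "tm-123"})
print(json.dumps(request, indent=2))
```

An AI assistant acting as an MCP client would send a message like this over stdio or HTTP to the TMI MCP server, which would translate it into the corresponding TMI API call.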

Resources:

Get Involved

Interested in MCP integration for TMI?

  1. Share your use case: Open a GitHub discussion describing how you would use MCP with TMI, or add your comments to the issues above
  2. Contribute: Help design and implement the MCP integration
  3. Stay informed: Watch the TMI repository for updates

GitHub Discussions:

Alternative Integrations

While MCP integration is pending, you can integrate AI tools with TMI using:

  1. REST API: Use TMI's REST API to build custom AI integrations (example: TMI-TF Terraform analyzer)

  2. Webhooks: Receive real-time event notifications from TMI to trigger AI processing

  3. Addons: Build AI-powered addons using the addon system
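As a sketch of option 2, here is a minimal, stdlib-only webhook handler. It assumes TMI delivers events as JSON POST bodies signed with an HMAC-SHA256 hex digest carried in a signature header; the header name, signing scheme, and payload shape below are assumptions to verify against your TMI deployment.

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = b'your-webhook-secret'  # shared secret configured in TMI (assumed)

def verify_signature(body: bytes, signature: str) -> bool:
    """Check the HMAC-SHA256 hex digest TMI is assumed to send with each delivery."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_webhook(body: bytes, signature: str):
    """Verify and decode one webhook delivery; returns the event dict or None."""
    if not verify_signature(body, signature):
        return None  # reject unsigned or tampered deliveries
    event = json.loads(body)
    # Hand off to your AI pipeline here, e.g. re-analyze the affected threat model
    return event

# Example delivery (event names and fields are illustrative)
payload = json.dumps({"event": "threat.created", "threat_model_id": "tm-123"}).encode()
sig = hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
event = handle_webhook(payload, sig)
```

Wire `handle_webhook` into whatever HTTP server you already run; the verification step matters because the endpoint is reachable from the network.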

Example: AI Integration via API

While waiting for MCP support, here's how you can integrate AI tools today:

import requests
import openai

# Configuration
TMI_API = 'https://your-tmi-server'
TMI_TOKEN = 'your-tmi-token'
OPENAI_KEY = 'your-openai-key'

def analyze_threat_model_with_ai(threat_model_id):
    """Use AI to analyze a threat model"""

    # Fetch threat model from TMI
    response = requests.get(
        f'{TMI_API}/threat_models/{threat_model_id}',
        headers={'Authorization': f'Bearer {TMI_TOKEN}'},
        timeout=30
    )
    response.raise_for_status()  # fail fast on auth or lookup errors
    threat_model = response.json()

    # Fetch associated threats (nested under the threat model)
    response = requests.get(
        f'{TMI_API}/threat_models/{threat_model_id}/threats',
        headers={'Authorization': f'Bearer {TMI_TOKEN}'},
        timeout=30
    )
    response.raise_for_status()
    threats = response.json()

    # Format for AI analysis
    context = f"""
    Threat Model: {threat_model['name']}
    Description: {threat_model.get('description', 'N/A')}

    Identified Threats:
    {format_threats(threats)}

    Please analyze this threat model and suggest:
    1. Any missing threats
    2. Improvements to existing threats
    3. Prioritization recommendations
    """

    # Call OpenAI
    client = openai.OpenAI(api_key=OPENAI_KEY)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a security expert analyzing threat models."},
            {"role": "user", "content": context}
        ]
    )

    analysis = response.choices[0].message.content
    return analysis

def format_threats(threats):
    """Format threats for AI analysis"""
    return '\n'.join([
        f"- {t['name']} (Severity: {t.get('severity', 'N/A')})"
        for t in threats
    ])

This approach works today and provides AI-assisted threat modeling capabilities while we work on native MCP integration.

Related Documentation
