mcp-integration

Integrate AI agents with business data via the Model Context Protocol. Query ads, analytics, and CRM data through normalized interfaces. Use when connecting agents to business systems, enabling data access, or building MCP servers. Triggers on "MCP", "Model Context Protocol", "business data", "agent integration", "Claude MCP".

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, send this command to your AI assistant:

npx skills add engsathiago/mcp-business-integration

MCP Integration

Model Context Protocol (MCP) connects AI agents to real business data through normalized interfaces.

What is MCP?

Model Context Protocol is Anthropic's open standard for connecting AI models to external data sources and tools. It provides a unified way for agents to:

  • Query databases and APIs
  • Access files and resources
  • Execute tools and functions
  • Maintain context across sessions

Why MCP Matters

Before MCP:

  • Each integration = custom code
  • Different APIs = different patterns
  • Context lost between tools
  • Security = ad-hoc per integration

With MCP:

  • One protocol, many integrations
  • Standard patterns for all sources
  • Persistent context
  • Built-in security model

MCP Architecture

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Client    │────▶│   Server    │────▶│  Resource   │
│  (Agent)    │     │   (MCP)     │     │  (Data)     │
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                    ┌──────┴──────┐
                    │   Tools     │
                    │  Prompts    │
                    │  Resources  │
                    └─────────────┘
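On the wire, client and server exchange JSON-RPC 2.0 messages. A sketch of a tools/call exchange (the envelope fields follow the MCP spec; the tool name and arguments are illustrative):

```python
import json

# JSON-RPC 2.0 request the client sends to invoke a tool
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_users",                        # illustrative tool name
        "arguments": {"filters": {"active": True}},   # illustrative arguments
    },
}

# Shape of the server's reply: content blocks plus an error flag
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '[{"id": 1}]'}],
        "isError": False,
    },
}

wire = json.dumps(request)  # what actually crosses the transport
```

The same framing carries resources/read and tools/list; only `method` and `params` change.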

Components

1. MCP Server

  • Exposes resources and tools
  • Handles authentication
  • Manages connections

2. MCP Client

  • Connects to servers
  • Discovers capabilities
  • Executes operations

3. Resources

  • Files, databases, APIs
  • Read/write operations
  • Subscriptions for updates

4. Tools

  • Executable functions
  • Input/output schemas
  • Side effects

5. Prompts

  • Reusable prompt templates
  • Parameterized
  • Composable

Integration Types

1. Database Integration

# MCP Server for PostgreSQL (using the Python SDK's FastMCP helper;
# `db` is a placeholder for your database client)
from mcp.server.fastmcp import FastMCP

server = FastMCP("postgres-integration")

@server.resource("postgres://users")
async def get_users():
    # Query users from database
    return await db.query("SELECT * FROM users")

@server.tool("query_users")
async def query_users(filters: dict):
    # Execute parameterized query
    return await db.query_with_filters(filters)

2. API Integration

# MCP Server for a REST API (httpx's module-level helpers are sync;
# use AsyncClient for awaitable calls)
import httpx

@server.resource("api://customers")
async def get_customers():
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.example.com/customers")
    return response.json()

@server.tool("create_customer")
async def create_customer(data: dict):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://api.example.com/customers",
            json=data,
        )
    return response.json()

3. File System Integration

# MCP Server for file access
@server.resource("file://documents/{path}")
async def read_document(path: str):
    with open(f"documents/{path}") as f:
        return f.read()

@server.tool("write_document")
async def write_document(path: str, content: str):
    with open(f"documents/{path}", "w") as f:
        f.write(content)
    return {"status": "written"}

Business Data Integration

Ads Data

# Google Ads MCP
@server.resource("ads://campaigns")
async def get_campaigns():
    """Get all ad campaigns with metrics"""
    campaigns = await ads_client.get_campaigns()
    return normalize_campaigns(campaigns)

@server.tool("optimize_budget")
async def optimize_budget(campaign_id: str):
    """Automatically adjust campaign budget"""
    # Analyze performance
    # Adjust spend allocation
    # Return optimization results

Analytics Data

# Analytics MCP
@server.resource("analytics://metrics")
async def get_metrics():
    """Get normalized metrics across platforms"""
    return {
        "google_analytics": await ga.get_metrics(),
        "mixpanel": await mixpanel.get_events(),
        "custom_events": await custom.get_events()
    }

@server.tool("query_analytics")
async def query_analytics(query: str):
    """Natural language analytics query"""
    # Parse query
    # Execute across platforms
    # Return unified results

CRM Data

# Salesforce MCP
@server.resource("crm://leads")
async def get_leads():
    """Get leads from CRM"""
    return await salesforce.query("SELECT Id, Name, Email FROM Lead")

@server.tool("create_lead")
async def create_lead(data: dict):
    """Create new lead in CRM"""
    lead = await salesforce.create("Lead", data)
    return lead

Best Practices

1. Normalization

# Normalize data from different sources
def normalize_campaign(data, source):
    def first(*keys):
        # First key with a non-None value; avoids `or`, which would
        # treat legitimate zeros (e.g. spend == 0) as missing
        for key in keys:
            if data.get(key) is not None:
                return data[key]
        return None

    return {
        "id": first("id", "campaign_id"),
        "name": first("name", "campaign_name"),
        "spend": first("spend", "cost"),
        "impressions": first("impressions", "views"),
        "clicks": first("clicks", "clicks_count"),
        "source": source,
    }

2. Error Handling

@server.tool("risky_operation")
async def risky_operation(data: dict):
    try:
        result = await external_api.call(data)
        return {"success": True, "data": result}
    except APIError as e:
        return {
            "success": False,
            "error": str(e),
            "suggestion": "Try again with valid parameters"
        }

3. Caching

from datetime import datetime, timedelta

cache = {}

@server.resource("api://expensive-data")
async def get_expensive_data():
    cache_key = "expensive-data"
    cached = cache.get(cache_key)
    
    if cached and cached["expires"] > datetime.now():
        return cached["data"]
    
    # Fetch fresh data
    data = await expensive_api_call()
    cache[cache_key] = {
        "data": data,
        "expires": datetime.now() + timedelta(hours=1)
    }
    return data

4. Security

# Validate inputs
from pydantic import BaseModel

class QueryInput(BaseModel):
    table: str
    filters: dict
    limit: int = 100

@server.tool("safe_query")
async def safe_query(input: QueryInput):
    # Input is validated by Pydantic
    # SQL injection prevented
    return await db.query(input.table, input.filters, input.limit)
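Because a table name is an identifier rather than a value, it cannot be bound as a query parameter; check it against an explicit allow-list before it reaches SQL (a sketch; the table set is illustrative):

```python
ALLOWED_TABLES = {"users", "campaigns", "leads"}  # illustrative set

def checked_table(table: str) -> str:
    """Return the table name only if it is explicitly allowed."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"table not allowed: {table!r}")
    return table
```

safe_query would then pass checked_table(input.table) to the database layer.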

Claude Desktop Integration

Register the server in claude_desktop_config.json (on macOS under ~/Library/Application Support/Claude/, on Windows under %APPDATA%\Claude\):

// claude_desktop_config.json
{
  "mcpServers": {
    "business-data": {
      "command": "python",
      "args": ["mcp_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://...",
        "API_KEY": "..."
      }
    }
  }
}

Common MCP Servers

Official Servers

Server        Description
filesystem    File system access
postgres      PostgreSQL database
sqlite        SQLite database
github        GitHub API
google-drive  Google Drive
slack         Slack API

Custom Servers

Create custom servers for:

  • Internal APIs
  • Proprietary databases
  • Custom tools
  • Business-specific operations

Debugging

Server Logs

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("mcp_server")

@server.tool("debug_operation")
async def debug_operation(data: dict):
    logger.debug(f"Input: {data}")
    result = await process(data)
    logger.debug(f"Output: {result}")
    return result

Connection Issues

# Launch and exercise a server interactively with the official MCP Inspector
npx @modelcontextprotocol/inspector python mcp_server.py

Examples

Query Multiple Data Sources

@server.tool("cross_platform_query")
async def cross_platform_query(query: str):
    """Query across multiple platforms"""
    results = {}
    
    # Query each platform
    results["analytics"] = await analytics.query(query)
    results["crm"] = await crm.query(query)
    results["ads"] = await ads.query(query)
    
    # Merge results
    return merge_results(results)
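merge_results is left undefined above; a minimal sketch that flattens the per-platform results and tags each record with its origin (our helper, not an MCP API):

```python
def merge_results(results: dict) -> list:
    """Flatten {platform: [records]} into one list, tagging each record."""
    merged = []
    for platform, records in results.items():
        for record in records:
            merged.append({**record, "platform": platform})
    return merged
```

Tagging the source keeps records distinguishable after the merge, which matters when platforms report overlapping entities.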

Automated Insights

@server.tool("generate_insights")
async def generate_insights(data_source: str):
    """Generate insights from business data"""
    # Get data
    data = await get_data(data_source)
    
    # Analyze
    insights = []
    
    # Trend analysis
    if data["trend"] == "increasing":
        insights.append("Revenue trending up - consider scaling")
    
    # Anomaly detection
    if data["anomaly"]:
        insights.append(f"Anomaly detected: {data['anomaly']}")
    
    return {"insights": insights, "data": data}

