MCP Setup Guide

The Euno AI Assistant can be integrated with your preferred AI coding assistant (Claude Desktop, Cursor, VSCode Copilot, etc.) using the Model Context Protocol (MCP). This allows you to query your data model directly from within your development environment.

Prerequisites

Before setting up the MCP integration, you'll need:

  1. API Key: Contact your Euno administrator to obtain an API key for MCP access

  2. Account ID: Your Euno account identifier (available in your Euno account settings)

Configuration by Platform

Claude Desktop (Pro, Max, Teams and Enterprise)

  1. Open Claude Desktop Settings

  2. Navigate to Connectors

  3. Click "Add custom connector" at the bottom of the screen

  4. Give the tool a name (e.g. "euno"). In the URL field, enter: `https://api.app.euno.ai/mcp?account_id=<ACCOUNT_ID>&api_key=<API_KEY>`. Replace `<ACCOUNT_ID>` and `<API_KEY>` with your specific details.

  5. Restart Claude. You will need to go through the authentication flow the first time you use the tool.
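The connector URL in step 4 embeds both credentials as query parameters. A minimal Python sketch that assembles and percent-encodes such a URL (the helper name and sample values are illustrative, not part of the Euno product):

```python
from urllib.parse import urlencode

# Hypothetical helper: builds the custom-connector URL from the two
# credentials listed under Prerequisites. urlencode percent-escapes any
# special characters in the account ID or API key.
def build_connector_url(account_id: str, api_key: str) -> str:
    base = "https://api.app.euno.ai/mcp"
    return f"{base}?{urlencode({'account_id': account_id, 'api_key': api_key})}"

# Sample placeholder values, not real credentials
url = build_connector_url("acct-123", "sk-example")
print(url)
```

Percent-encoding matters here because API keys can contain characters (such as `+` or `/`) that would otherwise corrupt the query string.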

Cursor

  1. Access MCP Settings

    • Open Cursor settings

    • Navigate to Tools & Integrations section

    • Click "New MCP server"

  2. Configure the MCP server. In the `mcp.json` file that opens, add:

    {
      "mcpServers": {
        "euno-assistant": {
          "type": "http",
          "url": "https://api.app.euno.ai/mcp",
          "headers": {
            "x-api-key": "<API_KEY>",
            "x-account-id": "<ACCOUNT_ID>"
          }
        }
      }
    }

    Make sure to replace <ACCOUNT_ID> and <API_KEY> with your specific details.

  3. Restart Cursor
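If you prefer to generate the configuration rather than edit it by hand, a minimal Python sketch that produces the same structure as valid JSON (credentials are left as placeholders, and the result is printed rather than written to Cursor's actual config path):

```python
import json

# Same structure as the mcp.json entry above; placeholder credentials.
config = {
    "mcpServers": {
        "euno-assistant": {
            "type": "http",
            "url": "https://api.app.euno.ai/mcp",
            "headers": {
                "x-api-key": "<API_KEY>",
                "x-account-id": "<ACCOUNT_ID>",
            },
        }
    }
}

# json.dumps guarantees the output is syntactically valid JSON,
# avoiding the missing-brace and unquoted-value mistakes that are
# easy to make when editing the file manually.
text = json.dumps(config, indent=2)
print(text)
```

Paste the output into `mcp.json`, substituting your real account ID and API key.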

Available Tools

| Tool | Description |
| --- | --- |
| `search_data_pipeline_resources` | Search for resources in the data pipeline, including databases, tables, schemas, data sources, dashboards, or transformations. Uses intelligent fetching based on properties, relationships, usage, or other metadata with exact or semantic matching. |
| `ask_data_pipeline` | Ask any question regarding the data pipeline, existing resources, data sources, and transformations across all layers of the data stack. Provides comprehensive information about the entire data infrastructure and can help with understanding data flow, resource dependencies, and transformation logic. |
| `sql_planner` | Plan a SQL query based on a user request. Returns an overview of existing SQL resources and recommendations. Analyzes the request, searches for relevant existing SQL logic, and provides a comprehensive plan for building the required query. |
| `run_impact_analysis` | Generate an impact analysis report for any base column or table. Returns a list of downstream resources that will be affected if the column/table is renamed. |
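Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests. A sketch of what a call to `run_impact_analysis` might look like on the wire; the argument names (`table`, `column`) and values are assumptions, since this guide does not show the tool's input schema:

```python
import json

# JSON-RPC 2.0 request shape used by MCP's "tools/call" method.
# The argument names ("table", "column") are hypothetical; a client
# should consult the tool's advertised input schema (via "tools/list")
# for the real parameter names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_impact_analysis",
        "arguments": {"table": "analytics.orders", "column": "order_total"},
    },
}

print(json.dumps(request, indent=2))
```

Your AI assistant constructs these requests for you; the example is only to show what a tool invocation looks like at the protocol level.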
