# dbt Source Project

Euno's dbt Source Project integration supports auto-discovery of [dbt resources](https://docs.euno.ai/sources/transformation-etl/dbt-core/dbt-integration-discovered-resources). Unlike **dbt Cloud** (where Euno fetches artifacts from the dbt Cloud API) or **dbt Core** (where you upload pre-built artifacts), the dbt Source Project integration **builds your dbt project inside Euno's infrastructure** and generates the JSON artifacts for you. You do not need to run dbt yourself or upload any files.

## How It Differs from Other dbt Integrations

| Integration            | How artifacts are obtained                                                                                                                                                   |
| ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **dbt Cloud**          | Euno fetches artifacts from the dbt Cloud API after jobs run in dbt Cloud.                                                                                                   |
| **dbt Core**           | You upload pre-built artifacts (manifest, catalog, run\_results, etc.) to Euno.                                                                                              |
| **dbt Source Project** | Euno clones your git repository, runs the dbt project in a secure, managed environment, and generates the JSON artifacts; then Euno processes them the same way as dbt Core. |

## How It Works

1. **Clone the repository** – Euno clones your dbt project from GitHub or GitLab using the deploy key you provide (and optional branch or subdirectory).
2. **Build in Euno's environment** – Euno prepares the project and runs `dbt compile` (including dependency resolution) in a managed container, using the warehouse target you configured (e.g., BigQuery project and dataset). A rough local equivalent is sketched after this list.
3. **Collect artifacts** – Euno downloads the generated artifacts (`manifest.json`, `catalog.json`, `run_results.json`, and `semantic_manifest.json` when present).
4. **Process the artifacts** – Euno processes these artifacts and adds the same [discovered dbt resources](https://docs.euno.ai/sources/transformation-etl/dbt-core/dbt-integration-discovered-resources) to the data model as with dbt Core.
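
A run is roughly equivalent to the following local workflow (a minimal sketch; the repository URL and paths are placeholders, and Euno performs these steps inside its own managed environment). Note that in dbt itself, `catalog.json` is produced by `dbt docs generate` rather than `dbt compile`:

```bash
# 1. Clone the repository (Euno authenticates with the deploy key you provide)
git clone git@github.com:your-org/your-dbt-repo.git
cd your-dbt-repo   # or the configured subdirectory

# 2. Resolve packages and compile against the configured warehouse target
dbt deps
dbt compile        # writes target/manifest.json and target/run_results.json

# 3. Generate the catalog (requires metadata access to the warehouse)
dbt docs generate  # writes target/catalog.json
```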

## Prerequisites

* A **git repository** (HTTPS URL) containing your dbt project. GitHub and GitLab are supported.
* An **SSH deploy key** with read access to the repository. Euno uses the deploy key to clone the repository (see the key-generation sketch after this list).
* **Warehouse credentials**: The credentials and target details required depend on the **target environment** you select. Currently, **BigQuery** is the only supported target environment. See the configuration section below for details.
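
For example, you can generate a dedicated key pair with `ssh-keygen` (a minimal sketch; the file name and comment are placeholders). The private key goes into the integration form, and the public key is added to the repository as a read-only deploy key:

```bash
# Generate an ed25519 key pair with no passphrase; "euno_deploy_key" is a placeholder name
ssh-keygen -t ed25519 -C "euno-dbt-source-project" -f euno_deploy_key -N ""

# euno_deploy_key      -> paste into the "Deploy key" field in Euno
# euno_deploy_key.pub  -> add to the repository as a deploy key with read access
```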

## Setting Up Euno's dbt Source Project Integration

### Step 1: Access the Sources Page

1. Navigate to the **Sources** page in the Euno application.
2. Click on the **Add New Source** button.
3. Select **dbt Source Project**.

### Step 2: General Configuration

An asterisk (\*) indicates a required field.

**Common settings** (all target environments):

<table><thead><tr><th width="221">Configuration</th><th>Description</th></tr></thead><tbody><tr><td>Name*</td><td>Enter a name for your dbt Source Project source (e.g., "dbt - Marketing Models").</td></tr><tr><td>Repository URL*</td><td>The HTTPS URL of the git repository containing the dbt project (e.g., https://github.com/your-org/your-dbt-repo).</td></tr><tr><td>Repository subdirectory</td><td>Subdirectory within the repository where the dbt project lives. Defaults to <code>/</code> (repository root).</td></tr><tr><td>Repository branch</td><td>Git branch to use. If not specified, the repository's default branch is used.</td></tr><tr><td>Deploy key*</td><td>SSH private key for repository access. Euno uses this to clone the repository. Ensure the corresponding public key is added to the repo as a deploy key with read access. See GitHub's <a href="https://docs.github.com/en/authentication/connecting-to-github-with-ssh/managing-deploy-keys">Managing deploy keys</a> for how to add a deploy key to a GitHub repository.</td></tr><tr><td>Target environment*</td><td>The warehouse to use for dbt compilation. Select your warehouse below and configure the required fields for that target.</td></tr></tbody></table>

**Target environment – warehouse-specific configuration**

Currently, **BigQuery** is the only supported target environment. Select **BigQuery** in the integration form and configure the required fields below.

<details>

<summary>BigQuery</summary>

When **BigQuery** is selected as the target environment, configure:

<table><thead><tr><th width="221">Configuration</th><th>Description</th></tr></thead><tbody><tr><td>Service account JSON*</td><td>GCP service account JSON key. The service account must have access to the BigQuery project and dataset used for compilation. <strong>Minimum privileges:</strong> <strong>BigQuery Job User</strong> (to run compile jobs) and <strong>BigQuery Data Viewer</strong> or <strong>BigQuery Data Editor</strong> on the target project/dataset so dbt can read metadata and generate the catalog. See <a href="https://docs.cloud.google.com/iam/docs/keys-create-delete">Create and delete service account keys</a> and <a href="https://docs.cloud.google.com/bigquery/docs/use-service-accounts">BigQuery: Use service accounts</a> for how to create and configure the key.</td></tr><tr><td>Target GCP project*</td><td>The GCP project ID containing the BigQuery dataset.</td></tr><tr><td>Target BigQuery dataset*</td><td>The BigQuery dataset used for dbt compile (e.g., for metadata/catalog generation).</td></tr></tbody></table>

</details>

#### Future support

The following warehouse targets are not yet available but are planned for a future release.

<details>

<summary>Snowflake (planned)</summary>

Support for **Snowflake** as a build target is planned. Configuration and required fields will be documented here when available.

</details>

<details>

<summary>Databricks (planned)</summary>

Support for **Databricks** as a build target is planned. Configuration and required fields will be documented here when available.

</details>

### Step 3: Resource Cleanup Options

To keep your data relevant and free of outdated resources, Euno provides automatic **resource cleanup** options. These settings determine when a resource should be removed if it is no longer detected by a source integration. For a detailed explanation, see [Resource Sponsorship in Euno](https://docs.euno.ai/developer-reference/technical-concepts/resource-sponsorship-and-cleanup-in-euno).

* **Time-Based Cleanup (default)**: Remove resources whose last detection is more than X days before the most recent successful source integration run (X is user-defined; the default is 7 days).
* **Immediate Cleanup**: Remove resources not detected in the most recent successful source integration run.
* **No Cleanup**: Keep all resources indefinitely, even if they are no longer detected.

### Step 4: Advanced Settings (Optional)

Expand the **Advanced** section to display these additional configurations.

<table><thead><tr><th width="209">Configuration</th><th>Description</th></tr></thead><tbody><tr><td>Schema aliases</td><td>A mapping of database.schema combinations. Euno will ingest dbt resources to the database and schema stated in the manifest file, unless a mapping is defined. Same behavior as [dbt Core](../dbt-core/README.md) mapping.</td></tr><tr><td>Allow resources with no catalog entry</td><td>If enabled, Euno will include dbt resources that do not have a corresponding entry in the catalog (e.g., from manifest only). By default, only resources with catalog entries are observed.</td></tr><tr><td>Override URI prefix</td><td>Optional prefix to override the URI of ingested resources. If not set, Euno uses <code>dbt.&#x3C;dbt project name></code>.</td></tr></tbody></table>

### Step 5: Save and Run

1. Click **Save**. Euno validates the configuration (including repository access and dbt project structure).
2. After saving, you can **run the integration** on a schedule or manually via **Run now**. There is no artifact upload step—each run clones the repo, builds the project, and processes the generated artifacts.

## Running the Integration

* **Scheduled runs**: Configure a schedule (e.g., daily or weekly) in the source settings. Euno will clone the repository, run `dbt compile`, and process the artifacts on each run.
* **Manual runs**: Use **Run now** on the source page to trigger a run on demand.

Each run uses the current state of the configured branch (or default branch) and subdirectory.

## Logs and Artifacts

After each run, the integration provides:

* **Run report** – Includes repository URL, branch, commit SHA, Cloud Run execution status, and dbt compile duration.
* **Logs** – Execution logs from the dbt compile step, available from the integration run details in the UI.
* **Artifacts** – The generated artifacts (e.g., `manifest.json`, `catalog.json`, `run_results.json`) are stored and can be downloaded from the run details for debugging if needed (see the inspection sketch after this list).
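
For example, a downloaded manifest can be inspected with `jq` (a sketch; assumes the artifact was saved as `manifest.json`):

```bash
# Which dbt version produced the artifacts
jq '.metadata.dbt_version' manifest.json

# How many nodes (models, tests, seeds, etc.) the compiled project contains
jq '.nodes | length' manifest.json
```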

## Troubleshooting

| Issue                              | What to check                                                                                                                                                                                                                                                         |
| ---------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Clone or authentication failed** | Verify the deploy key has read access to the repository. Ensure the repository URL is HTTPS and correct. For GitHub/GitLab, confirm the deploy key is added to the repo (or org) with read access.                                                                    |
| **Branch not found**               | Ensure the specified branch exists in the repository. If left blank, the default branch is used.                                                                                                                                                                      |
| **dbt project not found**          | Verify the repository subdirectory points to the folder containing `dbt_project.yml`.                                                                                                                                                                                 |
| **Cloud Run / compile failed**     | Ensure the credentials and target settings for your chosen warehouse are correct (e.g., for BigQuery: the GCP service account, project, and dataset). Check that the dbt project is valid, i.e., `dbt deps` and `dbt compile` would succeed locally with the same target (see the sketch after this table). |
| **Missing artifacts**              | The run may have failed during compile. Review the run logs and run report for errors.                                                                                                                                                                                |
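
To reproduce a compile failure outside Euno, run the same steps locally against the same target (a minimal sketch; assumes a local dbt profile configured for the same warehouse):

```bash
# Verify connectivity and profile configuration for the target warehouse
dbt debug

# Install packages and compile, mirroring what Euno runs
dbt deps
dbt compile
```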

## Related Documentation

* [dbt resources discovered by Euno](https://docs.euno.ai/sources/transformation-etl/dbt-core/dbt-integration-discovered-resources) – Types of resources (models, sources, metrics, etc.) added to the data model.
* [dbt Core](https://docs.euno.ai/sources/transformation-etl/dbt-core) – Upload pre-built artifacts instead of having Euno build the project.
* [dbt Cloud](https://docs.euno.ai/sources/transformation-etl/dbt-cloud) – Use dbt Cloud jobs and have Euno fetch artifacts from the dbt Cloud API.
* [Resource Sponsorship in Euno](https://docs.euno.ai/developer-reference/technical-concepts/resource-sponsorship-and-cleanup-in-euno) – How cleanup and sponsorship work for source integrations.
