dbt Source Project

Euno's dbt Source Project integration supports auto-discovery of dbt resources. Unlike dbt Cloud (where Euno fetches artifacts from the dbt Cloud API) or dbt Core (where you upload pre-built artifacts), the dbt Source Project integration builds your dbt project inside Euno's infrastructure and generates the JSON artifacts for you. You do not need to run dbt yourself or upload any files.

How It Differs from Other dbt Integrations

| Integration | How artifacts are obtained |
| --- | --- |
| dbt Cloud | Euno fetches artifacts from the dbt Cloud API after jobs run in dbt Cloud. |
| dbt Core | You upload pre-built artifacts (manifest, catalog, run_results, etc.) to Euno. |
| dbt Source Project | Euno clones your git repository, runs the dbt project in a secure, managed environment, and generates the JSON artifacts; Euno then processes them the same way as dbt Core artifacts. |

How It Works

  1. Clone the repository – Euno clones your dbt project from GitHub or GitLab using the deploy key you provide (and optional branch or subdirectory).

  2. Build in Euno's environment – Euno prepares the project and runs dbt compile (including dependency resolution) in a managed container, using the warehouse target you configured (e.g. BigQuery project and dataset).

  3. Collect artifacts – Euno downloads the generated artifacts (manifest.json, catalog.json, run_results.json, and semantic_manifest.json when present).

  4. Process the artifacts – Euno processes these artifacts and adds the same discovered dbt resources to the data model as with dbt Core.
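The artifacts collected in step 3 are plain JSON, and step 4 consumes them directly. As an illustration, here is a minimal Python sketch of what processing a manifest.json involves; the keys shown (`nodes`, `resource_type`, `database`, `schema`, `name`) are standard dbt manifest keys, while the project and model names are made up:

```python
import json

# Minimal stand-in for a dbt manifest.json, using the standard keys
# dbt emits (nodes -> unique_id -> resource_type/database/schema/name).
manifest = {
    "nodes": {
        "model.jaffle_shop.orders": {
            "resource_type": "model",
            "database": "analytics",
            "schema": "marts",
            "name": "orders",
        },
        "test.jaffle_shop.not_null_orders_id": {"resource_type": "test"},
    }
}

# Processing boils down to walking the manifest and picking out the
# resources (here, models) to add to the data model.
models = [
    n for n in manifest["nodes"].values() if n.get("resource_type") == "model"
]
for m in models:
    print(f'{m["database"]}.{m["schema"]}.{m["name"]}')  # analytics.marts.orders
```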

Prerequisites

  • A git repository (HTTPS URL) containing your dbt project. GitHub and GitLab are supported.

  • An SSH deploy key with read access to the repository. The deploy key is used by Euno to clone the repo.

  • Warehouse credentials: The credentials and target details required depend on the target environment you select. Currently, BigQuery is the only supported target environment. See the configuration section below for details.

Setting Up Euno's dbt Source Project Integration

Step 1: Access the Sources Page

  1. Navigate to the Sources page in the Euno application.

  2. Click on the Add New Source button.

  3. Select dbt Source Project.

Step 2: General Configuration

An asterisk (*) indicates a required field.

Common settings (all target environments):

| Configuration | Description |
| --- | --- |
| Name* | Enter a name for your dbt Source Project source (e.g., "dbt - Marketing Models"). |
| Repository URL* | The HTTPS URL of the git repository containing the dbt project (e.g., https://github.com/your-org/your-dbt-repo). |
| Repository subdirectory | Subdirectory within the repository where the dbt project lives. Defaults to / (the repository root). |
| Repository branch | Git branch to use. If not specified, the repository's default branch is used. |
| Deploy key* | SSH private key for repository access. Euno uses this to clone the repository. Ensure the corresponding public key is added to the repository as a deploy key with read access. See GitHub's Managing deploy keys guide for how to add a deploy key to a GitHub repository. |
| Target environment* | The warehouse to use for dbt compilation. Select your warehouse below and configure the required fields for that target. |
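Before saving, it can help to sanity-check the values you are about to enter. The sketch below is hypothetical (Euno performs its own validation when you save); it only mirrors the field requirements listed above:

```python
def validate_source_config(name, repo_url, deploy_key):
    """Hypothetical pre-save checks mirroring the required fields above."""
    errors = []
    if not name:
        errors.append("Name is required")
    if not repo_url.startswith("https://"):
        errors.append("Repository URL must be an HTTPS URL")
    if "PRIVATE KEY" not in deploy_key:
        errors.append("Deploy key must be an SSH private key")
    return errors

print(validate_source_config(
    "dbt - Marketing Models",
    "git@github.com:your-org/your-dbt-repo.git",  # SSH URL, not HTTPS
    "-----BEGIN OPENSSH PRIVATE KEY-----",
))  # flags the repository URL
```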

Target environment – warehouse-specific configuration

Currently, BigQuery is the only supported target environment. Select BigQuery in the integration form and configure the required fields below.

BigQuery

When BigQuery is selected as the target environment, configure:

| Configuration | Description |
| --- | --- |
| Service account JSON* | GCP service account JSON key. The service account must have access to the BigQuery project and dataset used for compilation. Minimum privileges: BigQuery Job User (to run compile jobs) and BigQuery Data Viewer or BigQuery Data Editor on the target project/dataset, so dbt can read metadata and generate the catalog. See Google Cloud's Create and delete service account keys and BigQuery: Use service accounts guides for how to create and configure the key. |
| Target GCP project* | The GCP project ID containing the BigQuery dataset. |
| Target BigQuery dataset* | The BigQuery dataset used for dbt compile (e.g., for metadata/catalog generation). |
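For orientation, the three fields above correspond to a standard dbt BigQuery target. A hypothetical profiles.yml equivalent is shown below; all values are placeholders, and Euno constructs the actual target internally, so you do not write this file yourself:

```yaml
# Hypothetical equivalent of the BigQuery form fields above.
euno_build:
  target: euno
  outputs:
    euno:
      type: bigquery
      method: service-account
      keyfile: /path/to/service-account.json  # Service account JSON*
      project: my-gcp-project                 # Target GCP project*
      dataset: my_dataset                     # Target BigQuery dataset*
      threads: 4
```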

Future support

The following warehouse targets are not yet available but are planned for a future release.

Snowflake (planned)

Support for Snowflake as a build target is planned. Configuration and required fields will be documented here when available.

Databricks (planned)

Support for Databricks as a build target is planned. Configuration and required fields will be documented here when available.

Step 3: Resource Cleanup Options

To keep your data relevant and free of outdated resources, Euno provides automatic resource cleanup options. These settings determine when a resource should be removed if it is no longer detected by a source integration. For a detailed explanation, see Resource Sponsorship in Euno.

  • Time-Based Cleanup (default): Remove resources that were last detected more than X days before the most recent successful source integration run (X is user-defined; the default is 7 days).

  • Immediate Cleanup: Remove resources not detected in the most recent successful source integration run.

  • No Cleanup: Keep all resources indefinitely, even if they are no longer detected.
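The three policies reduce to a simple predicate per resource. The following sketch is illustrative only (the function and policy names are hypothetical); it assumes the default X = 7 days for time-based cleanup:

```python
from datetime import datetime, timedelta

def should_remove(last_detected, last_successful_run, policy, days=7):
    """Hypothetical sketch of the cleanup decision for one resource."""
    if policy == "none":
        return False  # No Cleanup: keep indefinitely
    if policy == "immediate":
        # Immediate Cleanup: remove unless seen in the latest successful run
        return last_detected < last_successful_run
    # Time-Based Cleanup (default): allow a grace window of `days`
    return last_detected < last_successful_run - timedelta(days=days)

run = datetime(2024, 6, 15)
print(should_remove(datetime(2024, 6, 1), run, "time_based"))   # stale beyond 7 days
print(should_remove(datetime(2024, 6, 10), run, "time_based"))  # still within window
```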

Step 4: Advanced Settings (Optional)

Click on the Advanced section to display these additional configurations.

| Configuration | Description |
| --- | --- |
| Schema aliases | A mapping from one database.schema combination to another. Euno ingests dbt resources into the database and schema stated in the manifest file unless a mapping is defined. This behaves the same as the [dbt Core](../dbt-core/README.md) schema alias mapping. |
| Allow resources with no catalog entry | If enabled, Euno includes dbt resources that have no corresponding entry in the catalog (e.g., resources present in the manifest only). By default, only resources with catalog entries are observed. |
| Override URI prefix | Optional prefix to override the URI of ingested resources. If not set, Euno uses `dbt.<dbt project name>`. |
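The two mapping settings can be illustrated together. A minimal sketch, assuming the default `dbt.<dbt project name>` URI prefix described above; the function names and example values are hypothetical:

```python
# Hypothetical schema-alias mapping: "dev_db.dev_schema" is remapped.
schema_aliases = {"dev_db.dev_schema": "prod_db.analytics"}

def resolve_location(database, schema, aliases):
    """Apply a schema-alias mapping to a manifest database.schema pair."""
    key = f"{database}.{schema}"
    return aliases.get(key, key)

def resource_uri(project, model, prefix=None):
    """Build a resource URI; the default prefix is dbt.<dbt project name>."""
    prefix = prefix or f"dbt.{project}"
    return f"{prefix}.{model}"

print(resolve_location("dev_db", "dev_schema", schema_aliases))  # remapped
print(resource_uri("jaffle_shop", "orders"))                     # dbt.jaffle_shop.orders
```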

Step 5: Save and Run

  1. Click Save. Euno validates the configuration (including repository access and dbt project structure).

  2. After saving, you can run the integration on a schedule or manually via Run now. There is no artifact upload step; each run clones the repo, builds the project, and processes the generated artifacts.

Running the Integration

  • Scheduled runs: Configure a schedule (e.g., daily or weekly) in the source settings. Euno will clone the repository, run dbt compile, and process the artifacts on each run.

  • Manual runs: Use Run now on the source page to trigger a run on demand.

Each run uses the current state of the configured branch (or default branch) and subdirectory.

Logs and Artifacts

After each run, the integration provides:

  • Run report – Includes repository URL, branch, commit SHA, Cloud Run execution status, and dbt compile duration.

  • Logs – Execution logs from the dbt compile step, available from the integration run details in the UI.

  • Artifacts – The generated artifacts (e.g., manifest.json, catalog.json, run_results.json) are stored and can be downloaded from the run details for debugging if needed.
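When debugging from the downloaded artifacts, run_results.json is usually the first stop. The `results` and `status` fields shown here are standard dbt run_results keys; the node names are made up:

```python
# Minimal stand-in for a downloaded run_results.json.
run_results = {
    "results": [
        {"unique_id": "model.jaffle_shop.orders", "status": "success"},
        {"unique_id": "model.jaffle_shop.payments", "status": "error"},
    ]
}

# Collect the nodes worth investigating in the run logs.
failures = [
    r["unique_id"] for r in run_results["results"] if r["status"] != "success"
]
print(failures)  # ['model.jaffle_shop.payments']
```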

Troubleshooting

| Issue | What to check |
| --- | --- |
| Clone or authentication failed | Verify the deploy key has read access to the repository, and that the repository URL is correct and uses HTTPS. For GitHub/GitLab, confirm the deploy key is added to the repository (or organization) with read access. |
| Branch not found | Ensure the specified branch exists in the repository. If left blank, the default branch is used. |
| dbt project not found | Verify the repository subdirectory points to the folder containing dbt_project.yml. |
| Cloud Run / compile failed | Ensure the credentials and target settings for your chosen warehouse are correct (e.g., for BigQuery: GCP service account, project, and dataset). Check that the dbt project is valid (e.g., dbt deps and dbt compile would succeed locally with the same target). |
| Missing artifacts | The run may have failed during compile. Review the run logs and run report for errors. |
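For the "dbt project not found" case specifically, the check amounts to whether dbt_project.yml exists under the configured subdirectory. A local sanity check you can run against your own checkout (the helper name and paths are hypothetical):

```python
import tempfile
from pathlib import Path

def find_dbt_project(repo_root, subdirectory="/"):
    """Return True if dbt_project.yml exists under the configured subdirectory."""
    sub = subdirectory.strip("/")
    project_dir = Path(repo_root, sub) if sub else Path(repo_root)
    return (project_dir / "dbt_project.yml").is_file()

# Simulate a checked-out repo with the dbt project in a subdirectory.
with tempfile.TemporaryDirectory() as repo:
    (Path(repo) / "transform").mkdir()
    (Path(repo) / "transform" / "dbt_project.yml").write_text("name: jaffle_shop\n")
    print(find_dbt_project(repo, "/"))           # False: no project at the root
    print(find_dbt_project(repo, "/transform"))  # True
```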
