dbt Source Project
Euno's dbt Source Project integration supports auto-discovery of dbt resources. Unlike dbt Cloud (where Euno fetches artifacts from the dbt Cloud API) or dbt Core (where you upload pre-built artifacts), the dbt Source Project integration builds your dbt project inside Euno's infrastructure and generates the JSON artifacts for you. You do not need to run dbt yourself or upload any files.
How It Differs from Other dbt Integrations
dbt Cloud
Euno fetches artifacts from the dbt Cloud API after jobs run in dbt Cloud.
dbt Core
You upload pre-built artifacts (manifest, catalog, run_results, etc.) to Euno.
dbt Source Project
Euno clones your git repository, runs the dbt project in a secure, managed environment, and generates the JSON artifacts, which it then processes the same way as dbt Core artifacts.
How It Works
Clone the repository – Euno clones your dbt project from GitHub or GitLab using the deploy key you provide (and the optional branch or subdirectory).
Build in Euno's environment – Euno prepares the project and runs dbt compile (including dependency resolution) in a managed container, using the warehouse target you configured (e.g. BigQuery project and dataset).
Collect artifacts – Euno downloads the generated artifacts (manifest.json, catalog.json, run_results.json, and semantic_manifest.json when present).
Process the artifacts – Euno processes these artifacts and adds the discovered dbt resources to the data model, just as with dbt Core.
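Conceptually, each run performs roughly the following steps. This is a simplified sketch, not Euno's actual implementation; the repository URL, branch, subdirectory, key file, and target name are placeholders:

```shell
# 1. Clone the configured branch using the deploy key for authentication
GIT_SSH_COMMAND='ssh -i ./deploy_key -o IdentitiesOnly=yes' \
  git clone --branch main git@github.com:your-org/your-dbt-repo.git project

# 2. Enter the configured subdirectory (defaults to the repository root)
cd project/path/to/dbt/project

# 3. Resolve dependencies and compile against the configured warehouse target
dbt deps
dbt compile --target euno

# 4. The artifacts Euno collects appear under target/
ls target/manifest.json target/run_results.json
```

Note that in a local dbt workflow, catalog.json is typically produced by `dbt docs generate` rather than `dbt compile`; Euno's managed build handles catalog generation for you.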
Prerequisites
A git repository (HTTPS URL) containing your dbt project. GitHub and GitLab are supported.
An SSH deploy key with read access to the repository. The deploy key is used by Euno to clone the repo.
Warehouse credentials: The credentials and target details required depend on the target environment you select. Currently, BigQuery is the only supported target environment. See the configuration section below for details.
Setting Up Euno's dbt Source Project Integration
Step 1: Access the Sources Page
Navigate to the Sources page in the Euno application.
Click on the Add New Source button.
Select dbt Source Project.
Step 2: General Configuration
An asterisk (*) indicates a required field.
Common settings (all target environments):
Name*
Enter a name for your dbt Source Project source (e.g., "dbt - Marketing Models").
Repository URL*
The HTTPS URL of the git repository containing the dbt project (e.g., https://github.com/your-org/your-dbt-repo).
Repository subdirectory
Subdirectory within the repository where the dbt project lives. Defaults to / (repository root).
Repository branch
Git branch to use. If not specified, the repository's default branch is used.
Deploy key*
SSH private key for repository access. Euno uses this to clone the repository. Ensure the corresponding public key is added to the repo as a deploy key with read access. See GitHub's Managing deploy keys for how to add a deploy key to a GitHub repository.
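A deploy key pair can be generated locally, for example with ssh-keygen (the file name and comment below are arbitrary):

```shell
# Generate an Ed25519 key pair with no passphrase:
#   euno_deploy_key      - private key (paste into the Euno form)
#   euno_deploy_key.pub  - public key (add to the repo as a read-only deploy key)
ssh-keygen -t ed25519 -N "" -C "euno-deploy-key" -f ./euno_deploy_key
```
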
Target environment*
The warehouse to use for dbt compilation. Select your warehouse below and configure the required fields for that target.
Target environment – warehouse-specific configuration
Currently, BigQuery is the only supported target environment. Select BigQuery in the integration form and configure the required fields below.
BigQuery
When BigQuery is selected as the target environment, configure:
Service account JSON*
GCP service account JSON key. The service account must have access to the BigQuery project and dataset used for compilation. Minimum privileges: BigQuery Job User (to run compile jobs) and BigQuery Data Viewer or BigQuery Data Editor on the target project/dataset so dbt can read metadata and generate the catalog. See Create and delete service account keys and BigQuery: Use service accounts for how to create and configure the key.
Target GCP project*
The GCP project ID containing the BigQuery dataset.
Target BigQuery dataset*
The BigQuery dataset used for dbt compile (e.g., for metadata/catalog generation).
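As an illustration, a service account with the minimum roles described above could be provisioned with the gcloud CLI. The project ID, account name, and key file path are placeholders; adjust them to your environment:

```shell
PROJECT_ID=my-gcp-project
SA_NAME=euno-dbt-compile
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# Create the service account
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Grant the minimum roles for dbt compile and catalog generation
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA_EMAIL}" --role roles/bigquery.jobUser
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA_EMAIL}" --role roles/bigquery.dataViewer

# Create the JSON key to paste into the Euno integration form
gcloud iam service-accounts keys create euno-sa-key.json \
  --iam-account "$SA_EMAIL"
```

If your organization prefers dataset-level grants over project-level roles, grant the equivalent permissions on the target dataset instead.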
Future support
The following warehouse targets are not yet available but are planned for a future release.
Snowflake (planned)
Support for Snowflake as a build target is planned. Configuration and required fields will be documented here when available.
Databricks (planned)
Support for Databricks as a build target is planned. Configuration and required fields will be documented here when available.
Step 3: Resource Cleanup Options
To keep your data relevant and free of outdated resources, Euno provides automatic resource cleanup options. These settings determine when a resource should be removed if it is no longer detected by a source integration. For a detailed explanation, see Resource Sponsorship in Euno.
Time-Based Cleanup (default): Remove resources that were last detected X days before the most recent successful source integration run (user-defined X, default is 7 days).
Immediate Cleanup: Remove resources not detected in the most recent successful source integration run.
No Cleanup: Keep all resources indefinitely, even if they are no longer detected.
Step 4: Advanced Settings (Optional)
Click on the Advanced section to display these additional configurations.
Schema aliases
A mapping of database.schema combinations. Euno ingests dbt resources into the database and schema stated in the manifest file unless a mapping is defined. This behaves the same as the schema alias mapping in [dbt Core](../dbt-core/README.md).
Allow resources with no catalog entry
If enabled, Euno will include dbt resources that do not have a corresponding entry in the catalog (e.g., from manifest only). By default, only resources with catalog entries are observed.
Override URI prefix
Optional prefix to override the URI of ingested resources. If not set, Euno uses dbt.<dbt project name>.
Step 5: Save and Run
Click Save. Euno validates the configuration (including repository access and dbt project structure).
After saving, you can run the integration on a schedule or manually via Run now. There is no artifact upload step – each run clones the repo, builds the project, and processes the generated artifacts.
Running the Integration
Scheduled runs: Configure a schedule (e.g., daily or weekly) in the source settings. Euno will clone the repository, run dbt compile, and process the artifacts on each run.
Manual runs: Use Run now on the source page to trigger a run on demand.
Each run uses the current state of the configured branch (or default branch) and subdirectory.
Logs and Artifacts
After each run, the integration provides:
Run report – Includes repository URL, branch, commit SHA, Cloud Run execution status, and dbt compile duration.
Logs – Execution logs from the dbt compile step, available from the integration run details in the UI.
Artifacts – The generated artifacts (e.g., manifest.json, catalog.json, run_results.json) are stored and can be downloaded from the run details for debugging if needed.
Troubleshooting
Clone or authentication failed
Verify the deploy key has read access to the repository. Ensure the repository URL is HTTPS and correct. For GitHub/GitLab, confirm the deploy key is added to the repo (or org) with read access.
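To check deploy-key access outside Euno, you can try listing the remote refs with the same private key. The repository path and key file below are placeholders:

```shell
# Succeeds (prints refs) only if the key grants read access to the repository
GIT_SSH_COMMAND='ssh -i ./euno_deploy_key -o IdentitiesOnly=yes' \
  git ls-remote git@github.com:your-org/your-dbt-repo.git
```
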
Branch not found
Ensure the specified branch exists in the repository. If left blank, the default branch is used.
dbt project not found
Verify the repository subdirectory points to the folder containing dbt_project.yml.
Cloud Run / compile failed
Ensure the credentials and target settings for your chosen warehouse are correct (e.g. for BigQuery: GCP service account, project, and dataset). Check that the dbt project is valid (e.g., dbt deps and dbt compile would succeed locally with the same target).
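Before re-running the integration, it can help to confirm the project compiles locally against an equivalent target. The target name below is a placeholder for a profile that matches your Euno configuration:

```shell
# Run from the dbt project directory (where dbt_project.yml lives)
dbt deps                   # install packages from packages.yml
dbt debug --target prod    # verify warehouse connectivity and credentials
dbt compile --target prod  # should succeed if the project is valid
```
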
Missing artifacts
The run may have failed during compile. Review the run logs and run report for errors.
Related Documentation
dbt resources discovered by Euno – Types of resources (models, sources, metrics, etc.) added to the data model.
dbt Core – Upload pre-built artifacts instead of having Euno build the project.
dbt Cloud – Use dbt Cloud jobs and have Euno fetch artifacts from the dbt Cloud API.
Resource Sponsorship in Euno – How cleanup and sponsorship work for source integrations.