📋 Context Issues
Context Issues in Euno capture and track missing context or institutional knowledge that the AI assistant (and Euno more broadly) doesn't yet have. They are used during onboarding and ongoing governance to close the gap between what Euno knows and what your organization actually means by its terms, definitions, and conventions.
What Are Context Issues?
A Context Issue is a short, structured report that describes:
What context is missing — e.g. a business term, a naming convention, or a rule the assistant should follow.
Where it came up — the user question or conversation that revealed the gap.
Whether the assistant was wrong or uncertain — and what the user decided is correct and why.
Any other pertinent information — so admins or data stewards can act on it.
Context Issues are created from the Euno app (by users with the right permissions) or by the Exploratory Mode assistant when it identifies institutional knowledge that might be missing. Resolving a Context Issue usually means turning that knowledge into active metadata or AI instructions (see below).
What Are Context Issues For?
Context Issues are for:
Onboarding — When you first connect Euno to your stack, the assistant and MCP don’t yet know your org’s vocabulary, certification rules, or preferences. Exploratory Mode + Context Issues help you find and record those gaps.
Ongoing governance — Whenever someone notices the assistant guessing, being uncertain, or using the “wrong” definition, they (or the assistant in Exploratory Mode) can file a Context Issue so the team can fix it in one place (tags or AI directives).
Audit trail — Issues are stored with status, resolution, and timestamps, so you can see what was missing and what was done about it.
Creating an Issue Manually
Go to Metadata Activation → Context Issues.
Click Report context issue.
Write a short report (e.g. up to 2500 characters), including:
What context or institutional knowledge is missing.
In what situation it came up (e.g. “User asked about 12‑month retention; several tables could apply.”).
Whether the assistant was wrong or uncertain, and what the user decided is correct and why.
Submit. The issue appears in the list with status Open and (after processing) optional auto-identified terms for filtering.
Using Exploratory Mode to Identify and Report Issues
Turn on Exploratory Mode in the assistant.
Use the assistant as usual. In this mode it is tuned to:
Answer your questions, but also surface uncertainty (e.g. several possible tables for "retention") and ask you which is correct and why.
Ask you to clarify terms it doesn’t recognize, so it can learn.
Suggest filing a Context Issue when it identifies missing context or institutional knowledge.
When the assistant offers to file a report, review the draft, edit if needed, and confirm. The assistant will submit the issue and post the link to the new report.
Managing Issues
List and filter — View issues by status (open, in progress, resolved), by date range, or by auto-identified terms.
Update status — Mark issues as in progress or resolved as you work on them.
Resolution — Use the resolution field to note what you did (e.g. “Added AI directive: …” or “Created tag X and applied to …”). Resolving usually means you’ve captured the knowledge in Metadata Tags and/or AI Directives.
Institutional Knowledge and How It Maps to Euno
This section explains what “institutional knowledge” means in Euno and how you turn it into something the assistant and the catalog can use: active metadata tags and AI instructions.
What Is Institutional Knowledge?
Institutional knowledge is the unwritten or semi-written knowledge that your organization uses to interpret data and analytics:
Terminology — "When we say 'retention' we mean subscription retention at 12 months from signup," or "'Model' in this team means a Snowflake semantic view."
Definitions and rules — “Our certified dashboards are those whose upstream dbt models are all in the Gold layer,” or “We never expose PII in Tableau to role X.”
Conventions — Naming patterns (e.g. marts vs. staging), which sources are authoritative, and how domains or teams map to assets.
Preferences — “Prefer certified resources when answering” or “Always filter out deprecated models.”
Euno’s assistant and MCP start from the indexed metadata (schemas, lineage, descriptions, etc.). They do not automatically know your org’s precise meanings and rules. Institutional knowledge fills that gap. The goal of Context Issues and Exploratory Mode is to discover that knowledge and then encode it in Euno so the assistant behaves correctly.
Two Main Surfaces: Active Metadata Tags and AI Instructions
Euno gives you two main levers to encode institutional knowledge so the assistant (and EQL/search) can use it:
Active metadata tags
Account-level AI instructions (AI directives)
Context Issues point at “something is missing”; the fix is usually to add or refine one or both of these.
1. Active Metadata Tags
Metadata tags in Euno extend the data model with your own categories and rules. Active tags are computed from rules (EQL, propagation, or AI), so they stay up to date as the catalog changes.
How institutional knowledge maps to tags:
Terminology — Define tags that reflect how you use words, e.g. "subscription_retention" as a category so the assistant can filter `tags in ("subscription_retention")` when someone asks about retention.
Certification and quality — Use live booleans or propagated tags (e.g. "gold upstream") so "certified" or "preferred" is explicit in the graph.
Domains and ownership — Use categorical or AI-driven tags so “sales”, “marketing”, “finance” are queryable and the assistant can prefer or filter by them.
PII and security — Use fixed or propagated booleans so the assistant (and workflows) can respect “contains PII” or “audited”.
When a Context Issue says “the assistant didn’t know which table is our retention definition,” the fix is often: define or adjust an active tag (e.g. a category or live boolean) that marks the “right” resource(s), and optionally add a short description so natural-language search and the assistant can use it.
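For instance, once a "subscription_retention" category tag exists, the assistant (or a user writing EQL) can narrow results to the right, certified resources. The sketch below reuses the two EQL fragments shown elsewhere on this page; combining them with `and` is an assumption about EQL syntax, and the tag name is illustrative:

```
tags in ("subscription_retention") and certified = true
```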
2. AI Instructions (Account-Level Directives)
AI directives are account-level instructions that tell the assistant how to interpret and respond. They are the right place for knowledge that is not easily expressed as a tag or EQL rule:
Terminology — “When the user says ‘model’ we mean a Snowflake semantic view.”
Preferences — “Always prefer certified resources” or “Filter out resources tagged deprecated.”
Process — “When multiple tables could answer the question, show options and ask the user which one they mean.”
Exceptions — “Never suggest Tableau for PII; use Looker only for this use case.”
How institutional knowledge maps to AI instructions:
Ambiguous terms → Add a directive that defines the term or points to the right resource type/tag.
Wrong default behavior → Add a directive that states the correct preference (e.g. certification, layer, domain).
Exploratory behavior → Instructions can reinforce “when uncertain, offer options and ask the user to clarify,” which supports both better answers and surfacing more Context Issues.
When a Context Issue says “the assistant assumed the wrong definition of X,” the fix is often: add or edit an AI directive that states the correct definition or rule. Optionally, also add a tag so EQL and search are consistent with the directive.
Learn more: AI Assistant and AI Directives →
Summary: From Context Issue to Fix

| Context Issue | Fix | Example |
| --- | --- | --- |
| "Which resource is the real X?" (e.g. retention) | Active tag (category, live boolean, or propagated) | Tag "subscription_retention" on the correct model(s); assistant can filter by it. |
| "We mean X when we say Y." | AI directive (and optionally a tag) | Directive: "When we say 'model' we mean Snowflake semantic view." |
| "Always prefer / never use Z." | AI directive | "Always prefer resources where certified = true." |
| "This is PII / certified / deprecated." | Active tag (fixed or propagated) | PII/certified/deprecated tags so assistant and workflows can respect them. |
Context Issues don’t fix the gap by themselves; they record it. The actual fix is almost always one or both of: new or updated metadata tags and new or updated AI instructions.
Related Documentation
Metadata Activation — Workflows, sync, and metadata tags.
Metadata Tags — Types of tags and how to create them.
AI Assistant — Assistant behavior and AI directives.
Using Euno — Day-to-day usage of Euno.