
Core Analysis Concepts

To effectively configure and manage automated analysis in Fabius, it’s essential to understand these core components:

Analysis Configuration
  • What it is: The central “recipe” or blueprint for an analysis task. It defines which events should trigger which specific analyses.
  • Key Settings:
    • Name: A label for easy identification (e.g., “Discovery Call Analysis”, “Demo Review”).
    • Type: Broad category (currently Standard is the main type).
    • Mode: Specifies the type of event this configuration applies to (Call, Document, Opportunity).
    • Filter Template: Rules that determine if this configuration should run for a specific incoming event.
    • Active: Toggles whether the configuration is currently running.
    • Linked Fields: The specific Analysis Fields (Scores or Values) that will be executed when this configuration runs.
    • Score Eligibility Questions (Optional): Pre-checks to skip analysis on certain events (like no-shows).
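
Taken together, these settings drive a dispatch decision for each incoming event. A minimal sketch of that decision, assuming hypothetical type and field names (this is not Fabius's actual API):

```go
package main

import "fmt"

// Configuration mirrors the key settings above; field names are illustrative.
type Configuration struct {
	Name         string
	Mode         string   // "Call", "Document", or "Opportunity"
	Active       bool
	LinkedFields []string // names of the Analysis Fields to run
}

// Event is a hypothetical incoming event.
type Event struct {
	Mode  string
	Title string
}

// matches is a stand-in for the Active check plus Filter Template evaluation.
func (c Configuration) matches(ev Event) bool {
	return c.Active && c.Mode == ev.Mode
}

// fieldsToRun collects the Analysis Fields every matching configuration would execute.
func fieldsToRun(configs []Configuration, ev Event) []string {
	var fields []string
	for _, c := range configs {
		if c.matches(ev) {
			fields = append(fields, c.LinkedFields...)
		}
	}
	return fields
}

func main() {
	configs := []Configuration{
		{Name: "Discovery Call Analysis", Mode: "Call", Active: true,
			LinkedFields: []string{"Discovery Quality Score", "Next Steps Identified"}},
		{Name: "Demo Review", Mode: "Call", Active: false,
			LinkedFields: []string{"Demo Quality Score"}},
	}
	ev := Event{Mode: "Call", Title: "Acme Discovery Call"}
	// Only the active configuration whose mode matches contributes fields.
	fmt.Println(fieldsToRun(configs, ev))
}
```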

Analysis Field
  • What it is: Represents a single, specific analysis task – the “question” you want the AI to answer or the “job” you want it to perform for an interaction.
  • Two Main Types:
    • Score Fields: Used to evaluate performance or quality on a numeric scale (typically 1-10).
      • Requires: A Rubric (defined within the field’s Instructions) detailing the criteria for each score level.
      • Outputs: A numeric score, plus textual Pros (what went well), Cons (areas for improvement), and actionable Suggestions.
      • Example: “Discovery Quality Score”.
    • Value Fields: Used to extract specific information or generate content based on the interaction.
      • Requires: Detailed Instructions telling the AI what to extract or generate.
      • Outputs: A string. While the backend may support lists or maps conceptually, the UI and standard configuration focus on generating a well-formatted string; you instruct the AI via prompts on how to structure it (e.g., using Markdown lists or key-value pairs).
      • Example: “Next Steps Identified”, “Follow-up Email Draft”.
  • Key Settings:
    • Name: A label for the field (e.g., “Budget Authority”, “Competitors Mentioned”).
    • Purpose (description): Explains the goal of the field; crucial for AI-assisted editing.
    • Instructions/Rubric: The core prompt for the AI.
    • Directions: Supplementary guidance, examples, edge cases.
    • Definitions: Clarification of specific terms.
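
To make the Rubric requirement concrete, a Score Field's Instructions might contain something like the following abbreviated rubric (the criteria here are invented for illustration):

```
Discovery Quality Score rubric (1-10):

9-10: Rep uncovered pain, impact, budget, and decision process;
      next steps were mutually agreed with dates.
7-8:  Most discovery areas covered; next steps agreed but vague.
4-6:  Surface-level questioning; only one or two discovery areas covered.
1-3:  Little or no discovery; rep pitched without qualifying.
```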

Filter Template
  • What it is: A rule written in Go template language, attached to an Analysis Configuration.
  • Purpose: To precisely control which specific events (calls, documents, etc.) trigger which configuration. Fabius checks the filter template before running a configuration. If the template evaluates to true, the analysis proceeds.
  • How it works: You have access to data about the incoming event (like call title, duration, participants, associated opportunity stage, custom fields) and can write logical conditions.
  • Example: Only run “Discovery Call Analysis” if the call title contains “Discovery” AND the associated opportunity stage is “Qualification”: {{ and (TitleContains "Discovery") (OppCallStageIn "Qualification") }}
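
Filter templates like this one can be reasoned about with Go's standard text/template package. In the sketch below, TitleContains and OppCallStageIn are stub versions of the helpers the template references, and the Event fields are illustrative, not Fabius's actual schema:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// Event holds hypothetical incoming-event data.
type Event struct {
	Title    string
	OppStage string
}

// passesFilter evaluates a filter template against an event and reports
// whether it rendered "true".
func passesFilter(ev Event, src string) bool {
	funcs := template.FuncMap{
		"TitleContains": func(s string) bool {
			return strings.Contains(ev.Title, s)
		},
		"OppCallStageIn": func(stages ...string) bool {
			for _, st := range stages {
				if ev.OppStage == st {
					return true
				}
			}
			return false
		},
	}
	tmpl := template.Must(template.New("filter").Funcs(funcs).Parse(src))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, ev); err != nil {
		return false
	}
	return strings.TrimSpace(out.String()) == "true"
}

func main() {
	filter := `{{ and (TitleContains "Discovery") (OppCallStageIn "Qualification") }}`
	ev := Event{Title: "Acme Discovery Call", OppStage: "Qualification"}
	fmt.Println(passesFilter(ev, filter)) // true: title and stage both match
}
```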

Score Eligibility Questions
  • What it is: A set of custom questions defined on an Analysis Configuration.
  • Purpose: To perform a final check after the Filter Template passes, specifically to determine if an event is suitable for scoring. Used to prevent scoring calls that weren’t substantive (e.g., client didn’t show, call cut short, purely personal).
  • How it works: An AI answers your custom questions (and standard built-in checks) with a Yes/No based on the interaction content. If any question results in a “Yes” (meaning it meets an exclusion criterion), the entire analysis for that configuration is skipped for that event.
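
The skip logic described above amounts to an any-“Yes” check across the answered questions. A minimal sketch, with question texts invented for illustration:

```go
package main

import "fmt"

// shouldSkipScoring reports whether any eligibility question was answered
// "Yes" — i.e. the event met an exclusion criterion, so the whole analysis
// for this configuration is skipped for this event.
func shouldSkipScoring(answers map[string]bool) bool {
	for _, yes := range answers {
		if yes {
			return true
		}
	}
	return false
}

func main() {
	answers := map[string]bool{
		"Did the client fail to show?":             false,
		"Was the call cut short before substance?": true,
		"Was the call purely personal?":            false,
	}
	fmt.Println(shouldSkipScoring(answers)) // true: one exclusion criterion was met
}
```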

Drafts
  • What it is: A version control and testing system for Analysis Fields.
  • Purpose: Allows you to safely edit and test changes to field prompts (Instructions, Rubrics, Directions) without impacting the live analysis running in production.
  • Workflow: Create a draft -> Edit using the AI Chat Editor -> Test against real interactions -> Deploy the changes live.

Analysis Result
  • What it is: The database record storing the output of a completed analysis run for a specific event and configuration.
  • Contains:
    • Links back to the original event (Call, Document, etc.) and the Configuration used.
    • The generated Scores (EnumerationValueScore) and Values (EnumerationValueOutput) for each field run by the configuration.
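
As a rough sketch of the stored record's shape — everything here beyond the EnumerationValueScore and EnumerationValueOutput names is an assumption, not Fabius's actual schema:

```go
package main

import "fmt"

// EnumerationValueScore holds a Score Field's output; fields are illustrative.
type EnumerationValueScore struct {
	Field       string
	Score       int
	Pros        []string
	Cons        []string
	Suggestions []string
}

// EnumerationValueOutput holds a Value Field's string output.
type EnumerationValueOutput struct {
	Field string
	Value string
}

// AnalysisResult is a hypothetical shape for the record: links back to the
// original event and the configuration used, plus the per-field outputs.
type AnalysisResult struct {
	EventID         string
	ConfigurationID string
	Scores          []EnumerationValueScore
	Outputs         []EnumerationValueOutput
}

func main() {
	res := AnalysisResult{
		EventID:         "call-123",
		ConfigurationID: "discovery-call-analysis",
		Scores: []EnumerationValueScore{{
			Field: "Discovery Quality Score", Score: 8,
			Pros: []string{"Strong pain discovery"},
			Cons: []string{"Budget not confirmed"},
		}},
		Outputs: []EnumerationValueOutput{{
			Field: "Next Steps Identified", Value: "- Send recap\n- Book demo",
		}},
	}
	fmt.Println(res.Scores[0].Score, res.Outputs[0].Field)
}
```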

Related Concepts
  • Knowledge Documents: Your internal playbooks, product docs, case studies, etc. These can be linked to Analysis Fields to provide the AI with specific context during analysis.
  • Custom Fields: Data synced from your CRM (such as Account Tier, Industry, Deal Size). These can be included in prompts to give the AI richer background on the interaction.
Understanding these concepts will help you navigate the configuration settings and build powerful, tailored analyses for your specific needs.