
How Chat Works

Fabius Chat combines your sales data with powerful AI models to provide insightful answers. Here’s a breakdown of the key concepts:

1. Data Sources

Chat can draw information from several types of data within Fabius:
  • Call Transcripts: The text content of recorded sales calls.
  • Documents: Content ingested from uploaded files (PDFs, DOCX, etc.) or scraped web pages (Knowledge Documents).
  • Email Threads: The text content of email conversations linked via integrations.
  • Analysis Fields (Outputs & Scores): The results generated by your configured Enumeration Fields (e.g., “Next Steps Identified”, “Discovery Score”, “Budget Mentioned”).
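
For orientation, here is a minimal sketch of how these sources might be represented. The class and field names are illustrative assumptions for this article, not Fabius’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative stand-ins for the source types listed above.
# All names here are assumptions for explanation only.

@dataclass
class AnalysisResult:
    field_name: str     # e.g. "Discovery Score" or "Budget Mentioned"
    output: str         # the generated output or score for that field

@dataclass
class Interaction:
    kind: str           # "call", "document", or "email_thread"
    title: str
    date: datetime
    text: str           # transcript, document text, or email bodies
    analyses: list[AnalysisResult] = field(default_factory=list)
```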

2. Building the Context

When you ask a question, Fabius doesn’t just send your query to the AI. It first builds a detailed context based on where you are in the app and what you’ve selected:
  1. Interaction Selection: Based on the chat location (Global, Account, Interaction page) or the items you explicitly select (in the Global Chat wizard), Fabius identifies the relevant Calls, Documents, and/or Email Threads.
  2. Analysis Inclusion: For the selected interactions, Fabius retrieves any associated analysis field outputs or scores that you’ve configured. You can often filter which specific fields’ analysis results are included.
  3. Data Aggregation: Information from all selected sources (transcripts, document contents, email bodies, analysis results) is gathered.
  4. Chronological Ordering: Interactions are typically sorted by date (oldest to newest) so the AI understands the timeline. The AI is instructed to prioritize information from newer interactions if conflicts arise.
  5. Running the Prompt: The constructed prompt is sent to the AI model.
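
As a rough illustration of steps 1–5, the sketch below builds a context string from the illustrative Interaction objects shown earlier. The function and its arguments are hypothetical; the real pipeline and prompt format are internal to Fabius.

```python
def build_context(interactions, include_raw_content=True, selected_fields=None):
    """Assemble a chat context roughly following steps 1-4 above.

    `interactions` stands in for the Calls, Documents, and Email Threads
    chosen by the chat location or the Global Chat wizard (step 1).
    """
    # Step 4: sort oldest to newest so the model sees the timeline in order.
    ordered = sorted(interactions, key=lambda i: i.date)

    parts = []
    for item in ordered:
        parts.append(f"{item.kind.upper()} | {item.title} | {item.date:%Y-%m-%d}")

        # Step 2: attach analysis field outputs/scores, optionally filtered.
        for analysis in item.analyses:
            if selected_fields is None or analysis.field_name in selected_fields:
                parts.append(f"  [{analysis.field_name}] {analysis.output}")

        # Step 3: include the raw text only when the content toggle is enabled.
        if include_raw_content:
            parts.append(item.text)

    # Step 5: the joined context plus your question is sent to the AI model.
    return "\n\n".join(parts)
```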

3. Controlling the Context

You have two main controls to refine what information the AI receives: whether the raw interaction content is included, and whether the AI cites its sources (Include References).

Including raw interaction content
  • What it does: Determines whether the raw text content (call transcripts, document text, email bodies) is included in the AI’s context.
  • When to use:
    • Enable: When you need the AI to answer detailed questions directly from the source text, find specific quotes, or perform analysis that requires the full interaction content. This is the default if no analysis fields are selected.
    • Disable: When you primarily want to chat about the analysis results (scores/outputs) and don’t need the AI to re-read the entire raw interaction. This saves tokens and can focus the AI on the summarized analysis.
  • Note: Even if disabled, basic metadata (such as the call title, date, and document name) is usually still included.

Include References
  • What it does: Instructs the AI to cite its sources by providing direct quotes or snippets from the interactions it used to answer your question. It also attempts to provide a markdown link back to the relevant interaction within Fabius.
  • When to use:
    • Enable: Essential for verifying the AI’s answers and easily navigating back to the source interaction. Highly recommended for most use cases.
    • Disable: If you only need high-level summaries and don’t require specific citations (this can sometimes result in slightly more concise answers).
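
One way to picture these controls is as options attached to a chat request, as in the sketch below. The option names are placeholders for illustration; in practice you set them through the chat interface.

```python
# Hypothetical options mirroring the two controls described above.
chat_options = {
    "include_raw_content": False,  # chat about analysis results only; saves tokens
    "include_references": True,    # ask the AI to quote and link its sources
    "selected_analysis_fields": ["Next Steps Identified", "Discovery Score"],
}
```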

4. AI Processing & Response

  • The constructed prompt is sent to the AI model.
  • The AI analyzes the provided context and your latest message to generate a relevant response.
  • Fabius receives the AI’s response and displays it in the chat interface. If you enabled Include References, the response should include clickable links back to the source interactions.
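
When references are enabled, the response text can carry markdown links back to the cited interactions. A minimal sketch of pulling those citations out for display might look like this; the link format and function name are assumptions, not the actual Fabius implementation.

```python
import re

# Matches markdown links of the form [label](url).
MARKDOWN_LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def extract_references(response_text: str) -> list[tuple[str, str]]:
    """Return (label, url) pairs for any interactions cited in the response."""
    return MARKDOWN_LINK.findall(response_text)
```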

5. Limitations

  • Token Limits: AI models have maximum context window sizes (often either 250 or 1,000 pages of text, depending on the model). If you select too many interactions or very long documents, Fabius automatically truncates the context, prioritizing the most recent interactions to fit within the limit (a simplified sketch of this behavior follows this list). This can mean the AI doesn’t see very old data if the total volume is too large.
  • AI Hallucinations: While Fabius provides strong context, AI models can still occasionally misinterpret information or “hallucinate” details not present in the source data. This has decreased significantly in the past few years, but even the newest models are not perfect. If knowing the exact sources is important, use the Include References feature to verify critical information.
  • Data Quality: The quality of chat responses depends heavily on the quality of the underlying data (clear call audio, well-structured documents, accurate analysis fields).
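
The token-limit behavior described above can be pictured with the simplified sketch below: keep the newest interactions that fit a token budget, then restore chronological order. The budget and the `count_tokens` helper are illustrative assumptions, not Fabius internals.

```python
def truncate_to_budget(interactions, token_budget, count_tokens):
    """Keep the most recent interactions that fit within `token_budget`."""
    kept, used = [], 0
    # Walk from newest to oldest, stopping once the budget would be exceeded.
    for item in sorted(interactions, key=lambda i: i.date, reverse=True):
        cost = count_tokens(item.text)
        if used + cost > token_budget:
            break
        kept.append(item)
        used += cost
    # Return oldest to newest, matching the prompt's chronological ordering.
    return sorted(kept, key=lambda i: i.date)
```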

Understanding how context is built helps you ask better questions and interpret the AI’s responses more effectively.

Next: Usage Guide & Examples