Troubleshooting Analysis Issues
If your automated analysis isn’t behaving as expected, here are common issues and steps to diagnose them.
Issue: Analysis Didn’t Run for an Interaction
Symptoms: You expect analysis results on a Call/Document page, but the relevant Analysis section is missing or empty for a specific configuration.
Checks:
- Configuration Active?
- Go to Settings -> Analysis -> Configurations.
- Find the relevant configuration. Is the Active toggle ON? If not, analysis won’t run automatically.
- Filter Template Match?
- Review the configuration’s Filter Template.
- Manually check if the specific interaction (Call title, Opportunity stage at the time, Document name, etc.) actually matches the conditions defined in the template. Remember that template logic is case-sensitive unless case-insensitive functions like TitleContains are used. A small mismatch will prevent the filter from passing.
- Tip: Temporarily remove or simplify the filter template on the configuration (remember to save!) and see if analysis runs on the next similar interaction. If it does, the filter was the issue. Add complexity back incrementally (see the sketch after this checklist).
- Score Eligibility (for Score Fields)?
- If the configuration includes Score Fields, did the interaction meet an exclusion criterion defined in the Score Eligibility Questions or standard checks (No-Show, Cut Short, Personal, Stop Record)?
- Review the interaction content. Was it very short? Did the client not show up? Was it off-topic? If so, it might have been correctly deemed ineligible for scoring/analysis. Check the reasoning in the <ScoreEligibility> output if available (currently this might require backend logs).
- System Processing Delay?
- Analysis takes time. For complex configurations or during high system load, there might be a delay between the interaction finishing and the analysis appearing. Check back after a few minutes.
- Correct Mode?
- Does the configuration’s Mode (Call, Document, Opportunity) match the type of interaction you’re looking at? A Call-mode configuration won’t run on a Document.
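For example, if a filter like the one below never matches, strip it down and rebuild it one condition at a time. This is only a sketch: TitleContains and OppCurrentStageIn are functions mentioned in the Filters & Eligibility documentation, but the argument forms and stage names shown here are illustrative assumptions, not a known-good filter.

```
{{/* Original filter that never seems to match (illustrative only) */}}
{{ and (TitleContains "discovery") (OppCurrentStageIn "qualification" "evaluation") }}

{{/* Iteration 1: keep only one condition, save, and wait for the next similar interaction */}}
{{ TitleContains "discovery" }}

{{/* Iteration 2: if analysis now runs, add the next condition back and re-test */}}
{{ and (TitleContains "discovery") (OppCurrentStageIn "qualification") }}
```

Each iteration above is a separate version of the filter template, saved and tested on its own; they are shown together only for comparison.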
Issue: Analysis Results Are Inaccurate or Low Quality
Symptoms: A Score is incorrect, Pros/Cons are vague, a Value field missed key details, or generated text is poorly formatted.
Checks:
- Field Prompts (Instructions/Rubric, Directions):
- This is the most likely culprit. Are the instructions clear and specific?
- Scores: Is the Rubric well-defined with distinct levels and observable criteria? Do the Directions provide good examples and handle edge cases? (See the illustrative rubric after this checklist.)
- Values: Are the Instructions explicit about what to extract and how to format it? Do the Directions clarify ambiguities?
- Action: Create a Draft and use the Chat Editor to refine the prompts. Use the problematic interaction as a context source for the AI assistant.
- Context Quality:
- Transcript/Document Content: Was the source material clear? Poor audio quality leading to transcription errors, or poorly structured documents, can hinder AI understanding.
- Knowledge Documents: Are the relevant Knowledge Documents linked to the field? Is their content accurate and up-to-date? Are the Usage Modes set correctly? Are there relevant document-specific instructions?
- Custom Fields: Is the CRM data (Account, Contact, Opp) associated with the interaction accurate and complete? Missing CRM data means less context for the AI.
- Test Thoroughly:
- Use the Draft Tester with a variety of interactions (good, bad, simple, complex) to see how the draft performs before deploying. An instruction that works for one call might fail on another.
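As a purely hypothetical sketch (the criteria below are not a Fabius default and exist only to illustrate the shape of a well-defined rubric), distinct levels tied to observable behavior might look like:

```
Score 1 (Poor): Rep asks no discovery questions; no next step is discussed.
Score 3 (Adequate): Rep asks at least two discovery questions; a follow-up is mentioned but not scheduled.
Score 5 (Strong): Rep asks open-ended discovery questions, confirms the client's pain points, and books a dated next step.
```

Levels like these give the AI observable criteria to cite, which makes scores easier to audit than vague labels such as "good" or "bad".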
Issue: Filter Template Isn’t Working Correctly
Symptoms: The configuration runs on interactions it shouldn’t, or doesn’t run on interactions it should.
Checks:
- Syntax: Double-check the Go template syntax ({{ }}, function calls, boolean logic and/or/not). Typos are common.
- Available Functions/Data: Ensure you are using functions and data fields that are actually available in the right context. Refer to the Filters & Eligibility documentation for available functions.
- Case Sensitivity: Be mindful of case sensitivity in string comparisons unless using case-insensitive functions (like TitleContains). Opportunity stages provided in the filter might need to match the case used in your CRM or the OppCallStageIn/OppCurrentStageIn function’s expectations (often lowercase).
- Logic: Test the logic step-by-step. If you have {{ and (condition1) (or (condition2) (condition3)) }}, verify each condition individually (see the sketch after this list).
- Data Accuracy: Is the underlying data the filter relies on correct? Is the call title accurate? Is the opportunity stage synced correctly from the CRM?
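One way to isolate the failing part of a compound filter is to test each sub-condition as its own temporary filter template. A minimal sketch, reusing the placeholder conditions from the example above (each single-condition line would be saved as a separate filter and tested on its own):

```
{{/* Full compound filter under investigation */}}
{{ and (condition1) (or (condition2) (condition3)) }}

{{/* Temporary single-condition filters, tested one at a time */}}
{{ condition1 }}
{{ condition2 }}
{{ condition3 }}
```

Once you know which condition fails, check its syntax, its case sensitivity, and the underlying data it reads.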
Issue: Chat Editor Isn’t Making Good Suggestions or Applying Changes Correctly
Checks:
- Clarity of Your Request: Was your instruction to the AI assistant clear and unambiguous? Try rephrasing.
- Provided Context: Did you add relevant Calls, Documents, or Knowledge Documents as sources to help the AI understand the specific example or standard you’re referring to?
- Field Purpose: Is the field’s Purpose (description) clear and accurate? The AI relies heavily on this. Consider refining the Purpose first.
- Reviewing Diffs: Are you carefully reviewing the proposed changes in the diff view (DiffableFieldEditor.tsx) before Accepting? Sometimes the AI might misunderstand and make unintended changes elsewhere. You can reject the changes for specific components that are incorrect.
- Iterate: Treat it like a conversation. If the first suggestion isn’t right, tell the AI why and ask for a different approach.
By systematically checking these points, you can usually diagnose and resolve most common issues with the Fabius analysis system. If problems persist, gather specific examples (Interaction IDs, Configuration/Field IDs, expected vs. actual results) and reach out to support.