Analysis Field Prompting Guide

The effectiveness of your automated analysis hinges on the quality of the prompts you provide to the AI. Fabius structures each Analysis Field around a set of prompt components: Purpose, Instructions (or a Rubric, for Score Fields), Directions, and Definitions. Understanding and using these components correctly is key to getting accurate and useful results. They are primarily defined and edited within the Draft Workflow, ideally using the Chat Editor; a combined example appears at the end of the Core Prompt Components section.

Core Prompt Components

Purpose

  • What it is: A high-level description of the field’s goal and intended use. Think of it as the “why” behind the field.
  • Importance: Crucial. This is the primary input the AI assistant uses in the Chat Editor to help you generate and refine the other prompt components (Instructions, Directions, etc.). It also serves as documentation for your team.
  • Best Practices:
    • Be specific about the desired outcome (e.g., “Identify quantifiable pain points…” not just “Find pain points”).
    • Mention the intended audience or use case (e.g., “…to inform coaching sessions”, “…for tailoring follow-up emails”).
    • Keep it concise but comprehensive (1-3 paragraphs is often sufficient).
  • Example (for a “Next Steps” Value field):
    “This field aims to extract the concrete, actionable next steps agreed upon during a sales call. It should capture who is responsible for each action item and any specific deadlines mentioned. The output will be used by reps to ensure follow-through and by managers to track deal progression.”
  • Reference: Purpose Field Documentation

Instructions / Rubric

  • What it is: The primary directive for the AI. This tells the AI what analysis to perform or what criteria to score against.
  • Score Fields: This section contains the Rubric.
    • It must define clear criteria for each score level (typically 0-10 or 1-10).
    • Describe observable behaviors or specific content that corresponds to each level.
    • Clearly differentiate between score levels. What makes an 8 different from a 6?
    • Use Markdown for structure (headings for score levels, bullets for criteria).
    • Example Snippet (Discovery Score Rubric):
      ### 8/10: Strong Discovery
      
      - Rep successfully uncovered and quantified at least 2 major pain points.
      - Rep identified the primary decision-maker and their key criteria.
      - Rep established a clear understanding of the prospect's timeline and budget range.
      - Minor gaps in understanding the full technical landscape may exist.
      
      ### 6/10: Competent Discovery
      
      - Rep identified general pain points but struggled to quantify impact.
      - Rep identified some stakeholders but not clearly the primary decision-maker.
      - Basic understanding of timeline or budget, but specifics are vague.
      
  • Value Fields: This section contains the Instructions.
    • Clearly state what information to extract or what content to generate.
    • Be specific about the desired format (e.g., “Output as a Markdown bulleted list”, “Generate a 3-paragraph summary”, “Extract verbatim quotes tagged with the speaker”).
    • Break down complex tasks into smaller steps if necessary.
    • Example (Competitor Mentions Value Field; a sample of the resulting output is sketched below):
      1.  Scan the transcript for mentions of competitor companies or products.
      2.  For each mention, extract:
          - The competitor's name.
          - The context of the mention (e.g., currently using, evaluating, comparing features).
          - Any specific pros or cons mentioned about the competitor.
      3.  Format the output as a Markdown list, with each competitor as a main bullet point and context/pros/cons as sub-bullets.
      
  • Reference: Instructions Documentation, Rubric Documentation
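
For illustration only, output that follows the Competitor Mentions instructions above might look like this (the company names and details are hypothetical):

  - Acme CRM
    - Context: Prospect is currently using it and evaluating a replacement.
    - Cons: Reporting is limited; pricing increased at the last renewal.
  - Globex Analytics
    - Context: Being compared on dashboarding features.
    - Pros: Strong out-of-the-box dashboards.
    - Cons: Long implementation timeline.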

Directions

  • What it is: Supplementary guidance that provides context, examples, and clarifies nuances for the AI. It tells the AI how to approach the task defined in the Instructions or Rubric.
  • Purpose:
    • Provide concrete examples of “good” vs. “bad”.
    • Address potential ambiguities or edge cases.
    • Specify desired tone or level of detail.
    • Guide the AI on how to interpret specific situations mentioned in the transcript/document.
  • Best Practices:
    • Use clear examples directly related to the Instructions/Rubric.
    • Explain why something is good or bad according to your process.
    • Don’t repeat the Instructions; elaborate on them.
  • Example (Supplementing Discovery Score Rubric):
    • Good Example (Budget): Rep asks “What budget range have you allocated for solutions like this?” or “How does your typical purchasing process work for software in this price range?”
    • Poor Example (Budget): Rep avoids the budget question entirely or only asks “Do you have budget?” without probing further.
    • Edge Case: If the prospect explicitly states they cannot discuss budget on this call, do not penalize the rep in the ‘Cons’ section for not getting a number, but note the lack of budget information.
  • Reference: Directions Documentation

Definitions

  • What it is: A place to define specific terms, acronyms, or concepts unique to your business, industry, or product.
  • Purpose: To ensure the AI understands your specific language and avoids misinterpretations.
  • When to Use: Use sparingly, only when a term might be ambiguous or has a very specific meaning in your context that differs from common usage.
  • Example:
    • MEDDPICC: Our methodology for deal qualification (Metrics, Economic Buyer, Decision Criteria, Decision Process, Paper Process, Identified Pain, Champion, Competition).
    • ARR: Annual Recurring Revenue.
    • QBR: Quarterly Business Review.
  • Reference: Definitions Documentation (Note: General template, adapt as needed)
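
To see how the components fit together, here is an illustrative sketch of a single Value field definition (the wording is an example for a “Next Steps” field, not a template from the product):

  Purpose: Extract the concrete, actionable next steps agreed upon during the call, including who owns each item and any deadlines mentioned. Used by reps to ensure follow-through and by managers to track deal progression.

  Instructions:
  1. List every action item that both parties explicitly agreed to on the call.
  2. For each item, capture the owner (rep, prospect, or a named third party) and any stated deadline.
  3. Output a Markdown bulleted list with one action item per bullet.

  Directions: Only include items that were actually agreed to; a rep offering to “send over a deck” with no response from the prospect is not a committed next step. If no deadline was stated, write “no deadline mentioned” rather than inferring one.

  Definitions: QBR: Quarterly Business Review.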

General Prompting Tips

  • Be Specific and Explicit: Don’t assume the AI knows your internal processes or jargon. Clearly define what you want.
  • Use Structure: Leverage Markdown (headings, lists, bold text) in your prompts to make them easier for the AI (and humans) to parse.
  • Iterate: Your first prompt is rarely perfect. Use the Drafts and Testing features heavily. Test with diverse examples (good calls, bad calls, tricky situations).
  • Leverage the Chat Editor: Work with the AI assistant to refine your prompts. It can often suggest improvements or help clarify instructions based on the Purpose you provide.
  • Focus on Observable Behaviors (for Scores): Base rubric criteria on things the AI can actually identify in the text (e.g., “Rep asked about X”, “Prospect mentioned Y”) rather than subjective judgments like “Rep built rapport well” (unless you provide specific examples of how to identify good rapport in the Directions). A brief contrast is sketched after this list.
  • Define Output Format (for Values): Tell the AI exactly how you want the extracted information or generated text structured.
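
As a quick illustration of the observable-behaviors tip above, compare two ways of writing the same rubric criterion (wording is illustrative):

  Subjective, hard to verify: “Rep built strong rapport with the prospect.”
  Observable, easy to verify in a transcript: “Rep referenced a specific detail the prospect shared earlier in the call and asked a follow-up question about it.”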

By investing time in crafting clear and comprehensive prompts using these components, you empower Fabius to deliver highly accurate and relevant analysis tailored to your specific needs.

Next: Learn about specific field types: Score Fields | Value Fields