Claira Help Desk
AI Review › Prompting

Custom Prompts

How to write, test, and refine custom prompts in Claira for review tasks that go beyond built-in templates.

Built-in prompt templates cover common review tasks, but every case has unique requirements. Custom prompts let you write instructions tailored to your specific legal issues, document types, and output needs. This page walks you through when to write a custom prompt, how to structure it well, and how to iterate until it works reliably.

When to use custom prompts vs. templates

Use a built-in template when:

  • Your task matches a standard pattern (basic relevance review, simple entity extraction, document type classification)
  • You want to get started quickly without writing instructions from scratch
  • The template's output format meets your needs

Write a custom prompt when:

  • Your relevance criteria are case-specific (particular parties, date ranges, or legal issues)
  • You need a specific output format that templates do not provide
  • You are combining multiple tasks (for example, relevance determination plus key fact extraction)
  • The built-in templates produce results that are close but not quite right for your workflow

How to structure a custom prompt

A good custom prompt has four parts:

1. Task instruction

Start with a clear, direct statement of what you want the AI to do.

Task instruction

Review this document and determine whether it is relevant to the data breach investigation involving GlobalTech Systems between January 2024 and June 2025.

2. Definitions and criteria

Define any terms that have case-specific meaning. List the criteria the AI should apply.

Definitions

A document is "Relevant" if it:

  • Discusses, references, or relates to the data breach incident at GlobalTech Systems
  • Mentions security vulnerabilities, unauthorized access, or data exfiltration
  • Involves communications between GlobalTech IT staff, executives, or outside counsel about the incident
  • Falls within or references the period of January 2024 through June 2025

A document is "Not Relevant" if it does not meet any of the above criteria.

3. Output format

Specify the exact format you want. Be precise about field names, allowed values, and structure.

Output format

Respond in exactly this format:

Determination: [Relevant / Not Relevant]
Key Topics: [Comma-separated list of relevant topics found, or "None"]
Summary: [One sentence summarizing the document's connection to the investigation, or "N/A" if Not Relevant]
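A strict format like the one above is also easy to check mechanically if you export scan results for downstream processing. The sketch below is a hypothetical Python helper (not part of Claira; the function and field names are illustrative) that splits a response into its three fields and flags anything that deviates from the specification:

```python
import re

# Hypothetical helper: parse a response that follows the three-field
# format specified above. Field names are illustrative assumptions.
FIELD_PATTERN = re.compile(
    r"^Determination:\s*(?P<determination>.+?)\s*"
    r"Key Topics:\s*(?P<key_topics>.+?)\s*"
    r"Summary:\s*(?P<summary>.+)$",
    re.DOTALL,
)

def parse_response(text: str):
    """Return the three fields as a dict, or None on a format violation."""
    match = FIELD_PATTERN.match(text.strip())
    if match is None:
        return None  # response did not follow the format: flag for manual review
    return {
        "determination": match.group("determination").strip(),
        "key_topics": [t.strip() for t in match.group("key_topics").split(",")],
        "summary": match.group("summary").strip(),
    }
```

Responses that come back as `None` are exactly the ones worth tightening the format instructions for.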

4. Edge case handling

Tell the AI what to do when the document is ambiguous, incomplete, or does not fit neatly into your categories.

Edge case instructions

If the document is partially relevant or you are uncertain, respond with Determination: "Needs Review" and explain what makes it ambiguous in the Summary field.

If the document contains no extractable text or is too short to analyze meaningfully, respond with Determination: "Insufficient Text".

Testing on sample documents

Never run a new custom prompt at scale without testing it first. Testing is how you catch problems before they cost tokens and time.

How to test effectively

  1. Pick 10 to 25 benchmark documents. Choose documents that cover the full range of what you expect to encounter: clear positives, clear negatives, edge cases, and documents with little or no text.
  2. Know the expected answer for each one. Before you scan, write down what the correct result should be for each benchmark document.
  3. Run single scans. Use Single Review to scan each benchmark document one at a time.
  4. Compare results to expectations. For each document, check:
    • Did the AI return the correct determination?
    • Is the output format exactly as specified?
    • Are edge cases handled according to your instructions?
Keep a simple spreadsheet of your benchmark documents, expected results, and actual results. This makes it easy to track improvements across prompt iterations.
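The expected-versus-actual comparison can also be scripted once you have recorded results. The sketch below is a hypothetical Python example (the field names are illustrative, not a Claira export format) that tallies agreement between your expected determinations and the actual results from Single Review scans:

```python
# Hypothetical benchmark tracker: compare expected determinations against
# actual scan results, reporting overall accuracy plus each mismatch.
# The 'doc', 'expected', and 'actual' keys are illustrative assumptions.

def score_benchmark(benchmark):
    """benchmark: list of dicts with 'doc', 'expected', and 'actual' keys."""
    mismatches = [row for row in benchmark if row["expected"] != row["actual"]]
    accuracy = 1 - len(mismatches) / len(benchmark)
    return accuracy, mismatches

benchmark = [
    {"doc": "email_001", "expected": "Relevant", "actual": "Relevant"},
    {"doc": "memo_014", "expected": "Not Relevant", "actual": "Not Relevant"},
    {"doc": "scan_207", "expected": "Needs Review", "actual": "Relevant"},
    {"doc": "blank_003", "expected": "Insufficient Text", "actual": "Insufficient Text"},
]

accuracy, mismatches = score_benchmark(benchmark)
print(f"Accuracy: {accuracy:.0%}")  # Accuracy: 75%
for row in mismatches:
    print(f"  {row['doc']}: expected {row['expected']}, got {row['actual']}")
```

Rerunning the same script after each prompt revision gives you a concrete number to track across iterations.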

Iterating to improve accuracy

Prompt writing is an iterative process. Your first draft will rarely be perfect, and that is expected. Here is how to improve:

If the AI gives the wrong determination

Check whether your criteria are specific enough. Often the AI returns wrong answers because the prompt does not distinguish clearly enough between categories.

Before: ambiguous criteria

Is this document about the project?

After: specific criteria

Is this document about the Riverdale construction project (Project ID: RV-2024-031) managed by Henderson Construction LLC? Only documents that directly discuss this specific project should be marked as Relevant.

If the output format is inconsistent

Make the format specification more explicit. Show an example of the exact output you expect.

Format with example

Respond in exactly this format (do not add any extra text or explanation):

Determination: Relevant
Key Topics: data breach, unauthorized access, IT security
Summary: This email discusses the discovery of unauthorized access to the GlobalTech customer database.

If edge cases are handled poorly

Add specific instructions for each type of edge case you encounter. The more situations you address in your prompt, the fewer surprises you will get.

If results are inconsistent across similar documents

Tighten your definitions. When the AI gives different answers for documents that should be treated the same way, it usually means your criteria have room for interpretation. Remove that room.

Putting it all together

Here is a complete custom prompt that demonstrates all four parts:

Complete custom prompt example

Review this document and determine whether it is relevant to the employment discrimination claim filed by Jane Martinez against Apex Industries in March 2025.

A document is "Relevant" if it:

  • Discusses Jane Martinez's employment, performance, or termination at Apex Industries
  • Contains communications involving Jane Martinez, her supervisor (David Chen), or HR personnel about her employment status
  • References Apex Industries' employment policies, diversity initiatives, or discrimination complaints
  • Falls within or references the period of January 2023 through March 2025

A document is "Not Relevant" if it does not meet any of the above criteria.

Respond in exactly this format:

Determination: [Relevant / Not Relevant / Needs Review]
Key Topics: [Comma-separated list of relevant topics found, or "None"]
Summary: [One sentence explaining the determination]

If the document is ambiguous or could be interpreted either way, use "Needs Review" and explain the ambiguity in the Summary. If the document has no extractable text, respond with Determination: "Insufficient Text" and leave other fields as "N/A".

Need help? Contact us at support@claira.to.
