Custom Prompts
How to write, test, and refine custom prompts in Claira for review tasks that go beyond built-in templates.
Built-in prompt templates cover common review tasks, but every case has unique requirements. Custom prompts let you write instructions tailored to your specific legal issues, document types, and output needs. This page walks you through when to write a custom prompt, how to structure it well, and how to iterate until it works reliably.
When to use custom prompts vs. templates
Use a built-in template when:
- Your task matches a standard pattern (basic relevance review, simple entity extraction, document type classification)
- You want to get started quickly without writing instructions from scratch
- The template's output format meets your needs
Write a custom prompt when:
- Your relevance criteria are case-specific (particular parties, date ranges, or legal issues)
- You need a specific output format that templates do not provide
- You are combining multiple tasks (for example, relevance determination plus key fact extraction)
- The built-in templates produce results that are close but not quite right for your workflow
How to structure a custom prompt
A good custom prompt has four parts:
1. Task instruction
Start with a clear, direct statement of what you want the AI to do.
2. Definitions and criteria
Define any terms that have case-specific meaning. List the criteria the AI should apply.
3. Output format
Specify the exact format you want. Be precise about field names, allowed values, and structure.
4. Edge case handling
Tell the AI what to do when the document is ambiguous, incomplete, or does not fit neatly into your categories.
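Put together, the skeleton looks like this (the bracketed labels are illustrative; any clear headers work, and a complete filled-in example appears under "Putting it all together" below):

```
[Task instruction]
One or two sentences stating exactly what to do with each document.

[Definitions and criteria]
Case-specific terms defined, followed by a numbered list of criteria to apply.

[Output format]
The exact fields, allowed values, and structure of the response.

[Edge case handling]
What to return when the document is blank, ambiguous, or only a partial match.
```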
Testing on sample documents
Never run a new custom prompt at scale without testing it first. Testing is how you catch problems before they cost tokens and time.
How to test effectively
- Pick 10 to 25 benchmark documents. Choose documents that cover the full range of what you expect to encounter: clear positives, clear negatives, edge cases, and documents with little or no text.
- Know the expected answer for each one. Before you scan, write down what the correct result should be for each benchmark document.
- Run single scans. Use Single Review to scan each benchmark document one at a time.
- Compare results to expectations. For each document, check:
  - Did the AI return the correct determination?
  - Is the output format exactly as specified?
  - Are edge cases handled according to your instructions?
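The comparison itself can be tracked on paper, in a spreadsheet, or with a few lines of code. Here is a minimal Python sketch for tallying results; it assumes you transcribe each scan's determination by hand, the document IDs and values are invented for illustration, and nothing in it calls a Claira API:

```python
# Expected determinations for the benchmark set, written down before scanning.
expected = {
    "doc_001": "RELEVANT",
    "doc_002": "NOT RELEVANT",
    "doc_003": "RELEVANT",      # edge case: only one message in the thread qualifies
    "doc_004": "NOT RELEVANT",  # edge case: no readable text
}

# Actual determinations, copied from each Single Review scan.
actual = {
    "doc_001": "RELEVANT",
    "doc_002": "RELEVANT",      # mismatch worth investigating
    "doc_003": "RELEVANT",
    "doc_004": "NOT RELEVANT",
}

# Collect every document where the scan disagreed with the expected answer.
mismatches = {
    doc_id: (want, actual.get(doc_id))
    for doc_id, want in expected.items()
    if actual.get(doc_id) != want
}

accuracy = (len(expected) - len(mismatches)) / len(expected)
print(f"Accuracy: {accuracy:.0%} ({len(mismatches)} mismatch(es))")
for doc_id, (want, got) in mismatches.items():
    print(f"  {doc_id}: expected {want}, got {got}")
```

A mismatch is not automatically a prompt problem; re-read the document first and confirm your expected answer was right.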
Iterating to improve accuracy
Prompt writing is an iterative process. Your first draft will rarely be perfect, and that is expected. Here is how to improve:
If the AI gives the wrong determination
Check whether your criteria are specific enough. Often the AI returns wrong answers because the prompt does not distinguish clearly enough between categories.
If the output format is inconsistent
Make the format specification more explicit. Show an example of the exact output you expect.
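For instance, a vague instruction like "return the determination and a reason" can be replaced with an explicit structure plus one filled-in sample (the field names below are illustrative):

```
Respond with a single JSON object and nothing else:
{"determination": "RELEVANT" or "NOT RELEVANT", "reason": "<one sentence>"}

Example of a correct response:
{"determination": "NOT RELEVANT", "reason": "The document predates the January 2022 date range."}
```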
If edge cases are handled poorly
Add specific instructions for each type of edge case you encounter. The more situations you address in your prompt, the fewer surprises you will get.
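For example, rather than leaving the AI to improvise, name each situation and the exact response you want (the situations and values below are illustrative):

```
- If the document is a calendar invite, signature block, or other item with
  no substantive body text, return NOT RELEVANT with the reason "No
  substantive content".
- If the document is in a language other than English, return NEEDS REVIEW
  rather than guessing.
```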
If results are inconsistent across similar documents
Tighten your definitions. When the AI gives different answers for documents that should be treated the same way, it usually means your criteria have room for interpretation. Remove that room.
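A before-and-after makes the point (criteria invented for illustration):

```
Too loose: Mark documents about the pricing dispute as RELEVANT.
Tight:     Mark a document RELEVANT only if it names the 2021 supply
           agreement, quotes unit pricing for the disputed SKUs, or
           proposes a price change in correspondence dated 2022-2023.
```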
Putting it all together
Here is a complete custom prompt that demonstrates all four parts. The case details below (parties, dates, criteria) are invented for illustration; substitute your own:
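```
Task instruction
Review the document below and determine whether it is relevant to the
contract dispute between Acme Corp and Birchwood LLC.

Definitions and criteria
A document is RELEVANT if it meets at least one of these criteria:
1. It discusses the 2021 supply agreement between Acme Corp and Birchwood LLC.
2. It mentions delivery delays or quality complaints dated between
   January 1, 2022 and June 30, 2023.
3. It is correspondence about renegotiating unit pricing under that agreement.
Otherwise it is NOT RELEVANT.

Output format
Respond with exactly two lines and nothing else:
Determination: RELEVANT or NOT RELEVANT
Reason: one sentence naming the criterion that applied, or "No criteria met"

Edge case handling
- If the document contains no readable text, respond:
  Determination: NOT RELEVANT
  Reason: No readable text
- If only part of the document qualifies (for example, one message in a long
  email thread), treat the whole document as RELEVANT and cite the qualifying
  part in the Reason line.
- If a document's date cannot be determined, apply criteria 1 and 3 only and
  note the missing date in the Reason line.
```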
Next steps
- Review the core principles in the Prompting Overview.
- Learn how to evaluate and tune outputs in Understanding Results.
- Ready to scale? Head to Bulk Scan.
Need help? Contact us at support@claira.to.