Assess Relevance
Use this template for repeatable, defensible AI review in Claira.
When you select this template, Claira displays the prompt configuration panel. Map the output to a review field in your case — typically a Memo field for the relevance determination and explanation.
Example mapping: field "Relevance", field type Memo, template ID claira-relevance-01
What this prompt is for
This template helps standardize outputs so reviewers can validate and code results quickly.
Step-by-step in Claira
- Open Prompting > Template picker in your case.
- Select Assess Relevance and map the output to one or more review fields.
- Run a 10-25 document sample and inspect edge cases first.
- Adjust instructions outside the prompt body (field mapping, workflow settings), then rerun.
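If you keep a list of document IDs outside Claira, the pilot sample in step 3 can be drawn reproducibly with a few lines of scripting. A minimal sketch, assuming a plain Python list of IDs; `pick_sample` and the `DOC-nnnn` naming are illustrative, not part of Claira:

```python
import random

def pick_sample(doc_ids, k=25, seed=42):
    """Draw a reproducible random sample of document IDs for a pilot run."""
    rng = random.Random(seed)   # fixed seed so the same sample can be rerun later
    k = min(k, len(doc_ids))    # guard against populations smaller than k
    return rng.sample(doc_ids, k)

# Example: 100 hypothetical document IDs reduced to a 25-document pilot set
ids = [f"DOC-{n:04d}" for n in range(100)]
pilot = pick_sample(ids)
```

Fixing the seed means the same pilot set can be reviewed again after you adjust field mapping or workflow settings.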
The prompt
Assess whether this document is relevant to the following case context:
[insert case description]
Return ONLY one of the following searchable values:
YESREL or NOTREL.
Then provide a brief explanation referencing specific language or themes from the document.
Recommended customizations
- Add case context before running (date range, custodians, issue framing).
- Keep output constraints explicit (format, allowed values, fallback behavior).
- Version your template configuration so results remain reproducible.
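Because the prompt constrains output to YESREL or NOTREL followed by a brief explanation, responses can be checked mechanically before reviewers code them. A minimal sketch of such a check; the parsing logic below is an assumption for illustration, not a Claira feature:

```python
import re

# Allowed determination values taken from the prompt's output constraint
PATTERN = re.compile(r"^(YESREL|NOTREL)\b[\s:,-]*(.*)$", re.DOTALL)

def parse_determination(output: str):
    """Split a model response into (value, explanation); None if malformed."""
    m = PATTERN.match(output.strip())
    if not m:
        return None                      # format drifted: flag for rerun
    value, explanation = m.groups()
    return value, explanation.strip()

# parse_determination("YESREL - discusses the contract amendment")
#   -> ("YESREL", "discusses the contract amendment")
```

Any response that fails the check can be routed back for a rerun instead of reaching a reviewer's queue malformed.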
Worked example
Input excerpt
Email dated 2025-02-14 from Dana Lee to Jordan Park confirms the vendor contract amendment was approved and signed the same day.
Expected output shape
A concise answer that follows the exact format required by the prompt and is directly mappable to your selected review field(s).
Troubleshooting
- If output is too broad, reduce scope and test with a smaller sample.
- If output format drifts, restate strict formatting requirements in workflow instructions.
- If quality varies across models, compare two models on the same fixed sample set.
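The last check, comparing two models on the same fixed sample, reduces to an agreement rate over shared documents. A minimal sketch, assuming each model's determinations have been exported as a dict keyed by document ID; the dicts and IDs below are hypothetical, not a Claira export format:

```python
def agreement_rate(results_a, results_b):
    """Fraction of shared documents where two models gave the same value."""
    shared = results_a.keys() & results_b.keys()
    if not shared:
        return 0.0
    matches = sum(1 for doc in shared if results_a[doc] == results_b[doc])
    return matches / len(shared)

model_a = {"DOC-0001": "YESREL", "DOC-0002": "NOTREL", "DOC-0003": "YESREL"}
model_b = {"DOC-0001": "YESREL", "DOC-0002": "YESREL", "DOC-0003": "YESREL"}
agreement_rate(model_a, model_b)   # 2 of the 3 shared documents agree
```

Documents where the models disagree are usually the most informative edge cases to inspect by hand.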