
Using Claude to Generate Prompts for Claira

How to use Anthropic's Claude assistant (Claude.ai or Claude Code) to draft long-form, defensible Claira prompts — including a downloadable Skill that walks you through a full responsiveness prompt.

A well-built Claira prompt is the difference between a clean, defensible review and a noisy one. Writing those prompts from scratch — with all the definitions, exclusion patterns, and counter-examples a precise criterion needs — takes time. Claude (Anthropic's assistant, available at claude.ai or in Claude Code) is a fast way to draft them.

This page covers what to give Claude, how to point it at the right Claira instructions, and how to use the Claira Responsiveness Prompt Skill — a downloadable Skill that turns Claude into a guided prompt-builder that asks you the right questions and produces a long-form responsiveness prompt ready to paste into Claira.

Why use Claude to write Claira prompts

Claira's built-in prompt generator drafts prompts from your Case Context and is designed to handle most reviews end-to-end. It is the fastest path from a clean Case Context to a working prompt, and it is the right tool for most matters.

Using Claude externally is a complementary option that some teams reach for in specific situations — most often because of where Claude already sits in their workflow:

  • You already use Claude for case work. If you have an ongoing Claude.ai or Claude Code conversation where your matter materials, witness outlines, or pleadings already live, drafting the prompt in that same conversation lets Claude draw on context you've already loaded.
  • You want a longer back-and-forth. Claude's chat surface lets you discuss trade-offs (precision vs. recall, broad vs. narrow scope), iterate over many turns, and paste in larger reference materials before settling on a prompt.
  • You're building a long-form responsiveness prompt with multiple gating conditions, defined terms, counter-examples, and "role not established" fallbacks — and you want to walk through each section deliberately.
  • You're reverse-engineering a responsive set after a bulk run — pasting in a sample of hits and asking what the prompt likely caught well and what it likely missed before refining.

Both paths produce the same kind of artifact: a prompt that is self-contained, defensible, and ready to run at scale inside Claira.

What to give Claude

Claude does not know your matter. The quality of what comes back depends almost entirely on what you put in. Give it three things, in this order.

1. Your case background

The most important input is your Case Context. Open Case Context in Claira, copy the rendered text, and paste it into Claude as your first message. This gives Claude:

  • Parties and people (with roles)
  • Description and timeline
  • Relevance and issues
  • Privilege indicators
  • Collection details

If you have not yet built Case Context for the matter, see Setting Up Case Context and Best Practices. Even a partial Case Context is far better than none.
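A hedged example of how that first message might open (the framing is illustrative, not required wording):

    I'm building a document-review prompt for Claira. Below is the Case Context
    for the matter. Please read it now; I'll describe the specific criterion I
    need in my next message.

    [pasted Case Context]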

Case Context is shared across users on the case. Do not paste privileged work product, mental impressions, or settlement strategy into Claude — keep it factual, the same way Case Context itself is supposed to be.

2. The Claira prompting guidance

So that Claude understands Claira's conventions — token costs, structured output, allowed values, fallback handling, "Insufficient Text" responses — point it at our prompting docs. The most useful pages to give Claude (or to paste excerpts from):

  • Prompting Overview — core principles (be specific, define terms, structured output, edge cases).
  • Custom Prompts — the four-part structure (task, definitions, output, edge cases) and how to test/iterate.
  • What Claira can do — capabilities so Claude does not suggest things Claira cannot do (e.g., chained reasoning across documents).
  • Understanding Results — how outputs are interpreted downstream, so Claude writes formats that QC cleanly.
  • The relevant Workflow page if your task is a recognized pattern (privilege, PII, summarization, objective coding, etc.).

You can paste the full text of these pages into Claude, or give Claude the URLs and ask it to fetch them (Claude.ai and Claude Code can both read public web pages).
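For example, a second message along these lines works well (the bracketed placeholders stand in for pasted text or URLs; none of this is required wording):

    Before drafting, please read Claira's prompting guidance:
    - Prompting Overview: [paste text or URL]
    - Custom Prompts: [paste text or URL]
    Follow the four-part structure they describe (task, definitions, output,
    edge cases), and flag anything you would suggest that Claira cannot do.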

3. Your substantive question

Tell Claude exactly what you want the prompt to find, in plain language. Push past the first phrasing — "documents about pricing" is not enough. A good substantive question is falsifiable: a reviewer reading it should be able to look at a document and decide whether it qualifies.

Useful detail to include:

  • Actor constraint — who must be involved (named individuals, role tier, function, business unit, internal vs. external).
  • Scope constraint — jurisdiction, time period, product line, business unit.
  • Action constraint — what the actor must be doing (deciding, approving, recommending — not just discussing or being copied).
  • Materiality — substantive discussion only, not passing mention.
  • Known false positives — the kinds of documents that have surface keyword overlap but are not actually responsive. This is the single most valuable input you can give Claude. Counter-examples are how Claude calibrates against keyword over-matching.
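Put together, a substantive question that carries all five kinds of detail might read like this (the committee, product line, and dates are invented for illustration):

    Find documents in which a member of the North America pricing committee
    (actor) decides, approves, or recommends (action) a change to list prices
    for the Meridian product line between January 2021 and June 2022 (scope),
    where the change is substantively discussed rather than mentioned in
    passing (materiality). Do not flag press coverage, automated price-list
    distributions, or threads where committee members are merely copied
    (known false positives).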

The Claira Responsiveness Prompt Skill (downloadable)

If you've decided Claude is the right surface for your prompt-drafting, our Skill turns Claude into a guided prompt-builder. When you invoke it, Claude walks you through structured elicitation and produces a polished, seven-section responsiveness prompt at the end — designed to drop straight into Claira.

Download claira-responsiveness-prompt.skill (11 KB, Claude Skill bundle)

What the Skill does

When you invoke the Skill in Claude, it runs you through three phases:

  1. Elicitation. Claude asks you a structured set of questions to surface:
    • The substantive question (specific enough to be falsifiable).
    • The actor constraint (who must be involved for the document to count).
    • The exclusion patterns (what kinds of documents will look responsive but should not be flagged).
  2. Drafting. Claude assembles a seven-section prompt (a compressed sketch follows this list):
    • Task framing — strict application of the criterion, default to non-responsive when uncertain.
    • Responsiveness criterion — three to five independent gating conditions, all required.
    • Definitions — every term, role, synonym, abbreviation, jurisdictional reference, and product codename defined.
    • Non-responsive examples — four to eight counter-examples covering the most likely false positives (right topic / wrong actor, right keywords / no decision, wrong jurisdiction, third-party authored, etc.).
    • Responsive examples — two to four positive anchors.
    • Reasoning process — anti-bias instructions: step through each condition, do not aggregate weak signals, default to non-responsive when in doubt.
    • Output format — structured fields (Decision-maker, Decision, Supporting basis, or — if non-responsive — which condition failed).
  3. Hardening (optional). Claude offers precision/recall levers — verbatim quote requirement, confidence gating, authority test for actors, "role not established in document" fallback, splitting adjacent themes into separate prompts.
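For orientation, here is a heavily compressed sketch of that skeleton. Every detail below is a placeholder; the real substance comes from your elicitation answers:

    TASK: Apply the criterion strictly. When uncertain, mark Non-Responsive.
    CRITERION: Responsive only if ALL conditions hold:
      (a) a defined actor participates (not merely copied);
      (b) the document reflects a decision, approval, or recommendation;
      (c) the subject falls within [defined scope].
    DEFINITIONS: "Pricing committee" means [...]; "Meridian" means [...].
    NON-RESPONSIVE EXAMPLES: a news digest quoting a price change (right
      keywords, no decision); a vendor email on the right topic (wrong
      actor); [...]
    RESPONSIVE EXAMPLES: an email in which [defined actor] approves [...].
    REASONING: Check each condition in order; do not let weak signals
      aggregate. A condition that cannot be verified from the text fails.
    OUTPUT: Decision-maker | Decision | Supporting basis, or, if
      Non-Responsive, the condition that failed.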

The result is a prompt you can paste directly into Claira's prompt box, save, run on a 10–25 document sample using Single Review, and then send to Bulk Scan.

What Claude takes into account when building the prompt

The Skill is built around a few principles that drive most of the precision wins:

  • Independent conditions. Each gating condition must be verifiable from the document text alone — no condition compensates for another. This defends against the most common failure: keyword over-matching.
  • Defined terms. Every role, title, jurisdiction, product name, abbreviation, codename, and synonym is spelled out. Most precision problems are definition problems.
  • Counter-examples are first-class. The Skill asks you what should not be flagged, then writes those patterns into the prompt as explicit non-responsive examples. This is often more valuable than positive examples.
  • Fallbacks for unestablished facts. When a name appears without a title, the prompt instructs the model to write "role not established in document" instead of guessing. That extra precision costs nothing.
  • Required reason on non-responsive. The prompt forces the model to name which condition failed, which makes results auditable and surfaces calibration problems early.
  • Two calibration tests. For every condition, the Skill applies the "what would I exclude" and "what would I miss" tests — a deliberate tension that calibrates precision against recall.
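Two of these mechanics are easiest to see in miniature. An illustrative fragment, not the Skill's exact wording:

    If a person's role or title is not stated in the document itself, do not
    infer it from outside knowledge. Write "role not established in document"
    and treat the actor condition as failed. For every Non-Responsive
    decision, name the condition that failed, e.g. "Failed condition (b):
    the pricing change is reported, not decided."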

Installing the Skill

The Skill is delivered as a .skill file (a zip bundle with a SKILL.md and references). Installation depends on which Claude product you use:

  • Claude.ai (web app): open Settings > Capabilities > Skills and upload the .skill file. Once installed, invoke it by name in any conversation.
  • Claude Code (CLI): drop the .skill file into your skills directory or run the install command for your version. Once installed, the Skill activates automatically when you describe a Claira responsiveness prompt task.

Once installed, start a chat with: "Help me build a Claira responsiveness prompt for [your matter]." Paste your Case Context first, then let the Skill take over.

You only need to install the Skill once per Claude account. Reuse it across every matter — it adapts to whatever Case Context and substantive question you give it.

A typical end-to-end workflow

Here is what most reviewers do once the Skill is installed:

  1. Open Case Context in Claira. Make sure it is current and complete enough for the matter. If not, edit it first.
  2. Start a new Claude conversation. Paste your Case Context.
  3. Paste or link the Claira prompting docs (Prompting Overview and Custom Prompts at minimum).
  4. Invoke the Skill. Tell Claude what you want to find. Answer its elicitation questions, especially the question about what should not be flagged.
  5. Paste the generated prompt into Claira. Save it as a custom prompt on the case.
  6. Test on 10–25 documents using Single Review. Compare results against your expectations.
  7. Iterate. If results drift on edge cases, paste a few of the misses back into Claude and ask the Skill to refine the prompt — usually a definition tightening, a new counter-example, or a hardening lever.
  8. Run Bulk Scan. Once the prompt is stable, scale it up to your full document set.
  9. Reverse-engineer after the run. Paste a sample of responsive hits back into Claude and ask: what did this prompt likely catch well, and what did it likely miss? Use the answer to refine the next pass or split adjacent themes into a separate prompt.
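For steps 7 and 9, the message back to Claude can be short. An illustrative refinement request (the false-positive pattern described is invented):

    The prompt below flagged these three documents, but all are internal news
    digests that quote the committee without any decision being made. Please
    add a non-responsive counter-example covering this pattern and tighten
    the action condition.

    [current prompt]
    [excerpts from the three false positives]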

Common pitfalls

  • Skipping Case Context. Without it, Claude will produce a generic prompt that misses your case-specific terminology, custodians, and privilege relationships. Always paste Case Context first.
  • Vague substantive question. "Documents about the merger" produces a noisy prompt. Push the question until it has an actor, a scope, and an action.
  • No exclusion patterns. If you cannot articulate what should not be flagged, the prompt will over-include. Walk through hypotheticals with Claude until you can name at least four.
  • Not testing before bulk. A prompt that looks polished still needs a 10–25 document sample run before bulk. Trust the test, not the draft.
  • Treating the first prompt as final. Reverse-engineering after a bulk run almost always surfaces refinements. The first prompt is a draft.

Need help? Contact us at support@claira.to.
