Commit 92246d9

Parameterize LLM returning reasoning (#64)

* Parameterize LLM returning reasoning
* Respect custom output models
* Make sys prompts dynamic to respect reasoning flag
* Add tests
* Gracefully handle empty outputs
* add note on performance and latency

1 parent 8b2e4c3 commit 92246d9

19 files changed: +616 lines, -90 lines

docs/ref/checks/custom_prompt_check.md

Lines changed: 6 additions & 0 deletions
@@ -20,6 +20,11 @@ Implements custom content checks using configurable LLM prompts. Uses your custo
 - **`model`** (required): Model to use for the check (e.g., "gpt-5")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
 - **`system_prompt_details`** (required): Custom instructions defining the content detection criteria
+- **`include_reasoning`** (optional): Whether to include reasoning/explanation fields in the guardrail output (default: `false`)
+    - When `false`: The LLM only generates the essential fields (`flagged` and `confidence`), reducing token generation costs
+    - When `true`: Additionally, returns detailed reasoning for its decisions
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ## Implementation Notes
 
@@ -42,3 +47,4 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 - **`flagged`**: Whether the custom validation criteria were met
 - **`confidence`**: Confidence score (0.0 to 1.0) for the validation
 - **`threshold`**: The confidence threshold that was configured
+- **`reason`**: Explanation of why the input was flagged (or not flagged) - *only included when `include_reasoning=true`*
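
For orientation, here is what the new flag looks like in a check configuration, as a minimal sketch rather than an excerpt from this commit: the `name`/`config` wrapper mirrors the JSON examples in the other check docs in this commit, and the `system_prompt_details` value is invented.

```python
# Sketch: Custom Prompt Check configuration with reasoning disabled.
# The check name string and the system_prompt_details value are assumptions;
# the config keys come from the documentation above.
custom_prompt_check = {
    "name": "Custom Prompt Check",
    "config": {
        "model": "gpt-5",
        "confidence_threshold": 0.7,
        "system_prompt_details": "Flag any request for confidential pricing data.",
        "include_reasoning": False,  # default; set True in development to get the `reason` field
    },
}
```

Left at the default, the returned `info` dictionary carries only `flagged`, `confidence`, and `threshold`; enabling the flag adds the `reason` field documented above.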

docs/ref/checks/hallucination_detection.md

Lines changed: 16 additions & 8 deletions
@@ -14,7 +14,8 @@ Flags model text containing factual claims that are clearly contradicted or not
   "config": {
     "model": "gpt-4.1-mini",
     "confidence_threshold": 0.7,
-    "knowledge_source": "vs_abc123"
+    "knowledge_source": "vs_abc123",
+    "include_reasoning": false
   }
 }
 ```
@@ -24,6 +25,11 @@ Flags model text containing factual claims that are clearly contradicted or not
 - **`model`** (required): OpenAI model (required) to use for validation (e.g., "gpt-4.1-mini")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
 - **`knowledge_source`** (required): OpenAI vector store ID starting with "vs_" containing reference documents
+- **`include_reasoning`** (optional): Whether to include detailed reasoning fields in the output (default: `false`)
+    - When `false`: Returns only `flagged` and `confidence` to save tokens
+    - When `true`: Additionally, returns `reasoning`, `hallucination_type`, `hallucinated_statements`, and `verified_statements`
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ### Tuning guidance
 
@@ -102,7 +108,9 @@ See [`examples/hallucination_detection/`](https://github.com/openai/openai-guard
 
 ## What It Returns
 
-Returns a `GuardrailResult` with the following `info` dictionary:
+Returns a `GuardrailResult` with the following `info` dictionary.
+
+**With `include_reasoning=true`:**
 
 ```json
 {
@@ -117,15 +125,15 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 }
 ```
 
+### Fields
+
 - **`flagged`**: Whether the content was flagged as potentially hallucinated
 - **`confidence`**: Confidence score (0.0 to 1.0) for the detection
-- **`reasoning`**: Explanation of why the content was flagged
-- **`hallucination_type`**: Type of issue detected (e.g., "factual_error", "unsupported_claim")
-- **`hallucinated_statements`**: Specific statements that are contradicted or unsupported
-- **`verified_statements`**: Statements that are supported by your documents
 - **`threshold`**: The confidence threshold that was configured
-
-Tip: `hallucination_type` is typically one of `factual_error`, `unsupported_claim`, or `none`.
+- **`reasoning`**: Explanation of why the content was flagged - *only included when `include_reasoning=true`*
+- **`hallucination_type`**: Type of issue detected (e.g., "factual_error", "unsupported_claim", "none") - *only included when `include_reasoning=true`*
+- **`hallucinated_statements`**: Specific statements that are contradicted or unsupported - *only included when `include_reasoning=true`*
+- **`verified_statements`**: Statements that are supported by your documents - *only included when `include_reasoning=true`*
 
 ## Benchmark Results
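
To make the two documented output shapes concrete, here is a sketch of the `info` dictionary in each mode; the field names come from the list above, while the values are invented and any additional bookkeeping keys are omitted.

```python
# include_reasoning=false (the default): only the essential fields are generated.
info_minimal = {
    "flagged": True,
    "confidence": 0.85,
    "threshold": 0.7,
}

# include_reasoning=true: the detailed reasoning fields are added.
info_with_reasoning = {
    "flagged": True,
    "confidence": 0.85,
    "threshold": 0.7,
    "reasoning": "The pricing claim is not supported by the reference documents.",
    "hallucination_type": "unsupported_claim",
    "hallucinated_statements": ["The premium plan costs $10 per month."],
    "verified_statements": ["A free tier is available."],
}
```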

docs/ref/checks/jailbreak.md

Lines changed: 8 additions & 2 deletions
@@ -33,7 +33,8 @@ Detects attempts to bypass safety or policy constraints via manipulation (prompt
   "name": "Jailbreak",
   "config": {
     "model": "gpt-4.1-mini",
-    "confidence_threshold": 0.7
+    "confidence_threshold": 0.7,
+    "include_reasoning": false
   }
 }
 ```
@@ -42,6 +43,11 @@ Detects attempts to bypass safety or policy constraints via manipulation (prompt
 
 - **`model`** (required): Model to use for detection (e.g., "gpt-4.1-mini")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
+- **`include_reasoning`** (optional): Whether to include reasoning/explanation fields in the guardrail output (default: `false`)
+    - When `false`: The LLM only generates the essential fields (`flagged` and `confidence`), reducing token generation costs
+    - When `true`: Additionally, returns detailed reasoning for its decisions
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ### Tuning guidance
 
@@ -70,7 +76,7 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 - **`flagged`**: Whether a jailbreak attempt was detected
 - **`confidence`**: Confidence score (0.0 to 1.0) for the detection
 - **`threshold`**: The confidence threshold that was configured
-- **`reason`**: Explanation of why the input was flagged (or not flagged)
+- **`reason`**: Explanation of why the input was flagged (or not flagged) - *only included when `include_reasoning=true`*
 - **`used_conversation_history`**: Boolean indicating whether conversation history was analyzed
 - **`checked_text`**: JSON payload containing the conversation history and latest input that was analyzed

docs/ref/checks/llm_base.md

Lines changed: 7 additions & 1 deletion
@@ -9,7 +9,8 @@ Base configuration for LLM-based guardrails. Provides common configuration optio
   "name": "LLM Base",
   "config": {
     "model": "gpt-5",
-    "confidence_threshold": 0.7
+    "confidence_threshold": 0.7,
+    "include_reasoning": false
   }
 }
 ```
@@ -18,6 +19,11 @@ Base configuration for LLM-based guardrails. Provides common configuration optio
 
 - **`model`** (required): OpenAI model to use for the check (e.g., "gpt-5")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
+- **`include_reasoning`** (optional): Whether to include reasoning/explanation fields in the guardrail output (default: `false`)
+    - When `true`: The LLM generates and returns detailed reasoning for its decisions (e.g., `reason`, `reasoning`, `observation`, `evidence` fields)
+    - When `false`: The LLM only returns the essential fields (`flagged` and `confidence`), reducing token generation costs
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ## What It Does
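
Because the other LLM-based checks inherit this configuration, the flag can also be set when constructing the config object in Python. A minimal sketch, assuming the model is importable from `guardrails.checks.text.llm_base` (the import path is inferred from this docs page, not shown in the diff):

```python
# Sketch: constructing the shared LLM guardrail config in code.
# Field names match the configuration table above; the import path is an assumption.
from guardrails.checks.text.llm_base import LLMConfig

config = LLMConfig(
    model="gpt-5",
    confidence_threshold=0.7,
    include_reasoning=False,  # default: skip reasoning fields to cut tokens and latency
)
```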

docs/ref/checks/nsfw.md

Lines changed: 6 additions & 0 deletions
@@ -29,6 +29,11 @@ Flags workplace‑inappropriate model outputs: explicit sexual content, profanit
 
 - **`model`** (required): Model to use for detection (e.g., "gpt-4.1-mini")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
+- **`include_reasoning`** (optional): Whether to include reasoning/explanation fields in the guardrail output (default: `false`)
+    - When `false`: The LLM only generates the essential fields (`flagged` and `confidence`), reducing token generation costs
+    - When `true`: Additionally, returns detailed reasoning for its decisions
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ### Tuning guidance
 
@@ -51,6 +56,7 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 - **`flagged`**: Whether NSFW content was detected
 - **`confidence`**: Confidence score (0.0 to 1.0) for the detection
 - **`threshold`**: The confidence threshold that was configured
+- **`reason`**: Explanation of why the input was flagged (or not flagged) - *only included when `include_reasoning=true`*
 
 ### Examples

docs/ref/checks/off_topic_prompts.md

Lines changed: 8 additions & 2 deletions
@@ -20,6 +20,11 @@ Ensures content stays within defined business scope using LLM analysis. Flags co
 - **`model`** (required): Model to use for analysis (e.g., "gpt-5")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
 - **`system_prompt_details`** (required): Description of your business scope and acceptable topics
+- **`include_reasoning`** (optional): Whether to include reasoning/explanation fields in the guardrail output (default: `false`)
+    - When `false`: The LLM only generates the essential fields (`flagged` and `confidence`), reducing token generation costs
+    - When `true`: Additionally, returns detailed reasoning for its decisions
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 ## Implementation Notes
 
@@ -39,6 +44,7 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 }
 ```
 
-- **`flagged`**: Whether the content aligns with your business scope
-- **`confidence`**: Confidence score (0.0 to 1.0) for the prompt injection detection assessment
+- **`flagged`**: Whether the content is off-topic (outside your business scope)
+- **`confidence`**: Confidence score (0.0 to 1.0) for the assessment
 - **`threshold`**: The confidence threshold that was configured
+- **`reason`**: Explanation of why the input was flagged (or not flagged) - *only included when `include_reasoning=true`*

docs/ref/checks/prompt_injection_detection.md

Lines changed: 11 additions & 2 deletions
@@ -31,7 +31,8 @@ After tool execution, the prompt injection detection check validates that the re
   "name": "Prompt Injection Detection",
   "config": {
     "model": "gpt-4.1-mini",
-    "confidence_threshold": 0.7
+    "confidence_threshold": 0.7,
+    "include_reasoning": false
   }
 }
 ```
@@ -40,6 +41,11 @@ After tool execution, the prompt injection detection check validates that the re
 
 - **`model`** (required): Model to use for prompt injection detection analysis (e.g., "gpt-4.1-mini")
 - **`confidence_threshold`** (required): Minimum confidence score to trigger tripwire (0.0 to 1.0)
+- **`include_reasoning`** (optional): Whether to include the `observation` and `evidence` fields in the output (default: `false`)
+    - When `true`: Returns detailed `observation` explaining what the action is doing and `evidence` with specific quotes/details
+    - When `false`: Omits reasoning fields to save tokens (typically 100-300 tokens per check)
+    - **Performance**: In our evaluations, disabling reasoning reduces median latency by 40% on average (ranging from 18% to 67% depending on model) while maintaining detection performance
+    - **Use Case**: Keep disabled for production to minimize costs and latency; enable for development and debugging
 
 **Flags as MISALIGNED:**
 
@@ -77,13 +83,16 @@ Returns a `GuardrailResult` with the following `info` dictionary:
 }
 ```
 
-- **`observation`**: What the AI action is doing
+- **`observation`**: What the AI action is doing - *only included when `include_reasoning=true`*
 - **`flagged`**: Whether the action is misaligned (boolean)
 - **`confidence`**: Confidence score (0.0 to 1.0) that the action is misaligned
+- **`evidence`**: Specific evidence from conversation supporting the decision - *only included when `include_reasoning=true`*
 - **`threshold`**: The confidence threshold that was configured
 - **`user_goal`**: The tracked user intent from conversation
 - **`action`**: The list of function calls or tool outputs analyzed for alignment
 
+**Note**: When `include_reasoning=false` (the default), the `observation` and `evidence` fields are omitted to reduce token generation costs.
+
 ## Benchmark Results
 
 ### Dataset Description
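
A sketch of the resulting `info` payload for a flagged tool call, showing which keys disappear when reasoning is off; the field names come from the list above, while the values and the `action` entry are invented.

```python
# Illustrative only: prompt injection detection `info` dict with reasoning enabled.
info_full = {
    "observation": "The tool output instructs the agent to wire money, unrelated to the user's request.",
    "flagged": True,
    "confidence": 0.92,
    "evidence": "wire $500 to account 9876 before continuing",
    "threshold": 0.7,
    "user_goal": "Book a flight to Paris",
    "action": ["send_payment(...)"],  # hypothetical representation of the analyzed tool call
}

# With include_reasoning=false (the default), the two reasoning fields are omitted:
info_minimal = {k: v for k, v in info_full.items() if k not in ("observation", "evidence")}
```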

src/guardrails/checks/text/hallucination_detection.py

Lines changed: 37 additions & 23 deletions
@@ -94,8 +94,8 @@ class HallucinationDetectionOutput(LLMOutput):
     Extends the base LLM output with hallucination-specific details.
 
     Attributes:
-        flagged (bool): Whether the content was flagged as potentially hallucinated.
-        confidence (float): Confidence score (0.0 to 1.0) that the input is hallucinated.
+        flagged (bool): Whether the content was flagged as potentially hallucinated (inherited).
+        confidence (float): Confidence score (0.0 to 1.0) that the input is hallucinated (inherited).
         reasoning (str): Detailed explanation of the analysis.
         hallucination_type (str | None): Type of hallucination detected.
         hallucinated_statements (list[str] | None): Specific statements flagged as
@@ -104,16 +104,6 @@ class HallucinationDetectionOutput(LLMOutput):
             by the documents.
     """
 
-    flagged: bool = Field(
-        ...,
-        description="Indicates whether the content was flagged as potentially hallucinated.",
-    )
-    confidence: float = Field(
-        ...,
-        description="Confidence score (0.0 to 1.0) that the input is hallucinated.",
-        ge=0.0,
-        le=1.0,
-    )
     reasoning: str = Field(
         ...,
         description="Detailed explanation of the hallucination analysis.",
@@ -184,14 +174,6 @@ class HallucinationDetectionOutput(LLMOutput):
     3. **Clearly contradicted by the documents** - Claims that directly contradict the documents → FLAG
     4. **Completely unsupported by the documents** - Claims that cannot be verified from the documents → FLAG
 
-    Respond with a JSON object containing:
-    - "flagged": boolean (true if ANY factual claims are clearly contradicted or completely unsupported)
-    - "confidence": float (0.0 to 1.0, your confidence that the input is hallucinated)
-    - "reasoning": string (detailed explanation of your analysis)
-    - "hallucination_type": string (type of issue, if detected: "factual_error", "unsupported_claim", or "none" if supported)
-    - "hallucinated_statements": array of strings (specific factual statements that may be hallucinated)
-    - "verified_statements": array of strings (specific factual statements that are supported by the documents)
-
     **CRITICAL GUIDELINES**:
     - Flag content if ANY factual claims are unsupported or contradicted (even if some claims are supported)
     - Allow conversational, opinion-based, or general content to pass through
@@ -206,6 +188,30 @@ class HallucinationDetectionOutput(LLMOutput):
 ).strip()
 
 
+# Instruction for output format when reasoning is enabled
+REASONING_OUTPUT_INSTRUCTION = textwrap.dedent(
+    """
+    Respond with a JSON object containing:
+    - "flagged": boolean (true if ANY factual claims are clearly contradicted or completely unsupported)
+    - "confidence": float (0.0 to 1.0, your confidence that the input is hallucinated)
+    - "reasoning": string (detailed explanation of your analysis)
+    - "hallucination_type": string (type of issue, if detected: "factual_error", "unsupported_claim", or "none" if supported)
+    - "hallucinated_statements": array of strings (specific factual statements that may be hallucinated)
+    - "verified_statements": array of strings (specific factual statements that are supported by the documents)
+    """
+).strip()
+
+
+# Instruction for output format when reasoning is disabled
+BASE_OUTPUT_INSTRUCTION = textwrap.dedent(
+    """
+    Respond with a JSON object containing:
+    - "flagged": boolean (true if ANY factual claims are clearly contradicted or completely unsupported)
+    - "confidence": float (0.0 to 1.0, your confidence that the input is hallucinated)
+    """
+).strip()
+
+
 async def hallucination_detection(
     ctx: GuardrailLLMContextProto,
     candidate: str,
@@ -242,15 +248,23 @@ async def hallucination_detection(
         )
 
     try:
-        # Create the validation query
-        validation_query = f"{VALIDATION_PROMPT}\n\nText to validate:\n{candidate}"
+        # Build the prompt based on whether reasoning is requested
+        if config.include_reasoning:
+            output_instruction = REASONING_OUTPUT_INSTRUCTION
+            output_format = HallucinationDetectionOutput
+        else:
+            output_instruction = BASE_OUTPUT_INSTRUCTION
+            output_format = LLMOutput
+
+        # Create the validation query with appropriate output instructions
+        validation_query = f"{VALIDATION_PROMPT}\n\n{output_instruction}\n\nText to validate:\n{candidate}"
 
         # Use the Responses API with file search and structured output
        response = await _invoke_openai_callable(
             ctx.guardrail_llm.responses.parse,
             input=validation_query,
             model=config.model,
-            text_format=HallucinationDetectionOutput,
+            text_format=output_format,
             tools=[{"type": "file_search", "vector_store_ids": [config.knowledge_source]}],
         )
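
The heart of this change is swapping the structured-output schema at call time so the model is never asked to generate explanation fields at all. A self-contained sketch of that pattern, using simplified stand-ins rather than the repo's actual `LLMOutput` and `HallucinationDetectionOutput` classes:

```python
from pydantic import BaseModel, Field


class MinimalOutput(BaseModel):
    """Essential fields only -- what gets generated when include_reasoning is false."""

    flagged: bool
    confidence: float = Field(ge=0.0, le=1.0)


class ReasoningOutput(MinimalOutput):
    """Adds the explanation fields that cost extra output tokens."""

    reasoning: str
    hallucination_type: str | None = None
    hallucinated_statements: list[str] | None = None
    verified_statements: list[str] | None = None


def pick_output_schema(include_reasoning: bool) -> type[MinimalOutput]:
    """Mirror of the branch added above: a narrower schema means fewer generated tokens."""
    return ReasoningOutput if include_reasoning else MinimalOutput


print(pick_output_schema(False).__name__)  # -> MinimalOutput
```

Passing the narrower schema as `text_format` (or as the output model given to `run_llm`) is what saves the tokens: the reasoning fields are never generated, not merely stripped afterwards.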

src/guardrails/checks/text/jailbreak.py

Lines changed: 5 additions & 12 deletions
@@ -40,8 +40,6 @@
 import textwrap
 from typing import Any
 
-from pydantic import Field
-
 from guardrails.registry import default_spec_registry
 from guardrails.spec import GuardrailSpecMetadata
 from guardrails.types import GuardrailLLMContextProto, GuardrailResult, token_usage_to_dict
@@ -50,6 +48,7 @@
     LLMConfig,
     LLMErrorOutput,
     LLMOutput,
+    LLMReasoningOutput,
     create_error_result,
     run_llm,
 )
@@ -226,15 +225,6 @@
 MAX_CONTEXT_TURNS = 10
 
 
-class JailbreakLLMOutput(LLMOutput):
-    """LLM output schema including rationale for jailbreak classification."""
-
-    reason: str = Field(
-        ...,
-        description=("Justification for why the input was flagged or not flagged as a jailbreak."),
-    )
-
-
 def _build_analysis_payload(conversation_history: list[Any] | None, latest_input: str) -> str:
     """Return a JSON payload with recent turns and the latest input."""
     trimmed_input = latest_input.strip()
@@ -251,12 +241,15 @@ async def jailbreak(ctx: GuardrailLLMContextProto, data: str, config: LLMConfig)
     conversation_history = getattr(ctx, "get_conversation_history", lambda: None)() or []
     analysis_payload = _build_analysis_payload(conversation_history, data)
 
+    # Use LLMReasoningOutput (with reason) if reasoning is enabled, otherwise use base LLMOutput
+    output_model = LLMReasoningOutput if config.include_reasoning else LLMOutput
+
     analysis, token_usage = await run_llm(
         analysis_payload,
         SYSTEM_PROMPT,
         ctx.guardrail_llm,
         config.model,
-        JailbreakLLMOutput,
+        output_model,
     )
 
     if isinstance(analysis, LLMErrorOutput):
