Apply safety guardrails to AI inputs and outputs
Validate and filter AI model inputs and outputs with configurable guardrails. Supports content filtering, PII protection, topic restrictions, output format enforcement, token limits, and custom regex rules.
Configuration fields:

- Text to validate (required): enter text or reference a variable to validate against the guardrails.
- Stage (optional): Input (before AI) or Output (after AI).
- Enforcement mode (optional): Block, Warn, or Log Only.
- Guardrail types (optional): Content Filter, PII Protection, Topic Restriction, Output Format.
- Max tokens (optional): e.g., 4096.
- Blocked topics (optional): e.g., politics, religion.
- Custom regex (optional): e.g., \bforbidden-word\b.
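A configuration covering these fields might look like the sketch below. The key names and values here are illustrative assumptions, not the product's exact schema; the variable-reference syntax in "text" is likewise hypothetical.

```python
# Hypothetical guardrail configuration mirroring the fields above.
# Key names are illustrative assumptions, not a documented schema.
guardrail_config = {
    "text": "{{agent.response}}",   # text or variable reference to validate
    "stage": "output",              # "input" (before AI) or "output" (after AI)
    "mode": "block",                # "block", "warn", or "log_only"
    "guardrails": [
        "content_filter",
        "pii_protection",
        "topic_restriction",
        "output_format",
    ],
    "max_tokens": 4096,
    "blocked_topics": ["politics", "religion"],
    "custom_regex": r"\bforbidden-word\b",
}
```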
Primary response type:
{
"passed": "boolean",
"filteredText": "string",
"violations": "json",
"warnings": "json",
"metadata": "json"
}
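To illustrate how a guardrail pass could produce this response shape, here is a minimal sketch. It is not the product's implementation: it assumes blocked topics match as whole words, approximates the token count by whitespace splitting, treats topic and regex violations as blocking, and treats a token-limit overrun as a warning only.

```python
import re


def apply_guardrails(text, blocked_topics=None, custom_regex=None, max_tokens=4096):
    """Minimal sketch of a guardrail pass returning the documented response shape.

    Assumptions (not from the source): whole-word topic matching, a rough
    whitespace-based token count, and redaction of violating spans.
    """
    violations, warnings = [], []
    filtered = text

    # Topic restriction: flag and redact any blocked topic found in the text.
    for topic in blocked_topics or []:
        pattern = rf"\b{re.escape(topic)}\b"
        if re.search(pattern, filtered, re.IGNORECASE):
            violations.append({"rule": "topic_restriction", "topic": topic})
            filtered = re.sub(pattern, "[REDACTED]", filtered, flags=re.IGNORECASE)

    # Custom regex rule: any match counts as a violation and is redacted.
    if custom_regex and re.search(custom_regex, filtered):
        violations.append({"rule": "custom_regex", "pattern": custom_regex})
        filtered = re.sub(custom_regex, "[REDACTED]", filtered)

    # Token limit: whitespace splitting stands in for a real tokenizer here.
    token_count = len(text.split())
    if token_count > max_tokens:
        warnings.append(
            {"rule": "token_limit", "count": token_count, "limit": max_tokens}
        )

    return {
        "passed": not violations,
        "filteredText": filtered,
        "violations": violations,
        "warnings": warnings,
        "metadata": {"tokenCount": token_count},
    }
```

For example, validating a string that mentions a blocked topic would return `passed: False` with the topic redacted in `filteredText`, while clean text under the token limit passes unchanged.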