prompt-injection-detection
Analyzes user inputs and LLM prompts to identify and block prompt injection attacks that attempt to manipulate model behavior or bypass safety guidelines. Uses pattern recognition and behavioral analysis to detect malicious prompt crafting techniques.
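A minimal pattern-recognition sketch of this idea (the rule set and threshold are illustrative assumptions, not the product's actual detector; production systems combine such rules with behavioral and model-based analysis):

```python
import re

# Hypothetical rule set: common injection phrasings (illustrative, not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal your (system|hidden) prompt",
]

def injection_score(text: str) -> float:
    """Return the fraction of known injection patterns matched (0.0 to 1.0)."""
    text = text.lower()
    hits = sum(1 for p in INJECTION_PATTERNS if re.search(p, text))
    return hits / len(INJECTION_PATTERNS)

def is_injection(text: str, threshold: float = 0.25) -> bool:
    """Block the prompt when enough known patterns fire."""
    return injection_score(text) >= threshold
```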
data-loss-prevention-for-llms
Monitors outbound prompts and model outputs to prevent sensitive data (PII, trade secrets, credentials) from being sent to external LLM providers or exposed in responses. Applies context-aware rules specific to GenAI workflows rather than generic DLP patterns.
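A sketch of the redaction step, assuming a simple regex rule set (real GenAI DLP uses context-aware classifiers; these three detectors are placeholders):

```python
import re

# Illustrative detectors only; labels and patterns are assumptions.
PII_RULES = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt leaves the org."""
    findings = []
    for label, rule in PII_RULES.items():
        if rule.search(prompt):
            findings.append(label)
            prompt = rule.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings
```

The same pass can run on model outputs before they are returned to the user.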
multi-model-provider-management
Provides centralized management and monitoring across multiple LLM providers (OpenAI, Anthropic, Google, etc.) with unified policies, controls, and visibility. Enables organizations to use multiple models while maintaining consistent security and governance.
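One way to picture the unified-policy layer: a single router that dispatches to whichever provider serves a model, applying the same checks everywhere. The registry, team names, and lambda callers below are hypothetical stand-ins for real provider SDK calls:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    call: Callable[[str], str]      # stand-in for the provider's SDK call
    allowed_teams: set[str]         # per-provider policy (assumed example)

# Hypothetical registry mapping model-name prefixes to providers.
REGISTRY = {
    "gpt": Provider(call=lambda p: f"[openai] {p}", allowed_teams={"research", "eng"}),
    "claude": Provider(call=lambda p: f"[anthropic] {p}", allowed_teams={"eng"}),
}

def route(model: str, team: str, prompt: str) -> str:
    """Apply one consistent policy check regardless of which provider serves the model."""
    provider = REGISTRY.get(model.split("-")[0])
    if provider is None:
        raise ValueError(f"unknown provider for model {model!r}")
    if team not in provider.allowed_teams:
        raise PermissionError(f"team {team!r} may not use {model}")
    return provider.call(prompt)
```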
user-and-application-access-control
Manages granular access control for LLM usage at the user and application level, including role-based access, team-based restrictions, and per-application model permissions. Enables fine-grained governance of who can use which models.
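A minimal role-based sketch of per-application model permissions; the roles, apps, and model names are invented for illustration:

```python
# Hypothetical policy: each role lists the models it may invoke and the
# applications it may invoke them from.
POLICY = {
    "analyst": {"models": {"gpt-4o-mini"}, "apps": {"report-helper"}},
    "engineer": {"models": {"gpt-4o", "claude-3-5-sonnet"},
                 "apps": {"code-assist", "report-helper"}},
}

def may_use(role: str, app: str, model: str) -> bool:
    """Deny by default: unknown roles, apps, or models are all rejected."""
    entry = POLICY.get(role)
    return bool(entry) and app in entry["apps"] and model in entry["models"]
```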
cost-and-usage-analytics
Tracks and analyzes LLM usage patterns and associated costs across the organization, providing visibility into spending by team, application, and model. Helps optimize resource allocation and identify cost anomalies.
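The core aggregation can be sketched in a few lines; the per-1K-token prices below are illustrative placeholders, since real rates vary by provider and model:

```python
from collections import defaultdict

# Assumed example prices per 1,000 tokens; not real provider pricing.
PRICE_PER_1K = {"gpt-4o": 0.005, "claude-3-5-sonnet": 0.003}

def spend_by_team(records):
    """Aggregate cost per team from (team, model, token_count) usage events."""
    totals = defaultdict(float)
    for team, model, tokens in records:
        totals[team] += tokens / 1000 * PRICE_PER_1K[model]
    return dict(totals)
```

The same fold works per application or per model; anomaly detection is then a matter of comparing totals against a rolling baseline.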
jailbreak-attempt-detection
Identifies and blocks known and novel jailbreak techniques that attempt to circumvent model safety guidelines or restrictions. Detects patterns like role-playing exploits, hypothetical scenarios, and instruction override attempts.
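The three technique families named above can be sketched as tagged signatures, so a hit reports *which* technique fired (patterns are illustrative assumptions; novel-jailbreak detection additionally requires model-based classification):

```python
import re

# Illustrative signatures for the technique families named above.
TECHNIQUES = {
    "role_play": r"pretend (to be|you are)|act as (an? )?unrestricted",
    "hypothetical": r"hypothetically|in a fictional (world|story) where",
    "override": r"your (new|real) instructions are|forget your guidelines",
}

def detect_jailbreak(text: str) -> list[str]:
    """Return the names of matched jailbreak techniques (empty list = clean)."""
    text = text.lower()
    return [name for name, pat in TECHNIQUES.items() if re.search(pat, text)]
```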
llm-usage-audit-logging
Captures and logs every LLM interaction, including prompts, responses, user identity, timestamps, and model metadata. Provides comprehensive audit trails for compliance and forensic analysis.
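A sketch of one such audit record, assuming a JSON-lines sink and an invented schema (real deployments would add fields like application, token counts, and policy verdicts):

```python
import json
import time
import uuid

def log_interaction(user: str, model: str, prompt: str, response: str, sink: list) -> dict:
    """Append one structured, uniquely identified audit record per LLM call."""
    record = {
        "id": str(uuid.uuid4()),       # stable handle for forensic lookup
        "timestamp": time.time(),      # epoch seconds
        "user": user,
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    sink.append(json.dumps(record))    # JSON lines: one record per entry
    return record
```

In practice `sink` would be an append-only log store rather than an in-memory list.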
api-gateway-zero-trust-enforcement
Enforces zero-trust security policies at the API gateway level, controlling which LLM providers can be accessed, validating all requests, and preventing unauthorized data flows to external AI services. Implements identity-based access control for LLM integrations.
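The deny-by-default check at the heart of this can be sketched as a host allowlist keyed by caller identity; the hosts and service identities below are assumed examples:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only these provider hosts are reachable, and only
# by the identities explicitly granted each one.
ALLOWED = {
    "api.openai.com": {"svc-chatbot"},
    "api.anthropic.com": {"svc-chatbot", "svc-research"},
}

def authorize(identity: str, url: str) -> bool:
    """Deny by default: unknown hosts and unknown identities are both rejected."""
    host = urlparse(url).hostname
    return identity in ALLOWED.get(host, set())
```

At the gateway, a failed `authorize` check would drop the request before any data leaves the network.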