LLM Provider

Overview

The LLM Providers tab manages the Large Language Models (LLMs) and Embeddings available in QAnswer:

LLM Providers Navbar
LLM Providers Menu

LLM Provider Management

To connect to a new LLM provider, click on the Create button and fill in the required information.

  • LLM Name
  • LLM Name to display for the users
  • The provider (Openai, Azure, Bedrock, Mistral, Anthropic, Openrouter) — see the LiteLLM Documentation for details.
  • The modality, if it is multimodal (e.g., text, image)
  • The engine
  • The data sensitivity classification (e.g., public, private, confidential)
  • Description
  • Endpoint URL
  • the API key that secures the endpoint
  • the max context window, which defines the maximum number of tokens that the model can process in a single request
  • the max output tokens, which defines the maximum number of tokens that the model can generate in response to a prompt
  • (Optionally) Guardrail configurations, see below.
LLM Providers Create

Guardrails

QAnswer Guardrails ensure safe and secure AI interactions.

Guardrails control AI interactions for two primary purposes: enforcing your organization's safety and ethical guidelines, and preventing sensitive data from being sent to public LLMs.

When to use Guardrails

  • Maintaining Compliance: Enforce data privacy, acceptable use, or content generation policies.
  • Protecting Sensitive Information: Prevent data leakage when working with confidential documents by restricting interaction with external LLMs.
  • Controlling AI Behavior: Define boundaries for acceptable responses to prevent harmful, biased, or irrelevant content.

Configuring your Guardrails

Define the following parameters when setting up a guardrail:

  • Model: Select the LLM that enforces the guardrail. For maximum control and data security, use an on-premise model hosted within your own infrastructure.
  • Scope (Input/Output): Select where the guardrail operates:
    • Input (in): Monitors and controls prompts submitted to the LLM.
    • Output (out): Monitors and controls responses generated by the LLM.
    • Both (in-out): Monitors both inputs and outputs.
  • Mode: Choose how triggered guardrails are handled:
    • Warning Mode: Alerts the user that a guardrail was triggered but allows proceeding with caution.
    • Error Mode: Blocks the request entirely when a guardrail is triggered.
    • Prompt: Define a prompt that outlines the rules the guardrail enforces. This prompt provides context for assessing and filtering interactions.

Configure these parameters to tailor guardrails to your organization's requirements and ensure responsible, secure AI usage.

LLM Providers Edit Input/Output

Effect of the guardrails

When a guardrail is triggered, the LLM will either warn the user or block the request, depending on the mode you have selected.

LLM Providers warning

Jailbreak Guardrail

The Jailbreak Guardrail protects the AI system's integrity by preventing users from bypassing core instructions and safety mechanisms. It defends against attempts to manipulate the LLM into unauthorized actions or into revealing confidential system information.

What is a Jailbreak Attempt?

A jailbreak attempt circumvents the LLM's intended limitations. Common forms include:

  • Access the System Prompt: Discover the instructions initially given to the AI.
  • Override Safety Guidelines: Bypass restrictions on harmful or unethical content.
  • Manipulate Behavior: Trick the AI into acting outside its intended role.
  • Perform Prompt Injection Attacks: Introduce malicious instructions in a prompt to alter AI behavior.

How the Jailbreak Guardrail Works

The Jailbreak Guardrail is a specialized output guardrail that analyzes user inputs for jailbreak patterns, using the LLM itself to detect and flag potentially harmful prompts.

Configuration

Configure a Jailbreak Guardrail by providing a prompt that instructs the LLM to act as a security filter, identifying and flagging suspicious user messages.

Results

LLM Providers Jailbreak Enabled

Import and Export

You can import or export the list of LLM and embedding models of your organization in JSON format by clicking the following buttons. The export is generated as a zip file and also contains the logos and cost metadata of the models and embedders.

To import or export, click on the button at the top right of the LLM or embedders section