
Chat

Like a conversation, the Chat section lets you interact with the AI Assistant you just built!

info

Additional Resources

Master data retrieval with QAnswer’s Chat Interface! Check out our 1-minute tutorial for fast, accurate insights through intuitive chat-based querying.

caution

As soon as you close or refresh the page, the AI Assistant Chat will forget all about your previous interaction.

To access this AI task, click on AI Tasks and then Chat:

info

In the chat interface of the AI Task, answers are based only on the information delivered to the AI Assistant through the connectors. You can interact with the chat and give it more information. To set up a connector, refer to the Data Source section.



The response of the AI Assistant is a synthesized answer based on all the documents QAnswer has found for the last chat message you sent. The answer is produced by a generative model, so each generated answer may be worded differently but always contains the same information.

tip

You can trigger a regeneration to get a new answer by clicking the regenerate icon.


To further improve the quality of the responses, you can give feedback by clicking the positive or negative feedback icon.


Additionally, you can edit the generated answer by clicking the edit icon.

Task Settings

If you want to customize your Chat AI Assistant, click the settings icon. You can customize the way your AI Assistant answers by:

  • Changing the prompt in the prompt editor: adapt how the AI Assistant will behave.

  • Adjusting the LLM settings: choose the LLM that will power the answers, the initial message, the answer length, the creativity level, and the answer speed.

  • Adjusting the Retriever settings: choose the number of references passed to the LLM and add synonyms.

Prompt Editor

The prompt editor allows you to fully customize the behavior of your AI assistant. You can define the personality of your AI assistant, how it should respond to queries, what style it should use, and more. By using slot variables like {{bot_name}} or {{bot_answer_length}}, you can inject dynamic values into the prompt.

Press / to see the list of available variables. Refer to the Slots section for more information on how to use them.
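
For illustration, a minimal custom prompt using these slot variables might look like the following (the wording is an example, not a recommended default):

    You are {{bot_name}}, an assistant that answers questions about our documentation.
    Answer only from the documents provided, keep each answer under {{bot_answer_length}},
    and if the information is not in the documents, say so instead of guessing.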

LLM Settings

In the LLM settings, you can adjust the parameters of the LLM that will power your AI assistant like:

  • the LLM model
  • the initial message
  • the context window in tokens
  • to show or hide last update date in sources
  • the maximum answer length
  • the creativity level (temperature)
  • the answer speed
tip
  • Context window (tokens): how much text (conversation + documents) the model can consider at once. If your input exceeds this, earlier content may be dropped.
  • Maximum response length: cap on tokens the model returns; prevents overly long outputs.

Practical tip: use low temperature for factual extraction, increase context window to include long documents, and set a max response length to control output size and cost.
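
To make the context window and maximum response length concrete, here is a small, purely illustrative Python sketch; the 4-characters-per-token estimate and the function names are assumptions, not QAnswer's implementation:

    # Illustrative only: rough token budgeting for a context window.
    def estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)  # crude approximation of token count

    def fit_into_context(history, documents, context_window=8000, max_response_length=512):
        budget = context_window - max_response_length  # reserve room for the answer
        kept = []
        # Walk from the most recent content backwards and stop once the
        # budget is used up, so the oldest conversation turns are dropped.
        for text in reversed(history + documents):
            cost = estimate_tokens(text)
            if cost > budget:
                break
            kept.append(text)
            budget -= cost
        return list(reversed(kept))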

Enhance Context

This feature is controlled by two settings:

  • Per-user access (controlled by admins via user table)
  • Per-assistant access (in the settings panel)

If enabled in the Admin Panel settings, the toggle is visible in the Search and Chat AI Task interfaces, and in the chatbot input box when the AI assistant is tagged.

  • The feature is inactive by default.
  • If the toggle is set to active, the AI assistant will be able to process large documents more effectively. If your document is too big for the AI to read all at once, it will automatically break the document into smaller parts, understand each part, and then combine the answers. This helps the AI consider the whole document when answering your question, even if the file is large.
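
Conceptually, the behavior resembles the following Python sketch; ask_llm() is a hypothetical placeholder for the model call that QAnswer performs internally:

    # Illustrative sketch of "Enhance Context": split a large document,
    # answer the question for each part, then combine the partial answers.
    def ask_llm(prompt: str) -> str:
        """Hypothetical placeholder for the LLM call handled by QAnswer."""
        raise NotImplementedError

    def answer_large_document(question: str, document: str, part_size: int = 4000) -> str:
        parts = [document[i:i + part_size] for i in range(0, len(document), part_size)]
        partial_answers = [
            ask_llm(f"Using this excerpt:\n{part}\n\nAnswer the question: {question}")
            for part in parts
        ]
        return ask_llm(
            f"Combine these partial answers into one answer to '{question}':\n"
            + "\n".join(partial_answers)
        )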

Retriever Settings

In the retriever settings, you can adjust the parameters of the retriever that will be used to retrieve relevant documents for your AI assistant. It contains:

  • the embedding model used (note this can only be changed at creation time of the AI Assistant)
  • the number of references passed to the LLM
  • the synonyms used to retrieve more relevant documents

Synonyms

In the section dedicated to synonyms, you can define groups of synonyms that will be used to retrieve more relevant documents (a sketch of how this helps retrieval follows the list below).

  • Click on + Add synonym group to add a new group of synonyms.

  • Enter a word in the input field and press Enter or click the + button to add it to the list of synonyms.

  • You can only add a new group if there are no existing groups, or if the last one has at least one synonym and the input field is empty.

  • You can delete a group by clicking the delete button.
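
The sketch below shows, in illustrative Python (not QAnswer's implementation), why synonym groups help: the query is expanded with the other terms of any group it mentions, so documents that use a different wording can still be retrieved.

    # Illustrative only: expand a query using configured synonym groups.
    SYNONYM_GROUPS = [
        {"car", "automobile", "vehicle"},   # example group
        {"invoice", "bill"},                # example group
    ]

    def expand_query(query: str) -> str:
        words = set(query.lower().split())
        extra = set()
        for group in SYNONYM_GROUPS:
            if words & group:           # the query mentions a term of this group
                extra |= group - words  # add the other synonyms of the group
        return (query + " " + " ".join(sorted(extra))) if extra else query

    # expand_query("how much does the car cost") also matches documents
    # that only mention an "automobile" or a "vehicle".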

Document Chunking

How It Works

Document chunking controls how text is split before being embedded and indexed. It impacts retrieval quality and context windows. You have three main parameters:

Split by

Choose the unit used to cut the document:

  • Words – Splits by a fixed number of words.
  • Sentences – Splits by sentence boundaries.
  • Pages – Splits based on original PDF/page breaks.

Split Length

Defines how big each chunk is, based on the selected unit.

  • If split by Words: this = number of words per chunk.
  • If split by Sentences: this = number of sentences per chunk.
  • If split by Pages: this = number of pages per chunk (usually 1).

Split Overlap

Defines how much of the previous chunk is carried into the next chunk to preserve context.

  • Example: Split length = 100 words, Overlap = 20 → each new chunk repeats 20 words from the previous one.

Example Scenarios

Split By     Length    Overlap    Result
Words        100       0          Independent 100-word chunks
Words        100       20         Each chunk shares 20 words with the next
Sentences    5         1          Each chunk contains 5 sentences, last sentence overlaps
Pages        1         0          One full page per chunk, no overlap
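
As a rough illustration of the word-based scenarios above (this is a sketch, not QAnswer's internal splitter):

    # Illustrative word-based chunking: split_length words per chunk,
    # split_overlap words repeated from the previous chunk.
    def chunk_words(text: str, split_length: int = 100, split_overlap: int = 20) -> list[str]:
        words = text.split()
        step = split_length - split_overlap  # overlap must be smaller than the length
        chunks = []
        for start in range(0, len(words), step):
            chunks.append(" ".join(words[start:start + split_length]))
            if start + split_length >= len(words):
                break
        return chunks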

How to Test It

Step 1 – Upload a Simple Test Document

Use a small manual text like:

This is sentence one.
This is sentence two.
This is sentence three.

Step 2 – Set Chunking Options

Test multiple configurations (a sketch of the expected chunks follows this list):

  • Words (Split length = 3, Overlap = 1) – Expect sliding windows of 3 words.
  • Sentences (Split length = 1) – Each sentence becomes a separate chunk.
  • Pages – Upload a multi-page PDF and set split by Pages.
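
For the three-sentence test document of Step 1, the chunks you should expect can be computed with a quick sketch (illustrative only):

    text = "This is sentence one. This is sentence two. This is sentence three."

    # Words, split length = 3, overlap = 1 -> sliding windows of 3 words (step of 2):
    words = text.split()
    word_chunks = [" ".join(words[i:i + 3]) for i in range(0, len(words), 2)]
    # ['This is sentence', 'sentence one. This', 'This is sentence', ...]

    # Sentences, split length = 1 -> one chunk per sentence:
    sentence_chunks = [s.strip() + "." for s in text.split(".") if s.strip()]
    # ['This is sentence one.', 'This is sentence two.', 'This is sentence three.']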

Step 3 – Check the Indexed Chunks

In the Search task, ask a question and check the top extracts:

  • Does it retrieve the expected chunk?
  • Does each chunk contain a sentence?

Alternatively, click the info button on the assistant message, check the prompt, and then verify the #Documents part of the prompt.

Query Expansion

Query expansion is used to reformulate user queries so that they include the necessary context for retrieving the correct documents. This is especially important when users ask follow-up questions or when queries involve time references.

Example

When a user asks a follow-up question, the system expands the query to include the context from the previous question.

  • First user query: When was the QA company founded?
  • Follow-up user query: Who is the CEO?
  • Expanded query: Who is the CEO of the QA company?

This ensures that the system correctly understands the entity being referred to and retrieves the relevant information.

By applying these rules, query expansion allows the system to interpret user intent more accurately and provide more relevant results.

Query Expansion Prompt

In most cases, the default prompt we provide will be sufficient to handle your use cases.
However, there may be situations where you want to adjust it for better precision.

For example, imagine a user asking:

  • “What did Simone Biles do recently?”

If your corpus contains documents about Simone Biles’ activities spanning multiple years, the retriever may struggle to identify the most relevant documents from such a vague query.

A practical way to solve this problem is to adapt the query expansion prompt so that it explicitly includes the time reference (for example, the current date or a specific period).

For example, the adapted prompt could look like this:

When a user asks for the latest or most recent news about a specific topic, expand the query to include the current date, using the format:
"Give me the latest news of [time period] [date/date range]."
For example:
"Give me the latest news of today" → "Give me the latest news of today 25th June Wednesday 2025"
"Latest news about the AI sector" → "Latest news about the AI sector 25th June 2025"
"Most recent news about climate change" → "Most recent news about climate change 25th June 2025"

When a user asks for the news in a time reference (e.g., "Give me the news of yesterday," "news of last week," "news of last month," "news of last year"), expand the query to explicitly include the date or date range. For example:
"Give me the latest news of this week" → "Give me the latest news of the week 25th June 24th June 23rd June 22nd June 21st June 2025"
"Give me the latest news of this year" → "Give me the latest news of the year 2025"
"Give me the latest news of this month" → "Give me the latest news of the month June 2025"
"Who was the mayor of New York in the first part of the year 2024" → "Who was the mayor of New York in the first part of the year 2024 January February March April May June"
"Who was the mayor of New York in the second part of the year 2024" → "Who was the mayor of New York in the second part of the year 2024 July August September October November December"

For factual queries that are time-sensitive (e.g., "Who is the current Prime Minister of India"), append the current date to the query:
"Who is the current Prime Minister of India 25th June 2025"
For queries that are not time-dependent or do not reference a specific time period (e.g., "Who is Lebron James"), do not modify the query.

INCOMING QUESTION:
{{question}}

Current Date:
{{date}}

Respond ONLY with the replaced query.

REPLACED QUESTION:
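
At runtime, the {{question}} and {{date}} slots are filled in before the prompt is sent to the LLM. The Python sketch below is only an illustration of that substitution; llm() is a hypothetical placeholder, not a QAnswer API:

    # Illustrative rendering of the query expansion prompt slots.
    def llm(prompt: str) -> str:
        """Hypothetical placeholder for the model call performed by QAnswer."""
        raise NotImplementedError

    EXPANSION_PROMPT = "...\nINCOMING QUESTION:\n{{question}}\n\nCurrent Date:\n{{date}}\n\nRespond ONLY with the replaced query."

    def expand_with_date(question: str, date: str) -> str:
        # e.g. "Give me the latest news of today" becomes
        # "Give me the latest news of today 25th June Wednesday 2025"
        rendered = EXPANSION_PROMPT.replace("{{question}}", question).replace("{{date}}", date)
        return llm(rendered)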

Human Takeover

  • When this option is enabled, a user of the AI assistant can request a human to take over the chat. It works only if the AI assistant is shared with other users.
  • An email is sent to the owner of the AI assistant informing them of the request.
  • The owner can access the conversation and take over if needed.
info

To share an AI assistant, refer to the Access Rights section.

Advanced filters

You can choose to filter the documents used to answer your question by clicking on Advanced filters.

Chat Metadata

info

Faceted Chat allows you to refine AI Assistant results with filters. Some filters are always available by default, while others come from the metadata you add to your data sources. This makes it easier to target the most relevant sources and quickly find accurate answers.

UI Presentation

To access Faceted Chat:

  • Go to AI Tasks → Chat.
  • On the right panel, click Chat Settings → Advanced Filters.
  • Enable the filters you want to make available to users.

Types of Filters Available

Default filters (always included):

  • Data Source Name
  • Data Source Type
  • File Name
  • File Format
  • Uploaded at

AI Assistant Specific filters (from your metadata):

  • Example: Date
  • Example: Year
  • Any other metadata fields you defined for your data sources

How it Works

When a user asks a question, they can refine results using filters.

Default filters → always available, technical (e.g., Data Source Name, File Format).

Chat-specific filters → come from metadata you added (e.g., Project, Document Type, Year).

When filters are applied, only the matching data sources are included in the chat. These filtered results are then used by the LLM to generate the answer.

Example with Metadata

Suppose you have three documents tagged with metadata:

Document 1
  • Character → Homer Simpson
  • Role → Father
  • Year Introduced → 1989
Document 2
  • Character → Marge Simpson
  • Role → Mother
  • Year Introduced → 1989
Document 3
  • Character → Lisa Simpson
  • Role → Daughter
  • Year Introduced → 1992

User Question in Chat:

“Which Simpsons family members were introduced?”

Applied Filter:

Year Introduced = 1989

Assistant Answer:

  Homer Simpson and Marge Simpson.
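
Conceptually, the filter narrows the documents before generation, as in this illustrative sketch built from the metadata above:

    # Illustrative metadata filtering: only matching documents are kept
    # and passed to the LLM, which then answers from them.
    documents = [
        {"Character": "Homer Simpson", "Role": "Father", "Year Introduced": 1989},
        {"Character": "Marge Simpson", "Role": "Mother", "Year Introduced": 1989},
        {"Character": "Lisa Simpson", "Role": "Daughter", "Year Introduced": 1992},
    ]
    filters = {"Year Introduced": 1989}

    matching = [
        doc for doc in documents
        if all(doc.get(key) == value for key, value in filters.items())
    ]
    # -> Homer Simpson and Marge Simpson remain; Lisa Simpson is filtered out.
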
tip

Users can combine multiple filters (e.g., Data Source Type + Character + Year Introduced) for very precise control over chat results.

caution

If no documents match the selected filters, the assistant will not return any answers. Make sure your metadata is consistent across files.

Auto Date Filtering

info

Auto Date Filtering allows the AI Assistant to detect dates in a user’s question and automatically apply them as filters.

UI Presentation

To use Auto Date Filtering:

  • Go to AI Tasks and select Search.
  • Ask a question that includes a specific date, month, or year.
  • The system will automatically apply the detected date in the Advanced Filters panel.

How it Works

When a question contains a date reference, the assistant interprets it and applies the matching date filter.

Example:

  • Question: What are all episodes of The Simpsons that were aired in 2025?
  • Auto Filter Applied: Year = 2025
  • Answer:
    The Past and the Furious — February 12, 2025
    The Flandshees of Innersimpson — March 30, 2025
    The Last Man Expanding — April 6, 2025
    P.S. I Hate You — April 13, 2025
    Yellow Planet — April 22, 2025
    Abe League of Their Moe — April 27, 2025
    Stew Lies — May 4, 2025
    Full Heart, Empty Pool — May 11, 2025
    Estranger Things — May 18, 2025
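
Under the hood, the idea is to detect the date reference in the question and turn it into a filter, roughly as in this illustrative sketch (not QAnswer's actual detector):

    import re

    # Illustrative only: detect a four-digit year and apply it as a filter.
    def auto_date_filter(question: str) -> dict:
        match = re.search(r"\b(19|20)\d{2}\b", question)
        return {"Year": int(match.group(0))} if match else {}

    auto_date_filter("What are all episodes of The Simpsons that were aired in 2025?")
    # -> {"Year": 2025}
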
tip

Auto Date Filtering can be combined with other filters (e.g., File Format or Data Source Type) for more refined results.

Data Source Refresh & Last Update

info

This functionality allows you to keep your data sources up to date automatically. You can schedule refreshes and display the last update date directly in the assistant’s answers.

UI Presentation

To enable Data Source Refresh:

  • Go to Add Data Source and connect your source.
  • In the Update Websites menu, choose the refresh frequency: Now, Daily, Weekly, Monthly, or Never.
  • In Chat Settings, enable Show Last Update date in sources.

How it Works

  • Data sources are refreshed according to the frequency you selected.
  • When “Show Last Update date in sources” is enabled, the assistant adds the cut-off date to its answers.
  • If the data is refreshed later, answers may change to reflect the new content.

Example

  • Data Source: Simpsons Wiki — List of Episodes

  • Refresh Frequency: Weekly

  • Question: What episodes are airing in the first week of October 2025?

  • Answer (before refresh):

Keep Chalm and Gary On — October 5, 2025
(Data until 03 Oct 2025)


Join Us

We value your feedback and are always here to assist you.
If you need additional help, feel free to join our Discord server. We look forward to hearing from you!

Discord Community Server