Search
The Search section allows you to ask questions to the AI Assistant you just built!
Additional Resources
Quickly discover how to retrieve precise answers with QAnswer's Search-Based Interface. Watch our 1-minute tutorial for simple, accurate data querying and get the insights you need!
UI Presentation
To access this AI task, click on AI Tasks and then Search:
In the search interface of an AI Task, the answers are based only on the information delivered to the AI Assistant through connectors. To set up a connector, refer to the Data Source section.
For "The Simpsons" AI Assistant, you can ask questions like:
Question 1: Who are the children of the Simpsons?
Question 2: What is the name of the dog of the Simpsons?
Answer
The answer displayed in the card is a synthesized answer based on all the documents QAnswer found for this question. The answer is produced by a generative model, so each generated answer may be worded differently while always containing the same information.
You can trigger a re-generation to get a new answer by clicking the regenerate icon.
To further improve the quality of the responses, you can give feedback by clicking one of the feedback icons. Additionally, you can edit the generated answer by clicking the edit icon.
Documents
The documents displayed in the "Top extracts" part are the most relevant documents QAnswer found for the question you just typed. These documents are used during the process of generating the answer you see in the answer card.
You can check where the information used to answer the question comes from by clicking:
You will see the exact information in the original sources, as shown below:
Task Settings
If you want to customize your Search AI Assistant, you can click on the settings icon. You can customize the way your AI Assistant answers by:
- Filling in the prompt settings: this defines its personality, and you can follow the example provided in the default settings.
- Adjusting the LLM settings: you can choose the LLM that will power the answers, the answer length, the creativity level, and the answer speed.
- Adjusting the Retriever: you can choose the LLM reference level and add synonyms.
Prompt Settings
In the prompt settings, you can define the personality of your AI assistant by using variables like {{bot_name}} or {{bot_answer_length}}. This lets you personalize the answers you get from your AI assistant.
Press / to display the list of available variables.
For example, brief comes from the {{bot_answer_length}} variable that you can set in the LLM settings of your AI Task.
LLM Settings
In the LLM settings, you can adjust the parameters of the LLM that will power your AI assistant, such as:
- the Enhance Context option for this AI assistant
- the LLM model
- the context window in tokens
- whether to show the last update date in sources
- the maximum answer length
- the creativity level (temperature)
- the answer speed
- Context window (tokens): how much text (conversation + documents) the model can consider at once. If your input exceeds this, earlier content may be dropped.
- Maximum response length: cap on tokens the model returns; prevents overly long outputs.
Practical tip: use low temperature for factual extraction, increase context window to include long documents, and set a max response length to control output size and cost.
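To make the interaction between these settings concrete, here is a minimal sketch of how the context window constrains a request. This is an illustration only, not QAnswer's actual logic, and it approximates tokens as whitespace-separated words (real models use subword tokenizers, so actual counts differ).

```python
# Hypothetical illustration: the prompt, the retrieved documents, and the
# space reserved for the answer must all fit inside the context window.

def fits_in_context(prompt: str, documents: list[str],
                    context_window: int, max_response: int) -> bool:
    """Check whether the input plus the reserved answer space fit
    inside the model's context window (word-count approximation)."""
    input_tokens = len(prompt.split()) + sum(len(d.split()) for d in documents)
    return input_tokens + max_response <= context_window

# With a 100-token window and 20 tokens reserved for the answer,
# the input may use at most 80 tokens.
print(fits_in_context("What is X?", ["short doc"],
                      context_window=100, max_response=20))
```

If the check fails, you would either raise the context window, retrieve fewer documents, or lower the maximum response length.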
Enhance Context
This feature is controlled by two settings:
- Per-user access (controlled by admins via user table)
- Per-assistant access (in the settings panel)
If checked in the Admin Panel settings, the toggle will be visible in the Search and Chat AI Task interfaces, and in the chatbot input box when the AI assistant is tagged.
If enabled in the Search AI Task settings, the toggle will be visible in the search task interface.
If enabled in the Chat AI Task settings, the toggle will be visible in the chat task interface and in the chatbot input box when the AI assistant is tagged.
- The feature is inactive by default.
- If the toggle is set to active, the AI assistant will be able to process large documents more effectively. If your document is too big for the AI to read all at once, it will automatically break the document into smaller parts, understand each part, and then combine the answers. This helps the AI consider the whole document when answering your question, even if the file is large.
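The split-and-combine strategy described above can be sketched as follows. This is purely illustrative; the real feature is handled internally by QAnswer, and the per-part "answering" step here is a keyword-matching stand-in for an actual model call.

```python
# Illustrative sketch of the split-and-combine idea behind Enhance Context.

def split_into_parts(text: str, part_size: int) -> list[str]:
    """Break a document into word-based parts the model can read one at a time."""
    words = text.split()
    return [" ".join(words[i:i + part_size]) for i in range(0, len(words), part_size)]

def answer_over_large_document(document: str, question: str, part_size: int = 50) -> str:
    # 1. Break the document into smaller parts.
    parts = split_into_parts(document, part_size)
    # 2. "Answer" per part (stand-in for a model call: keep parts that
    #    mention the last keyword of the question).
    keyword = question.split()[-1].strip("?").lower()
    partial_answers = [p for p in parts if keyword in p.lower()]
    # 3. Combine the partial answers into one response.
    return " | ".join(partial_answers) if partial_answers else "No relevant part found."
```

The real feature replaces step 2 with an LLM call per part, then synthesizes the partial answers, but the overall flow is the same.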
Retriever Settings
In the retriever settings, you can adjust the parameters of the retriever that will be used to retrieve relevant documents for your AI assistant. It contains:
- the embedding model used (note: this can only be changed at creation time of the AI Assistant)
- the number of references passed to the LLM
- the synonyms used to retrieve more relevant documents
Synonyms
In the synonyms settings, you can define groups of synonyms that will be used to retrieve more relevant documents.
Click on + Add synonym group to add a new group of synonyms.
Enter a word in the input field and press Enter or click the + button to add it to the list of synonyms.
You can only add a new group if there are no existing groups, or if the last one has at least one synonym and the input field is empty.
You can delete a group by clicking the delete button.
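To illustrate how synonym groups can widen retrieval, here is a hypothetical sketch. The group contents and the function are examples only; QAnswer applies synonyms internally during retrieval.

```python
# Illustrative: synonym groups expand the set of terms used for retrieval.

SYNONYM_GROUPS = [
    {"dog", "hound", "pet"},              # hypothetical example group
    {"kids", "children", "offspring"},    # hypothetical example group
]

def expand_terms(query: str) -> set[str]:
    """Return the query terms plus all synonyms from any matching group."""
    terms = set(query.lower().split())
    for group in SYNONYM_GROUPS:
        if terms & group:      # the query mentions a word from this group
            terms |= group     # so every synonym in the group is searched too
    return terms

print(expand_terms("name of the dog"))
```

A query mentioning "dog" would then also match documents that only say "hound" or "pet".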
Document Chunking
How It Works
Document chunking controls how text is split before being embedded and indexed. It impacts retrieval quality and context windows. You have three main parameters:
Split by
Choose the unit used to cut the document:
- Words – Splits by a fixed number of words.
- Sentences – Splits by sentence boundaries.
- Pages – Splits based on original PDF/page breaks.
Split Length
Defines how big each chunk is, based on the selected unit.
- If split by Words: this = number of words per chunk.
- If split by Sentences: this = number of sentences per chunk.
- If split by Pages: this = number of pages per chunk (usually 1).
Split Overlap
Defines how much of the previous chunk is carried into the next chunk to preserve context.
- Example: Split length = 100 words, Overlap = 20 → each new chunk repeats 20 words from the previous one.
Example Scenarios
| Split By | Length | Overlap | Result |
|---|---|---|---|
| Words | 100 | 0 | Independent 100-word chunks |
| Words | 100 | 20 | Each chunk shares 20 words with the next |
| Sentences | 5 | 1 | Each chunk contains 5 sentences, last sentence overlaps |
| Pages | 1 | 0 | One full page per chunk, no overlap |
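The word-based scenarios above can be reproduced with a small sliding-window sketch. This is a hypothetical implementation for illustration, not QAnswer's actual chunking code.

```python
# Minimal sketch of word-based chunking with overlap, matching the table above.

def chunk_by_words(text: str, split_length: int, split_overlap: int) -> list[str]:
    words = text.split()
    step = split_length - split_overlap   # how far the window advances each time
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + split_length]))
        if start + split_length >= len(words):
            break  # last chunk reached; avoid emitting overlap-only tails
    return chunks

# Split length = 3, overlap = 1: each chunk repeats 1 word from the previous one.
print(chunk_by_words("one two three four five", 3, 1))
# → ['one two three', 'three four five']
```

With overlap = 0 the chunks are fully independent, matching the first row of the table.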
How to Test It
Step 1 – Upload a Simple Test Document
Use a small manual text like:
This is sentence one.
This is sentence two.
This is sentence three.
Step 2 – Set Chunking Options
Test multiple configurations:
- Words (Split length = 3, Overlap = 1) – Expect sliding windows of 3 words.
- Sentences (Split length = 1) – Each sentence becomes a separate chunk.
- Pages – Upload a multi-page PDF and set split by Pages.
Step 3 – Check the Indexed Chunks
In the Search Task, ask a question and check the top extracts:
- Does it retrieve the expected chunk?
- Does each chunk contain a single sentence?
Alternatively, click the info button on the assistant message, check the prompt, and then verify the #Documents part of the prompt.
Query Expansion
Query expansion is used to reformulate user queries so that they include the necessary context for retrieving the correct documents. This is especially important when users ask follow-up questions or when queries involve time references.
Example
When a user asks a follow-up question, the system expands the query to include the context from the previous question.
- First user query: When was the QA company founded?
- Follow-up user query: Who is the CEO?
- Expanded query: Who is the CEO of the QA company?
This ensures that the system correctly understands the entity being referred to and retrieves the relevant information.
By applying these rules, query expansion allows the system to interpret user intent more accurately and provide more relevant results.
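The follow-up example above can be sketched with a naive rule-based function. The real system uses an LLM for expansion; this illustration only shows the idea of carrying an entity from a previous turn into the new query.

```python
# Naive illustration of follow-up query expansion (the actual expansion
# is performed by an LLM; this rule-based version is a simplification).

def expand_followup(known_entities: list[str], question: str) -> str:
    """If the question mentions none of the entities from earlier turns,
    attach the most recent one."""
    if any(e.lower() in question.lower() for e in known_entities):
        return question  # the question already carries its own context
    return f"{question.rstrip('?')} of {known_entities[-1]}?"

# Context carried over from the first query: "the QA company"
print(expand_followup(["the QA company"], "Who is the CEO?"))
# → Who is the CEO of the QA company?
```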
Query Expansion Prompt
In most cases, the default prompt we provide will be sufficient to handle your use cases.
However, there may be situations where you want to adjust it for better precision.
For example, imagine a user asking:
- “What did Simone Biles do recently?”
If your corpus contains documents about Simone Biles’ activities spanning multiple years, the retriever may struggle to identify the most relevant documents from such a vague query.
A practical way to solve this problem is to adapt the query expansion prompt so that it explicitly includes the time reference (for example, the current date or a specific period).
“Give me the latest news of [time period] [date/date range].”
For example:
When a user asks for the latest or most recent news about a specific topic, expand the query to include the current date.
For example:
“Give me the latest news of today” → “Give me the latest news of today 25th June Wednesday 2025”
“Latest news about the AI sector” → “Latest news about the AI sector 25th June 2025”
“Most recent news about climate change” → “Most recent news about climate change 25th June 2025”
When a user asks for the news in a time reference (e.g., “Give me the news of yesterday,” “news of last week,” “news of last month,” “news of last year”), expand the query to explicitly include the date or date range, using the format:
“Give me the latest news of this week” → “Give me the latest news of the week 25th June 24th June 23rd June 22nd June 21st June 2025”
“Give me the latest news of this year” → “Give me the latest news of the year 2025”
“Give me the latest news of this month” → “Give me the latest news of the month June 2025”
"Who was the mayor of New York in first part of the year 2024" → "Who was the mayor of New York in first part of the year 2024 January February March April May June"
"Who was the mayor of New York in the second part of the year 2024" → "Who was the mayor of New York in the second part of the year 2024 July August September October November December"
For factual queries that are time-sensitive (e.g., “Who is the current Prime Minister of India”), append the current date to the query:
“Who is the current Prime Minister of India 25th June 2025”
For queries that are not time-dependent or do not reference a specific time period (e.g., “Who is Lebron James”), do not modify the query.
INCOMING QUESTION:
{{question}}
Current Date:
{{date}}
Respond ONLY with the replaced query.
REPLACED QUESTION:
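The date-expansion rules described above are normally carried out by the LLM through the prompt, but their effect can be sketched in code. The function below is illustrative only, covers a few of the time references, and uses a plain "25 June 2025" date format rather than the exact wording in the prompt examples.

```python
# Sketch of a few of the date-expansion rules above (illustrative; in
# QAnswer this logic lives in the query expansion prompt, run by the LLM).
from datetime import date, timedelta

def expand_time_reference(query: str, today: date) -> str:
    q = query.lower()
    if "today" in q:
        return f"{query} {today.strftime('%d %B %Y')}"
    if "this week" in q:
        days = [(today - timedelta(days=i)).strftime("%d %B") for i in range(5)]
        return f"{query} {' '.join(days)} {today.year}"
    if "this month" in q:
        return f"{query} {today.strftime('%B %Y')}"
    if "this year" in q:
        return f"{query} {today.year}"
    return query  # not time-dependent: leave the query unchanged

print(expand_time_reference("Give me the latest news of today", date(2025, 6, 25)))
# → Give me the latest news of today 25 June 2025
```

Queries without a time reference, like "Who is Lebron James", pass through unmodified, matching the last rule of the prompt.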
Advanced filters
You can choose to filter the documents used to answer your question by clicking on Advanced filters.
The AI models we use are evolving over time so the answers that you get might be different.
If you have questions or you need any assistance, please contact us!
Search Metadata
Faceted Search allows you to refine AI Assistant results with filters. Some filters are always available by default, while others come from the metadata you add to your data sources. This makes it easier to target the most relevant sources and quickly find accurate answers.
UI Presentation
To access Faceted Search:
- Go to AI Tasks and select Search.
- On the right panel, click Search Settings → Advanced Filters.
- Enable the filters you want to make available to users.
Types of filters available:
- Default filters (always included):
  - Data Source Name
  - Data Source Type
  - File Name
  - File Format
  - Uploaded at
- AI Assistant-specific filters (generated from your metadata):
  - Example: Date
  - Example: Year
  - Any other metadata fields you defined for your data sources
How it Works
When a user asks a question, they can refine results using these filters.
Default filters → always available, technical filters (Data Source Name, Data Source Type, File Name, File Format, Uploaded at).
AI Assistant-specific filters → come from metadata added to the data sources (e.g. Project, Document Type, Year). When filters are applied, only the matching data sources are included in the search. These filtered results are then used by the LLM to generate the answer.
Example with Metadata
Suppose you have three documents tagged with metadata:
Document 1
- Character → Homer Simpson
- Role → Father
- Year Introduced → 1989
Document 2
- Character → Marge Simpson
- Role → Mother
- Year Introduced → 1989
Document 3
- Character → Lisa Simpson
- Role → Daughter
- Year Introduced → 1992
Question: Which Simpsons family members were introduced in 1989?
Applied Filter: Year Introduced = 1989
Answer: Homer Simpson, Marge Simpson
Users can combine multiple filters (e.g., Data Source Type + Character + Year Introduced) for very precise control over search results.
If no documents match the selected filters, the assistant will not return any answers. Make sure your metadata is consistent across files.
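The example above can be sketched as a simple metadata match over the three documents. The data structures and the function are hypothetical illustrations; QAnswer applies filters server-side before the LLM sees the results.

```python
# Illustrative metadata filtering over the three example documents above.

documents = [
    {"Character": "Homer Simpson", "Role": "Father",   "Year Introduced": 1989},
    {"Character": "Marge Simpson", "Role": "Mother",   "Year Introduced": 1989},
    {"Character": "Lisa Simpson",  "Role": "Daughter", "Year Introduced": 1992},
]

def apply_filters(docs: list[dict], **criteria) -> list[dict]:
    """Keep only documents whose metadata matches every criterion."""
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

matches = apply_filters(documents, **{"Year Introduced": 1989})
print([d["Character"] for d in matches])
# → ['Homer Simpson', 'Marge Simpson']
```

Combining criteria (e.g. Role and Year Introduced) narrows the result set further, which mirrors combining multiple Advanced Filters in the UI.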
Auto Date Filtering
Auto Date Filtering allows the AI Assistant to detect dates in a user’s question and automatically apply them as filters.
UI Presentation
To use Auto Date Filtering:
- Go to AI Tasks and select Search.
- Ask a question that includes a specific date, month, or year.
- The system will automatically apply the detected date in the Advanced Filters panel.
How it Works
When a question contains a date reference, the assistant interprets it and applies the matching date filter.
Example:
- Question: What are all episodes of The Simpsons that were aired in 2025?
- Auto Filter Applied: Year = 2025
- Answer:
The Past and the Furious — February 12, 2025
The Flandshees of Innersimpson — March 30, 2025
The Last Man Expanding — April 6, 2025
P.S. I Hate You — April 13, 2025
Yellow Planet — April 22, 2025
Abe League of Their Moe — April 27, 2025
Stew Lies — May 4, 2025
Full Heart, Empty Pool — May 11, 2025
Estranger Things — May 18, 2025
Auto Date Filtering can be combined with other filters (e.g., File Format or Data Source Type) for more refined results.
Data Source Refresh & Last Update
This functionality allows you to keep your data sources up to date automatically. You can schedule refreshes and display the last update date directly in the assistant’s answers.
UI Presentation
To enable Data Source Refresh:
- Go to Add Data Source and connect your source.
- In the Update Websites menu, choose the refresh frequency: Now, Daily, Weekly, Monthly, or Never.
- In Chat Settings, enable Show Last Update date in sources.
How it Works
- Data sources are refreshed according to the frequency you selected.
- When “Show Last Update date in sources” is enabled, the assistant adds the cut-off date to its answers.
- If the data is refreshed later, answers may change to reflect the new content.
Example
- Data Source: Simpsons Wiki — List of Episodes
- Refresh Frequency: Weekly
- Question: What episodes are airing in the first week of October 2025?
- Answer (before refresh):
  Keep Chalm and Gary On — October 5, 2025
  (Data until 03 Oct 2025)