Search

Video Tutorial
Watch on Tutorials page →
The Search section lets you ask questions to the AI Assistant.
UI Presentation
To access this AI task, click AI Tasks and then Search:
For "The Simpsons" AI Assistant, you can ask questions like:
- Question 1: Who are the children of The Simpsons?
- Question 2: What is the name of The Simpsons' dog?
Answer
The card displays a synthesized answer based on all documents QAnswer found for the question. Answers are generative, so wording may vary while the information remains consistent.
To help improve response quality, you can give feedback by clicking the thumbs-up or thumbs-down icon.
You can also edit the generated answer by clicking the edit icon.
Documents
The "Top extracts" section shows the most relevant documents QAnswer found for the question. These documents are used to generate the answer shown in the answer card.
You can see the exact information in the original sources, as shown below:
Task Settings
Customize the Search AI Assistant by clicking the settings panel icon:
- Prompt settings: define the assistant's personality; follow the example in the default settings.
- LLM settings: choose the LLM, answer length, creativity level, and answer speed.
- Retriever settings: choose the reference level and add synonyms.
- Advanced Filters: let users filter documents by specific metadata.
Prompt Settings
Define the assistant's personality using variables such as {{bot_name}} or {{bot_answer_length}} to personalize responses.
Press / to see the list of available variables.
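The substitution step can be pictured as simple template rendering. This sketch is an assumption for illustration; only the variable names {{bot_name}} and {{bot_answer_length}} come from the settings described above.

```python
import re

# Hypothetical rendering of {{variable}} placeholders before the
# prompt is sent to the LLM; unknown variables are left untouched.
def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value, if known."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = "You are {{bot_name}}. Keep answers under {{bot_answer_length}} words."
print(render_prompt(prompt, {"bot_name": "SimpsonsBot", "bot_answer_length": 50}))
```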
LLM Settings
In the LLM settings, adjust the parameters of the LLM powering the AI assistant:
- whether to enhance the context for this AI assistant
- the LLM model used
- the context window in tokens
- whether to show or hide the last update date in sources
- the maximum answer length
- the creativity level (temperature)
- the answer speed
- Context window (tokens): how much text (conversation + documents) the model can consider at once. If your input exceeds this, earlier content may be dropped.
- Maximum response length: cap on tokens the model returns; prevents overly long outputs.
Practical tip: use low temperature for factual extraction, increase context window to include long documents, and set a max response length to control output size and cost.
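The context-window behaviour described above can be sketched at the token level. This is a simplification for illustration, and the settings field names at the end are assumptions, not QAnswer's API:

```python
# When the input exceeds the context window, the earliest tokens are
# dropped so the most recent conversation and documents survive.
def fit_context(tokens: list[str], window_size: int) -> list[str]:
    """Keep only the most recent tokens that fit in the window."""
    if len(tokens) <= window_size:
        return tokens
    return tokens[-window_size:]

# Illustrative settings following the practical tip above
# (hypothetical field names):
factual_settings = {
    "temperature": 0.1,             # low creativity for factual extraction
    "context_window_tokens": 8192,  # room for long documents
    "max_response_tokens": 300,     # bound output size and cost
}
```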
Enhance Context
This feature is controlled by two settings:
- Per-user access (controlled by admins via user table)
- Per-assistant access (in the settings panel)
When enabled in Admin Panel settings, the toggle appears in the Search and Chat AI Task interfaces and in the chatbot input box when the AI assistant is tagged.
The feature is inactive by default. When activated, the assistant processes large documents more effectively by splitting them into smaller parts, processing each part, and combining the results. This ensures the whole document is considered when answering, even for large files.
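The split-process-combine pattern described above can be sketched as follows. This is an illustration, not QAnswer's implementation; `ask_llm` and `combine` are hypothetical stand-ins for the real model calls.

```python
def split_into_parts(document: str, part_size: int) -> list[str]:
    """Cut a large document into fixed-size parts."""
    return [document[i:i + part_size]
            for i in range(0, len(document), part_size)]

def answer_over_large_document(document, question, part_size, ask_llm, combine):
    # Process each part independently, then merge the partial results
    # so the whole document is considered in the final answer.
    partial = [ask_llm(part, question)
               for part in split_into_parts(document, part_size)]
    return combine(partial, question)
```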
Retriever Settings
In the retriever settings, adjust the parameters used to retrieve relevant documents:
- the embedding model used (note that changing it will completely reindex your AI assistant)
- the number of references passed to the LLM
- the synonyms used to retrieve more relevant documents
Synonyms
Define groups of synonyms to retrieve more relevant documents.
- Click + Add synonym group to add a new group.
- Enter a word and press Enter or click + to add it to the group.
- A new group can be added only when there are no existing groups, or when the last group contains at least one synonym and its input field is empty.
- Click the trash icon to delete a group.
- Click Bulk Upload to upload synonyms in bulk. Use Download the CSV example to see the expected format, or Choose File to upload a CSV.
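One way to picture how synonym groups broaden retrieval: any query term found in a group is expanded to the whole group. The data structure and expansion rule below are assumptions for illustration, not QAnswer's internals.

```python
# Hypothetical synonym groups, as configured in the settings panel.
SYNONYM_GROUPS = [
    {"dog", "hound", "pet"},
    {"kids", "children"},
]

def expand_terms(query_terms: list[str]) -> set[str]:
    """Add every synonym of a query term to the retrieval terms."""
    expanded = set(query_terms)
    for term in query_terms:
        for group in SYNONYM_GROUPS:
            if term in group:
                expanded |= group
    return expanded

print(expand_terms(["dog", "name"]))
```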
Document Chunking
Document chunking controls how text is split before being embedded and indexed. It impacts retrieval quality and context windows. You have three main parameters:
Split by — Choose the unit used to cut the document: Words, Sentences, or Pages.
Split Length — Defines how big each chunk is, based on the selected unit.
Split Overlap — Defines how much of the previous chunk is carried into the next chunk to preserve context. Example: Split length = 100 words, Overlap = 20 → each new chunk repeats 20 words from the previous one.
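The three parameters above can be sketched for word-based splitting. This is an illustrative simplification of the chunking behaviour, not QAnswer's implementation:

```python
def chunk_words(text: str, split_length: int, split_overlap: int) -> list[str]:
    """Split text into word chunks; each chunk repeats the last
    `split_overlap` words of the previous one to preserve context."""
    if split_overlap >= split_length:
        raise ValueError("overlap must be smaller than split length")
    words = text.split()
    step = split_length - split_overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + split_length]))
        if start + split_length >= len(words):
            break
    return chunks
```

For example, with a split length of 4 words and an overlap of 2, the second chunk begins with the last two words of the first.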
Query Expansion
Query expansion is used to reformulate user queries so that they include the necessary context for retrieving the correct documents. This is especially important when users ask follow-up questions or when queries involve time references.
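A toy heuristic can illustrate the idea; the real reformulation is model-driven and far richer, so the rule below is purely an assumption for illustration:

```python
# Follow-up questions inherit the topic of the previous question.
def expand_query(history: list[str], query: str) -> str:
    """Attach prior context to a detected follow-up question."""
    if history and query.lower().startswith(("and ", "what about ")):
        return f"{query} (in the context of: {history[-1]})"
    return query
```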
Advanced filters
Filter the documents used to answer questions by clicking Advanced filters.
Search Metadata
To access Faceted Search: go to AI Tasks → Search, open Search Settings → Advanced Filters, and enable the desired filters.
Auto Date Filtering
When a question contains a date reference, the assistant interprets it and applies the corresponding date filter automatically.
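A minimal sketch of this behaviour, assuming the simplest case of an explicit year in the question; QAnswer's actual date interpretation is richer, and the filter shape below is hypothetical:

```python
import re
from datetime import date

def auto_date_filter(question: str):
    """Detect a four-digit year and turn it into a date-range filter."""
    match = re.search(r"\b(19|20)\d{2}\b", question)
    if not match:
        return None
    year = int(match.group(0))
    return {"from": date(year, 1, 1), "to": date(year, 12, 31)}

print(auto_date_filter("What happened in 2023?"))
```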