


FAQ Dataset

Your FAQ Dataset is the base for answering your users' questions. The dataset is divided into categories, each with a unique answer and a set of questions attached to it.

To help you with this important step, we have pre-filled your account with a Starter Set. It is a collection of more than 3,000 HR-related questions in 68 categories. These categories were built to reflect patterns in 130,000 questions asked by real candidates. The Starter Set has been anonymized and is available in English and German. You can read more about what this process looked like here.

You will need to take a look at this Starter Set and adjust it for your specific needs. Please consider:

  • Which categories are not suited for your company? 

  • Which categories might be missing but are important for your company? 

Simply go to the Category view in your SmartPal dashboard to see the whole Starter Set. Go through all categories and evaluate whether they are useful for your case. If not, simply delete the category. If you feel that common topics unique to your business are missing, you can add them with the blue New Category button. You will usually find those topics in your current FAQ page, user-facing inbox, previous chat solutions, phone logs, social media channels (e.g., Facebook Career Page), and so on. It helps to involve the department(s) that already deal with answering user queries on a daily basis.

For the categories that you have decided to keep, please do not forget to add your answers since the Starter Set has only placeholders.

This is how the process usually looks:

  1. You will get access to your account in SmartPal's dashboard with the Starter Set, in your selected language, already in it.

  2. Adjust the Starter Set:

    1. Delete irrelevant categories.

    2. Add your own categories with answers and at least 15 questions. Read more about how to create good categories here.

  3. Replace placeholder answers in Starter Set categories.

  4. Data Review (dataset performance check performed by SmartPal).

  5. Data Review implementation (depending on the support level, done by SmartPal, or your team).

Good Categories

The FAQ dataset is divided into categories, each with a set of questions attached to it. Our NLP engine compares every incoming question with all queries available in the dataset. When similarity is detected, the incoming question is matched to the appropriate category and the category’s answer is given to the user by the chatbot.

For this process to function as well as possible it is important to have a clean FAQ dataset with well-defined categories. The categories and questions within the set should not overlap. This means very similar questions should not belong to different categories.

How to create good categories?

  • Do not create overlaps.
    The questions in two different categories should be clearly distinct from one another. This means avoiding the same question being located in two different categories.

  • Enrich your categories.
    If possible, add at least 15 questions to each category to avoid weak, poorly performing categories.

  • Have fewer categories with more questions in them.
    It is useful to combine all questions for a topic into one category with a well-rounded answer, instead of spreading one topic across several smaller categories. For example: combine questions about parking into one Parking category (instead of having three categories: Parking allowed, Parking costs, Parking limit).


Keep in mind that with bigger topics it can make sense to split them into several categories, for example:

  • Benefits: Can be divided into Benefits Health insurance, Benefits general, Benefits internship etc.

  • Salary: Can be divided into Salary general, Salary internship, Salary dual studies etc.

It is important in this case to have the category-defining keywords (like internship, insurance, etc.) in the questions. That way the categories will stay clean and distinct, as in the examples below:

  • “Do you offer healthcare” → Benefits Health insurance category

  • “What benefits do you have” → Benefits general category

  • “What benefits will I have as an intern?” → Benefits internship category
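The role of category-defining keywords can be illustrated with a deliberately simplified matcher. This is only a sketch: the real NLP engine computes similarity over all trained questions rather than doing plain keyword lookups, and the keyword lists here are hypothetical:

```javascript
// Hypothetical keyword lists; the real engine learns from the trained questions.
// Order matters in this toy version: more specific categories come first.
const categoryKeywords = {
  'Benefits Health insurance': ['healthcare', 'insurance'],
  'Benefits internship': ['intern', 'internship'],
  'Benefits general': ['benefit', 'benefits']
};

// Pick the first category whose keywords appear in the question.
function matchCategory(question) {
  const text = question.toLowerCase();
  for (const [category, keywords] of Object.entries(categoryKeywords)) {
    if (keywords.some(function (kw) { return text.includes(kw); })) {
      return category;
    }
  }
  return null; // no match: the question would land in the Training view
}
```

With keyword-bearing questions, "Do you offer healthcare" routes to Benefits Health insurance, while "What benefits do you have" falls through to Benefits general; without such keywords the categories would blur together.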

Also keep in mind that for job-specific answers you can use contexts, so you do not need to create separate categories.

Good Answers

The answers to the incoming questions are defined in every category. You can set up two different types of categories. In the case of context-independent categories, you will need to create just one answer for your category. In the case of context-dependent categories, it is recommended to provide answers for all your contexts to make sure that your user will get a response from the chatbot. Read more about context-dependency here.

When writing your answers there are a few things to keep in mind:

  • Keep them short

The answers should not be too long: ideally 300 to 600 characters. Longer answers do not make for the best user experience, because messaging platforms will split them into several pieces or force your user to scroll to see the whole message. You can add a link (please use full links starting with https:// or http://) to redirect the user for more information if necessary, or create a content flow around that topic.


  • Do not start with “Yes!”, “Unfortunately” or similar

The answers should not start with “Yes!”, “Unfortunately” or similar, because not all questions that users ask call for a “Yes” or “No” response from the chatbot, and such answers would make for poor UX.

  • Be informative

The answers should be informative and not consist only of links redirecting users to other content. Of course you can include links leading the user to more information if necessary, but a link should never be the only part of the answer.

  • Cover all questions 

It is important that all questions in a category are covered by the answer; otherwise it could happen that, even though the correct category for an incoming user question was detected, the answer does not fit or the information given is insufficient.

  • Other things to keep in mind:

    • All links in the answers should start with http:// or https:// otherwise they will not be clickable in the chat window later on.

    • Feel free to be less formal, use emojis and exclamation marks to give your chatbot a character.

    • Do not use bullet points because they are not rendered properly in the messaging platforms. 
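The guidelines above can be folded into a quick sanity check. The sketch below is a hypothetical helper (not part of the dashboard) that flags the most common issues in a draft answer:

```javascript
// Return a list of warnings for a draft answer, based on the guidelines:
// 300-600 characters, no "Yes"/"Unfortunately" openers, full http(s) links,
// no bullet points, and not link-only.
function validateAnswer(answer) {
  const warnings = [];
  const text = answer.trim();
  if (text.length < 300 || text.length > 600) {
    warnings.push('Ideal length is 300 to 600 characters.');
  }
  if (/^(yes|unfortunately)/i.test(text)) {
    warnings.push('Do not start with "Yes" or "Unfortunately".');
  }
  if (/\bwww\.[^\s]+/.test(text) && !/https?:\/\//.test(text)) {
    warnings.push('Links must start with http:// or https:// to be clickable.');
  }
  if (/^[-*•]/m.test(text)) {
    warnings.push('Avoid bullet points; they are not rendered properly.');
  }
  if (/^https?:\/\/[^\s]+$/.test(text)) {
    warnings.push('An answer should not consist of a link only.');
  }
  return warnings;
}
```

A draft like "Unfortunately we do not offer parking." would be flagged twice: it is too short and it opens with "Unfortunately".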

Common questions:

  1. Can I create hyperlinks?
    Unfortunately, it is not possible. Please use full links (starting with http:// or https://) instead to make sure that they are clickable for the user. You can of course use link shorteners to provide better UX in the case of long links.

  2. Can I make text bold, italic, or apply any other formatting to it?
    Unfortunately, it is not possible.

  3. Can I add emojis to my answers? How to do it?
    Yes, you can! Please copy and paste your emoji from any emoji database available online.

  4. Can I add images or videos to my answers?
    Unfortunately, it is not possible. If your chosen platform supports it, you can direct your questions into the Content Flow that can handle images and videos. 

  5. Can I provide an answer with follow-up buttons or quick replies?
    You cannot do it in your dataset but you can, with support from your Implementation Manager, create a category that will lead to the Content Flow block. That block can contain buttons or quick replies that will lead to more content.

  6. How can I create a fallback answer in my context-dependent categories? 
    Unfortunately, it is not possible. Please make sure that all possible contexts have an answer before going live. The general answer will only be given when a question is not asked in the context of a job.

Good Questions

Great chatbot performance depends on the questions you create and approve in your dataset. They are the base on which the chatbot calculates similarity and provides automatic responses to other, incoming questions. Here are a few tips on what makes a good (and bad) question:

  • Questions should be short & precise

Overly long questions might confuse the NLP engine when picking the correct category, since there might be many keywords and irrelevant topics in the query.

Short and precise questions allow the NLP engine to better process the meaning behind them and, in the future, provide better responses to incoming questions.

  • Questions should not be too general

Overly general questions might confuse the NLP engine when picking the correct category, since the category-defining keywords are missing. A general question could belong to several categories, so please avoid adding such questions to any category.

  • Questions should not be just single words or commands

Single words or commands are not questions, and since a particular word can belong to several categories, it should never be trained into just one: doing so gives that category too much “power” and may lead to it overpowering other, similar categories.

  • Questions should be unique for your category

The exact same or very similar questions trained into different categories may cause confusion, meaning the chatbot will not be able to decide where similar questions should belong. As a result, it may start providing wrong answers to new incoming questions. If you notice such an overlap, review the categories and shift similar questions into one category.

  • Questions should not contain personal information

Adding personal information will not only confuse the chatbot, it is also against privacy guidelines. Please always avoid adding questions with any kind of personal information such as names, contact information, addresses, payment information, and so on.

  • Questions should be in the chatbot’s language

Questions in languages foreign to your chatbot should not be trained or approved, since every dataset works within a single-language framework (meaning a chatbot can fully understand just one selected language). The chatbot has limited knowledge of other languages, which is why you may sometimes see it providing correct categories for foreign questions. But in no case should you approve those questions into your dataset, since doing so may have a really bad impact on the chatbot’s performance.

  • Questions should not be small talk or insults

Small talk questions and insults will be handled before they arrive at your dashboard, so creating small talk categories and adding small talk questions is unnecessary. We currently handle several small talk categories; you can read more about them here.
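Several of these rules are mechanical enough to sketch as a checklist. This is a hypothetical helper; personal-information detection in particular is only hinted at with simple email and phone patterns:

```javascript
// Flag questions that break the basic rules: too long, a single word or
// command, or containing obvious personal information (email, phone number).
function checkQuestion(question) {
  const issues = [];
  const text = question.trim();
  const words = text.split(/\s+/).filter(Boolean);
  if (words.length <= 1) {
    issues.push('Single words or commands are not questions.');
  }
  if (words.length > 20) {
    issues.push('Too long; keep questions short and precise.');
  }
  if (/[\w.+-]+@[\w-]+\.[\w.]+/.test(text) || /\+?\d[\d\s-]{7,}\d/.test(text)) {
    issues.push('Possible personal information; never train such questions.');
  }
  return issues;
}
```

A bare command like "internship" or a question containing an email address would be flagged, while "What benefits do I get?" passes cleanly.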


Training incoming questions

As a Chatbot Trainer, you will spend most of your time in the Training section of your dashboard. Here you will see all incoming questions (with few exceptions, like small talks). 

Incoming questions are a great opportunity to train your chatbot and possibly avoid mistakes in category detection for future similar questions! For that purpose we have built a Training decision tree to help you with your decision-making process:




When managing your dataset follow these steps to create a well-performing chatbot:

  1. Does it look like a relevant and good question?

    1. If No, provide a direct answer or delete it.
      Examples: commands like “internship”, small talks like “what is your name?”, emojis, question marks, etc. should be deleted. Valid questions with personal information should receive a direct answer.

    2. If Yes, move to the next step.

  2. Does the incoming question fit into an existing category?

    1. If Yes, then assign it to the fitting category.

    2. If No, then move to the next step.

  3. Is it likely that a similar question will appear again?

    1. If No, provide a direct answer.

    2. If Yes, create a new category and answer for it. Do not forget to enrich that newly created category with at least 14 other questions.
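The decision tree above maps directly onto a small function. This sketch is hypothetical (the dashboard guides you through these decisions interactively) and uses plain booleans for each question in the flow:

```javascript
// Encode the Training decision tree: each argument answers one question
// from the flow above; the return value is the recommended action.
function trainingDecision(isRelevantQuestion, fitsExistingCategory, willAppearAgain) {
  if (!isRelevantQuestion) {
    return 'direct answer or delete';
  }
  if (fitsExistingCategory) {
    return 'assign to the fitting category';
  }
  if (!willAppearAgain) {
    return 'provide a direct answer';
  }
  // New category: remember to enrich it with at least 14 other questions.
  return 'create a new category and answer';
}
```

For example, a relevant question that fits no existing category but is likely to recur leads to creating a new category.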

Small talk feature

Small talk is a special feature we have developed to handle common user requests. We sift through the incoming questions and, if they belong to one of the small talk categories, provide answers for them automatically (without intervention from the Chatbot Trainer). For that reason, those answers are not available in your dataset. This way we reduce the time you and your team spend approving unnecessary questions like “How are you?“ and heavily limit the visibility of insults that the chatbot may receive. You can see how this engine works here:



Overview of currently handled small talks:

  • Chatbot Purpose: How can you help me? What are you for? What can you do?

  • About Chatbot: Are you a robot? Are you human? What is a chatbot?

  • How are you: How are you? What is up?

  • About your Company: What do you do? What is the company about?

  • Hello: Hello! Hi! Good morning!

  • Thank you: Thank you! Thanks!

  • Stop: Stop, Please stop, Do not talk to me + Insults

  • Goodbye: Goodbye, Ciao, Bye-bye, etc.

During the implementation stage, your Implementation Consultant will provide you with a Configuration Workbook where you will be able to review all default small talk responses and adjust them if necessary. Once you finalize the copy, we will implement it for you. Keep in mind that you will not be able to access those responses later on; only your account manager can adjust them for you.


Here are some common questions we have received around small talk:

  1. Can I create a few variations of the response? (so chatbot does not just repeat the same answer?)
    Yes! You can provide up to 5 variations for each small talk (and for certain small talks like “Hello“, we highly recommend that).

  2. Which small talk is used the most by the users?
    The clear winner is “Hello“ with 74% usage, the second one is About your Company with 9% usage. All the other small talks have 7% or less usage.

  3. How much small talk is used?
    4% of all user queries end up as small talk. You can read more about it in our article.

  4. Should I train small talks in my dataset?
    If small talk is covered by our feature please do not train those questions in your dataset.

If you see small talk that should be covered by one of the topics listed appearing in your Training view, do inform your account manager (please provide a screenshot if possible).


In a chatbot conversation, like in real life, the ideal answer to a particular question can depend on the situation. Take the “How big is the team?” question as an example - the answer is probably different if it was asked related to a designer position, compared to a sales position. In order to give you the flexibility to train your chatbot to properly answer such questions, we created contexts. 

Turning a category into a context dependent category will make the chatbot give different answers to the same question depending on the context of the conversation. A typical context is a job position or a group of job positions. The context is set when the candidate selects the job position (typically in the job carousel) and asks a position related question.



Setting up a context dependent category

You can make a category context dependent by checking the checkbox upon category creation on the FAQ page on the dashboard. More information on the process here.

Also, an existing category can be made context dependent by checking the checkbox in the “category core data” section on the “category” tab of the FAQ page on the dashboard. More information on the process here.

How to handle context specific questions?

Incoming questions are displayed in the list view of the training tab on the FAQ page. If a question has a context set, it’s displayed in the context column:


Clicking on such a question will bring up the question approval modal with an option to set up a context dependent answer:


Whenever a context is set and the question belongs to a context dependent category, the context specific answer will be given. If no such response exists yet, the chatbot won't fall back to the category's general answer: you'll have to respond to the question yourself, either by supplying the answer when training or via a direct answer. In the case of context independent categories, the chatbot will always respond with the general answer in every context.
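The answer-selection rule can be summarised in one function. This is a sketch with a hypothetical category shape (`contextAnswers` maps context names to answers); the actual resolution happens inside the chatbot:

```javascript
// Resolve the answer for a matched category, given the conversation context
// (e.g. a job position) or null when no context is set.
// Context independent categories always use the general answer; context
// dependent ones never fall back to it when a context is set.
function resolveAnswer(category, context) {
  if (!category.contextDependent) {
    return category.generalAnswer;
  }
  if (context === null) {
    // The general answer is only used when no job context is set.
    return category.generalAnswer;
  }
  // No fallback: a missing context answer means the trainer must respond.
  return category.contextAnswers[context] || null;
}
```

So a "How big is the team?" question asked in a "designer" context returns the designer-specific answer, while the same question in an uncovered context returns nothing and lands in the Training view.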

Delayed answers from the chatbot

When a question was not answered automatically by the chatbot, the user will receive a message from the chatbot that it does not have an answer to this question yet (check FAQ flow for more information). Dashboard account holders who enabled notifications in their Profile will immediately get an email that there is a pending question in the dashboard.

Once you train the question (attach a category to it) or provide a direct answer for it, your user will receive a response in a matter of minutes (provided they have already been inactive for some time, meaning they are not actively interacting with the chatbot).


Delayed answers on different platforms

Depending on the chosen platform, your users may need to go back to the page where the chatbot is located. Platforms usually have a time limit for automatic interactions, meaning that after a certain inactivity time they will not allow the chatbot to pass the response to the user. Please see below an overview of those limitations:


  • WebMessenger: The user has to come back to the website where the chatbot is located.

  • WhatsApp: Response provided in the WhatsApp app.

  • Facebook Messenger: The user will get a response in the Messenger app (if they were logged in to their Facebook account while chatting) or when they are back on your website (if they accessed the chatbot as a “Guest“).

  • WeChat: Response provided in the WeChat app.

For a full overview of our main platforms go here:

Top Platforms Overview: WhatsApp, WebMessenger, Facebook, WeChat & SMS

When will the user not get an answer back?

There are a few scenarios in which, even though you have provided a delayed response, your user will not get an answer back:

  1. There was already an automatic answer provided to that question (chatbot does not correct itself).

  2. The platform time limit was exceeded and the chatbot has sent the message but the platform did not accept it.

Common questions:

  1. Why will the chatbot not correct itself, and why should I approve those questions if it cannot?
    If the chatbot has already provided an automatic response, we do not allow it to correct itself; even if you assign a different category to the question or answer it directly, this will not have an effect on the user. We have done so to make sure that changes to already trained questions do not have any ill effect on the UX (imagine getting several corrections to the same question, or a correction to a question you asked 2 months ago that was just now shifted to a new category). Why is approving those questions into new categories still important? All future similar questions will get the correct response; this way you are teaching the chatbot how to behave from now on.

  2. Why is the user not getting the delayed response I have just sent/approved right away?
    This is especially noticeable during the testing stage. Responses are not sent immediately. Once the user becomes inactive, it may take a few minutes (up to 20) of idle time for the chatbot to send a delayed response. This is done so as not to interrupt the user’s actual interaction with the chatbot: we first wait until the conversation ends and only then send the response.

If after 30 minutes of idle time you still have not received a message from our dashboard please inform your account manager!

Go Live and Beyond 

Catching and tracking events

In the case of chatbots running on the WebMessenger platform, events occurring before, during, and at the end of a chatbot conversation can be caught and used for various purposes; typical purposes include debugging and tracking.

Setting up event catching

Events can be caught by adding handlers to them in the chatbot integration code snippet. Make sure that event handlers are added before calling JobPal.init.

To bind an event, use JobPal.on('<event name>', handler);. To unbind events, you can either call'<event name>', handler) to remove one specific handler, call'<event name>') to remove all handlers for an event, or call to unbind all handlers.

Events to catch


ready

// This event triggers when init completes successfully.

JobPal.on('ready', function(){

    console.log('the init has completed!');


JobPal.init(...).then(function() {

    // init also returns a promise, so you can alternatively specify a .then() callback


destroy

// This event triggers when the widget is destroyed.

JobPal.on('destroy', function(){

    console.log('the widget is destroyed!');


message received

// This event triggers when the user receives a message

JobPal.on('message:received', function(message) {

    console.log('the user received a message', message);




message sent

// This event triggers when the user sends a message

JobPal.on('message:sent', function(message) {

    console.log('the user sent a message', message);





message

// This event triggers when a message was added to the conversation

JobPal.on('message', function(message) {

    console.log('a message was added to the conversation', message);




unreadCount change

// This event triggers when the number of unread messages changes

JobPal.on('unreadCount', function(unreadCount) {

    console.log('the number of unread messages was updated', unreadCount);




chatbot widget opened

// This event triggers when the chatbot widget is opened

JobPal.on('widget:opened', function() {

    console.log('Widget is opened!');




chatbot widget closed

// This event triggers when the chatbot widget is closed

JobPal.on('widget:closed', function() {

    console.log('Widget is closed!');



Tracking user/chatbot interaction in Google Analytics

A typical use case of event handling is to track user actions in external tracking systems; for example, in Google Analytics.

The code snippet below will record a “MessageSent” event action in Google Analytics whenever an enduser sends a message to the chatbot:

// Google Analytics tracking of messages sent by endusers

JobPal.on('message:sent', function(message){

    ga('send', 'event', 'Chatbot', 'MessageSent');



In this example, “Chatbot” is the event category, and “MessageSent” is the event action. Additionally you might add parameters for event label and event value. You can learn more about event measurement in Google Analytics here.

Please don’t forget to add these lines before invoking the JobPal.init function!

The setup above will allow you to compare behavioural differences (like conversion rates) in Google Analytics between endusers interacting with the chatbot, and endusers not using the chatbot.
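Extending the same pattern, you can wire several widget events to Google Analytics at once. `bindAnalytics` below is a hypothetical helper: it takes any object with a JobPal-style `on` method and any `ga`-style tracking function, so the wiring can be exercised without the real widget:

```javascript
// Wire chatbot events to an analytics callback. In production you would
// call bindAnalytics(JobPal, ga) before invoking JobPal.init; here the
// emitter and tracker are injected so the wiring can be tested in isolation.
function bindAnalytics(chat, track) {
  chat.on('message:sent', function () {
    track('send', 'event', 'Chatbot', 'MessageSent');
  });
  chat.on('widget:opened', function () {
    track('send', 'event', 'Chatbot', 'WidgetOpened');
  });
  chat.on('widget:closed', function () {
    track('send', 'event', 'Chatbot', 'WidgetClosed');
  });
}
```

The event actions "WidgetOpened" and "WidgetClosed" are illustrative names; choose whatever naming fits your Google Analytics reporting.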

Dashboard overview

The SmartPal dashboard is the central monitoring and management interface.

The dashboard has two main purposes:

  • to provide the tools to train the FAQ dataset of your chatbot(s)

  • to provide access to data on how end-users interact with your chatbot(s)

Components of the dashboard

Main pages of the dashboard:

  • FAQ - Dataset training and dataset management. 

  • Job Openings - Job openings management. In case of an ATS connection, job openings are automatically fetched from your ATS. 

  • Analytics - Statistical data on candidates interacting with your chatbot(s) 

  • Company - Dashboard access management: granting / revoking dashboard access to / from users. 

  • Profile - User profile management, setting the password, managing notification rules, and so on. 

  • Logout - Logs you out from the dashboard


Managing the dataset

Before reading this part, consider familiarizing yourself with the basic terminology.

Dataset management takes place on the FAQ page of the dashboard, which has two tabs (Training and Categories) and a button (Try it now).

Training tab

On this tab you can:

  • see an overview of automatically responded incoming questions, and approve or change these automated category classifications

  • check and answer the questions that weren’t automatically answered.


The question list view shows all incoming questions sent by candidates.

When a question is asked, the system tries to pair it with a category in the FAQ dataset. If this categorisation is successful, an automated category- (and context-) dependent response is given and the category field in the question list view is filled out. In case of an unsuccessful categorisation, the chatbot is not able to respond to the candidate and requires your assistance (and the category field is left empty).

In case of questions that got matched to a category (those where the category field is not empty) you have the following options:

  • If you agree with the category matching, you can approve the system’s decision simply by moving your mouse over the question list item and clicking on the checkmark icon.

  • If you disagree with the category matching, click on the question list item and assign the question to a different category in the pop-up window that appears. Note: the candidate who asked the question won’t receive the new category’s answer, but in the future similar (or the same) questions will be matched with your selected category.

In case the question was associated with a context dependent category and there’s no answer in the category for the specific context (as there were no incoming questions in that context in the past), the category’s general answer won’t be sent to the candidate (there’s no fallback). In such a case the candidate will receive an answer if:

  • You add a context specific answer when approving the question

  • You approve the question into a context dependent category that has a context answer for the question’s context

  • You approve the question into a context independent category

  • You give a direct answer

In case a question wasn’t matched to any of the categories, click on the particular question in the list and select one of these options:

  • Assign the question to a category - in this case a category (and context) dependent answer will be sent to the candidate. The question will be added to the selected category, training the system to classify similar (or the same) questions into the selected category in the future.

  • Give a direct answer - only the candidate who asked the question will receive your response and the question won’t be added to any category. 

Questions that got approved to a category will be removed from the default “unapproved” list of questions and moved to the “approved” list. You can switch the display of “approved” or “unapproved” questions using the main filter in the upper left corner, next to the search box.

Categories tab

The Categories tab is for managing categories, adding new questions to them, and moving questions between categories. This tab has three or four sections:

  • On the left side you can select one from the list of categories of the dataset

  • On the top of the right side you find the core information of the selected category

  • If the selected category is marked as context dependent, you’ll find the “Context Answers” section on the right side

  • On the bottom of the right side you find the list of questions belonging to the selected category

Category list


On the top you can filter for context dependent or context independent categories.

Use the search field to quickly find categories: the entered search criteria will be matched against category names, category answers and questions.

Press the “new category” button to create a new category.

Category core data


You can set the key information on the selected category:

  • name

  • context dependency

  • general answer

Also, you can delete a category if it’s not needed anymore. Note: questions belonging to the deleted category will be removed from the dataset!

Context answers

This section is shown only if the selected category is context dependent.

The section is expandable: if collapsed, only those contexts that already have an answer are shown. If expanded, all contexts are shown.

If the category contains a question that was asked in a specific context, there must be an associated answer. Because of this rule, not all context answers can be deleted: only those belonging to a context that has no question in the category.

Please remember: if a specific context doesn’t have an answer (as there were no incoming questions in that context in the past), the category’s general answer won’t be sent to the candidate (there’s no fallback).
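The deletion rule reduces to a one-line predicate. This sketch uses a hypothetical category shape in which each question carries an optional `context` field:

```javascript
// A context answer may be deleted only when no question in the category
// was asked in that context.
function canDeleteContextAnswer(category, context) {
  return !category.questions.some(function (q) {
    return q.context === context;
  });
}
```

So if a category contains a question asked in the "designer" context, the designer answer is locked, while a context without any questions can have its answer removed.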

Question list



This section lists the questions belonging to the category. To add more questions, use the “Add question(s)” button, which lets you add multiple questions at once.

Alternatively, you can delete one or more questions or move them to a different category. If you move questions to a context dependent category, you might be asked to enter context specific answers.

Try it now button


You can quickly test your dataset by imitating candidate questions to see if they are correctly categorised. The questions asked here will appear on the training view for approval. Please note: if you need to add questions to a category, please use the “Add question(s)” functionality on the question list of the category (in the Categories tab) instead of adding questions one by one using the “Try it now” button.