• Resolved Ambyomoron

    (@josiah-s-carberry)


    I am having a problem troubleshooting a skill. It is not an issue with the plugin. But where can I get some help about this, where I can explain the problem and get some feedback? Is there a user forum somewhere? Other?

    For what it is worth, I have a problem with a dialogue deciding that some user input (the word “again”) matches an intent with a very high level of confidence (almost .9). But in the list of terms used to train that intent, there is nothing that remotely resembles the word “again”.

Viewing 14 replies - 1 through 14 (of 14 total)
  • Hello. There are a few resources I can recommend:

    Q&A sites where hundreds of willing developers can answer your questions
    IBM Developer Answers: https://developer.ibm.com/answers/index.html
    Tag “watson-assistant” on Stack Overflow: https://stackoverflow.com/questions/tagged/watson-conversation

    If you aren’t confident in your skills or feel that you need to know some basic concepts, you can check out IBM’s “How to Build a Chatbot Without Coding” course on Coursera.
    https://www.coursera.org/learn/how-to-build-your-own-chatbot-without-coding

    Last, but not least, there are lots of docs on IBM’s website: https://www.ibm.com/cloud/watson-assistant/docs-resources/ In my experience, it’s always better to check them first before posting questions elsewhere. There’s also a link there to the Slack channel, where you can ask bot-development questions.

    Regarding your current question: in your skill’s dashboard, under Improve > User Conversations, you can correct mistakenly recognized intents.
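    If you want to see exactly which intent is winning, and at what confidence, the message response from Watson Assistant carries an `intents` list sorted by descending confidence. A minimal sketch of pulling out the top match (the sample response below is invented for illustration; only its shape follows the v1 message reply):

    ```python
    def top_intent(message_response):
        """Return the (intent, confidence) pair ranked highest.

        Watson Assistant's message response includes an ``intents``
        list, already sorted by descending confidence.
        """
        intents = message_response.get("intents") or []
        if not intents:
            return None, 0.0
        best = intents[0]
        return best["intent"], best["confidence"]

    # Sample shaped like a Watson Assistant v1 message reply
    # (values invented for illustration):
    sample = {
        "input": {"text": "again"},
        "intents": [
            {"intent": "goodbyes", "confidence": 0.89},
            {"intent": "greetings", "confidence": 0.05},
        ],
    }

    print(top_intent(sample))  # → ('goodbyes', 0.89)
    ```

    Logging this pair for each user input makes it much easier to spot which intent is being over-triggered before you start marking inputs as irrelevant.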

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    Thanks, but none of those places is really suited to the sort of question I have. It’s not a programming question, and there is no one right answer, so Stack Exchange will simply reject the question – but I’ll try anyway.

    I have marked the particular inputs as irrelevant. What I am lacking is an understanding of how a single term that is structurally and semantically completely unlike all the training terms for an intent could possibly return a match with an extremely high level of confidence.

    While Stack Exchange may reject the question, IBM Developer Answers (https://developer.ibm.com/answers/index.html) shouldn’t. It’s relevant to their product, and you need a solution. You’re more likely to find someone with the same issue on IBM’s site than there.

    You can also join the IBM Watson Slack channel and ask your question there. (The link is on this page: https://www.ibm.com/cloud/watson-assistant/docs-resources/)

    Plugin Author Intela-bot Help

    (@intelahelp)

    Hello @josiah-s-carberry,
    could you please provide the skill exported as a JSON file?
    We’ll try to look into it.

    You can send it to help@intela.io.
    Thank you.

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    @kaneua Thanks very much for the useful hints.
    @intelahelp The JSON file has been sent to you.

    Plugin Author Intela-bot Help

    (@intelahelp)

    Thank you, @josiah-s-carberry.
    We’ve got the skill.

    Could you please describe how to reproduce the issue?
    Which is the relevant intent?
    (For example, which user inputs lead to the intent?)

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    The relevant intent is “goodbyes”.
    To reproduce the behavior, simply start a session and type “again”. There is a hit in the dialogue node “Goodbyes”, which returns one of the responses defined there.
    This will happen both in the “Try it out” panel and also when starting a session using the WordPress plugin.
    I first noticed this behavior when I entered (as a visitor) “are you back again”. I discovered that simply typing “again” or even “back” produced the same result.

    Plugin Author Intela-bot Help

    (@intelahelp)

    It doesn’t work like this for us (it hits the “Anything else” node; please see the attached screenshot: https://imgur.com/DYkjj32). It looks like an issue with the Watson Assistant service (a caching problem, perhaps).

    Could you please try creating a new skill and importing the JSON file you sent us into it? Then try to reproduce the issue on the newly imported skill. If that helps, you can switch to the new one and remove the original.

    Please let us know how it goes.

    Thank you.

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    Have you first removed “again” from the counterexamples?

    Plugin Author Intela-bot Help

    (@intelahelp)

    Nope, no modifications from our side.
    Just imported the JSON into a brand-new skill.

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    To reproduce the problem I have seen, you would need to remove the counterexample before importing the JSON. I was obliged to flag “again” as irrelevant to avoid this problem in my production system.

    Plugin Author Intela-bot Help

    (@intelahelp)

    Yep, this way it’s reproducible.

    It turns out that Watson Assistant considers “again” (and some other words) similar to “later” from the #goodbyes intent. So your current solution (with counterexamples) may be the easiest way to overcome the issue. This is a side effect of the neural-network training process: sometimes you can’t explain why it behaves the way it does.
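    For reference, inputs you mark as irrelevant are stored in the exported skill JSON under a top-level `counterexamples` array. A skill exported after flagging “again” (and, say, “back”) would contain entries along these lines – a fragment sketched from memory, so treat the exact layout as an illustration:

    ```json
    {
      "counterexamples": [
        { "text": "again" },
        { "text": "back" }
      ]
    }
    ```

    This also means you can add or strip counterexamples by editing the JSON before importing it, which is why removing that entry first is needed to reproduce the issue.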

    Please let us know if we can assist you further.
    Thanks.

    Thread Starter Ambyomoron

    (@josiah-s-carberry)

    Thanks very much for your feedback on this. I’m glad the result could be reproduced. What remains very strange is why the confidence level is so high. Treating “again” as a synonym of “later” in “see you later” is vaguely conceivable, but it hardly merits a .9 level of confidence.

    Plugin Author Intela-bot Help

    (@intelahelp)

    You are welcome.
    Thank you for your inquiry.


The topic ‘Getting help for troubleshooting a skill’ is closed to new replies.