[PYTHON] I tried MindMeld for the first time

This article was posted as Day 10 of the Cisco Systems Japan Advent Calendar 2019 by Cisco colleagues. Past calendars are here. 2017 edition: https://qiita.com/advent-calendar/2017/cisco -> "Distributed policy tags with LISP and tried TrustSec with CML / VIRL". 2018 edition: https://qiita.com/advent-calendar/2018/cisco -> "I tried using Cisco's SaaS type monitoring / behavior detection tool for multi-cloud environment monitoring".

This is my third year at the company (and my third consecutive year participating in the Advent Calendar!). Since I have been studying machine learning / AI, I will write about **MindMeld**, a theme that combines that with Cisco technology.

### What is MindMeld?

MindMeld is a conversational AI platform acquired by Cisco in 2017 and released as an open-source developer toolkit in May 2019. To explain a little more: MindMeld is a customizable toolkit for building bots that fulfill what developers and users want to do (intents), such as "I want to do X", "Find X for me", or "Pull up the information about Y", through conversation with the bot. A detailed explanation is available in the official blog post and the GitHub repository (https://github.com/cisco/mindmeld/blob/master/README.md).

### What to do in this article

Since I am just trying MindMeld out, there is nothing particularly clever here, but in this article I want to build a conversational AI using the **Video Discovery** blueprint provided as a sample, following the playbook. In the future, I would like to use this as a reference to create a custom blueprint that helps me discover papers I want to read. We start from preparing the environment, so if you have been looking for a Japanese-language walkthrough and would like to try it, please use this article as a reference.

### Before you start

First, as preliminary preparation for using MindMeld, we start by installing it. There are two installation options:

1. Docker
2. virtualenv

For those who want to use MindMeld more seriously going forward, option 2 (virtualenv) is recommended, so this article uses virtualenv.
### Environment and preparations

```bash
# Check the macOS and Java versions
sw_vers
  ProductName:	Mac OS X
  ProductVersion:	10.15.1
  BuildVersion:	19B88
java -version
  openjdk version "12" 2019-03-19
  OpenJDK Runtime Environment (build 12+33)
  OpenJDK 64-Bit Server VM (build 12+33, mixed mode, sharing)

# Update Homebrew
brew update

# Install virtualenv
# Python has a module for creating virtual environments; with virtualenv you can
# keep a different Python version and package set for each environment.
sudo -H pip install --upgrade virtualenv

# Prepare Elasticsearch 6.7
# Due to changes in the Java licensing policy, macOS users need to prepare an Elasticsearch environment themselves.
export JAVA_HOME=/path/to/java
curl https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.tar.gz -o elasticsearch-6.7.0.tar.gz
tar -zxvf elasticsearch-6.7.0.tar.gz
cd elasticsearch-6.7.0/bin
./elasticsearch

# Confirm that Elasticsearch has started
curl localhost:9200
{
  "name" : "hoBpMt3",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "5G91TzSLTDO4WCFg9h5hhg",
  "version" : {
    "number" : "6.7.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "8453f77",
    "build_date" : "2019-03-21T15:32:29.844721Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

# Prepare the virtual environment
mkdir my_mm_workspace4advent
cd my_mm_workspace4advent
virtualenv -p python3 .
source bin/activate
# (run `deactivate` when you want to leave the virtual environment)

# Install the MindMeld package and confirm it starts
pip install mindmeld  # If you get an error here, reinstall the specific package that caused it
mindmeld  # If a warning is displayed, you can ignore it
  Usage: mindmeld [OPTIONS] COMMAND [ARGS]...

    Command line interface for MindMeld.

  Options:
    -V, --version        Show the version and exit.
    -v, --verbosity LVL  Either CRITICAL, ERROR, WARNING, 
  INFO or DEBUG
    -h, --help           Show this message and exit.

  Commands:
    blueprint  Sets up a blueprint application.
    convert    Converts a Rasa or DialogueFlow project to a...
    load-kb    Loads data into a question answerer index.
    num-parse  Starts or stops the local numerical parser...

# Start the numerical parser
# (Note) If an error occurs here you cannot proceed; make sure this step succeeds before continuing.
mindmeld num-parse --start
  ...
  Numerical parser running, PID XXXXX
# Alternatively, you can start it on a specific port with the following command.
mindmeld num-parse --start -p 9000
```

[Supplement] MindMeld uses a numerical parser (Duckling) written in the purely functional programming language Haskell (https://ja.wikipedia.org/wiki/Haskell) to detect specific numeric expressions (such as times and dates) in user queries. If you do not specify a port, it starts locally on port 7151 by default.

If you run `mindmeld num-parse --start -p 9000`, the parser starts on port 9000, and when you access http://0.0.0.0:9000 only "quack!" is displayed.
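To double-check from Python that the parser is actually listening, a request like the following should return that "quack!" banner (a minimal sketch; I am assuming the default port 7151 behaves the same as the custom-port example above, so adjust the port if you started the parser with `-p`):

```python
import requests

# Default port is 7151 unless the parser was started with -p <port>
resp = requests.get("http://localhost:7151", timeout=2)
print(resp.status_code, resp.text)  # expect 200 and the "quack!" banner
```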



### Start of blueprint
If you are new to MindMeld, it is recommended to start by trying out one of the provided blueprint applications.
The following four blueprints are prepared as samples:

1. Food Ordering (`food_ordering`)
2. Home Assistant (`home_assistant`)
3. Video Discovery (`video_discovery`) <- the blueprint covered in this article
4. Kwik-E-Mart (`kwik_e_mart`)

The goal of **Video Discovery**, covered in this article, is to find the movie or TV show you are looking for through dialogue with MindMeld, as shown in the image below.
 <img width="590" alt="スクリーンショット 2019-11-26 17.32.46.png " src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/214756/6ebc9852-a3fd-9dba-8cd3-bf0a14676c3f.png ">

```python
# Start the Python interpreter
python

# Download and set up the MindMeld blueprint application
import mindmeld as mm
mm.configure_logs()
bp_name = 'video_discovery'
mm.blueprint(bp_name)
# When the setup completes, a log like the following is displayed:
# Created 'video_discovery' knowledge base at 'localhost'
# '/Users/XXX/my_mm_workspace4advent/video_discovery'
```

[Supplement] You can skip the `bp_name = 'video_discovery'` line and call `mm.blueprint('video_discovery')` directly.


When the MindMeld project folder (`video_discovery`) is created, a directory structure like the one in the image below is generated.
 <img width="554" alt="スクリーンショット 2019-11-26 17.50.59.png " src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/214756/7193a949-1cbc-f24b-ce05-6dcb2ff32d16.png ">

### Test run ①
```python
# Import and set up the QuestionAnswerer
from mindmeld.components.question_answerer import QuestionAnswerer
qa = QuestionAnswerer(app_path='video_discovery')

# Search the knowledge base entries for the movie "Minions"
qa.get(index='videos', title='Minions')[0]
```

[Supplement] The result of the test run is as follows.

{'overview': 'Minions Stuart, Kevin and Bob are recruited by Scarlet Overkill, a super-villain who, alongside her inventor husband Herb, hatches a plot to take over the world.', 'imdb_id': 'tt2293640', 'directors': ['Kyle Balda', 'Pierre Coffin'], 'release_year': 2015, 'runtime': 91, 'doc_type': 'movie', 'countries': ['US'], 'title': 'Minions', 'cast': ['Sandra Bullock', 'Jon Hamm', 'Michael Keaton', 'Allison Janney', 'Steve Coogan', 'Jennifer Saunders', 'Geoffrey Rush', 'Steve Carell', 'Pierre Coffin', 'Katy Mixon', 'Michael Beattie', 'Hiroyuki Sanada', 'Dave Rosenbaum', 'Alex Dowding', 'Paul Thornley', 'Kyle Balda', 'Ava Acres'], 'img_url': 'http://image.tmdb.org/t/p/w185//q0R4crx2SehcEEQEkYObktdeFy.jpg', 'release_date': '2015-06-17', 'genres': ['Family', 'Animation', 'Adventure', 'Comedy'], 'vote_average': 6.4, 'popularity': 2.295467321653707, 'id': 'movie_211672', 'vote_count': 3660}

This **Video Discovery** application uses a website called The Movie DB as its knowledge base.
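Since the result above exposes other knowledge-base fields, it seems you can filter on them the same way as `title`; the following is only a sketch under that assumption (the `cast` and `release_year` filters are my guess, not taken from the playbook):

```python
# Hypothetical examples: filter the 'videos' index by fields visible in the result above
qa.get(index='videos', cast='Steve Carell')[:3]    # entries featuring a given cast member
qa.get(index='videos', release_year=2015)[:3]      # entries from a given year
```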

### Training the baseline NLP system of the MindMeld blueprint application

Train the NLP system (the `NaturalLanguageProcessor` class) by feeding it labeled data.

```python
# (Optional) Change the log level
import logging
logging.getLogger('mindmeld').setLevel(logging.INFO)

# Train the baseline NLP system
from mindmeld import configure_logs; configure_logs()
from mindmeld.components.nlp import NaturalLanguageProcessor
nlp = NaturalLanguageProcessor(app_path='video_discovery')
nlp.build()
```

[Supplement] By changing the log level, you can see the machine learning process as follows.

`nlp.build()` error log

```
Fitting domain classifier
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/unrelated/compliment/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/unrelated/general/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/unrelated/insult/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_00.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_01.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_02.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_03.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_04.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_05.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_mturk_00.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/browse/train_range_00.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/exit/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/greet/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/help/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/start_over/train.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_channel_00.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_channel_01.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_channel_02.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_time_00.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_time_01.txt
Loading raw queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/video_content/unsupported/train_get_time_02.txt
Loading queries from file /Users/yunambu/my_mm_workspace4advent/video_discovery/domains/unrelated/compliment/train.txt
Unable to connect to the system entity recognizer. Make sure it's running by typing 'mindmeld num-parse' at the command line.
```

An error was thrown... As instructed, I checked with `mindmeld num-parse`, and the numerical parser seems to be running fine.

```
mindmeld num-parse
  ...
  Numerical parser running, PID 94958
```

Let's first find out what this `system entity` is. The documentation contains the following description.

""" Entities in MindMeld are categorized into two types:

System Entities Generic entities that are application-agnostic and are automatically detected by MindMeld. Examples include numbers, time expressions, email addresses, URLs and measured quantities like distance, volume, currency and temperature.

Custom Entities ...

System entities are generic application-agnostic entities that all MindMeld applications detect automatically. There is no need to train models to learn system entities; they just work. ... MindMeld does not assume that any of the system entities are needed in your app. It is the system entities that you annotate in your training data that MindMeld knows are needed. """

In short, system entities are generic, application-agnostic entities (numbers, time expressions, email addresses, URLs, measurements of distance, volume, currency, temperature, and so on) that all MindMeld applications detect in the same way, and the error message says the recognizer for them cannot be reached. I thought there might be a problem in the `nlp.py` I was using, so I took a look at the source code as follows.
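For context, these system entities appear directly in the blueprint's labeled training queries. As far as I understand the annotation format from the documentation, an annotated query looks roughly like this (an illustrative sketch written by me, not copied from the blueprint files):

```
show me {comedy|genre} movies from the {90s|sys_interval}
which {movies|type} came out {last year|sys_time}
```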

Check `nlp.py`

nlp.py

```python
# Excerpt of only the parts that seem relevant
# Import the SystemEntityRecognizer class from the system_entity_recognizer module (the parent package of nlp.py)
from ..system_entity_recognizer import SystemEntityRecognizer

"""
For details, see system_entity_recongizer.It is commented out in py.
This SystemEntityRecognizer is a singleton(That is, ensure that there is always one instance of this class.)It is supposed to be used as.
Therefore, the specification is such that it is initialized only once during the construction of the NLP object.
The work to initialize it is in the following line.
"""
# initialize the system entity recognizer singleton
SystemEntityRecognizer.get_instance(app_path)
```

Check `system_entity_recognizer.py`

system_entity_recognizer.py

```python
# Excerpt of only the parts that seem relevant
# This class wraps the external parsing service used by nlp.py and other modules.
class SystemEntityRecognizer:
    """SystemEntityRecognizer is the external parsing service used to extract
    system entities. It is intended to be used as a singleton, so it's
    initialized only once during NLP object construction.

    TODO: Abstract this class into an interface and implement the duckling
    service as one such service.
    """

    _instance = None  # the class-level holder for the single instance, typical of the singleton pattern

    def __init__(self, app_path=None):
        # __init__ is the special method called to initialize the instance when it is created; app_path defaults to None.
        # The docstring below says this constructor is for SystemEntityRecognizer's internal use only;
        # always use the static get_instance method instead.
        """Private constructor for SystemEntityRecognizer. Do not directly
        construct the SystemEntityRecognizer object. Instead, use the
        static get_instance method.

        Args:
            app_path (str): A application path
        """
        if SystemEntityRecognizer._instance:
            raise Exception("SystemEntityRecognizer is a singleton")  # raise an exception if an instance already exists
        else:
            # Below, the Duckling API (Duckling = an open-source natural language parser) is used to
            # extract dates, times and measured values from natural-language text.
            if not app_path:
                # The service is turned on by default
                self._use_duckling_api = True
            else:
                self._use_duckling_api = is_duckling_configured(app_path)

        self.app_path = app_path
        SystemEntityRecognizer._instance = self

    @staticmethod  # can be called on the class without creating an instance
    def get_instance(app_path=None):
        """ Static access method.

        Args:
            app_path (str): A application path

        Returns:
            (SystemEntityRecognizer): A SystemEntityRecognizer instance
        """
        if not SystemEntityRecognizer._instance:
            SystemEntityRecognizer(app_path)
        return SystemEntityRecognizer._instance

    def get_response(self, data):

        if not self._use_duckling_api:
            return [], NO_RESPONSE_CODE

        url = get_system_entity_url_config(app_path=self.app_path)

        try:  # attempt the request; exceptions are handled in the except clauses below
            response = requests.request('POST', url, data=data, timeout=1)

            if response.status_code == requests.codes['ok']:
                response_json = response.json()

                # Remove the redundant 'values' key in the response['value'] dictionary
                for i, entity_dict in enumerate(response_json):
                    if 'values' in entity_dict['value']:
                        del response_json[i]['value']['values']

                return response_json, response.status_code
            else:
                raise SystemEntityError('System entity status code is not 200.')
        except requests.ConnectionError:  # handle the case where the parsing service cannot be reached
            sys.exit("Unable to connect to the system entity recognizer. Make sure it's "
                     "running by typing 'mindmeld num-parse' at the command line.")  # terminate the running program
        except Exception as ex:  # pylint: disable=broad-except
            logger.error('Numerical Entity Recognizer Error: %s\nURL: %r\nData: %s', ex, url,
                         json.dumps(data))
            sys.exit('\nThe system entity recognizer encountered the following ' +
                     'error:\n' + str(ex) + '\nURL: ' + url + '\nRaw data: ' + str(data) +
                     "\nPlease check your data and ensure Numerical parsing service is running. "
                     "Make sure it's running by typing "
                     "'mindmeld num-parse' at the command line.")

As far as I can tell, the exception handling in the final `try-except` block of `system_entity_recognizer.py` is being triggered, which means the parsing service cannot be reached after all. So I gave up on doing this with virtualenv and started over using a Docker container.

  * Since the cause could not be identified by the posting date, I will add details as soon as the troubleshooting is complete.

### Building a Docker environment

Re-preparation in the Docker environment

```bash
# Check the Docker version
docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:22:34 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:29:19 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

# Pull the latest MindMeld image from the registry
docker pull mindmeldworkbench/dep:latest
# Create a new container, bind container ports 9200, 7151 and 9300 to 0.0.0.0:9200/7151/9300 on the host, and run it in the background
docker run -ti -d -p 0.0.0.0:9200:9200 -p 0.0.0.0:7151:7151 -p 0.0.0.0:9300:9300 mindmeldworkbench/dep

# Prepare and activate a virtual environment
mkdir my_mm_workspace4advent2
cd my_mm_workspace4advent2
virtualenv -p python3 .
source bin/activate

# Install the MindMeld package and confirm it starts
pip install mindmeld
mindmeld

# Download the MindMeld blueprint
mindmeld blueprint video_discovery  # This can also be run from the terminal instead of the Python shell
```
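Since the container binds Elasticsearch (9200) and the numerical parser (7151) to localhost, a quick sanity check from Python can be run before retrying the blueprint. This is only a sketch based on the ports above, not part of the official procedure:

```python
import requests

# The ports follow the docker run command above; this just confirms both services respond.
for name, url in [("Elasticsearch", "http://localhost:9200"),
                  ("numerical parser", "http://localhost:7151")]:
    try:
        r = requests.get(url, timeout=2)
        print(f"{name}: HTTP {r.status_code} {r.text[:60]!r}")
    except requests.ConnectionError:
        print(f"{name}: not reachable at {url}")
```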

### Test run ②

```python
# Import and set up the QuestionAnswerer
from mindmeld.components.question_answerer import QuestionAnswerer
qa = QuestionAnswerer(app_path='video_discovery')

# Search the knowledge base entries for the movie "Minions"
qa.get(index='videos', title='Minions')[0]
```

[Supplement] The result of the test run is as follows.

{'overview': 'Minions Stuart, Kevin and Bob are recruited by Scarlet Overkill, a super-villain who, alongside her inventor husband Herb, hatches a plot to take over the world.', 'imdb_id': 'tt2293640', 'directors': ['Kyle Balda', 'Pierre Coffin'], 'release_year': 2015, 'runtime': 91, 'doc_type': 'movie', 'countries': ['US'], 'title': 'Minions', 'cast': ['Sandra Bullock', 'Jon Hamm', 'Michael Keaton', 'Allison Janney', 'Steve Coogan', 'Jennifer Saunders', 'Geoffrey Rush', 'Steve Carell', 'Pierre Coffin', 'Katy Mixon', 'Michael Beattie', 'Hiroyuki Sanada', 'Dave Rosenbaum', 'Alex Dowding', 'Paul Thornley', 'Kyle Balda', 'Ava Acres'], 'img_url': 'http://image.tmdb.org/t/p/w185//q0R4crx2SehcEEQEkYObktdeFy.jpg', 'release_date': '2015-06-17', 'genres': ['Family', 'Animation', 'Adventure', 'Comedy'], 'vote_average': 6.4, 'popularity': 2.295467321653707, 'id': 'movie_211672', 'vote_count': 3660}

### Retrying NLP system training

```python
# (Optional) Change the log level
import logging
logging.getLogger('mindmeld').setLevel(logging.DEBUG)  # change it to DEBUG to see the detailed log

# Train the baseline NLP system
from mindmeld import configure_logs; configure_logs()
from mindmeld.components.nlp import NaturalLanguageProcessor
nlp = NaturalLanguageProcessor(app_path='video_discovery')
nlp.build()
```

`nlp.build()` detailed log

```
Fitting domain classifier
Loading raw queries from file video_discovery/domains/unrelated/compliment/train.txt
Loading raw queries from file video_discovery/domains/unrelated/general/train.txt
Loading raw queries from file video_discovery/domains/unrelated/insult/train.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_00.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_01.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_02.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_03.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_04.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_05.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_mturk_00.txt
Loading raw queries from file video_discovery/domains/video_content/browse/train_range_00.txt
Loading raw queries from file video_discovery/domains/video_content/exit/train.txt
Loading raw queries from file video_discovery/domains/video_content/greet/train.txt
Loading raw queries from file video_discovery/domains/video_content/help/train.txt
Loading raw queries from file video_discovery/domains/video_content/start_over/train.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_channel_00.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_channel_01.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_channel_02.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_time_00.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_time_01.txt
Loading raw queries from file video_discovery/domains/video_content/unsupported/train_get_time_02.txt
Loading queries from file video_discovery/domains/unrelated/compliment/train.txt
Loading queries from file video_discovery/domains/unrelated/general/train.txt
Loading queries from file video_discovery/domains/unrelated/insult/train.txt
Loading queries from file video_discovery/domains/video_content/browse/train_00.txt
Loading queries from file video_discovery/domains/video_content/browse/train_01.txt
Loading queries from file video_discovery/domains/video_content/browse/train_02.txt
Loading queries from file video_discovery/domains/video_content/browse/train_03.txt
Loading queries from file video_discovery/domains/video_content/browse/train_04.txt
Loading queries from file video_discovery/domains/video_content/browse/train_05.txt
Loading queries from file video_discovery/domains/video_content/browse/train_mturk_00.txt
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "80's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "1990's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "1990's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'all time'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '70s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "80's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "90's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'all time'.
Unable to load query: Unable to resolve system entity of type 'sys_time' for 'the year'.
Unable to load query: Unable to resolve system entity of type 'sys_time' for 'this moment'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "90's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2010-2015'. Entities found for the following types ['sys_temperature', 'sys_amount-of-money', 'sys_phone-number']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "90's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000 to now'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'last year'. Entities found for the following types ['sys_time']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for "40's".
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'last decade'.
Unable to load query: Unable to resolve system entity of type 'sys_time' for 'this summer'. Entities found for the following types ['sys_interval']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1950s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1960s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Loading queries from file video_discovery/domains/video_content/browse/train_range_00.txt
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1970s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1900s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1970s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1970 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1970s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1990 to 1995'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '60s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'eighties'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '70s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'last year'. Entities found for the following types ['sys_time']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'last year'. Entities found for the following types ['sys_time']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1960s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'late 1990 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for 'past year'. Entities found for the following types ['sys_time']
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2000s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '60s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '2010s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1980s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90 s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '80s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '90s'.
Unable to load query: Unable to resolve system entity of type 'sys_interval' for '1950s'.
Loading queries from file video_discovery/domains/video_content/exit/train.txt
Loading queries from file video_discovery/domains/video_content/greet/train.txt
Loading queries from file video_discovery/domains/video_content/help/train.txt
Loading queries from file video_discovery/domains/video_content/start_over/train.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_channel_00.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_channel_01.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_channel_02.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_time_00.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_time_01.txt
Loading queries from file video_discovery/domains/video_content/unsupported/train_get_time_02.txt
Building gazetteer 'sort'
Loading entity data from 'video_discovery/entities/sort/gazetteer.txt'
52/52 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 49/49 synonyms from file into gazetteer
Building gazetteer 'director'
Loading entity data from 'video_discovery/entities/director/gazetteer.txt'
89750/89750 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 0/0 synonyms from file into gazetteer
Building gazetteer 'title'
Loading entity data from 'video_discovery/entities/title/gazetteer.txt'
276728/276728 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 0/0 synonyms from file into gazetteer
Building gazetteer 'cast'
Loading entity data from 'video_discovery/entities/cast/gazetteer.txt'
486719/486719 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 0/0 synonyms from file into gazetteer
Building gazetteer 'genre'
Loading entity data from 'video_discovery/entities/genre/gazetteer.txt'
101/101 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 70/70 synonyms from file into gazetteer
Building gazetteer 'type'
Loading entity data from 'video_discovery/entities/type/gazetteer.txt'
18/18 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 20/20 synonyms from file into gazetteer
Building gazetteer 'country'
Loading entity data from 'video_discovery/entities/country/gazetteer.txt'
477/477 entities in entity data file exceeded popularity cutoff and were added to the gazetteer
Loading synonyms from entity mapping
Added 243/243 synonyms from file into gazetteer
Fitting intent classifier: domain='video_content'
Selecting hyperparameters using k-fold cross-validation with 5 splits
Best accuracy: 98.12%, params: {'C': 10, 'class_weight': {0: 0.5805006811989102, 1: 3.431368821292776, 2: 0.9903185247275775, 3: 5.1444117647058825, 4: 2.906170886075949, 5: 0.6776020174232005}, 'fit_intercept': True}
Fitting entity recognizer: domain='video_content', intent='browse'
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
No entity_resolution model configuration set. Using default.
Fitting role classifier: domain='video_content', intent='browse', entity_type='type'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_type'
Creating index 'synonym_type'
100%|████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 23.57it/s]
Loaded 2 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='country'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_country'
Creating index 'synonym_country'
100%|███████████████████████████████████████████████████████████████████████████| 235/235 [00:00<00:00, 555.92it/s]
Loaded 235 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='sys_interval'
No role model configuration set. Using default.
Fitting role classifier: domain='video_content', intent='browse', entity_type='cast'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_cast'
Creating index 'synonym_cast'
0it [00:00, ?it/s]
Loaded 0 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='genre'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_genre'
Creating index 'synonym_genre'
100%|█████████████████████████████████████████████████████████████████████████████| 31/31 [00:00<00:00, 270.29it/s]
Loaded 31 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='sort'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_sort'
Creating index 'synonym_sort'
100%|████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 43.20it/s]
Loaded 4 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='sys_time'
No role model configuration set. Using default.
Fitting role classifier: domain='video_content', intent='browse', entity_type='director'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_director'
Creating index 'synonym_director'
0it [00:00, ?it/s]
Loaded 0 documents
Fitting role classifier: domain='video_content', intent='browse', entity_type='title'
No role model configuration set. Using default.
Importing synonym data to synonym index 'synonym_title'
Creating index 'synonym_title'
0it [00:00, ?it/s]
Loaded 0 documents
Fitting entity recognizer: domain='video_content', intent='greet'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='video_content', intent='help'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='video_content', intent='start_over'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='video_content', intent='exit'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='video_content', intent='unsupported'
There are no labels in this label set, so we don't fit the model.
Fitting intent classifier: domain='unrelated'
Selecting hyperparameters using k-fold cross-validation with 5 splits
Best accuracy: 70.87%, params: {'C': 1, 'class_weight': {0: 0.9618644067796609, 1: 1.009, 2: 1.0395604395604394}, 'fit_intercept': True}
Fitting entity recognizer: domain='unrelated', intent='insult'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='unrelated', intent='general'
There are no labels in this label set, so we don't fit the model.
Fitting entity recognizer: domain='unrelated', intent='compliment'
There are no labels in this label set, so we don't fit the model.
```

The error is avoided for the time being, and I got the result `Best accuracy: 70.87%, params: {'C': 1, 'class_weight': {0: 0.9618644067796609, 1: 1.009, 2: 1.0395604395604394}, 'fit_intercept': True}`.

I was concerned about the log lines saying `Unable to load query: Unable to resolve system entity of type 'sys_interval'` and `Unable to load query: Unable to resolve system entity of type 'sys_time'`, but since the cause could not be investigated by the posting date, I will add details as soon as I understand it.


Also, at this point there are messages saying `No entity_resolution model configuration set. Using default.`, and the accuracy is as low as about 70%, but in test run ③ below we will select feature extraction methods and refit the classifier to improve accuracy.


### Test run ③
```python
# Send a test query to the trained NLP system
nlp.process("Show me movies with Yo Oizumi")
{'text': 'Show me movies with Yo Oizumi', 'domain': 'video_content', 'intent': 'browse', 'entities': [{'text': 'movies', 'type': 'type', 'role': None, 'value': [{'cname': 'movie', 'score': 19.448803, 'top_synonym': 'movies'}, {'cname': 'tv-show', 'score': 1.684855, 'top_synonym': 'series'}], 'span': {'start': 8, 'end': 13}}, {'text': 'Yo Oizumi', 'type': 'cast', 'role': None, 'value': [], 'span': {'start': 20, 'end': 28}}]}
# Get the intent classifier and use it to access the model and feature extraction settings
>>> ic = nlp.domains['video_content'].intent_classifier
>>> ic.config.model_settings['classifier_type']
'logreg'

>>> ic.config.features
{'bag-of-words': {'lengths': [1, 2]}, 'edge-ngrams': {'lengths': [1, 2]}, 'in-gaz': {}, 'exact': {'scaling': 10}, 'gaz-freq': {}, 'freq': {'bins': 5}}

# By passing appropriate parameters to this classifier's fit() method, you can try different
# learning algorithms, features, hyperparameters, cross-validation settings, and so on.
# For example, bag-of-words features are used by default, but you can also add bag-of-n-grams.
# Since this is a sample blueprint, bigrams are already included; here we add trigrams as well.
>>> ic.config.features['bag-of-words']['lengths'].append(3)
>>> ic.fit()  # fit() retrains the classifier; it can also be given training data and parameters
Fitting intent classifier: domain='video_content'
Selecting hyperparameters using k-fold cross-validation with 5 splits
Best accuracy: 98.12%, params: {'C': 10, 'class_weight': {0: 0.8202145776566757, 1: 2.0420152091254753, 2: 0.9958507963118188, 3: 2.7761764705882355, 4: 1.8169303797468355, 5: 0.8618294360385144}, 'fit_intercept': True}
```

### Deployment

Once you've completed all the steps up to this point, use the `Conversation` class to test your interactive application.

```python
from mindmeld.components.dialogue import Conversation
conv = Conversation(nlp=nlp, app_path='video_discovery')
res = conv.say("Show me movies with Yo Oizumi")
print(res[1])
```

Results:

```
{
    "popularity": 1.6843605812038211,
    "release_year": 2001,
    "title": "Spirited Away",
    "type": "movie"
}
{
    "popularity": 1.331518694517026,
    "release_year": 2004,
    "title": "Howl's Moving Castle",
    "type": "movie"
}
{
    "popularity": 0.9276412700980977,
    "release_year": 2014,
    "title": "When Marnie Was There",
    "type": "movie"
}
{
    "popularity": 0.5483075183578056,
    "release_year": 2002,
    "title": "The Cat Returns",
    "type": "movie"
}
{
    "popularity": 0.48987671974559993,
    "release_year": 2015,
    "title": "I Am a Hero",
    "type": "movie"
}
{
    "popularity": 0.40345108131710566,
    "release_year": 2015,
    "title": "The Boy and the Beast",
    "type": "movie"
}
{
    "popularity": 0.21568218441881915,
    "release_year": 2006,
    "title": "Brave Story",
    "type": "movie"
}
{
    "popularity": 0.14845276320241113,
    "release_year": 2015,
    "title": "Kakekomi",
    "type": "movie"
}
{
    "popularity": 0.12538694143866852,
    "release_year": 2009,
    "title": "Professor Layton and the Eternal Diva",
    "type": "movie"
}
{
    "popularity": 0.0982032635261464,
    "release_year": 2007,
    "title": "Kitaro",
    "type": "movie"
}
```

### Simulating a dialogue

The `say()` method packages the input text into a user request object and passes it to the MindMeld application manager, simulating a user interacting with the application. The method outputs the response text returned by the dialogue manager.

```python
>>> conv.say('Hi, there!')
['Hi!', 'Talk to me to browse movies and TV shows.', '{\n    "popularity": 4.904354681204688,\n    "release_year": 2017,\n    "title": "Wonder Woman",\n    "type": "movie"\n}\n{\n    "popularity": 4.743179401930947,\n    "release_year": 2017,\n    "title": "Beauty and the Beast",\n    "type": "movie"\n}\n{\n    "popularity": 4.390633761852561,\n    "release_year": 2017,\n    "title": "Transformers: The Last Knight",\n    "type": "movie"\n}\n{\n    "popularity": 4.316296603970634,\n    "release_year": 2017,\n    "title": "Logan",\n    "type": "movie"\n}\n{\n    "popularity": 4.088732678349925,\n    "release_year": 2017,\n    "title": "The Mummy",\n    "type": "movie"\n}\n{\n    "popularity": 3.9604223694356175,\n    "release_year": 2017,\n    "title": "Kong: Skull Island",\n    "type": "movie"\n}\n{\n    "popularity": 3.920513974901613,\n    "release_year": 2005,\n    "title": "Doctor Who",\n    "type": "tv-show"\n}\n{\n    "popularity": 3.8653508696521426,\n    "release_year": 2011,\n    "title": "Game of Thrones",\n    "type": "tv-show"\n}\n{\n    "popularity": 3.7942142869419615,\n    "release_year": 2010,\n    "title": "The Walking Dead",\n    "type": "tv-show"\n}\n{\n    "popularity": 3.575316921466775,\n    "release_year": 2017,\n    "title": "Pirates of the Caribbean: Dead Men Tell No Tales",\n    "type": "movie"\n}', "Suggestions: 'Most popular', 'Most recent', 'Movies', 'TV Shows', 'Action', 'Dramas', 'Sci-Fi'"]
>>> 
>>> conv.say('Show me Yo Oizumi movies')
['Ok. Here are some results:', '{\n    "popularity": 1.6843605812038211,\n    "release_year": 2001,\n    "title": "Spirited Away",\n    "type": "movie"\n}\n{\n    "popularity": 1.331518694517026,\n    "release_year": 2004,\n    "title": "Howl\'s Moving Castle",\n    "type": "movie"\n}\n{\n    "popularity": 0.9276412700980977,\n    "release_year": 2014,\n    "title": "When Marnie Was There",\n    "type": "movie"\n}\n{\n    "popularity": 0.5483075183578056,\n    "release_year": 2002,\n    "title": "The Cat Returns",\n    "type": "movie"\n}\n{\n    "popularity": 0.48987671974559993,\n    "release_year": 2015,\n    "title": "I Am a Hero",\n    "type": "movie"\n}\n{\n    "popularity": 0.40345108131710566,\n    "release_year": 2015,\n    "title": "The Boy and the Beast",\n    "type": "movie"\n}\n{\n    "popularity": 0.21568218441881915,\n    "release_year": 2006,\n    "title": "Brave Story",\n    "type": "movie"\n}\n{\n    "popularity": 0.14845276320241113,\n    "release_year": 2015,\n    "title": "Kakekomi",\n    "type": "movie"\n}\n{\n    "popularity": 0.12538694143866852,\n    "release_year": 2009,\n    "title": "Professor Layton and the Eternal Diva",\n    "type": "movie"\n}\n{\n    "popularity": 0.0982032635261464,\n    "release_year": 2007,\n    "title": "Kitaro",\n    "type": "movie"\n}']
>>> 
>>> conv.say('from 2015')
['Perfect. Here are some movies with Yo Oizumi:', '{\n    "popularity": 0.48987671974559993,\n    "release_year": 2015,\n    "title": "I Am a Hero",\n    "type": "movie"\n}\n{\n    "popularity": 0.40345108131710566,\n    "release_year": 2015,\n    "title": "The Boy and the Beast",\n    "type": "movie"\n}\n{\n    "popularity": 0.14845276320241113,\n    "release_year": 2015,\n    "title": "Kakekomi",\n    "type": "movie"\n}']
>>>
>>> conv.say('Thank you')
['See you later.']
>>>
>>> conv.say('do you have comedy movies?')
['Perfect. Here are some results:', '{\n    "popularity": 3.575316921466775,\n    "release_year": 2017,\n    "title": "Pirates of the Caribbean: Dead Men Tell No Tales",\n    "type": "movie"\n}\n{\n    "popularity": 3.16142885645921,\n    "release_year": 2017,\n    "title": "Guardians of the Galaxy Vol. 2",\n    "type": "movie"\n}\n{\n    "popularity": 2.864445217663442,\n    "release_year": 2017,\n    "title": "Despicable Me 3",\n    "type": "movie"\n}\n{\n    "popularity": 2.8304034588472593,\n    "release_year": 2017,\n    "title": "Cars 3",\n    "type": "movie"\n}\n{\n    "popularity": 2.824917558393832,\n    "release_year": 2017,\n    "title": "Baywatch",\n    "type": "movie"\n}\n{\n    "popularity": 2.7538905585490667,\n    "release_year": 2016,\n    "title": "Deadpool",\n    "type": "movie"\n}\n{\n    "popularity": 2.6281775734086983,\n    "release_year": 2016,\n    "title": "Tomorrow Everything Starts",\n    "type": "movie"\n}\n{\n    "popularity": 2.561422298541789,\n    "release_year": 2017,\n    "title": "The Lego Batman Movie",\n    "type": "movie"\n}\n{\n    "popularity": 2.4652902501503737,\n    "release_year": 2000,\n    "title": "DragonHeart: A New Beginning",\n    "type": "movie"\n}\n{\n    "popularity": 2.4633532858463862,\n    "release_year": 2017,\n    "title": "Smurfs: The Lost Village",\n    "type": "movie"\n}']
>>> 
>>> conv.say('show me the ones with Yo Oizumi')
['Ok. Here are some comedy movies:', '{\n    "popularity": 0.14845276320241113,\n    "release_year": 2015,\n    "title": "Kakekomi",\n    "type": "movie"\n}\n{\n    "popularity": 0.08592812381604169,\n    "release_year": 2013,\n    "title": "Detective in the Bar",\n    "type": "movie"\n}\n{\n    "popularity": 0.058488695289761736,\n    "release_year": 2013,\n    "title": "The Kiyosu Conference",\n    "type": "movie"\n}\n{\n    "popularity": 0.015350574738277112,\n    "release_year": 2016,\n    "title": "Gold Medal Man",\n    "type": "movie"\n}\n{\n    "popularity": 0.015203833742272778,\n    "release_year": 2011,\n    "title": "Drucker in the Dug-Out",\n    "type": "movie"\n}\n{\n    "popularity": 0.01330410659551587,\n    "release_year": 2012,\n    "title": "Bread of Happiness",\n    "type": "movie"\n}\n{\n    "popularity": 0.011187189390564376,\n    "release_year": 2006,\n    "title": "Sugar & Spice: F\\u00fbmi zekka",\n    "type": "movie"\n}\n{\n    "popularity": 0.006566393969770445,\n    "release_year": 2014,\n    "title": "A Drop of the Grapevine",\n    "type": "movie"\n}\n{\n    "popularity": 0.0005218638053935734,\n    "release_year": 2006,\n    "title": "Simsons",\n    "type": "movie"\n}']
>>> 
>>> conv.say('Released in 2012')
['Done. Here are some comedy movies starring Yo Oizumi:', '{\n    "popularity": 0.01330410659551587,\n    "release_year": 2012,\n    "title": "Bread of Happiness",\n    "type": "movie"\n}']
>>> conv.say('Thank you')
['Bye!']
```

For the time being, I got it working!

So I was able to try all the basics for now. The good thing about MindMeld is that it can be customized, so in the future I would like to try creating custom apps (an Article_Discovery blueprint), Japanese localization, chatbot integration, and so on.

### Disclaimer

The opinions expressed on this site and in the corresponding comments are the personal opinions of the contributor, not the opinions of Cisco. The content of this site is provided for informational purposes only and is not an endorsement or representation by Cisco or any other party. By posting on this website, each user agrees to be solely responsible for the content of all information posted, linked or otherwise uploaded, and releases Cisco from any liability regarding the use of this website.
