This article is the second chapter of the series "Evacuation advisory system using drone". Please refer to it for the background of this project.
This chapter also builds on the following article, so if you have not read it yet, please read it first: Chapter 1 [AWS / Tello] Building a system for operating drones on the cloud
Let's operate a Tello drone by voice using Alexa, with AWS as the infrastructure. First, have a look at the finished result in the video.
In the previous chapter we confirmed communication between IoT Core and Tello, so this time we host the Alexa skill endpoint on Lambda and publish from there to IoT Core.
Part 1. Skill development and linking with Lambda
Part 2. Communication from Lambda to IoT Core
Log in to the Alexa Developer Console (ADC) and create a custom skill.
If you select "Provision your own" as the skill's backend, you need to prepare your own Lambda function; if you select "Alexa-Hosted", you can use the code editor in the ADC, which feels easier for beginners to get started with. Since this skill communicates with IoT Core, and with future extensibility in mind, we take the former approach and prepare the Lambda function ourselves. So let's create a new Lambda function. Make a note of the Lambda's ARN and set it as the "Default Region" endpoint of the skill.
While we are at it, let's review the basic terminology for skills. Each item will come up again in more detail later.
__Invocation name__: the phrase that starts a dialogue (session)
__Intent__: the intention of a dialogue (developers can define these freely)
__Built-in intent__: intents provided by default (cancel, stop, help, etc.)
__Sample utterance__: a phrase that invokes a specific intent within a session
__Slot__: something like a variable held inside a sample utterance
__Built-in slot__: slots that are already provided (numbers, facility names, even actress names... impressive)
__Custom slot__: slots that developers can define freely
Incidentally, slots are great because you can flexibly define synonyms and assign IDs to their values.
Here are the details of the skill configured for this drone control system (for reference).
__Invocation name__: "Controller"
__Intents__: Controller / Land. The Controller intent is for moving the drone, the Land intent for landing it (a Flip intent also looks interesting).
__Sample utterances__: for the Controller intent (listed in the appendix below)
__Built-in slot__: num (AMAZON.NUMBER)
__Custom slot__: direction
In other words, the Controller intent needs two pieces of information: the direction of travel and the distance. It is a matter of taste, but in a real skill you should arrange for Alexa to ask back for the distance when the user gives only a direction. The interaction model can be defined in JSON, so it is included in the appendix.
Now let's work on the Lambda function created earlier. Using the Alexa Skills Kit SDK here is definitely the better choice. You can manage without it, but the JSON of the request parameters is deeply nested and handling it by hand looks tedious. The language is Python: many people use Node.js and few use Python, so I deliberately went with Python here.
Alexa Skills Kit SDK for Python
First, import ask_sdk_core in the Lambda function.
import ask_sdk_core
However, as it stands Lambda cannot load this external library, so let's add ask_sdk_core as a layer.
You can also pip install into the project locally, zip the entire project, and upload it to Lambda. The link below summarizes that method and the ecosystem of local Lambda development, so have a look if you are interested: [AWS / Lambda] How to load Python external library
Since ask_sdk_core is a pure-Python library, any environment should be fine for building the zip for the layer. I did this on macOS (example for Python 3.7).
$ mkdir -p build/python/lib/python3.7/site-packages
$ pip3 install ask_sdk_core -t build/python/lib/python3.7/site-packages/
$ cd build
$ zip -r ask_sdk.zip .
Register the generated ask_sdk.zip as a layer and attach that layer to the Lambda function so the library can be loaded.
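If you prefer the CLI over the console, registering the layer and attaching it to the function can look like the sketch below (the layer name ask-sdk and the function name tello_controller are placeholders of my own; doing the same from the console works just as well).
$ aws lambda publish-layer-version \
    --layer-name ask-sdk \
    --zip-file fileb://ask_sdk.zip \
    --compatible-runtimes python3.7
$ aws lambda update-function-configuration \
    --function-name tello_controller \
    --layers <LayerVersionArn printed by the previous command>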
Naturally, the trigger for this Lambda is Alexa Skills Kit, and the skill ID is the ID of the skill created earlier.
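The same can be done from the CLI by granting Alexa permission to invoke the function. The sketch below is one way to do it (the function name and skill ID are placeholders); the --event-source-token restricts invocation to your skill.
$ aws lambda add-permission \
    --function-name tello_controller \
    --statement-id alexa-skill-trigger \
    --action lambda:InvokeFunction \
    --principal alexa-appkit.amazon.com \
    --event-source-token amzn1.ask.skill.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx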
The skill's interaction model and Lambda are now linked. Next, we will develop the Lambda code and connect it to IoT Core.
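As a preview of the code side, here is a minimal sketch of handlers written with the SDK, matching the interaction model in the appendix. The spoken responses and the ask-back wording are placeholders of mine; the part that actually publishes commands to IoT Core comes in Part 2.
# Minimal sketch: intent handlers with ask_sdk_core.
# The speech placeholders get replaced by IoT Core publishing in Part 2.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name
from ask_sdk_model.dialog import ElicitSlotDirective


class LaunchRequestHandler(AbstractRequestHandler):
    """Runs when the user opens the skill with the invocation name "controller"."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        # ask() keeps the session open so follow-up commands are accepted
        return (handler_input.response_builder
                .speak("The controller is ready.")
                .ask("Which direction?")
                .response)


class ControllerIntentHandler(AbstractRequestHandler):
    """Reads the direction/num slots instead of digging through the raw JSON."""
    def can_handle(self, handler_input):
        return is_intent_name("Controller")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        direction = slots["direction"].value
        distance = slots["num"].value
        if distance is None:
            # Ask back for the distance when only a direction was heard
            return (handler_input.response_builder
                    .speak("How many centimeters?")
                    .add_directive(ElicitSlotDirective(slot_to_elicit="num"))
                    .response)
        return (handler_input.response_builder
                .speak(f"Moving {distance} centimeters {direction}.")
                .ask("Next command?")
                .response)


sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())
sb.add_request_handler(ControllerIntentHandler())
lambda_handler = sb.lambda_handler()  # set this as the Lambda handler entry point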
Since this article has grown long, the story continues in Part 2: Chapter 3 [AWS / Tello] I tried to operate the drone with voice Part 2
The interaction model JSON is below. If you come up with a new intent, please share it on GitHub :pray: https://github.com/shoda888/tello_ask_model
{
"interactionModel": {
"languageModel": {
"invocationName": "controller",
"intents": [
{
"name": "AMAZON.CancelIntent",
"samples": []
},
{
"name": "AMAZON.HelpIntent",
"samples": []
},
{
"name": "AMAZON.StopIntent",
"samples": []
},
{
"name": "Controller",
"slots": [
{
"name": "num",
"type": "AMAZON.NUMBER"
},
{
"name": "direction",
"type": "direction"
}
],
"samples": [
"{direction} {num}",
" {num}centimeter{direction}Go to",
"{direction}To{num}centimeter",
"{direction}To{num}Move a centimeter"
]
},
{
"name": "AMAZON.NavigateHomeIntent",
"samples": []
},
{
"name": "Land",
"slots": [],
"samples": [
"landing",
"Landing",
"land"
]
}
],
"types": [
{
"name": "direction",
"values": [
{
"id": "back",
"name": {
"value": "Behind",
"synonyms": [
"back",
"Rear"
]
}
},
{
"id": "forward",
"name": {
"value": "Before",
"synonyms": [
"Before",
"Forward"
]
}
},
{
"id": "down",
"name": {
"value": "did",
"synonyms": [
"under",
"Descent",
"Down"
]
}
},
{
"id": "up",
"name": {
"value": "up",
"synonyms": [
"Up",
"Upward",
"Rise"
]
}
},
{
"id": "left",
"name": {
"value": "Hidari",
"synonyms": [
"left"
]
}
},
{
"id": "right",
"name": {
"value": "Migi",
"synonyms": [
"right"
]
}
}
]
}
]
}
}
}