I participated in the Alexa Developer Skill Awards again this year. Three in-house volunteers did short-term, hackathon-style intensive development and built ["Industry Slang Converter"](https://www.amazon.co.jp/HANDS-LAB-INC-%E6%A5%AD%E7%95%8C%E7%94%A8%E8%AA%9E%E5%A4%89%E6%8F%9B/dp/B07WW4S8GL).
> Zagin de Shisu :sushi:
- Overall skill: @daisukeArk
- Morphological analysis: @ryosukeeeee
- APL: @sr-mtmt
I decided to try APL as a chance for some technical catch-up. Also, when I built an Alexa skill before I wrote it in Node.js (TypeScript), so this time I decided to write it in Python.
Alexa and other voice assistants are finally starting to get screens, so it is now possible to provide visual information as well. With the Alexa Presentation Language (APL) you can create visual elements including animations, graphics, images, slideshows, and video, and produce various effects by changing them in response to the skill's dialogue.
For example, the PIZZA SALVATORE CUOMO skill was exhibited at an Alexa event; in its trial version you can try out the skill's flow without actually making a purchase.
By the way, for the Node.js version I referred to this series: https://qiita.com/Mount/items/72d9928ff2c0ae5de737
How to use the skill-development console and how the JSON works are the same in Python, so I will skip those here.
```python
# LaunchRequest handler
def handle(self, handler_input):
    # type: (HandlerInput) -> Response
    session_attr = handler_input.attributes_manager.session_attributes
    logger.info(session_attr)
    speak_output = (
        "Welcome. I will convert your words into industry slang. "
        'Try asking, "What is sushi in Ginza?"'
    )
    builder = (
        handler_input.response_builder
        .speak(speak_output)
        .ask(speak_output)
    )
    # Add the APL directive only on APL-capable devices
    if is_apl_supported(handler_input.request_envelope):
        builder.add_directive(
            RenderDocumentDirective(
                document=GetAplJsonDocument.get_apl_json("launch"),
                datasources=GetAplJsonDatasources.get_apl_json("launch")
            )
        )
    return builder.response
```
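For readers unfamiliar with the SDK's chained `.speak(...).ask(...)` calls: the response builder is a fluent builder, where each method returns the builder itself. The sketch below is a minimal stand-in to illustrate that pattern, not the ASK SDK's actual implementation (class and field names are my own):

```python
class ResponseBuilder:
    """Minimal stand-in illustrating the fluent-builder pattern."""

    def __init__(self):
        self._speech = None
        self._reprompt = None
        self._directives = []

    def speak(self, text):
        self._speech = text
        return self  # returning self is what enables chaining

    def ask(self, text):
        self._reprompt = text
        return self

    def add_directive(self, directive):
        self._directives.append(directive)
        return self

    @property
    def response(self):
        return {
            "outputSpeech": self._speech,
            "reprompt": self._reprompt,
            "directives": self._directives,
        }


resp = ResponseBuilder().speak("Welcome.").ask("Welcome.").response
print(resp["outputSpeech"])  # Welcome.
```

Because `add_directive` also returns the builder, the APL directive above can be added either in a chain or as a separate statement, as in the handler.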
```python
# Fetch the JSON file containing the APL information
from json import load as json_load


class GetAplJsonBase(object):
    _document_type = None

    @classmethod
    def get_apl_json(cls, intent_name):
        # Note: this is an absolute path as seen from Lambda,
        # not a path relative to this file's location
        file_path_base = '/var/task/'
        file_path = file_path_base + cls._document_type + '/' + intent_name + '.json'
        with open(file_path, 'r', encoding="utf-8") as f:
            return json_load(f)


class GetAplJsonDocument(GetAplJsonBase):
    _document_type = 'document'


class GetAplJsonDatasources(GetAplJsonBase):
    _document_type = 'datasources'


def is_apl_supported(request):
    apl_interface = request.context.system.device.supported_interfaces.alexa_presentation_apl
    return apl_interface is not None
```
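To see how the `is_apl_supported` check behaves, you can fake just the slice of the request envelope it walks through. A quick sketch using `types.SimpleNamespace` (the `fake_envelope` helper is mine, purely for illustration):

```python
from types import SimpleNamespace


def is_apl_supported(request):
    apl_interface = request.context.system.device.supported_interfaces.alexa_presentation_apl
    return apl_interface is not None


def fake_envelope(apl):
    # Build just enough of the request envelope for the check above
    interfaces = SimpleNamespace(alexa_presentation_apl=apl)
    device = SimpleNamespace(supported_interfaces=interfaces)
    system = SimpleNamespace(device=device)
    return SimpleNamespace(context=SimpleNamespace(system=system))


print(is_apl_supported(fake_envelope(object())))  # True  (APL-capable device)
print(is_apl_supported(fake_envelope(None)))      # False (audio-only device)
```

On audio-only devices the `alexa_presentation_apl` interface is simply absent, so the handler skips the `RenderDocumentDirective` and the skill still works by voice alone.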
It was pointed out that the APL JSON might be easier to work with if defined in js rather than kept in a text file. I experimented with this but couldn't settle on a format, so this time I read the JSON from text files and pass it along.
Here is a reference for defining it in js:

```javascript
function createDatasource(attributes) {
  return {
    "riddleGameData": {
      "properties": {
        "currentQuestionSsml": "<speak>"
          + attributes.currentRiddle.question
          + "</speak>",
        "currentLevel": attributes.currentLevel,
        "currentQuestionNumber": (attributes.currentIndex + 1),
        "numCorrect": attributes.correctCount,
        "currentHint": attributes.currentRiddle.hints[attributes.currentHintIndex]
      },
      "transformers": [
        {
          "inputPath": "currentQuestionSsml",
          "outputName": "currentQuestionSpeech",
          "transformer": "ssmlToSpeech"
        },
        {
          "inputPath": "currentQuestionSsml",
          "outputName": "currentQuestionText",
          "transformer": "ssmlToText"
        }
      ]
    }
  };
}
```
Reference: https://github.com/alexa-labs/skill-sample-nodejs-level-up-riddles/blob/master/Step%203%20-%20Add%20APL/lambda/custom/index.js#L461-L487
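The same idea carries over to Python: build the datasources dict at runtime from session attributes instead of loading a static file. A sketch under that assumption (the attribute keys mirror the riddle sample above, not this skill):

```python
def create_datasource(attributes):
    """Build an APL datasources dict at runtime from session attributes."""
    riddle = attributes["currentRiddle"]
    return {
        "riddleGameData": {
            "properties": {
                # SSML source that the transformers below will convert
                "currentQuestionSsml": "<speak>" + riddle["question"] + "</speak>",
                "currentLevel": attributes["currentLevel"],
                "currentQuestionNumber": attributes["currentIndex"] + 1,
                "numCorrect": attributes["correctCount"],
                "currentHint": riddle["hints"][attributes["currentHintIndex"]],
            },
            "transformers": [
                {
                    "inputPath": "currentQuestionSsml",
                    "outputName": "currentQuestionSpeech",
                    "transformer": "ssmlToSpeech",
                },
                {
                    "inputPath": "currentQuestionSsml",
                    "outputName": "currentQuestionText",
                    "transformer": "ssmlToText",
                },
            ],
        }
    }
```

The resulting dict can be passed straight to `RenderDocumentDirective(datasources=...)` in place of the file-based `GetAplJsonDatasources.get_apl_json(...)` call.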
With this, if you get creative with the JSON, you should be able to display all sorts of things.
I find it genuinely hard to support visual information and touch panels while keeping the strengths of VUI. I would like to work not only with engineers but with people from many backgrounds to figure out how to position this alongside smartphones.
The Smart Speaker Advent Calendar 2019 still has open slots, so please post if you are interested :sunny: