Developing a new Skill
This page will walk you through developing a new Mycroft Skill. It assumes you have read through the basic skills information.
Prerequisites
It's a good idea to get prepared before writing your new Skill, as this will make your skill-writing experience go much more smoothly.
Git - You will need to know some basic Git commands in order to create a new Skill for Mycroft. If you're not familiar with Git, that's OK, but you will need to have Git installed on your system.
Python - You will need to know some basic Python programming to get started. If you've programmed in other object-oriented languages, like JavaScript or C#, then you'll be able to pick it up, but if you're totally new to programming, you'll need to do an introductory programming course.
Naming your Skill - Choose a name for your Skill before creating a new repository. It's a good idea to check the Mycroft Skills Repo so that you don't create a duplicate name.
Set up your environment - Most people will find it easiest to test new Skills by setting up Mycroft for Linux. `cd` into the directory where you have `mycroft-core` installed and type `./start-mycroft.sh debug`. This should open a command line interface (CLI) like that shown below:
Understand the flow of your Skill - It's a good idea to write down on paper how your Skill will work, including
What words will the User speak to activate the Skill?
What will Mycroft speak in response?
What data will you need to deliver the Skill?
Will you need any additional packages or dependencies?
Once you've given these some thought, you can get started.
Skill terminology
You'll notice some new terms as you start to develop Skills.
dialog - A dialog is a phrase that is spoken by Mycroft. Different Skills will have different dialogs, depending on what the Skill does. For example, in a weather Skill, a dialog might be `the.maximum.temperature.is.dialog`.
intent - Mycroft matches utterances that a User speaks with a Skill by determining an intent from the utterance. For example, if a User speaks "Hey Mycroft, what's the weather like in Toronto?" then the intent will be identified as weather and matched with the Weather Skill. When you develop new Skills, you need to define new intents.
utterance - An utterance is a phrase spoken by the User, after the User says the Wake Word. "what's the weather like in Toronto?" is an utterance.
Make a new repo using the Template Skill
In GitHub, fork the Mycroft Skills repo into your own GitHub account. Do this by clicking the 'Fork' button.
Then, `git clone` the repo you've just forked to your local machine. For example, if your GitHub username is "JaneBloggs" then you will need to `git clone` from https://github.com/JaneBloggs/mycroft-skills.git
Now, we'll make a new repository for your Skill. The new repository has to follow a strict file structure. A Template Skill is available to clone from. If you're new to GitHub, you might find this guide on how to make a repo useful.
Copy the Template Skill into a new directory. Here, we've called the new Skill `skill-training`, but your Skill will have a different name.
Structure of the Skill repo
The structure of the Template Skill directory looks like this:
dialog directory
The `dialog` directory contains subdirectories for each spoken language the Skill supports. Each subdirectory has `.dialog` files which specify what Mycroft should say when a Skill is executed.
The subdirectories are named using the IETF language tag for the language. For example, Brazilian Portuguese is 'pt-br', German is 'de-de', and Australian English is 'en-au'.
Here is an example where one language is supported. By default, the Template Skill contains one subdirectory for United States English - 'en-us'. If more languages were supported, then there would be additional language directories.
There will be one file in the language subdirectory (i.e. `en-us`) for each type of dialog the Skill will use. In the example above, there are three types of dialog used by the Skill. Let's take a look at a dialog file.
You will notice that each line of dialog is slightly different. When instructed to use a particular dialog, Mycroft will choose one of these lines at random. This is closer to natural speech: many similar phrases can mean the same thing.
For example, how do you say 'goodbye' to someone?
Bye for now
See you round
Catch you later
Goodbye
See ya!
vocab directory and defining Intents
Each Skill defines one or more Intents. Intents are defined in the 'vocab' directory. The 'vocab' directory is organized by language, just like the 'dialog' directory.
In this example, we can see that there are three Intents, each defined in `IntentKeyword.voc` vocab files:
Just like dialog files, vocab files can have multiple lines. Mycroft will match any of these phrases with the Intent. If we have a look at the `ThankYouKeyword.voc` file, we can see this in action:
If the User speaks either "thank you" or "thanks", Mycroft will match this to the `ThankYou` intent in the Skill.
NOTE: One of the most common mistakes when getting started with Skills is that the vocab file doesn't include all the phrases that the User might use to trigger the intent.
__init__.py
`__init__.py` is where most of the Skill is defined, using Python code.
Let's take a look:
This section of code imports the required libraries. These libraries will be required by every Skill. Your Skill may need to import additional libraries.
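As a rough sketch, based on older versions of the Template Skill (the exact import lines may differ in the version you copied), the import section looks something like this:

```python
from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill
from mycroft.util.log import getLogger
```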
This section defines the author of the Skill. This value is usually set to the GitHub username of the author.
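For example, using the illustrative GitHub username from earlier:

```python
__author__ = 'JaneBloggs'
```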
This section starts logging of the Skill in the `mycroft-skills.log` file. If you remove this line, your Skill will not log any errors, and you will have difficulty debugging.
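A sketch of that line, assuming the `getLogger` helper imported above:

```python
LOGGER = getLogger(__name__)
```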
The `class` definition extends the `MycroftSkill` class:
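For example, for the Hello World Skill used as the running example here:

```python
class HelloWorldSkill(MycroftSkill):
```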
The class should be named logically, for example "TimeSkill", "WeatherSkill", "NewsSkill", "IPaddressSkill". If you would like guidance on what to call your Skill, please join the ~skills Channel on Mycroft Chat.
Inside the class, methods are then defined.
This method is the constructor, and the key function it has is to define the name of the Skill.
NOTE: You don't have to include the constructor unless you plan to declare state variables for the Skill object. If you plan to declare state variables, then they should be defined in this block. If you don't include the constructor, the name of the Skill will be taken from the name of the class, in this case 'HelloWorldSkill'.
Example:
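A sketch of a constructor for the HelloWorldSkill class above, with a commented placeholder showing where state variables would go:

```python
    def __init__(self):
        super(HelloWorldSkill, self).__init__(name="HelloWorldSkill")
        # Any state variables for the Skill object would be declared here,
        # for example (hypothetical): self.greeting_count = 0
```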
The `initialize()` function defines each of the Intents of the Skill. Note that there are three Intents defined in `initialize()`, and there were three Intents defined in vocab files.
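A sketch of what `initialize()` might look like for those three Intents. The keyword names follow the Template Skill's vocab files; your own Skill's keywords will differ:

```python
    def initialize(self):
        thank_you_intent = IntentBuilder("ThankYouIntent"). \
            require("ThankYouKeyword").build()
        self.register_intent(thank_you_intent, self.handle_thank_you_intent)

        how_are_you_intent = IntentBuilder("HowAreYouIntent"). \
            require("HowAreYouKeyword").build()
        self.register_intent(how_are_you_intent,
                             self.handle_how_are_you_intent)

        hello_world_intent = IntentBuilder("HelloWorldIntent"). \
            require("HelloWorldKeyword").build()
        self.register_intent(hello_world_intent,
                             self.handle_hello_world_intent)
```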
Next, there are methods that handle each of the Intents.
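For example, a sketch of the three handler methods (the dialog names other than `hello.world` are assumptions based on the Template Skill):

```python
    def handle_thank_you_intent(self, message):
        self.speak_dialog("welcome")

    def handle_how_are_you_intent(self, message):
        self.speak_dialog("how.are.you")

    def handle_hello_world_intent(self, message):
        self.speak_dialog("hello.world")
```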
In the `handle_hello_world_intent()` method above, the method receives two parameters, `self` and `message`. `self` is the reference to the object itself, and `message` is an incoming message from the messagebus. This method then calls the `speak_dialog()` method, passing to it the `hello.world` dialog. Remember, this is defined in the file "hello.world.dialog".
Can you guess what Mycroft will Speak?
You will usually also have a `stop()` method. This method tells Mycroft what to do if a stop intent is detected.
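In the Template Skill the `stop()` method is just a stub:

```python
    def stop(self):
        pass
```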
In the above code block, the `pass` statement is used as a placeholder; it doesn't actually have any function. However, if the Skill had any active functionality, the `stop()` method would terminate the functionality, leaving the Skill in a known good state.
Intents and regular expressions (regex)
In the examples above, we walked through how to use phrases in a `.voc` file to build an Intent using entities. In this section, we expand on how Intents are built, and introduce multiple entities and regular expressions.
Throughout this section, we will be using examples from the Date and Time Skill.
How .voc files are used to handle Intents
At the top of your Skill file, you will have a line that looks like this:
from adapt.intent import IntentBuilder
This tells your Skill to import the `IntentBuilder` class from Adapt. Adapt is an Intent-handling engine. Its job is to understand what a user Speaks to Mycroft, and to pass that information to a Skill for handling.
Different Skills require different information from the user. For example, the Skill to change the color of Mycroft's eyes just has one parameter - `color`. That parameter is mandatory, because you can't change the color of Mycroft's eyes without knowing what color to change them to.
Later in your Skill file, you will call `IntentBuilder`, with one or more parameters. The parameters can be either required or optional.
For example, here is the `@intent_handler` decorator used in the Date and Time Skill. It has three parameters; two are required and one is optional.
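A sketch approximating that decorator (the exact code in the Date and Time Skill may differ slightly): `Query` and `Time` are required, and `Location` is optional.

```python
    @intent_handler(IntentBuilder("").require("Query").require("Time")
                    .optionally("Location"))
    def handle_query_time(self, message):
        ...
```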
This call is then interpreted by the Adapt Intent Parser.
Internally, Adapt uses a function called `register_entity`, and tries to register entities based on the parameters passed to `IntentBuilder`. There are several ways that Adapt can register entities.
If we were building Intents manually, we would do something like this:
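A hypothetical sketch, using the `register_vocabulary()` helper to register each entity value by hand:

```python
    def initialize(self):
        # Hypothetical: register a handful of fixed locations as an Adapt
        # entity, one register_vocabulary() call per value
        self.register_vocabulary("toronto", "Location")
        self.register_vocabulary("melbourne", "Location")
```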
But what if we want to support more locations? Or make the location available to the Skill to use as a parameter in an API call?
First, Adapt will look in `.voc` files to try and register an Intent. For example, in the Date and Time Skill, in the `vocab` directory, you will see several `.voc` files. Note that they each correspond to one of the parameters passed to `IntentBuilder()`.
If we take a look inside each of these files, they contain only a single word each:
`Date.voc` => "date"
`Display.voc` => "display"
`Query.voc` => "what"
`Time.voc` => "time"
Now, remember back to `IntentBuilder` and the mandatory and optional parameters? Only `Query` and `Time` were mandatory. So if a user Spoke:
"Hey Mycroft, **what** **time** is it?"
then Adapt would match that Utterance to the Date and Time Skill, by registering the Intent, and within the Skill, this would be handled by the `handle_query_time()` function.
If the user Spoke:
"Hey Mycroft, **display** the **time** "
which function within the Date and Time Skill do you think would handle the Utterance?
ANSWER: handle_show_time()
But what about `Location`? There isn't a `.voc` file for `Location`, so how does Adapt register an entity for `Location`, so that `Location` can be included in an Utterance, recognised as an Intent, and handled properly by the Date and Time Skill?
This is done using regular expressions.
In the Date and Time Skill directory, you will see a sub-directory called `regex`. This sub-directory follows the same file structure as the `vocab` directory (e.g. there will be an `en-us` directory inside), and contains a file called `location.rx`:
Inside `location.rx` is a regular expression:
(at|in|for) (?P<Location>.*)
Because a `.voc` file is not present for the `Location` parameter, Adapt will then search for an equivalent `.rx` file in the `regex` directory. Instead of being restricted to the specified words in a `.voc` file, Adapt can register Intents using regular expressions, and thus support a wider range of input from the user.
Can you think of another Skill where a regular expression `Location` would be useful?
ANSWER: Weather Skill
For those who are new to Python, the regex used is a Python named group. The name of the group is case-sensitive, and correlates with the variable name used to extract the named group value.
For example, in the Date and Time Skill, we can see one of the functions uses `Location` as an optional parameter to the function.
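A sketch approximating that handler (the real Date and Time Skill code differs in detail, but the `Location` extraction is the key part):

```python
    @intent_handler(IntentBuilder("").require("Query").require("Time")
                    .optionally("Location"))
    def handle_query_time(self, message):
        # Extract the value captured by the Location named group, if any
        location = message.data.get("Location")
        # ... look up and speak the time for the given location
```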
The `Location` value is extracted by calling `message.data.get("Location")`. If the named group was named differently, such as `TheUserLocation`, then this code would look like:
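```python
    # Hypothetical: the group name in the .rx file is TheUserLocation
    location = message.data.get("TheUserLocation")
```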
Simplifying your Skill code with intent_handler decorators
Your Skill code can be simplified using the `intent_handler()` decorator. The major advantage of this approach is that the Intent is described together with the method that handles the Intent. This makes your code easier to read, easier to write, and errors will be easier to identify.
Learn more about what decorators are in Python at this link.
The `intent_handler()` decorator tags a method to be an intent handler for the intent, removing the need for separate registration.
First, you need to `import` the `intent_handler()` decorator. Include the following line in the `import` section:
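One common form of this import (it may vary between mycroft-core versions, so check the documentation for your version):

```python
from mycroft import intent_handler
```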
Then, you will be able to use the `@intent_handler` decorator. Using these decorators, the Skill becomes:
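A sketch of the Hello World example rewritten with decorators (keyword and dialog names as assumed in the earlier snippets):

```python
class HelloWorldSkill(MycroftSkill):

    @intent_handler(IntentBuilder("ThankYouIntent").require("ThankYouKeyword"))
    def handle_thank_you_intent(self, message):
        self.speak_dialog("welcome")

    @intent_handler(IntentBuilder("HowAreYouIntent").require("HowAreYouKeyword"))
    def handle_how_are_you_intent(self, message):
        self.speak_dialog("how.are.you")

    @intent_handler(IntentBuilder("HelloWorldIntent").require("HelloWorldKeyword"))
    def handle_hello_world_intent(self, message):
        self.speak_dialog("hello.world")

    def stop(self):
        pass
```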
As seen above, the entire `initialize()` method is removed and the Intent registration is moved to the method declaration.
Ideally, you should use this approach to Intent registration.
How do I disable a Skill?
During Skill development you may have reason to disable one or more Skills. Rather than constantly installing and uninstalling them via voice, or adding and removing them from `/opt/mycroft/skills/`, you can disable them in the `mycroft.conf` file.
First, identify the name of the Skill. The name of the Skill is the `path` attribute in the `.gitmodules` file.
To disable one or more Skills on a Mycroft Device, find where your `mycroft.conf` file is stored, then edit it using an editor like `nano` or `vi`.
Search for the string `blacklisted` in the file. Then, edit the line below to include the Skill you wish to disable, and save the file. You will then need to reboot, or restart the `mycroft` services on the Device.
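As a sketch, the relevant section of `mycroft.conf` looks something like this (the skill name shown is just the example name used earlier; use the `path` value for your own Skill):

```json
{
  "skills": {
    "blacklisted_skills": ["skill-training"]
  }
}
```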
How to increase the priority of Skills during loading
During Skill development, you may wish to increase the priority of your Skill loading during the startup process. This allows you to start using the Skill as soon as possible.
First, identify the name of the Skill. The name of the Skill is the `path` attribute in the `.gitmodules` file.
To prioritize loading of one or more Skills on a Mycroft Device, find where your `mycroft.conf` file is stored, then edit it using an editor like `nano` or `vi`.
Search for the string `priority` in the file. Then, edit the line below to include the Skill you wish to prioritize, and save the file. You will then need to reboot, or restart the `mycroft` services on the Device.
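Similarly, a sketch of the priority setting (again using the example skill name):

```json
{
  "skills": {
    "priority_skills": ["skill-training"]
  }
}
```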
How do I find more information on Mycroft functions?
You can find documentation on Mycroft functions and helper methods in the Mycroft Core API documentation.