Integrating Azure IoT: Cognitive Services and Raspberry Pi
30 November 2017
by Eduardo Mazza & Marc Bourdeau

In the last few years, Microsoft has made a significant effort to provide a simple way to develop software for IoT devices. Without a doubt, this investment is connected to estimates that the number of IoT devices will reach nearly 40 billion, roughly 30 devices for every social network user.

In this article, we will describe how easy it is to develop for IoT devices using the technologies and tools provided by Microsoft. Furthermore, we will show how to integrate our IoT device with Cognitive Services in Azure.


The project

The project consists of a text command interface that interprets orders (given in text format) and drives outputs on the IoT device (in our case, two LEDs) based on those orders. The IoT device of choice is the Raspberry Pi 3 (Model B), a credit-card-sized computer that can be plugged into a monitor and keyboard and is widely used in electronics projects.

Besides the Raspberry Pi, we also used a few pieces of hardware:

1 x Power adapter for the Pi, 5 V / 2.5 A (sometimes sold separately)
1 x Breadboard
2 x 220 Ohm resistors
4 x Jumper wires
2 x LEDs of two different colors (green and yellow in this demonstration)

To set up the hardware for this project, wire the LEDs as follows:

Green LED: connected through one of the resistors to GPIO pin 5 (LED_GREEN_PIN in the code).

Yellow LED: connected through the other resistor to GPIO pin 6 (LED_YELLOW_PIN in the code).

Note that the polarity of the LEDs is important: they are wired so that driving a GPIO pin low turns the corresponding LED on. (This configuration is commonly known as active low.)

At the end, you have both LEDs sitting on the breadboard and wired to the Raspberry Pi's GPIO header.

Starting to code – make the lights work

To make the coding easier, we adapted an example project called "Blinky" (basically the Hello World of IoT devices), which contains the code for powering a single LED on a breadboard. First, configure and connect your Raspberry Pi to Visual Studio using the tutorial at https://developer.microsoft.com/en-us/windows/iot/getstarted

Once you are able to run code from Visual Studio on the Raspberry Pi, download the source code for the project from GitHub.

The core of the project is the MainPage class, which calls the InitGPIO method:

private void InitGPIO()
{
    // Get the default GPIO controller of the device
    var gpio = GpioController.GetDefault();

    // Open the pins wired to the LEDs and configure them as outputs
    pinGreen = gpio.OpenPin(LED_GREEN_PIN);
    pinYellow = gpio.OpenPin(LED_YELLOW_PIN);
    pinGreen.SetDriveMode(GpioPinDriveMode.Output);
    pinYellow.SetDriveMode(GpioPinDriveMode.Output);
}

This method sets up the IoT device to control the output level of pins 5 and 6 on the Raspberry Pi (represented by the constants LED_GREEN_PIN and LED_YELLOW_PIN respectively).
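The pin constants and GpioPin fields that InitGPIO relies on are not shown in the snippet above; a minimal sketch of those declarations in MainPage, assuming the pin numbers from our wiring, could look like this:

// Assumed declarations in MainPage (pin numbers match the wiring above)
private const int LED_GREEN_PIN = 5;
private const int LED_YELLOW_PIN = 6;

private GpioPin pinGreen;   // from the Windows.Devices.Gpio namespace
private GpioPin pinYellow;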

…and that's it! Now the only code we need to execute to switch on the green LED is:

pinGreen.Write(GpioPinValue.Low);

Easy peasy lemon squeezy, right?
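Since the wiring is active low, writing GpioPinValue.High switches an LED back off; for example:

pinGreen.Write(GpioPinValue.High);   // green LED off (active low)
pinYellow.Write(GpioPinValue.Low);   // yellow LED on
pinYellow.Write(GpioPinValue.High);  // yellow LED off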


Creating the Cognitive Services

Now that we know how to turn the lights on and off with our hardware, it is time to teach the device how to understand a command. If we give the order "I want to switch the green light on", we want it to have the same effect as saying "green light, on now". To accomplish this, we will use a service called LUIS (Language Understanding Intelligent Service), one of the many offered by Microsoft Cognitive Services. As described in the LUIS documentation:

“LUIS enables developers to build smart applications that can understand human language and react accordingly to user requests.”

Basically, we provide LUIS with a written sentence, and we obtain as output a JSON structure describing the possible intents and the entities mentioned in the sentence (along with how confident the service is about its result).

To create and configure LUIS, we followed the tutorial at https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-get-started-create-app and simply added three entities (a green light, a yellow light, and both lights) and two intents (switch on and switch off).

Finally, we added a few sentences to train the service to understand our commands. Sentences such as "green lights on", "I want the green light to be on" and "please, switch the green light on" give the application examples of how we might ask it to turn the green light on.

We then trained and published the application. The result of the publication is a REST service that receives queries and returns a JSON answer describing the detected action (either LightOn or LightOff) and the entities involved (Green, Yellow or AllLight).
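For a query such as "switch the green light on", the service returns something along the lines of the following (an abbreviated, illustrative sketch; the exact fields depend on the LUIS API version):

{
  "query": "switch the green light on",
  "topScoringIntent": { "intent": "LightOn", "score": 0.97 },
  "entities": [
    { "entity": "green light", "type": "Green", "score": 0.92 }
  ]
}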


Integrating Everything

We are almost finished. The last step is to write the code that communicates with the REST service created in the previous step and handles its output.

Microsoft offers various libraries (in the form of NuGet packages) that already implement the integration with many of the Cognitive Services. These packages are prefixed with the name "Microsoft.ProjectOxford" (in fact, Project Oxford was the previous name of Cognitive Services): https://www.nuget.org/packages?q=Microsoft.ProjectOxford.

However, not all services are covered, and among those that are, not all have a version that works on IoT devices. In our case, we just need to develop a class that communicates with the REST endpoint. First, we write a method that makes a GET request to a given URL:

private static async Task<string> Get(string url)
{
    // Issue a GET request to the given URL and return the response body as a string
    // (HttpClient and HttpResponseMessage come from the Windows.Web.Http namespace)
    HttpClient client = new HttpClient();
    HttpResponseMessage response = await client.GetAsync(new Uri(url));
    var responseString = await response.Content.ReadAsStringAsync();
    return responseString;
}

Then we can just create a class to hold the values of the output:

public class Command
{
    public Intent Intent { get; set; }
    public Entity Entity { get; set; }
}
public enum Entity
{
    Green,
    Yellow,
    All
}
public enum Intent
{
    LightOn,
    LightOff,
    None
}

Finally we create the method that passes the query to the REST service and parses the resulting JSON:

public static async Task<Command> Order(string sentence)
{
    var result = new Command { Entity = Entity.All, Intent = Intent.None };
    var queryUrl = LuisUrl + sentence;
    var resultJson = await Get(queryUrl);

    //read json and turn into command
    JsonObject obj = JsonObject.Parse(resultJson);

    //read intent
    var intent = obj["topScoringIntent"].GetObject()["intent"].GetString();
    switch (intent)
    {
        case "LightOn":
            result.Intent = Intent.LightOn;
            break;
        case "LightOff":
            result.Intent = Intent.LightOff;
            break;
        default:
            break;
    }

    //read entity
    var entity = obj["entities"];
    if (entity.GetArray().Any())
    {
        var entityLight = obj["entities"].GetArray()[0].GetObject()["type"].ToString().Replace("\"","");
        switch (entityLight)
        {
            case "AllLight":
                result.Entity = Entity.All;
                break;
            case "Green":
                result.Entity = Entity.Green;
                break;
            case "Yellow":
                result.Entity = Entity.Yellow;
                break;
            default:
                break;
        }
    }
    return result;
}

The classes used for the web client and for JSON parsing are located in the Windows.Web.Http and Windows.Data.Json namespaces, respectively.
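One detail not shown above is the LuisUrl constant used in Order. It holds the endpoint of the published LUIS application, with the subscription key and the query parameter appended. A sketch with placeholder values (the exact host and path depend on your LUIS region and API version) could look like this:

// Placeholder values: replace the region, app ID and subscription key with your own
private const string LuisUrl =
    "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<YOUR-APP-ID>" +
    "?subscription-key=<YOUR-KEY>&q=";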

And that's it! Within minutes and a few lines of code, our IoT device is able to understand our orders.
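As a final illustration of the glue code, the command returned by Order only has to be mapped back to the GPIO pins. The following sketch assumes Order lives in a class we call LuisClient (the name is ours, not from the sample) and reuses the pinGreen and pinYellow fields; remember that the LEDs are active low, so Low means on:

private async Task ExecuteCommand(string sentence)
{
    // Ask LUIS what the sentence means (LuisClient is our assumed class name)
    var command = await LuisClient.Order(sentence);

    if (command.Intent == Intent.None)
        return; // nothing recognized, leave the LEDs as they are

    // Active low: Low switches an LED on, High switches it off
    var value = command.Intent == Intent.LightOn ? GpioPinValue.Low : GpioPinValue.High;

    if (command.Entity == Entity.Green || command.Entity == Entity.All)
        pinGreen.Write(value);
    if (command.Entity == Entity.Yellow || command.Entity == Entity.All)
        pinYellow.Write(value);
}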


Conclusion

IoT devices open up a huge number of project ideas, ranging from task automation to machine monitoring. They can be used in personal projects by hobbyists as well as in large projects that add value to your business. Furthermore, the ability to integrate these devices with external APIs, such as Cognitive Services, opens possibilities that are only limited by our imagination.
