
Azure IoT Hub: Capture and analyze brain waves with Azure IoT Hub


November 2016

Volume 31, number 11


By Benjamin Perkins

The brain is the engine that interprets simultaneous input from many sources through many interfaces, and then triggers some type of action or reaction. Consider sources such as a flower, the sun or firecrackers, and interfaces such as smell, sight and hearing: a scent can trigger calmness, bright light a physical movement such as shading the eyes, and a sudden loud sound a startled reaction. Currently, a reliable algorithm for this doesn't exist, due to the large number of variables between the source, the human and the interface.

A first step toward deriving such a complex algorithm is to gain a better understanding of how the brain functions and reacts in numerous situations, such as smelling a flower, squinting in the sun or unexpectedly hearing firecrackers. This article describes how to gain further insight into how the brain works in specific scenarios, in the hope of one day defining an algorithm that reacts reliably in unexpected situations.

Capture, store and analyze brain waves

The project described in this article uses numerous technologies to capture, store and analyze brain waves, each of which is briefly described in Figure 1. The first two components - capturing brain waves and storing them in Azure IoT Hub - are described in the following sections of this article. The remaining three components will be discussed in Part 2 of this article in a future issue of MSDN Magazine.

Figure 1 The components of the brain analysis project

Component | Role | Brief description
Emotiv Insight SDK | Capture | A brain interface that converts brain waves into numbers
Azure IoT Hub | Storage | Temporary storage queue for IoT device-rendered data
SQL Azure | Storage | Highly scalable, inexpensive and flexible database
Stream Analytics | Storage | An interface between Azure IoT Hub and SQL Azure
Power BI | Analysis | Data analysis tool with simple graphing support

The sections that follow are broken down by component, each including a functional and technical description, as well as the details of the coding or configuration requirements. The parts of the solution are presented in the order in which they were created; however, it's possible to create them in a number of different sequences. The technical goal is to upload brain waves captured using a brain computer interface (BCI), save them in a SQL Azure database and analyze the data with Power BI.

Capturing brain waves

When the brain receives input and reacts, it determines its response by firing electrical currents between neurons. These neural firings are physical events that cause real, recordable vibrations of varying intensities in different locations in the brain.

An electroencephalograph records these vibrations, and only in the past few years have companies created inexpensive BCIs to record this brain activity. (A list of many of these companies and devices is available at bit.ly/2c7j4fw.) In addition, some of these companies have created SDKs for their devices that allow real-time visualization and storage of brain activity.

I wrote a short blog post about my original intentions for getting my brain waves into Azure; you can read it at bit.ly/294Hi4R. Note that I selected the Emotiv Insight BCI for this project. This BCI has five electrodes (AF3, AF4, T7, T8 and O1), each of which reads the five brain frequencies described in Figure 2.

Figure 2 The brain frequencies read by the Emotiv Insight BCI

Brain frequency | State of mind
ALPHA | Relaxed, reflective, high-level creativity
LOWBETA | Social activity, excitement, alertness
HIGHBETA | Focus, thinking, working
GAMMA | Optimal frequency for thinking, active thought
THETA | Sleepy, drowsy, meditative, dreaming

The Emotiv SDK is available for download from GitHub (github.com/Emotiv) and is easy to use; this example uses the community SDK version. When you configure the C# version of the SDK to run in Visual Studio, there are three "gotchas" that aren't intuitive:

  1. Make sure that the "bitness" (platform target) of the Visual Studio project is aligned with the bitness of the components in No. 3.
  2. Make sure that DotNetEmotivSDK.dll is compiled with the same bitness as No. 3.
  3. You must manually place edk.dll and glut32.dll/glut64.dll into the working directory, e.g., the project's \bin\Release folder.

To get started, navigate to the C# project in the community-sdk-master\examples\C# folder and open the DotNetEmotivSDK solution in Visual Studio. Make DotNetEmotivSDK the startup project by right-clicking the project and selecting Set as StartUp Project, then compile the project by pressing Ctrl+Shift+B. Pay special attention to the target platform and make sure it stays consistent while configuring the SDK; choose either x86 or x64.

Next, create a new console application in Visual Studio and add a reference to the DotNetEmotivSDK.dll that was created when you compiled the DotNetEmotivSDK project: right-click References, browse to, for example, the \obj\x86\Release directory, and carefully select the compiled binary. Last, copy edk.dll and the glut*.dll file into the same working directory in which the DotNetEmotivSDK.dll was placed. There are numerous copies of edk.dll and glut*.dll in the SDK; select the binaries located in community-sdk-master\bin\win32 if you compiled everything in 32-bit, otherwise select the 64-bit versions.

Once the SDK is properly configured and the new console application is ready, add a using Emotiv; statement to the Program.cs file to reference the functions in the library. If desired, review the BrainComputerInterface project in the downloadable sample code. Pay special attention to the GetHeadsetInformation method, as it's where some checks of the BCI are performed before reading from the device.

The GetHeadsetInformation method subscribes to the EmoStateUpdatedEventHandler, which is triggered when the ProcessEvents method of the EmoEngine class is called. The GetHeadsetInformation method continues to call ProcessEvents within a while loop until the Boolean StopHeadsetInformation is set to false. When the EmoStateUpdatedEventHandler is triggered, it executes the Engine_EmoStateUpdated method, which checks the battery level and the signal strength. It's important to the validity of the collected BCI data that the battery has an acceptable charge and that there's an adequate Bluetooth 4.0 LE connection between the BCI and the computer.

In the source code, the collection of BCI data doesn't begin until these two measurements pass a reasonable threshold, e.g., chargeLevel > 1 && signalStrength > EdkDll.IEE_SignalStrength_t.BAD_SIG. As long as the signal strength is greater than IEE_SignalStrength_t.NO_SIG, where NO_SIG means no signal, the device is considered functional, but not optimal; therefore, the signal strength must be at least GOOD_SIG before proceeding. Likewise, with a MaxChargeLevel of five, a current battery charge greater than one is considered functional, but not optimal. The code to capture the brain waves checks the battery capacity, the signal strength and the contact quality for each of the electrodes, as sketched below.
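What follows is a minimal sketch of that headset check, not the exact listing from the sample code. It assumes the community SDK's managed wrapper exposes EmoEngine.Instance, ProcessEvents, GetBatteryChargeLevel and GetWirelessSignalStatus as in the SDK's own C# examples; the timeout value and console message are illustrative:

```csharp
using System;
using Emotiv; // from DotNetEmotivSDK.dll

class HeadsetCheck
{
  // Loop control flag; set to false once the headset reports a good state.
  static bool StopHeadsetInformation = true;

  static void GetHeadsetInformation()
  {
    EmoEngine engine = EmoEngine.Instance;
    engine.EmoStateUpdated += Engine_EmoStateUpdated;
    engine.Connect();

    // Keep pumping SDK events until the headset passes the checks.
    while (StopHeadsetInformation)
    {
      engine.ProcessEvents(1000);
    }
    Console.WriteLine("The BCI is charged and connected; ready to collect data.");
  }

  static void Engine_EmoStateUpdated(object sender, EmoStateUpdatedEventArgs e)
  {
    EmoState es = e.emoState;

    int chargeLevel, maxChargeLevel;
    es.GetBatteryChargeLevel(out chargeLevel, out maxChargeLevel);
    EdkDll.IEE_SignalStrength_t signalStrength = es.GetWirelessSignalStatus();

    // Proceed only when both the battery and the Bluetooth signal are adequate.
    if (chargeLevel > 1 &&
        signalStrength > EdkDll.IEE_SignalStrength_t.BAD_SIG)
    {
      StopHeadsetInformation = false;
    }
  }
}
```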

Caution: The BCI can obtain readings from the electrodes even when contact quality is poor. Some of the electrodes may be working and collecting data while others aren't, which is not an ideal situation, because the conclusions drawn later from the analysis can be misinterpreted if all electrodes weren't fully functional during the session. There's no code in the example to measure and confirm that all electrodes are working; nonetheless, this should be confirmed before the measurements are saved. An alternative to coding the logic that checks whether all electrodes are fully functional is to use the online Emotiv CPanel at bit.ly/1LZge5T, shown in Figure 3.


Figure 3 Checking the electrodes of the brain computer interface (BCI)

After the Engine_EmoStateUpdated method confirms that the BCI is functional, it sets StopHeadsetInformation to false, which breaks the while loop in the GetHeadsetInformation method. The C# code that reads the frequencies from all the electrodes is shown in Figure 4 and is found in the GetBrainInterfaceMeasurements method. The method first creates a one-dimensional array of the EdkDll.IEE_DataChannel_t type with five elements, one per electrode on the device. The program then cycles through each of the five electrodes and outputs the frequency strengths to the console. Notice that the GetAverageBandPowers method of the EmoEngine class populates, for each channel/electrode (channelList[i]), the frequency variables (theta, alpha, low_beta, high_beta and gamma) in which the numerical representation of the brain wave is stored. Each of the measured values, along with its electrode, is rendered in the console window using the static WriteLine method of the System.Console class.

Figure 4 Reading the frequency values of the brain interface electrodes
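As a minimal sketch of what such a method might look like (the channel enum member names and the exact GetAverageBandPowers wrapper signature vary between SDK versions, so treat this as an approximation of the pattern described above, not a drop-in listing; the native IEE_GetAverageBandPowers returns each band through a pointer, modeled here as one-element arrays):

```csharp
static void GetBrainInterfaceMeasurements()
{
  // One entry per electrode on the Emotiv Insight (enum member names assumed).
  EdkDll.IEE_DataChannel_t[] channelList = new EdkDll.IEE_DataChannel_t[5]
  {
    EdkDll.IEE_DataChannel_t.IED_AF3, EdkDll.IEE_DataChannel_t.IED_AF4,
    EdkDll.IEE_DataChannel_t.IED_T7,  EdkDll.IEE_DataChannel_t.IED_T8,
    EdkDll.IEE_DataChannel_t.IED_O1
  };

  // One-element arrays stand in for the native out pointers.
  double[] theta = new double[1], alpha = new double[1], low_beta = new double[1],
           high_beta = new double[1], gamma = new double[1];

  for (int i = 0; i < channelList.Length; i++)
  {
    // Retrieves the average band power of each frequency for this electrode.
    EmoEngine.Instance.IEE_GetAverageBandPowers(0, channelList[i],
      theta, alpha, low_beta, high_beta, gamma);
    Console.WriteLine("{0}: THETA: {1}, ALPHA: {2}, LOWBETA: {3}, HIGHBETA: {4}, GAMMA: {5}",
      channelList[i], theta[0], alpha[0], low_beta[0], high_beta[0], gamma[0]);
  }
}
```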

The console application requires an Emotiv Insight BCI and a valid Bluetooth connection to it. Regardless of the BCI you choose, however, the same principles apply:

  • Before you begin collecting and storing data, make sure the device state is optimal and consistent, so that all recorded data is collected in the same way.
  • Understand how the electrodes are configured and what they measure, then access the measurements and render them for later storage and analysis.

With the console application working and the results being written to the console window, the next section explains how to configure Azure IoT Hub. Configuring the SQL Azure database into which Stream Analytics inserts the BCI data for analysis and learning is explained in Part 2 of this article.

The parallels between coding and the brain

I don't believe I'm the only person who has made a connection between building code structures and "human" traits. In many ways, it seems that coding platforms were designed with our own characteristics in mind, because defining ourselves in code works so well it flows almost without thought. Consider the object-oriented programming concept of inheritance, in which a child class receives a set of attributes and behaviors from a parent class. In the human context, children receive attributes such as eye and hair color from their parents. In addition, the ability to blink and the sense of smell are examples of method-like traits that people usually possess. These traits didn't come directly from my parents, however; they were inherited over many generations from the base human class itself.

If you were to create a class to represent humans, you would likely include all the basic human attributes and capabilities within that class, e.g., sex, eating, sleeping, breathing and so on. You would then create a child class that inherits from the human class, with some additional unique or more advanced capabilities such as reflecting, speaking, appreciating and so on, assuming that each generation of the inherited class becomes more sophisticated and complex over time. Inheritance continues progressively with the implementation of each newer child class.

The ability of people to speak and communicate changes with each generation, which is where another programming concept, polymorphism, comes in. Polymorphism means that while a child's method keeps the name, purpose and intent of the parent's, it can be implemented differently and with more inputs, making the result more refined. For example, although the parent has the ability to speak, the child may have a similar speak method that also includes the ability to speak in multiple languages. The additional parameter to the speak method would be the language type; this input isn't present in the speak method of the parent. The derived or overloaded speak method could also include more advanced communication capabilities, such as facial expressions or inflection.
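A minimal sketch of the analogy in C# might look like this (the class and member names are illustrative, not from the sample code):

```csharp
using System;

class Human
{
  public string EyeColor { get; set; } // attribute passed down to children

  public virtual void Speak() => Console.WriteLine("Speaking...");
}

class Child : Human
{
  // Overload: same name, purpose and intent as the parent's Speak,
  // but with an additional input that makes the result more refined.
  public void Speak(string language) =>
    Console.WriteLine($"Speaking in {language}...");
}
```

Instantiating the class (Child child = new Child()) and calling child.Speak("French") exercises the refined trait, but, as discussed next, something still has to decide to make that call.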

Creating these structured classes, technically sophisticated methods and unique sets of attributes is a fascinating journey into the realization of our inner selves. Creating and defining ourselves in code is a great way to learn what makes us who we are. However, one thing becomes apparent quickly after the model is created: something must trigger the methods so the child can do something. Instantiating the class isn't a big deal (Child child = new Child()), but what is the module that then calls the methods and uses the attributes? Without that module in place, the entity is motionless and thoughtless. While the human senses, such as sight, smell and touch, provide the input that triggers a suitable method, a computer program uses data and coded logic to interpret input as the basis for an action. To write that logic properly, we'd first need an understanding of how humans work that doesn't yet exist. The missing piece is the brain.

Save the brain waves

Numerous components are required to store the brain waves collected by the BCI. For a single individual, a simple ADO.NET connection to a local SQL database would be conceivable, and that would be all. However, if many people with many devices will use the application, Azure IoT Hub is the better option because of its reliability and scalability. The following three components are required to successfully upload and store the brain waves:

  1. Azure IoT Hub
    1. A device identity
    2. Code to upload the brain waves
  2. A SQL Azure instance and a data table
  3. A Stream Analytics interface

The creation of the Azure IoT Hub is discussed in detail next.

Create Azure IoT Hub

Azure IoT Hub is similar to a queue in that it temporarily stores rows of data, with the assumption that another entity, e.g., a reader or, in this case, a Stream Analytics job, monitors the queue and takes an action when a message arrives. The advantage of Azure IoT Hub is that it's extremely stable and can scale very large in a short amount of time. When I tested this solution, I inserted three rows per second, and the client-side record count exactly matched the server-side count. Three events per second is very small; Azure IoT Hub can process millions of events per second.

To create an Azure IoT Hub, you need an Azure subscription and access to the Azure portal at bit.ly/2bA4vAn. Click the + New menu item, navigate to Internet of Things and select IoT Hub. Enter the required information and press the Create button. Note that only one free tier IoT Hub is allowed per subscription; the free tier supports 8,000 events per day and is the one chosen for this project. If you expect to insert more events, however, select the appropriate tier. Once created, the IoT Hub resembles Figure 5.


Figure 5 The detail page of the BCI Azure IoT Hub

Once the Azure IoT Hub has been created, the next step is to create a unique device identity, which is required to connect and upload data to the Azure IoT Hub. The downloadable source contains a console application called BrainComputerInterface-CreateIdentity that performs this activity. To build your own project, start by creating an empty console application in Visual Studio. Once created, right-click the project, select Manage NuGet Packages, then find and add the Microsoft.Azure.Devices package; version 1.0.11 is used with the sample code provided.

Before you start creating the device identity, review the access policies for the Azure IoT Hub and get the connection string by selecting Shared access policies on the Settings blade. Then select the appropriate policy, as explained in the table in Figure 6. Selecting one of the policies listed in the table opens a window with the permissions and shared access keys. Copy the connection string - primary key and use it to set the value of _connectionString, as shown in Figure 7.

Figure 6 Connection string policies, permissions and usage

Policy | Permissions | Usage
iothubowner | Registry read/write, service connect, device connect | Administration
service | Service connect | Send and receive on the cloud-side endpoints
device | Device connect | Send and receive on the device-side endpoints
registryRead | Registry read | Read access to the identity registry
registryReadWrite | Registry read/write | Read/write access to the identity registry

Figure 7 Creating a key for each unique device identity

To create a device identity, you need a connection string for a policy that has write access to the identity registry, which means using either the iothubowner or the registryReadWrite policy. It's highly recommended that you use the least-privileged policy required to perform the desired task; this reduces the possibility of unintentional actions, such as a global delete or update. Protect the iothubowner connection string and supply it only when device identity creation or another administrative activity is required.

The sample code is shown in Figure 7. Because this is a simple program, the _connectionString and the Microsoft.Azure.Devices.RegistryManager _registryManager are created as static class variables. It's also fine to create them in the Main method and pass them as method parameters, if desired. The _registryManager variable is instantiated by calling the CreateFromConnectionString method, and then the Program.AddDeviceAsync method is called asynchronously.

The Program.AddDeviceAsync method calls the Microsoft.Azure.Devices.RegistryManager.AddDeviceAsync method, passing a new Microsoft.Azure.Devices.Device. If the identity doesn't already exist, it's created; otherwise, a Microsoft.Azure.Devices.Common.Exceptions.DeviceAlreadyExistsException is thrown. The exception is handled because the code executes in a try{} catch{} block. Within the catch{} block, the Microsoft.Azure.Devices.RegistryManager.GetDeviceAsync method is called and, in both cases, whether the add or the get method is called, the key of the device is rendered to the console.
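A minimal sketch of that flow follows, mirroring the standard Microsoft.Azure.Devices getting-started pattern rather than the exact Figure 7 listing; the connection string and the device ID are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Common.Exceptions;

class Program
{
  // Placeholder: use the registryReadWrite (or iothubowner) policy's
  // connection string - primary key from the Shared access policies blade.
  static string _connectionString =
    "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=registryReadWrite;SharedAccessKey=<key>";
  static RegistryManager _registryManager;

  static void Main(string[] args)
  {
    _registryManager = RegistryManager.CreateFromConnectionString(_connectionString);
    AddDeviceAsync().Wait();
    Console.ReadLine();
  }

  static async Task AddDeviceAsync()
  {
    string deviceId = "myFirstDevice"; // placeholder device name
    Device device;
    try
    {
      // Creates the identity if it doesn't already exist.
      device = await _registryManager.AddDeviceAsync(new Device(deviceId));
    }
    catch (DeviceAlreadyExistsException)
    {
      // The identity exists; retrieve it instead.
      device = await _registryManager.GetDeviceAsync(deviceId);
    }
    Console.WriteLine("Device key: {0}",
      device.Authentication.SymmetricKey.PrimaryKey);
  }
}
```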

When the code is complete and compiled, run it and write down the device key; it's needed when creating the DeviceClient class, which contains the logic to connect and send data to the Azure IoT Hub and is used in the next section. Also, look again at Figure 5 and notice that the Devices link is initially disabled. After a device identity has been created, the Devices link in the Azure IoT Hub blade becomes active. Clicking it lets you enable or disable the device and view the device key, in case you missed it in the console.

The code for capturing the brain waves was already written in the previous section. All that remains is, instead of writing the BCI output to the console, to write it to the Azure IoT Hub that was just created. In the sample code, there's a project called BrainComputerInterface in which the while{} loop mentioned earlier is changed to call a new method, SendBrainMeasurementToAzureAsync, shown in Figure 8, so the BCI data is sent to the Azure IoT Hub rather than output to the console.

Figure 8 Inserting brain waves into Azure IoT Hub

Note that the SendBrainMeasurementToAzureAsync method uses the Microsoft.Azure.Devices.Client.DeviceClient class, as mentioned earlier, and the Newtonsoft.Json classes to format the data and add the BCI reading to the cloud. If you're creating a new project, add these two NuGet packages by right-clicking the project and choosing Manage NuGet Packages.
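A minimal sketch of such a method follows; the host name, device ID, key and message fields are placeholders, not the exact Figure 8 listing:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Newtonsoft.Json;

class BrainMeasurementSender
{
  // Placeholders: the hub host name and the device key written down
  // when running the BrainComputerInterface-CreateIdentity program.
  static string iotHubUri = "<your-hub>.azure-devices.net";
  static string deviceKey = "<device key>";
  static DeviceClient deviceClient = DeviceClient.Create(iotHubUri,
    new DeviceAuthenticationWithRegistrySymmetricKey("myFirstDevice", deviceKey));

  static async Task SendBrainMeasurementToAzureAsync(string scenario,
    string electrode, double theta, double alpha, double lowBeta,
    double highBeta, double gamma)
  {
    // Serialize the reading as JSON and send it to the Azure IoT Hub.
    var brainMeasurement = new
    {
      deviceId = "myFirstDevice",
      scenario = scenario,
      electrode = electrode,
      theta = theta,
      alpha = alpha,
      lowBeta = lowBeta,
      highBeta = highBeta,
      gamma = gamma
    };
    string messageString = JsonConvert.SerializeObject(brainMeasurement);
    Message message = new Message(Encoding.ASCII.GetBytes(messageString));
    await deviceClient.SendEventAsync(message);
  }
}
```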

Now that the code for writing the BCI output to the Azure IoT Hub is complete, you can place the BCI on your head and start the upload. When the BrainComputerInterface program starts, it asks you to select the scenario for which the brain waves will be stored. Examples are smelling a flower, squinting in the sun, hearing firecrackers and so on. Select the scenario, check that the electrodes/contacts are shown in green (see Figure 3), and once the power and signal measurements are acceptable, the brain waves are captured and uploaded to the cloud.

Note that at this point the usage meter on the IoT Hub blade would show the incoming data (see Figure 5), but the data would be deleted after about 24 hours because, so far, there's neither a database in which to store the data nor a program to move the messages from the Azure IoT Hub to a permanent storage location. Part 2 creates the SQL Azure database, followed by the Stream Analytics job, so you can analyze the data and discover new things.

Summary

The path this series of articles takes you down ultimately leads to two goals. The first is cognitive: the more you learn about yourself and how your brain works, the more you can begin to replicate, expand and improve its capabilities through practice. Computers are better and faster at math, and they can draw on a more comprehensive knowledge base for making decisions, without emotion, than the human brain can. If you could somehow incorporate those capabilities into your own cognitive processes, using some kind of artificial intelligence, your ability to work faster and more accurately would grow greatly.

The other goal is the ability to use thought to control elements in your daily life. Increasing your knowledge of capturing and analyzing brain waves increases your ability to use them with confidence. As soon as one or more patterns of thought, such as push, pull or spin, can be flawlessly defined, they can be used to control objects or carry out activities, such as turning on the TV or changing the channel. It might even be possible to capture an intention and take an action before the person realizes the intention exists. The possibilities are endless.


Benjamin Perkins is an escalation engineer at Microsoft and author of four books on C#, IIS, NHibernate and Microsoft Azure. He recently co-authored "Beginning C# 6 Programming with Visual Studio 2015" (Wrox). You can reach him at [email protected].

Thanks to the following Microsoft technical expert for reviewing this article: Sebastian Dau
Sebastian Dau is an embedded escalation engineer on the Azure IaaS team.