Neural Compute Stick gets support for the numerical computation library from Google.
The Movidius NCS (139-3655) brings deep learning capabilities to low power devices, allowing artificial intelligence to be moved out to the edges of the network. The compact USB 3.0 device launched with support for the Caffe framework and in a previous post, I took a first look at the NCS and the provided examples. In this post we return to the updated SDK and take a look at the recently announced support for TensorFlow, Google’s 2nd generation machine learning system.
Tensor what?
Stress tensor, source Wikimedia Commons, CC BY-SA 3.0
From a quick search, the easiest (relatively speaking!) to understand description of what a tensor is came from PhysLink.com, which says that “Tensors, defined mathematically, are simply arrays of numbers, or functions, that transform according to certain rules under a change of coordinates. In physics, tensors characterize the properties of a physical system”.
TensorFlow is described as an open source library for numerical computation using data flow graphs, where nodes in the graph represent mathematical operations, and the graph edges represent multidimensional arrays (tensors) communicated between them.
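To make the “multidimensional arrays” part concrete, here is a quick illustrative sketch using NumPy (standing in for TensorFlow purely to show what tensor rank and shape mean; the variable names are our own):

```python
import numpy as np

# Tensors of increasing rank, represented here as NumPy arrays
scalar = np.array(5.0)              # rank 0: a single number
vector = np.array([1.0, 2.0, 3.0])  # rank 1: a 1-D array
matrix = np.eye(2)                  # rank 2: a 2x2 matrix
image = np.zeros((224, 224, 3))     # rank 3: e.g. an RGB image

for t in (scalar, vector, matrix, image):
    print('rank', t.ndim, 'shape', t.shape)
```

A rank-0 tensor is a scalar, rank 1 a vector, rank 2 a matrix, and so on; TensorFlow’s graph edges carry exactly this kind of shaped array between operations.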
Fortunately, we don’t need to truly get to grips with the mathematical details in order to try out the TensorFlow examples and to get an idea of what it enables you to do. Indeed, it may be that existing freely available neural networks could be put to use in many applications.
TensorFlow provides support for CPUs and GPUs, but of course in our case the heavy lifting will be done courtesy of the Neural Compute Stick’s far more energy efficient Myriad 2 VPU.
SDK installation
Returning to the Movidius NCS getting started guide, the first thing we noticed was that the tar archive downloads had been replaced with a GitHub link, a most welcome change that should make it easier to update the SDK and track changes. So to install this we now simply:
$ git clone https://github.com/movidius/ncsdk.git
$ cd ncsdk
$ sudo make install
Following which the Neural Compute Stick SDK is installed to:
- NCS Libraries → /usr/local/lib
- NCS Toolkit binaries → /usr/local/bin
- NCS Include files → /usr/local/include
- NCS Python API → /opt/movidius
The PYTHONPATH environment variable is also updated.
Next we can build the examples, first making sure that the NCS is plugged in.
$ make examples
Part of the output from this process can be seen in the image above.
TensorFlow example
Detail from Inception v1 Network Analysis
TensorFlow examples are provided with the SDK for v1 and v3 of the Inception architecture from Google, which is described in the following papers:
- Going Deeper with Convolutions (17/09/14)
- Rethinking the Inception Architecture for Computer Vision (02/12/15)
A blog post by Nicolò Valigi provides a short history of the Inception deep learning architecture.
To run the v1 example we first:
$ cd examples/tensorflow/inception_v1
$ make all
This profiles the network on the NCS, and then uses a description of the network plus a trained weights file to compile a binary graph that will execute on the Myriad 2 VPU.
Next we enter:
$ make run
This loads a single image onto the NCS and displays the inference results based on the 1,001 available categories.
By default, the image used is of an electric guitar, as pictured above.
Sure enough, the top result by a wide margin is electric guitar.
A selection of test images are provided and to run the inference on a different one we simply edit the run.py file and update the image_filename variable:
from mvnc import mvncapi as mvnc
import sys
import numpy
import cv2
path_to_networks = './'
path_to_images = '../../data/images/'
graph_filename = 'graph'
image_filename = path_to_images + 'nps_guac.png'
For example, changing this to point at an image of guacamole and then executing make run again, we get the following result.
While developing new neural network architectures may be the domain of AI/ML experts, looking at the run.py file — at only 93 lines long including copyright header and comments! — it’s clear that developing applications which use the Neural Compute API and existing pre-trained networks should be relatively straightforward. Roughly speaking, the Python script:
- Imports support for the NCS, a numerical library and OpenCV
- Checks for an NCS device and opens it, else throws an error
- Loads a precompiled binary graph
- Converts the input image to an appropriate format
- Loads the image (our input tensor)
- Prints out the top 5 inferences
- Closes the NCS device
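The steps above can be sketched in a few lines against the Neural Compute Python API. This is our own condensed version, not the SDK’s run.py: the classify() and top_k() helpers and their parameters are ours, while the mvnc calls (EnumerateDevices, OpenDevice, AllocateGraph, LoadTensor, GetResult) come from the SDK. Actually running it requires the SDK installed and a stick plugged in:

```python
import numpy


def top_k(output, k=5):
    # Indices of the k highest-scoring categories, best first
    return output.argsort()[::-1][:k]


def classify(graph_path, image, mean=0.0, scale=1.0):
    # Hardware-dependent, so the SDK import is kept local
    from mvnc import mvncapi as mvnc

    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError('No NCS device found')
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    with open(graph_path, 'rb') as f:
        graph = device.AllocateGraph(f.read())  # precompiled binary graph

    # The Myriad 2 VPU works in half precision
    tensor = ((image - mean) * scale).astype(numpy.float16)
    graph.LoadTensor(tensor, 'user object')
    output, _ = graph.GetResult()

    graph.DeallocateGraph()
    device.CloseDevice()
    return top_k(output)
```

The preprocessing (mean subtraction, scaling and resizing to the network’s input dimensions) must match whatever the network was trained with, which is why the SDK examples carry those values alongside each model.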
Simple. And with the availability of pre-trained neural networks for different environments and applications, it’s easy to imagine the technology being put to a wealth of different uses.
A trip to the zoo
Still from a run of the TinyYolo + GoogLeNet video stream example
In an effort to encourage sharing, reuse and to increase the number of freely available examples for the Movidius NCS, Intel have created the Neural Compute Application Zoo — a GitHub repository with scripts to download models and compile graphs for Caffe and TensorFlow, plus example applications and data to use with them.
At the time of writing, the applications available in the NC App Zoo include some which use more than one NCS, so that multiple networks can be run simultaneously. One example is the stream_ty_gn application, which takes a video stream and:
- Runs a TinyYolo inference to find all the objects in the frame that match one of the 20 categories it recognises
- For each object recognised crops out a bounding rectangle
- Passes the smaller image to the GoogLeNet network for more detailed classification
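The crop-and-reclassify stage boils down to an array slice on each frame. A minimal illustrative sketch follows; the crop_box helper and its (x1, y1, x2, y2) box convention are our own, not code from the app itself:

```python
import numpy as np


def crop_box(frame, box):
    # Crop a detected object's bounding rectangle out of a frame
    x1, y1, x2, y2 = box
    return frame[y1:y2, x1:x2]


# A blank 640x480 RGB frame and a detection box from, say, TinyYolo
frame = np.zeros((480, 640, 3), dtype=np.uint8)
crop = crop_box(frame, (100, 50, 300, 250))
print(crop.shape)  # (200, 200, 3)
```

In stream_ty_gn, each such cropped image is then resized to GoogLeNet’s input size and sent to the second NCS for finer-grained classification.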
Contributions to the NC App Zoo are invited, with a set of simple guidelines provided and submission via GitHub pull request.
Closing thoughts
Why TensorFlow? Well, a quick web search turns up no shortage of blog posts, articles and even videos providing an introduction, along with others covering more advanced topics and use in different applications. In less than two years, Google’s TensorFlow has managed to gather an immense amount of interest, providing an active platform for cutting-edge research while also being put to use in production applications. As such, support for the NCS can only serve to accelerate the development of deep learning technology in embedded systems.