The Intel Neural Compute Stick 2 is Here!
In July last year (2017), the Intel Movidius Neural Compute Stick (139-3655) was launched. This was the world’s first self-contained artificial intelligence accelerator available in a USB format, allowing host devices to process deep neural networks at the edge. It gave developers and researchers a low-cost, low-power method of developing and optimising computationally intensive AI vision applications.
This capable Neural Compute Stick is powered by a fully functioning SoC, the Myriad 2 Vision Processing Unit (VPU), which according to its specifications is capable of 100 GFLOPS of performance while consuming just 1W of power, among its other talents. The NCS certainly proved its worth, paving the way for vision-rich applications, which are traditionally very data-heavy. When combined with the OpenVINO Toolkit, the NCS was unbeatable for prototyping and for rapidly bringing computer vision and AI to IoT and edge devices.
Introducing The New Intel Neural Compute Stick 2 (181-1851)
Featuring the same form factor as its older brother, the Intel Neural Compute Stick 2 has been designed with even more power to make those data-intensive AI and vision applications easier to perform. The Neural Compute Stick 2 is powered by the new Myriad X VPU, delivering best-in-class performance in computer vision and deep neural network inference applications with ultra-low power consumption. Capable of over 4 trillion operations per second (4 TOPS), it puts advanced vision and artificial intelligence applications within much easier reach.
The handy small form factor plugs straight into a laptop, single-board computer or any other platform with a USB port, giving the user the ability to run deep neural networks with minimal expenditure. When used with the optimised OpenVINO Toolkit, it amplifies the abilities of its older sibling for an even more powerful plug-and-play experience.
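On a Linux host, you can sanity-check that the stick has enumerated on the USB bus before installing any software. The Movidius USB vendor ID used below (03e7) is the one first-generation sticks reported, so treat it as an assumption for your unit:

```shell
# Look for a Movidius/Myriad device on the USB bus (vendor ID 03e7 assumed).
lsusb 2>/dev/null | grep -i "03e7" || echo "No Movidius device found"
```

If the stick is present, the matching `lsusb` line appears; otherwise the fallback message prints and you know to check the port or cabling.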
Naveen Rao, Intel Corporate Vice President and General Manager of the AI Products Group, had this to say:
“The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn’t exist before. We’re excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2.”
The Intel Neural Compute Stick 2 is packed with improved features:
- Brand new Myriad X VPU delivering over 1 trillion operations per second of DNN inferencing performance
- 16 programmable 128-bit VLIW SHAVE vector processors
- Up to 8 times faster on DNNs than the original Neural Compute Stick
- Enhanced vision accelerators perform tasks such as optical flow and stereo depth, utilising over 20 hardware accelerators without additional computing overhead
- 2.5 MB of homogeneous on-chip memory with up to 450 GB/s of internal bandwidth
- New hardware encoders provide 4K resolution support at 30 Hz and 60 Hz frame rates
- Intel Distribution of OpenVINO Toolkit optimised for Intel Neural Compute Stick 2
- Supported frameworks: TensorFlow, Caffe
- Connectivity: USB 3.0 Type-A
Open the VINO
The OpenVINO Toolkit, which has been optimised for the Intel Neural Compute Stick 2, is a comprehensive prototyping and development software package designed to allow you to quickly and easily develop AI and vision applications. The toolkit includes two sets of optimised models that can expedite development and improve image-processing pipelines. You can use these models for development and production deployment without the need to search for or train your own models.
- Enables CNN-based deep learning inference on the edge.
- Supports heterogeneous execution across Intel's CV accelerators, using a common API for the Intel Neural Compute Stick 2
- Speeds time-to-market through an easy-to-use library of CV functions and pre-optimised kernels
- Includes optimised calls for CV standards, including OpenCV, OpenCL and OpenVX
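By way of illustration, a typical OpenVINO workflow converts a trained TensorFlow or Caffe model to the toolkit's Intermediate Representation (in FP16, which the Myriad X runs) and loads it onto the stick as the "MYRIAD" device. The frame-preprocessing step is plain array manipulation, sketched below with NumPy; the input geometry is hypothetical, and the commented lines only indicate roughly where the Inference Engine calls would go:

```python
import numpy as np

# Hypothetical input geometry for an image-classification network.
N, C, H, W = 1, 3, 224, 224

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 frame to the NCHW float16 blob the VPU expects."""
    # Nearest-neighbour resize to the network's input resolution.
    ys = np.arange(H) * frame.shape[0] // H
    xs = np.arange(W) * frame.shape[1] // W
    resized = frame[ys][:, xs]
    # HWC -> CHW, add a batch dimension, cast to FP16 (Myriad X runs FP16 models).
    return resized.transpose(2, 0, 1)[np.newaxis].astype(np.float16)

# Dummy camera frame standing in for real input.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
blob = preprocess(frame)
print(blob.shape, blob.dtype)  # (1, 3, 224, 224) float16

# With OpenVINO installed and an NCS2 attached, inference would follow
# roughly this shape (2018-era Inference Engine Python API):
#   from openvino.inference_engine import IENetwork, IEPlugin
#   net = IENetwork(model="model.xml", weights="model.bin")
#   exec_net = IEPlugin(device="MYRIAD").load(network=net)
#   result = exec_net.infer({"input": blob})
```

The FP16 cast matters here: the Myriad family executes half-precision networks, which is why the Model Optimizer is normally run with an FP16 data type when targeting the stick.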
Comments
I had a look at the Intel Neural Compute Stick 2 gadget and it seems that it uses a completely different set of tools than its predecessor, and those tools are designed to run on x86_64, which means it's no longer compatible with the RPi, which is a great shame! In theory, it's possible to get Ubuntu 16.04 up on the Pi, maybe as a headless server? Will software for AMD64 run on ARM64? I'm no expert in this realm! However, I'm going to try it out on a regular AMD64 PC running Ubuntu 16.04.5 and see how it runs.