
Intel aims to bring AI to the masses with the Neural Compute Stick 2

21 Nov 2018

Intel recently announced the Intel Neural Compute Stick 2 (Intel NCS 2), designed for building smarter AI algorithms and prototyping computer vision applications at the network edge.

Based on the Intel Movidius Myriad X vision processing unit (VPU) and supported by the Intel Distribution of OpenVINO toolkit, the Intel NCS 2 affordably speeds the development of deep neural network inference applications while delivering a performance boost over the previous-generation neural compute stick.

The Intel NCS 2 enables deep neural network testing, tuning and prototyping, so developers can go from prototype to production, leveraging a range of Intel vision accelerator form factors in real-world applications.

Bringing computer vision and AI to the Internet of Things (IoT) and edge device prototypes is supposedly easy with the enhanced capabilities of the Intel NCS 2.

What looks like a standard USB thumb drive hides much more inside. The Intel NCS 2 is powered by the latest generation of Intel VPU – the Intel Movidius Myriad X VPU. 

It is the first Intel VPU to feature a neural compute engine – a dedicated hardware accelerator for neural network inference that delivers additional performance.

Intel corporate vice president Naveen Rao says, “The first-generation Intel Neural Compute Stick sparked an entire community of AI developers into action with a form factor and price that didn’t exist before. 

“We’re excited to see what the community creates next with the strong enhancement to compute power enabled with the new Intel Neural Compute Stick 2.”

Combined with the Intel Distribution of OpenVINO toolkit, which supports a wider range of networks, the Intel NCS 2 offers developers greater prototyping flexibility.

Additionally, thanks to the Intel AI: In Production ecosystem, developers can now port their Intel NCS 2 prototypes to other form factors and productise their designs.

With a laptop and the Intel NCS 2, developers can have their AI and computer vision applications up and running in minutes. 

The Intel NCS 2 runs on a standard USB 3.0 port and requires no additional hardware, enabling users to seamlessly convert and then deploy PC-trained models to a wide range of devices natively and without internet or cloud connectivity.
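As a rough illustration of that workflow, the sketch below loads a model that has already been converted to OpenVINO's intermediate representation (IR) on the development PC and runs inference on the stick by targeting the MYRIAD device. It assumes a 2020–2021 release of the OpenVINO Python Inference Engine API (the API changed in later releases); the model files "model.xml"/"model.bin" and the image "input.jpg" are placeholder names.

import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load an IR model produced by the OpenVINO Model Optimizer on the PC.
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Target the Myriad X VPU inside the Neural Compute Stick 2.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))

# Resize and reorder an image to match the network's NCHW input layout.
n, c, h, w = net.input_info[input_blob].input_data.shape
frame = cv2.resize(cv2.imread("input.jpg"), (w, h))
frame = np.expand_dims(frame.transpose((2, 0, 1)), axis=0)

# Inference runs entirely on the stick, with no cloud connection required.
result = exec_net.infer(inputs={input_blob: frame})
print(result[output_blob].shape)

Pointing device_name at "CPU" instead would run the same script on the host machine, which is a common way to sanity-check a converted model before deploying it to the stick.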

The first-generation Intel NCS, launched in July 2017, has fuelled a community of tens of thousands of developers, has been featured in more than 700 developer videos and has been utilised in dozens of research papers.

Now with greater performance in the NCS 2, Intel is empowering the AI community to create even more ambitious applications.
