Ambarella shows off new robotics platform and AWS AI programming deal

Chip designer Ambarella has announced a new robotics platform based on its CVflow architecture for artificial intelligence processing, and it has also signed a deal with Amazon Web Services to make it easier to design products with its chips.

The Santa Clara, California-based company will demo the robotics platform and the Amazon SageMaker Neo technology for training machine learning models at CES 2020, the big tech trade show in Las Vegas next week.


Ambarella, which went public in 2011, started out as a maker of low-power chips for video cameras. But it parlayed that capability into computer vision expertise, and it launched its CVflow architecture to create low-power artificial intelligence chips.

Based on the CVflow architecture, the new robotics platform targets automated guided vehicles (AGVs), consumer robots, industrial robots, and emerging Industry 4.0 applications.

The robotics platform provides a unified software infrastructure for robotics perception across Ambarella’s CVflow system-on-chip (SoC) family, including the CV2, CV22, CV25, and S6LM. It provides access to, and hardware acceleration of, the most common robotics functions, including stereo processing, keypoint extraction, neural network processing, and Open Source Computer Vision Library (OpenCV) functions.

Ambarella will demonstrate the highest-end version of the platform during CES 2020 in the form of a single CV2 chip, which will perform stereo processing (up to 4Kp30 or multiple 1080p30 pairs), object detection, keypoint tracking, occupancy grid mapping, and visual odometry. This level of computer vision performance, combined with Ambarella’s advanced image processing, including native support for up to six direct camera inputs on the CV2 and three on the CV25, enables robotics designs that are both simpler and more powerful than traditional robotics architectures, the company said.

Jerome Gigot, senior director of marketing at Ambarella, said the technology combines the company’s advanced imaging capabilities with its high-performance CVflow architecture for computer vision, leading to smarter and more efficient consumer and industrial robots.

The platform supports the Linux operating system, as well as the ThreadX real-time operating system for products that require functional safety, and it comes with a complete toolkit for image tuning, neural network porting, and computer vision algorithm development. It also supports the Robotics Operating System (ROS) for easier development and visualization.

The new robotics platform and its related development kits are available today and can be paired with various mono and stereo configurations, as well as rolling shutter, global shutter, and IR sensor options.

Optimizing in AWS
Above: Attendees at Amazon’s annual cloud computing conference walk past the AWS logo in Las Vegas, November 30, 2017.

Image Credit: Reuters / Salvador Rodriguez / File Photo
Meanwhile, Ambarella and Amazon Web Services said customers can now use Amazon SageMaker Neo to train machine learning models once and run them on any device equipped with an Ambarella CVflow-powered AI vision system on chip (SoC).

Until now, developers had to manually optimize machine learning models for devices based on Ambarella AI vision SoCs, a step that added considerable delay and potential for error to the application development process.

Ambarella and AWS collaborated to simplify the process by integrating the Ambarella toolchain with the Amazon SageMaker Neo cloud service. Now, developers can simply bring their trained models to Amazon SageMaker Neo and automatically optimize the model for Ambarella CVflow-powered chips.

Customers can build an ML model using MXNet, TensorFlow, PyTorch, or XGBoost and train the model using Amazon SageMaker in the cloud or on their local machine. Then they upload the model to their AWS account and use Amazon SageMaker Neo to optimize the model for Ambarella SoCs. They can choose CV25, CV22, or CV2 as the compilation target. Amazon SageMaker Neo compiles the trained model into an executable that is optimized for Ambarella’s CVflow neural network accelerator.
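The compile step described above can be sketched against the AWS SageMaker `CreateCompilationJob` API. This is a minimal illustration, not Ambarella's or AWS's reference code: the S3 locations and IAM role ARN are placeholders, and the exact `TargetDevice` strings for the CVflow chips should be confirmed against the current SageMaker Neo documentation.

```python
# Sketch of a SageMaker Neo compilation request targeting an Ambarella
# CVflow SoC. Bucket paths and the role ARN below are hypothetical.
MODEL_S3_URI = "s3://my-bucket/models/detector.tar.gz"   # trained model artifact
OUTPUT_S3_URI = "s3://my-bucket/compiled/"               # where Neo writes output
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerNeoRole"

def build_compilation_request(job_name, target_device="amba_cv22"):
    """Build the request body for a SageMaker Neo compilation job.

    target_device selects the CVflow compilation target; per the article,
    customers can choose CV25, CV22, or CV2.
    """
    return {
        "CompilationJobName": job_name,
        "RoleArn": ROLE_ARN,
        "InputConfig": {
            "S3Uri": MODEL_S3_URI,
            # Input tensor shape, keyed by the model's input name.
            "DataInputConfig": '{"data": [1, 3, 224, 224]}',
            "Framework": "MXNET",  # or TENSORFLOW / PYTORCH / XGBOOST
        },
        "OutputConfig": {
            "S3OutputLocation": OUTPUT_S3_URI,
            "TargetDevice": target_device,  # e.g. amba_cv2 / amba_cv22 / amba_cv25
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }
```

With AWS credentials configured, the request would be submitted as `boto3.client("sagemaker").create_compilation_job(**build_compilation_request("detector-cv22-001"))`; the compiled artifact then lands in the S3 output location for deployment to the devices.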

The compiler applies a series of optimizations that can make the model run up to two times faster on the Ambarella SoC. Customers can download the compiled model and deploy it to their fleet of Ambarella-equipped devices. The optimized model runs in the Amazon SageMaker Neo runtime, which is purpose-built for Ambarella SoCs and available with the Ambarella SDK. The Amazon SageMaker Neo runtime occupies less than 10% of the disk and memory footprint of TensorFlow, MXNet, or PyTorch, making it much more efficient for deploying ML models on connected cameras.
