AWS-Designed Inferentia Chips Boost Alexa Performance

Almost two years after unveiling its Inferentia high-performance machine-learning inference chips, Amazon has completed migrating the bulk of its Alexa text-to-speech ML inference workloads to Inf1 instances on its AWS EC2 platform. According to the company, the move to infrastructure powered by the newer chips delivered significant cost savings and performance gains over the GPU-based EC2 instances previously used for these workloads.
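The cost-savings claim ultimately reduces to cost per inference: an instance's hourly price divided by its sustained inference throughput. A minimal sketch of that arithmetic, using placeholder prices and throughput figures chosen purely for illustration (not AWS's published numbers):

```python
# Hypothetical cost-per-inference comparison between a GPU-based EC2
# instance and an Inf1 instance. All prices and throughput figures
# below are illustrative placeholders, not AWS's published numbers.

def cost_per_million_inferences(hourly_price_usd, inferences_per_second):
    """Cost (USD) to serve one million inferences at steady throughput."""
    inferences_per_hour = inferences_per_second * 3600
    return hourly_price_usd / inferences_per_hour * 1_000_000

# Placeholder figures for illustration only.
gpu = cost_per_million_inferences(hourly_price_usd=3.06, inferences_per_second=1000)
inf1 = cost_per_million_inferences(hourly_price_usd=0.368, inferences_per_second=500)

print(f"GPU instance:  ${gpu:.2f} per million inferences")
print(f"Inf1 instance: ${inf1:.2f} per million inferences")
print(f"Savings: {1 - inf1 / gpu:.0%}")
```

The point of the sketch is that a cheaper chip can win on cost per inference even at lower raw throughput, provided the price gap is large enough.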
