Amazon finds more efficient cloud computing solution
To improve its cloud-based computing services, Amazon is deploying an ARM-based Graviton processor. The e-commerce giant says this will lead to cost savings of up to 45 per cent for “scale-out” services.
Amazon is the world’s biggest player in cloud computing, delivered via Amazon Web Services (AWS), the company’s $27-billion cloud business. AWS provides on-demand cloud computing platforms to individuals, companies and governments on a paid subscription basis.
Amazon is changing the technology behind its cloud services in order to deliver faster performance and to save costs. The new system is expected to give Amazon a performance-per-dollar advantage.
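To see what a performance-per-dollar advantage means in practice, consider the following sketch. The prices and throughput figures here are invented for illustration only; they are not AWS's actual numbers.

```python
# Hypothetical performance-per-dollar comparison. All figures are
# invented for illustration; they are not real AWS prices or benchmarks.

def perf_per_dollar(requests_per_second: float, price_per_hour: float) -> float:
    """Requests served per dollar of instance time."""
    return requests_per_second * 3600 / price_per_hour

# Assume the Arm instance is slightly slower but roughly 45% cheaper.
x86 = perf_per_dollar(requests_per_second=1000, price_per_hour=0.10)
arm = perf_per_dollar(requests_per_second=900, price_per_hour=0.055)

print(f"x86: {x86:,.0f} requests/$")
print(f"Arm: {arm:,.0f} requests/$")
print(f"Advantage: {arm / x86 - 1:.0%}")
```

Under these assumed figures, the cheaper instance comes out ahead per dollar even though it is slower per instance, which is the trade-off Amazon is betting on for scale-out workloads.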
The ARM Graviton processor contains 64-bit Neoverse cores. Speaking with EE News Europe, Arm’s senior vice president Drew Henry indicated that the Graviton system is based on the Cosmos 16nm processor platform.
In addition, the Israeli-designed Graviton uses 64-bit Cortex-A72 cores, which run at clock frequencies of up to 2.3GHz. AWS’s existing servers run on Intel and AMD processors. The new system will assist Amazon with scale-out workloads, where users of the service can share a load across a group of smaller instances, such as containerized microservices, web servers, development environments and caching fleets.
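Scale-out load sharing of this kind can be sketched as spreading independent requests across a fleet of small instances. The round-robin scheme and instance names below are a toy illustration, not AWS's actual scheduler:

```python
from collections import Counter
from itertools import cycle

# Toy sketch of scale-out load sharing: independent requests are spread
# round-robin across a fleet of small instances. Instance names are invented.
fleet = ["a1-web-1", "a1-web-2", "a1-web-3", "a1-web-4"]
assignments = Counter()

scheduler = cycle(fleet)
for request_id in range(100):
    instance = next(scheduler)
    assignments[instance] += 1

# 100 requests over 4 instances: each one handles an equal share of 25.
print(dict(assignments))
```

The point of scale-out is that each unit of work is small and independent, so capacity grows by adding more modest instances rather than by buying a single faster one.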
There are other advantages for Amazon in the new technology, centered on greater independence from technology providers. The Register notes that Amazon now has the ability to license Arm blueprints via Annapurna, to customize and tweak those designs, and to go to contract manufacturers such as TSMC and GlobalFoundries to get competitive chips made.
Commenting on the new technology, AWS engineer James Hamilton said: “I’ve been interested in ARM server processors for more than a decade, so it’s super exciting to see the AWS Graviton finally public. It’s going to be exciting to see what customers do with the new A1 instances, and I’m already looking forward to follow-on offerings as we continue to listen to customers and enhance the world’s broadest cloud computing instance selection.”
In related news, AWS is building a custom ASIC for AI inference, called Inferentia. This could be capable of scaling from hundreds to thousands of trillions of operations per second, and it could further reduce the cost of cloud-based inference. This will allow Amazon to compete with its rivals in the cloud computing space. Forbes reports that Google already has its own TensorFlow Processing Unit (TPU), and Microsoft is using Intel and Xilinx FPGAs to accelerate inference processing.