API and Microservices Management Benchmark v1.0

Source: https://gigaom.com

Application programming interfaces, or APIs, are now a ubiquitous method and de facto standard of communication among modern information technologies. The information ecosystems within large companies and complex organizations comprise a vast array of applications and systems, many of which rely on APIs as the glue that holds these heterogeneous pieces together. APIs have begun to replace older, more cumbersome methods of information sharing with lightweight, loosely coupled microservices. This gives organizations the ability to knit together disparate systems and applications without creating the technical debt inherent in custom code or proprietary, unwieldy vendor tools. APIs and microservices also allow companies to establish standards and govern the interoperability of applications, both new and old, creating modularity. Additionally, they broaden the scope of data exchange with the outside world, particularly mobile technology, smart devices, and the Internet of Things, because organizations can securely share data with consumers and producers of information that are not tied to a fixed location.

The popularity and proliferation of APIs and microservices have created a need to manage the multitude of services that companies rely on, both internal and external. APIs themselves vary greatly in their protocols, methods, authorization/authentication schemes, and usage patterns. Additionally, IT needs greater control over its hosted APIs, such as rate limiting, quotas, policy enforcement, and user identification, to ensure high availability and prevent abuse and security breaches. APIs have also enabled an economy of their own by allowing a business to transform into a platform (and, conversely, a platform into a business). Exposing APIs opens the door to partners who can co-create and expand the core platform without needing to know anything about the underlying technology.
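
To make the gateway-level controls above concrete, the sketch below shows one way such a policy might be applied: it uses Kong's Admin API to attach the rate-limiting plugin to a service. The Admin API address, service name, and limit values are placeholders chosen for illustration, not settings used in this benchmark.

# Illustrative sketch: enabling a rate-limiting policy on a gateway-managed API.
# Assumes a Kong Admin API listening on localhost:8001 and an already-registered
# service named "orders-api" -- both are placeholders for this example.
import requests

ADMIN_URL = "http://localhost:8001"

def enable_rate_limit(service: str, per_minute: int) -> dict:
    """Attach Kong's rate-limiting plugin to a service, capping requests per minute."""
    resp = requests.post(
        f"{ADMIN_URL}/services/{service}/plugins",
        json={
            "name": "rate-limiting",
            "config": {"minute": per_minute, "policy": "local"},
        },
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    plugin = enable_rate_limit("orders-api", per_minute=100)
    print("rate limit enabled:", plugin["id"])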

APIs gained prominence as part of the service-oriented architecture (SOA) movement. Initially, APIs primarily used heavier, more complex XML-based protocols such as SOAP, which eventually became a bottleneck to adoption across enterprises. REST APIs over HTTP have since become more popular and are now critical parts of most enterprise architectures.

However, REST is only one piece of the puzzle. As architectures and the APIs underlying them continue to evolve, APIs and API management that encompass high-performance protocols (like gRPC) and streaming platforms (like Kafka) will become essential.

Still, many organizations depend on their apps, APIs, and microservices for high performance and availability. For this paper, we define a “high performance” organization as one whose workloads exceed 1,000 transactions per second (tps) and require a maximum latency of less than 30 milliseconds across its landscape. For these organizations, the need for performance is as pressing as the need for management, because they rely on these API transaction rates to keep up with the speed of their business. For them, an API management solution must not become a performance bottleneck. On the contrary, many of these companies are looking for a solution to load balance across redundant API endpoints and enable high transaction volumes. Imagine a financial institution handling 1,000 transactions per second: that translates to 86.4 million API calls in a single 24-hour day. Thus, performance is a critical factor when choosing an API management solution.
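
The arithmetic behind that daily figure is simple to verify; the snippet below (illustrative only) multiplies the sustained rate by the number of seconds in a day.

# Quick sanity check of the volumes implied by the "high performance" definition
# above: a sustained 1,000 transactions per second over a 24-hour day.
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400 seconds
tps = 1_000                              # sustained transactions per second
calls_per_day = tps * SECONDS_PER_DAY    # 86,400,000 API calls
print(f"{calls_per_day:,} API calls per day")   # -> 86,400,000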

The rise of containers and Kubernetes has increased the need for high-performance, lightweight solutions. The ability of gateways to work natively and effectively with these platforms (Kubernetes and containers) is critical to maximizing their value. Gateways that cannot perform at the level required by modern architectures can become a bottleneck to transformation efforts.

In this paper, we reveal the results of performance testing we completed across three API and Microservices Management platforms: Kong Enterprise, Apigee Edge, and Apigee Edge Microgateway. In this performance benchmark, Kong came out a clear winner—particularly at the higher rates of transaction volume per second.

We experimented with syslog on and off and with different authentication configurations, including none, and consistently measured higher transactions per second from Kong than from Apigee Edge and Apigee Edge Microgateway. For example, with syslog on and authentication off, Apigee Edge Microgateway produced more than 10 times the latency of Kong at the 99.9th percentile and beyond.
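
For readers less familiar with tail-latency reporting, a percentile such as p99.9 is the latency below which 99.9% of observed requests complete. The short sketch below, using fabricated sample data, shows one common way (the nearest-rank method) to compute it.

# Illustration of how a tail-latency figure such as "p99.9" is read: it is the
# latency under which 99.9% of sampled requests completed. The sample list here
# is fabricated purely to show the calculation.
import math

def percentile(samples_ms, p):
    """Return the p-th percentile (nearest-rank method) of a list of latencies."""
    ordered = sorted(samples_ms)
    rank = math.ceil(p / 100 * len(ordered))     # nearest-rank index (1-based)
    return ordered[rank - 1]

latencies_ms = [2.1, 2.3, 2.2, 2.4, 2.2, 2.5, 2.3, 9.8, 2.2, 2.4]  # toy data
print("p50  :", percentile(latencies_ms, 50), "ms")
print("p99.9:", percentile(latencies_ms, 99.9), "ms")  # dominated by the slowest request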

Kong recorded a maximum throughput of 40,625 transactions per second, achieved with 100% success (no 5xx or 429 errors) and with less than 30 ms of maximum latency. Apigee Edge, by contrast, produced 2,650 transactions per second, while Apigee Edge with Microgateway produced 12,950.

Testing hardware and software in the cloud is very challenging. Configurations may favor one vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software, operating system versions, and the workload itself. Even more challenging is testing fully managed, as-a-service offerings where the underlying configurations (processing power, memory, networking, etc.) are unknown. Our testing demonstrates a narrow slice of potential configurations and workloads.

As the report’s sponsor, Kong opted for a default, out-of-the-box Kong installation and API gateway configuration; the solution was not tuned or altered for performance. The fully managed Apigee Edge was used “as-is,” since, by virtue of being fully managed, we have no access to, visibility into, or control over its underlying infrastructure. GigaOm selected a similar hardware configuration for Apigee Edge Microgateway.

We hope this report is informative and helpful in uncovering some of the challenges and nuances of selecting an API and microservices management solution.

We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.
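
As a starting point, the sketch below shows the shape such a reproduction might take: a simple concurrent load generator that records latencies and reports throughput, errors, and tail latency. It is not the harness used in this benchmark, and the endpoint URL, concurrency, and duration are hypothetical placeholders.

# Minimal load-generation sketch (not the benchmark harness used in this report):
# fire concurrent GET requests at a gateway endpoint for a fixed duration, then
# report achieved throughput, error count, and tail latency. The endpoint URL,
# concurrency, and duration below are illustrative placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://gateway.example.com/api"   # hypothetical gateway-fronted endpoint
CONCURRENCY = 50
DURATION_S = 30

def worker(deadline):
    latencies, errors = [], 0
    session = requests.Session()
    while time.time() < deadline:
        start = time.perf_counter()
        try:
            resp = session.get(URL, timeout=5)
            if resp.status_code >= 400:
                errors += 1
        except requests.RequestException:
            errors += 1
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
    return latencies, errors

if __name__ == "__main__":
    deadline = time.time() + DURATION_S
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(worker, [deadline] * CONCURRENCY))
    all_latencies = sorted(l for lats, _ in results for l in lats)
    total_errors = sum(e for _, e in results)
    total = len(all_latencies)
    print(f"throughput : {total / DURATION_S:,.0f} requests/second")
    print(f"errors     : {total_errors}")
    print(f"p99.9      : {all_latencies[int(0.999 * total)]:.1f} ms")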
