5 open source frameworks for serverless computing

Source – infoworld.com

Sometimes all you need is a single function. That’s the idea behind serverless computing, where individual functions spin up on demand, perform a minimal piece of work (serve as an API endpoint, return static content, and so on), and shut down. It’s cheap, it uses minimal resources, and it has little management overhead.

Most of what we currently identify as serverless computing kicked off with AWS Lambda, later joined by similar services on Microsoft Azure, Google Cloud Platform, and IBM Bluemix. But there’s a healthy complement of open source serverless frameworks available, too: not only facilitators for the serverless offerings on a particular cloud, but full-blown frameworks you can deploy on the cloud or hardware of your choosing.

Here are five of the most significant open source frameworks for serverless computing, ranging from solutions that commercial clouds have built on to experimental projects for exploring new aspects of serverless computing without commercial constraints. (Projects that are mainly about facilitating an existing serverless computing implementation—such as Zappa, which works with AWS Lambda—aren’t covered here.)

Apache OpenWhisk

No discussion of open source serverless frameworks should begin without some mention of Apache OpenWhisk. Written mainly in Scala, it accepts input from a number of triggers, such as HTTP requests, then fires code—either a snippet in Swift or JavaScript, or a binary in a Docker container—in response.

Multiple actions can be chained together from a single trigger, and rules can describe which triggers set off which actions. It’s also possible to integrate OpenWhisk with external API services like those in GitHub and Slack, or with any service that offers webhooks or API endpoints.
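To give a flavor of what an action looks like, here is a minimal sketch of an OpenWhisk-style JavaScript action; the greeting logic and the `name` parameter are illustrative, but the convention of a `main` function that receives a parameters object and returns a JSON-serializable result is OpenWhisk’s own:

```javascript
// Minimal OpenWhisk-style JavaScript action. OpenWhisk invokes `main`
// with a parameters object assembled from the trigger or invocation,
// and the returned object becomes the action's result.
function main(params) {
  const name = params.name || "stranger";
  return { payload: `Hello, ${name}!` };
}
```

Once deployed, an action like this can be bound to a trigger via a rule, so an incoming event runs it on demand and then the instance goes away.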

Right now, OpenWhisk is an Apache Incubator project, but one major commercial cloud has already started building serverless offerings directly atop OpenWhisk: IBM Bluemix.

Fission

Fission builds on Platform9’s Kubernetes expertise to deliver a serverless architecture. It uses an existing Kubernetes cluster—whether provided by Platform9 or one you’ve rolled yourself—as the infrastructure for a function-as-a-service architecture. Triggers run functions that are provisioned in Docker containers, and Fission allows commonly used functions to be prewarmed to reduce startup time.

The idea with Fission is that using Kubernetes frees you from the heavy lifting of setting up all the underlying strata. With serverless architecture, the point is that devs shouldn’t have to worry about those strata in the first place, and with Kubernetes, it’s that ops shouldn’t have to sweat too many of those details either.

IronFunctions

This offering from Iron.io touches on all of the familiar points for a serverless framework. IronFunctions uses Docker containers as the basic unit of work for a function, so it can support any language runtime that’ll fit in a container. To that end, the only prerequisites are Docker and a Docker Hub login; orchestration frameworks like Kubernetes are optional. Functions written in Go (the same language used for IronFunctions itself) can be built and deployed directly from within IronFunctions.
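Because the unit of work is a container, a function can be written in nearly anything; in the default mode, the request payload arrives on the process’s standard input and whatever the process writes to standard output becomes the response. A hedged Node.js sketch of that contract (the JSON payload shape and the greeting are illustrative):

```javascript
// Pure handler: raw request payload in, response body out.
function handle(raw) {
  let name = "world";
  try {
    name = JSON.parse(raw).name || name;
  } catch (e) {
    // Non-JSON (or empty) payload: fall back to the default.
  }
  return `Hello, ${name}!\n`;
}

// The assumed default I/O contract: read STDIN, write STDOUT.
// Skip the wiring when attached to an interactive terminal.
if (!process.stdin.isTTY) {
  let input = "";
  process.stdin.on("data", (chunk) => { input += chunk; });
  process.stdin.on("end", () => process.stdout.write(handle(input)));
}
```

Keeping the handler a pure function, with the stdin/stdout wiring at the edge, also makes the same code easy to test or reuse outside the container.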

IronFunctions features cross-integration with AWS Lambda. Functions already hosted in Lambda can be imported to and run in IronFunctions; it’s clearly intended to avoid lock-in with Amazon.

Gestalt

Gestalt is billed as a set of pre-integrated microservices for running “future-proofed cloud native applications.” One of those components is a FaaS stratum, which Gestalt’s creator calls the Lambda Engine. It boasts orders-of-magnitude better speed than AWS Lambda (Amazon’s reputation for erratic performance probably doesn’t help there) and the freedom to use almost any language with a standalone runtime. Most of the big ones are already supported, including JavaScript, .NET, Python, Go, Ruby, and Scala, but you can roll your own executor if needed.

Gestalt can be deployed in a Kubernetes cluster with the Helm installer, or it can run on Mesosphere DC/OS with an installer available for that platform. Thus, it’s easier to get up and running—and managed—if you have an existing investment in either of those managers.

OpenLambda

All of the above projects are intended—either now or eventually—for production use. OpenLambda, on the other hand, is a serverless computing project spun up mainly for research, to “enable exploration of new approaches to serverless computing,” per the paper that outlines the project’s goals and intentions.

Only the most basic functionality is available in OpenLambda right now, and the developers caution against using it in production due to the inherent insecurity of the design. But it provides a minimal playground where others can hack on the source and experiment with different approaches minus the overhead of a full-blown product meant for deployment.
