What is serverless, anyway?

When DigitalOcean polled developers on serverless for the June 2018 issue of Currents, one of the findings was that half of developers did not have a strong understanding of serverless. Since serverless provides benefits for developers and operators in some situations, it’s worth understanding well enough to know when and how to leverage it.

The Short Answer

Simon Wardley, an analyst who has been following and mapping these technology trends, recently posted a short explanation to Twitter.

X : Can you explain what is serverless?

Me : The definition I use? Serverless is an event driven, utility based, stateless, code execution environment.

While this is the gist, there are still many more relevant questions.

The Server In Serverless

Software runs on a computer. When we use a cloud provider the code is running on a server. So, why is it called serverless?

The short answer is that the developer, the person who deals with the business logic, does not need to be concerned with the server. The service provider handles it. This is about a contract and defined communication (API) between two parties who handle separate concerns.

The developer can focus on the business logic. The provider, who has to operate serverless instances for many customers and many kinds of workloads, can focus on running them.
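
To make the contract concrete, what the developer hands over is typically little more than an exported function. A minimal sketch, assuming a Node.js-style handler (the exact signature varies by provider, and the greeting logic is made up):

    // A sketch of the developer's side of the contract: export a handler with
    // the business logic and let the provider decide how and where to run it.
    // No server, process management, or scaling code lives here.
    exports.handler = async (event) => {
      const name = event.name || "world"; // hypothetical event field
      return { statusCode: 200, body: `Hello, ${name}` };
    };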

Does it run in a virtual machine, in a container, inside a custom V8 setup, or something else? That’s an implementation detail the provider is concerned with. It could change over time. They could even run them in different ways in different environments. It’s a detail the provider can choose, change, and iterate on.

This could cause concerns over stability. What if an environment change caused a bug? For example, moving a JavaScript function from a virtual machine running Node.js to a worker in V8. It would be on the provider to ensure any such change is handled well or risk losing customers and business. This is where service-level agreements (SLAs) can provide some level of trust, enticement, and a safety net.

There are known advantages, too. For example, when a security issue is found in the underlying system, the provider can patch it everywhere for everyone rather quickly. There is no need to wait on all the service users.

Events

Another aspect of serverless is that the application does not have a server. For example, that means the application logic does not have a web server waiting for requests. Instead, a unit of work executes when an event occurs.

Many things can trigger an event. Here are some examples:

  • An HTTP request coming in
  • A file being uploaded to object storage (such as AWS S3)
  • A message arriving on a queue
  • A scheduled timer firing

With no application server, it becomes the job of the provider to execute the code when the event comes in. This provides an opportunity for service providers to optimize how that happens and to change it over time.

For example, if an event only happens once per day, the code may sit in storage rather than in memory except when it’s needed. No RAM or CPU is used unnecessarily, and the provider can optimize around how the code is loaded and run.

If the event triggers regularly, the provider can optimize for that. For example, the machine running the code can keep it ready to execute. Or, if the application is seeing a lot of scale, the provider can scale the machines handling the function horizontally.
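
One practical place developers can see this: code outside the handler runs when the provider loads the function and is typically reused across invocations while the function stays warm. A sketch, assuming a Node.js-style handler and a made-up initialization step:

    // Initialization outside the handler runs at load time (a "cold start")
    // and is typically reused for later invocations while the function stays
    // warm, so expensive setup is not repeated on every event.
    const lookupTable = buildLookupTable(); // hypothetical setup work

    exports.handler = async (event) => {
      // Per-event work uses the already-initialized state.
      return lookupTable[event.key];
    };

    function buildLookupTable() {
      return { example: "value" }; // stand-in for real initialization
    }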

These forms of optimization are something providers can focus on. They may even boast about their capabilities, like Cloudflare recently did.

CloudEvents

Because so many providers are jumping on the serverless bandwagon, and have been doing so with differences in their APIs, the Cloud Native Computing Foundation (CNCF) has stepped in to come up with a common event specification known as CloudEvents. The idea is to have a single open specification rather than many different proprietary ones.
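
The specification itself is small: a set of common attributes wrapped around provider-specific data. A rough sketch of what a CloudEvent looks like (the attribute names follow a later revision of the spec and the payload is made up, so treat this as illustrative):

    // Illustrative CloudEvent: common metadata attributes plus a data payload.
    const event = {
      specversion: "1.0",
      type: "com.example.object.created",  // hypothetical event type
      source: "/example/bucket",           // hypothetical event source
      id: "1234-5678",
      time: "2018-06-01T12:00:00Z",
      datacontenttype: "application/json",
      data: { key: "uploads/report.txt" }  // made-up payload
    };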

At KubeCon/CloudNativeCon EU 2018, Kelsey Hightower gave a demo using CloudEvents in which a file uploaded to AWS S3 triggered an event handled in Google Cloud, which translated the text of the file and uploaded the translation back to S3, all while handling authentication. The demo uses a fair amount of demo code but highlights the possibilities.

FaaS and Containers

The most common form of serverless is Functions as a Service (FaaS). AWS, Azure, and Google Cloud offer these out of the box; AWS calls its functions Lambda.

Functions can run in a variety of places. For example, they can run in general computing environments or on edge nodes; it all depends on what the provider offers. Both AWS and Cloudflare provide a means of running functions at the edge.

One of the current drawbacks to FaaS is that functions need to be uniquely crafted for each provider. This creates a form of vendor lock-in.
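
The lock-in shows up in the code itself. The same trivial logic has to be written against different handler shapes; a sketch comparing the Node.js signatures AWS Lambda and Google Cloud Functions expect:

    // AWS Lambda (Node.js): an exported handler that receives an event object.
    exports.handler = async (event) => {
      return { statusCode: 200, body: "hello" };
    };

    // Google Cloud Functions (Node.js, HTTP trigger): an Express-style
    // (request, response) pair instead of a bare event object.
    exports.hello = (req, res) => {
      res.status(200).send("hello");
    };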

As an alternative to pure functions, containers are starting to show up on the serverless scene. A container image can hold what is needed to execute on an event. When an event occurs, a container can be started, receive the event, and perform the action before being shut down when the work is complete.

There are both advantages and disadvantages to using a container instead of a pure function. For example, a container image can more easily encapsulate dependencies, but it limits the provider’s ability to innovate around running the workload.
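
A hedged sketch of the container approach: the entrypoint is a short-lived process that reads the event it was handed, does its work, and exits. How the event is actually delivered (environment variable, mounted file, stdin) depends on the platform; this example assumes a JSON blob in an EVENT environment variable.

    // entrypoint.js - a hypothetical container entrypoint for one event.
    const event = JSON.parse(process.env.EVENT || "{}");

    async function main() {
      // Do the work for this single event, then let the container exit.
      console.log(`processing ${event.type || "unknown"} event`, event.data);
    }

    main()
      .then(() => process.exit(0))
      .catch((err) => {
        console.error(err);
        process.exit(1);
      });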

Brigade

Brigade, especially when paired with Azure Container Instances (ACI) to handle billing per use, is one example of a platform that provides container-based serverless.

Why Not PaaS?

This sounds similar to a Platform as a Service (PaaS), and there are some definite similarities. For example, the application code is handed to a PaaS and the platform figures out how to run it. Does Heroku use Docker or LXC? It doesn’t matter, because that’s an implementation detail. The interface is around the application code.

There is one important difference. Applications in a PaaS present a server and are expected to be running and accepting connections. In serverless there is no need to run that server; things happen based on events. The system that accepts the event (e.g., an HTTP request) is outside the code the application needs to supply.
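
The difference is visible in the code you ship. A PaaS application brings its own listening server, while a serverless function supplies only the handler and leaves accepting the request to the platform. A sketch of the contrast, using an Express-style server for the PaaS side:

    // PaaS style: the application ships and runs its own HTTP server.
    const express = require("express");
    const app = express();
    app.get("/", (req, res) => res.send("hello"));
    app.listen(process.env.PORT || 3000); // the app itself waits for connections

    // Serverless style: no listener in the application code. The platform
    // accepts the event (e.g., an HTTP request) and invokes the handler.
    exports.handler = async (event) => ({ statusCode: 200, body: "hello" });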

Developer Benefit and Experience

There are some practical elements of the developer experience worth highlighting.

  • Applications are written in a way that can scale horizontally really well
  • When the service goes down, it’s the provider’s responsibility to get it back up. There’s less for the DevOps folks handling the business logic to be paged about
  • Payment is often based on when the business logic runs. Time where a server would sit idle isn’t billed for, because that capacity is being used for something else. This has led some services to drastically lower their recurring bill
  • Most of the serverless providers have their own APIs. This leads to vendor lock-in. The serverless project is attempting to make the experience better, but there is only so much it’s been able to do
  • Some applications, like high-performance databases, are not appropriate for serverless. It’s not a silver bullet

Conclusion

Serverless provides a different paradigm from the way many applications are written. This can, at times, be useful, and it’s worth having in any developer’s tool belt.