DR Griffin

Servers in serverless: Solved

Ever since the serverless buzzword first took off around 2015, people have been asking one question:

Are there servers in serverless?

Today, after some thorough research, and drawing on my experience working in the deepest bowels of Amazon's gargantuan cloud, I can share the definitive answer with you. Hover your mouse over the grey box below to find out.


As you can see, since the word serverless contains the word server, we have found evidence for servers in serverless!

Ok, so now you think I'm a smart-ass. Also, if you were looking closely, you'll have noticed we only found one server, singular.

If you'll just bear with me for one second, I think we can do better.


Whoa! It turns out, there are servers (plural) in serverless after all! 1

A more serious answer

First, let's start with a definition of serverless. I'm going to use one from AWS, because they popularized the term, and seem to be the leader in the space.

Serverless allows you to build and run applications and services without thinking about servers

That's a pretty good definition. Developers always say they just want to think about business logic; they don't want to worry about messy details, so this sounds great. But why do very smart people get so passionate and agitated about the word serverless? It seems to trigger an almost primal reaction in many engineers, because they know there are some computers somewhere doing the thing, and if those computers are in a datacenter, you can probably call them servers.

I think the answer is: serverless often fails spectacularly to deliver on its core promise. Developers using serverless technologies often find themselves dealing with servers, but in weird and unusual ways.

Outrageous example: Aurora Serverless

This example makes me suspect that some marketing team slapped the serverless name on a feature that was designed with less lofty goals. I don't know for sure, but I can't think of a better explanation.

The Aurora Serverless FAQ states:

In Aurora Serverless [...] you pay a flat rate per second [...] with a minimum of 5 minutes of usage each time the database is activated.

AWS Lambda, the original service that popularized the serverless model, bills in 100-millisecond increments. But Aurora will bill me for FIVE MINUTES for one request? That's like making one request to AWS Lambda, but then getting charged for 3,000.

For comparison, the minimum billing increment for both a Google Cloud Virtual Machine (VM) and an AWS EC2 VM is one minute. Both of those products are actual (virtual) servers.

The bottom line? Aurora Serverless has a less granular pricing model than actual servers. That's a fail in my book.
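If you want to check my arithmetic, here's a quick back-of-envelope sketch. The billing increments are the ones quoted above and may well have changed since this was written:

```python
# Rough arithmetic behind the "charged for 3,000" claim.
# Increments are as quoted in the post and may be out of date.
LAMBDA_INCREMENT_MS = 100        # Lambda bills in 100 ms increments
AURORA_MIN_MS = 5 * 60 * 1000    # Aurora Serverless: 5-minute minimum per activation
VM_MIN_MS = 60 * 1000            # EC2 / Google Cloud VM: 1-minute minimum

# One Aurora minimum charge, expressed in Lambda billing increments.
lambda_increments = AURORA_MIN_MS // LAMBDA_INCREMENT_MS
print(lambda_increments)                 # 3000

# And Aurora's minimum is 5x the minimum of an actual (virtual) server.
print(AURORA_MIN_MS // VM_MIN_MS)        # 5
```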

Subtle example: hunt the chips

No current serverless technology2 abstracts the underlying CPU. For instance, Eric Hammond found that his Lambda function was running on an Intel Xeon E5-2680 in 2014. I think that's the same CPU as the EC2 C3 instance type. AWS can probably migrate Lambda functions to later-and-greater Intel EC2 instances without any customers noticing too much.
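Probes like Eric Hammond's are easy to reproduce. A sketch of what one might look like — I'm assuming a Linux host (which Lambda uses), where /proc/cpuinfo reports the host CPU's model name; the fallback is just the architecture string:

```python
# Peek at the CPU a "serverless" function is actually running on.
import platform

def describe_cpu():
    """Return (architecture, model name) of the underlying host CPU."""
    arch = platform.machine()  # e.g. 'x86_64' or 'aarch64'
    model = None
    try:
        # On Linux, /proc/cpuinfo lists a "model name" line per core.
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("model name"):
                    model = line.split(":", 1)[1].strip()
                    break
    except OSError:
        pass  # not on Linux; the architecture alone will have to do
    return arch, model

if __name__ == "__main__":
    arch, model = describe_cpu()
    print(f"architecture: {arch}, model: {model or 'unknown'}")
```

Drop that into a Lambda handler and you'll see exactly which chip you were promised you'd never have to think about.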

However, AWS has recently been spruiking their ARM-based Graviton2 chips as better value than Intel chips.

Excited to bring customers the next gen of Amazon EC2 instances powered by #AWS-designed, Arm-based Graviton2 processors that deliver up to 40% better price/performance than comparable current x86 instances. Should be a game changer! https://t.co/UCmidWcTmP

— Andy Jassy (@ajassy) June 12, 2020

Normally I would expect the Lambda team to migrate workloads to these newer, cheaper EC2 instances. AWS loves to talk about how they "work relentlessly to reduce [costs] and to pass the resulting savings along". However, with Graviton and Lambda, I don't think they can. A lot of code that was written for the Intel (x86) architecture will likely break on ARM, even if it's running inside a managed runtime like Python or Node.js. AWS doesn't know whether your Python or JavaScript code depends on subtle (or less subtle) features of the x86 architecture. If you're using their custom runtimes, it's even worse.

Contrast this with services like S3 and DynamoDB. How would you, a customer, know if the S3 or DynamoDB team migrated from x86 to ARM? You wouldn't. You just see data going in and data going out. Everything else is Amazon's problem.

Will serverless hit a speed-bump as the industry shifts from Intel to ARM? Will we have to recompile our serverless functions to ARM to get the lowest prices? I haven't seen anybody else talking about this yet, so either I'm early, or it's less of an issue than I thought.


I've just shown one joke way, and two real ways, in which "servers", or the underlying hardware, are not fully abstracted by serverless. I suspect this is one reason why people like to make fun of serverless on Twitter. Additionally, I think those who argue passionately that serverless isn't the future of our industry have a point. Surely there will be future waves of innovation that deliver better abstractions, performance, and pricing than serverless currently does.

If you're interested in what might come after serverless, please get in touch. I'm building a company, and looking for people to help build the next paradigm for developers.

1: There is actually a serious point to be made here, which is to define your branding around what you're for rather than what you're against. The point of serverless isn't that servers are bad. It's that I (a developer) don't want to think about them. A better name for serverless might have been something like "developer-centric", "zero-fixed-cost", or "granular-scalability"3.

2: I'm only counting compute services here. In other words, services that execute code. Examples include AWS Lambda, Google Cloud Functions, Azure Functions, CloudFlare Workers, and Fastly Compute@Edge.

3: Ok, none of these are great, but hopefully you get the idea. This is why I don't work in marketing.