Traditional Server Deployments vs AWS Lambda

AWS Lambda is Amazon's Functions-as-a-Service offering. Google has Cloud Functions, which at the time of writing doesn't support any language other than Node.js. Azure offers something similar under the name Azure Functions. They are all essentially the same thing: functions as a service.


How’s it different from a traditional server?

Think of a cloud function as a server that only runs when a request is routed to it. This is called a trigger. Unlike a traditional server, which needs to be up all the time, cloud functions are invoked only when there is an event for them to process. This event can come from almost any data source (exactly which ones depends heavily on the cloud provider).

The most amazing thing about AWS's ecosystem is that one can trigger an AWS Lambda function from almost any event source (e.g. S3, DynamoDB, Kinesis, AWS SNS, AWS SQS, etc.).
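As a sketch of what such a trigger-driven function looks like, here is a minimal Python Lambda handler for an S3 event (the event shape follows S3's notification format; the function itself is illustrative, not from any particular deployment):

```python
def handler(event, context):
    """Invoked by AWS Lambda whenever the configured trigger fires.

    For an S3 trigger, `event` carries a list of records, each naming
    the bucket and object key that caused the invocation.
    """
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"processed": len(records)}
```

The function does nothing between events; you pay only for the milliseconds each invocation actually runs.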

Why is it cost-effective?

Keeping servers up and running is a tedious and specialized job; a lot of things need to function like clockwork for it to happen, and cloud providers charge accordingly.

However, running an independent piece of logic whenever a trigger fires, in an idempotent fashion, is simpler to program as well as to maintain. Running an EC2 t2.micro for a full month costs about:

$0.0116 * 24 * 30 = $8.35 per month

which in itself is pretty cheap. However, let's check out Lambda pricing. Under the assumption that you serve about a million requests per month, allocate 128MB to the function, and each invocation takes about 250ms to complete, it comes out to about

$0.72 per month.

Now that's a saving of roughly 91% on host and compute infrastructure. Even if we increase the time taken to process a request from 250ms to about 1 second, the price only jumps to about:

$2.28 per month
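These numbers follow directly from Lambda's published pricing model: $0.20 per million requests plus $0.0000166667 per GB-second of compute (free tier ignored). A quick sketch of the arithmetic:

```python
def lambda_monthly_cost(requests, duration_s, memory_gb,
                        per_million=0.20, per_gb_second=0.0000166667):
    """Estimate monthly AWS Lambda cost, ignoring the free tier."""
    request_cost = requests / 1_000_000 * per_million
    compute_cost = requests * duration_s * memory_gb * per_gb_second
    return request_cost + compute_cost

# 1M requests/month at 128 MB (= 0.125 GB):
print(round(lambda_monthly_cost(1_000_000, 0.25, 0.125), 2))  # 0.72
print(round(lambda_monthly_cost(1_000_000, 1.0, 0.125), 2))   # 2.28
```

Note the cost scales linearly with both duration and memory, which is why the "when not to use Lambdas" caveats below matter.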

This is still much less than the cost you would incur with a traditional server, even while serving about a million requests in a month!

The other great thing about this setup is that it is based on an on-demand model. You could very well serve 100 million requests with the same setup, without altering your Lambda deployment scheme at all.

However, for all its benefits, deploying Lambdas used to be as painful for a developer as filling out tax forms. You had to manually package and upload the function in order to execute it.

Enter Serverless

The story changed with the Serverless framework. Though its initial release was about two years ago, I only discovered it some four months ago, while trying to figure out a better way to manage my Lambdas without going insane.

It supports all the major cloud providers currently out there, though I can only vouch for AWS Lambda (I haven't really tested it on other platforms).

It allows you to create and configure almost all the widely used compute and communication resources in the AWS product list from a single YAML/JSON file. Along with creating resources, it lets you do event source mapping (a fancy way of saying: mapping triggers to various Lambdas) across both existing and newly created resources.
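For illustration, a minimal `serverless.yml` that defines one function and maps an S3 trigger to it might look like this (the service, bucket, and handler names are placeholders, not from any real deployment):

```yaml
service: example-service

provider:
  name: aws
  runtime: python3.9
  region: us-east-1

functions:
  processUpload:
    handler: handler.handler   # module.function of the Lambda entry point
    memorySize: 128            # MB allocated per invocation
    events:
      - s3:
          bucket: example-upload-bucket
          event: s3:ObjectCreated:*
```

Running `serverless deploy` against a file like this packages the code, creates the function and bucket, and wires up the event source mapping in one step.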

When not to use Lambdas:

While I have discussed in detail how to use Lambdas, I feel it's important to point out when not to use them, since I have seen people taken over by Lambda fever (using Lambdas for everything).

  1. Never use Lambdas for polling: Lambdas are charged by execution duration. Anything that involves the function idling and doing nothing should be kept AWAY from Lambdas.
  2. Do not use Lambdas for a read or write API that connects directly to a database: a Lambda is a perfectly elastic resource when placed behind an Amazon API Gateway. If your API gets a lot of sporadic traffic, a lot of Lambda instances will spin up, and your database might not be able to handle all the connections being made to it. You could add a database connection pooler, but that is again added effort. You can try re-using a connection, but that has very temperamental support in AWS Lambda due to container restart and stop policies. You can check out a very nice article here.
  3. When your triggers have insanely high throughput with clearly defined patterns of increase and decrease: maybe you have a discount-coupon generation service that only spikes during sales and can take crazy high hits. In that case, given the overall throughput of the system, a Lambda might come out costlier than a traditional server; and since you have an advance indication of the load, you can very well prepare for it and figure out an architecture that scales better.
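On point 2, the usual connection-reuse trick is to create the connection at module scope, outside the handler, so that warm containers reuse it and only cold starts pay the connect cost. A hedged sketch of the pattern (sqlite3 stands in for a real database client such as psycopg2, purely so the example is self-contained):

```python
import sqlite3

# Cached at module scope, OUTSIDE the handler: the object survives
# across invocations for as long as AWS keeps the container warm.
_connection = None

def get_connection():
    """Lazily create and cache the database connection."""
    global _connection
    if _connection is None:
        # In a real function this would be e.g. psycopg2.connect(...);
        # sqlite3 is used here only to keep the sketch runnable.
        _connection = sqlite3.connect(":memory:")
    return _connection

def handler(event, context):
    conn = get_connection()  # reused on warm invocations
    row = conn.execute("SELECT 1").fetchone()
    return {"ok": row[0] == 1}
```

As the text notes, this is temperamental: AWS gives no guarantee about when the container (and the cached connection) is torn down, so the handler must still cope with reconnecting.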
