Pletratech is an AWS consulting partner that helps clients implement serverless architectures using Lambda.
One key aspect of serverless architecture is that it usually relies on the interaction between many different cloud resources. This can be difficult to manage on your own. The Serverless Framework lets you configure, test, develop, and deploy entire serverless applications from a single configuration file. This takes the headache away from deployment, ensures that you have reproducible architecture, and makes it significantly easier to test Lambda code by running it locally.
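As a quick illustration, a minimal serverless.yml can define an entire deployable application; the service, runtime, and handler names below are illustrative:

```yaml
# Minimal serverless.yml sketch; service and handler names are illustrative.
service: demo-service

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

functions:
  hello:
    # Points at the exported "hello" function in handler.js
    handler: handler.hello
```

With a file like this in place, `serverless deploy` provisions the stack in AWS, and `serverless invoke local --function hello` runs the handler on your own machine, which is what makes local testing so convenient.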
Going Serverless in AWS with Lambda
Lambda is Amazon’s core component of serverless applications. Lambda satisfies the computing pillar of any serverless application and is likely the reason the serverless name really caught on, since there’s no server configuration. In this blog, we’ll deploy an application using the Serverless Framework, creating many of the resources we’ll use later to get hands‑on experience with serverless architecture. Next, we’ll talk about how to design Lambda functions and the different monitoring metrics available. Then we’ll discuss security, stability, and performance with Lambda functions, and finally the Serverless Application Model, an alternative to the Serverless Framework developed by AWS.
Using the Serverless Framework, we’ll be able to quickly create and deploy resources into AWS. So before we get into the weeds, let’s deploy the demo resources with the Serverless Framework. In the root of the project, add a serverless.yml file. This is where all the information on how to construct our serverless application lives. Remember, S3 bucket names must be globally unique, which is why ours needs a unique name.

The npm deploy script builds the client code with webpack and then deploys the application with the Serverless Framework. This will take quite a while, especially the first time. The actual serverless part, creating the Lambda functions, is fairly quick, but the framework also has to create an RDS instance, which is what makes us wait. Once everything has been deployed and you’re back at your command prompt, scroll up until you see a bunch of URLs. These are the API Gateway endpoints that the Serverless Framework has created.
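Back in serverless.yml, the globally unique S3 bucket can be declared in the resources section, which accepts raw CloudFormation; the logical name and bucket name below are illustrative:

```yaml
# Raw CloudFormation embedded in serverless.yml; the random-looking
# suffix keeps the bucket name globally unique (names are illustrative).
resources:
  Resources:
    ClientBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: pletratech-serverless-demo-x7k2
```

Because the resources section is plain CloudFormation, anything the Serverless Framework doesn’t model directly (buckets, databases, queues) can still live in the same single configuration file.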
Creating Lambda Functions
AWS Lambda is the compute part of a serverless application. When we say “a Lambda,” this is basically the same as saying a function. In fact, services like Lambda are often called functions as a service, and Lambda is the name of the AWS service that provides them.

There are two ways to design a serverless application. One is to push all of your computing code into a single Lambda, something like a monolith. A lot of times, this is how you would build a normal web application, so the initial thought isn’t that wrong. But with Lambda, functions are cheap, and connecting different services in AWS to create your entire application is really how things ought to be built.

It’s good to separate functions by their logical business function. In a normal API, you have actions such as getting a thing or posting a thing. You could have one Lambda for both, logically isolating it around the CRUD operations on a thing. If you do that, however, your Lambda function needs logic to decide whether to do the getting or the posting: you have to look at the event’s request method to determine the client’s intention. If you push that logic up to the HTTP layer, executing different Lambdas for the GET method and the POST method, then not only do you get the different behavior, you’re essentially getting that routing logic for free due to the way you’ve architected your application. That’s a very elegant solution, and at Pletratech we try to architect services this way. If there’s a logical distinction between one function and another, split them into multiples. Create modular functions when it seems right, and try to keep the logic in each Lambda function to a minimum.

There are a few more scientific methods to gauge when Lambda functions are getting too large. The first is package size. Lambda has a hard limit of 50 MB on the upload size of your Lambda function package.
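Before digging into size limits, here is roughly what the per-method split looks like in serverless.yml; the function names, handler paths, and route are illustrative:

```yaml
# Two Lambdas for one resource, routed by HTTP method at the
# API Gateway layer instead of branching inside a single handler.
functions:
  getThing:
    handler: things.get
    events:
      - http:
          path: things
          method: get
  postThing:
    handler: things.post
    events:
      - http:
          path: things
          method: post
```

With this layout, API Gateway decides which function runs, so neither handler ever needs to inspect the request method.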
Unzipped, the function size limit is 250 MB. If you find yourself hitting either of those limits, you should either take a look at the dependencies you’re uploading with your Lambda or think about breaking the function up into multiples.

Another way to analyze Lambda function scope is through execution metrics. In the Monitoring tab of a Lambda, you can view a number of metrics related to your function’s execution. You can use these metrics not only to ensure everything is working correctly, but also to determine whether you need to refactor one function into many. There’s no hard and fast rule about how long a function should execute; you’ll just want to consider the user experience you’re aiming for.

Let’s see what these metrics look like in action. Exercising the application for a bit should generate enough Lambda invocations for us to view metrics in the AWS console, on the Lambda dashboard’s Monitoring tab. The graphs should have some actual data in them now. The Invocations graph shows how many invocations occurred per 5‑minute interval. The Duration graph is a little more relevant, showing the minimum, maximum, and average invocation duration per 5‑minute interval. This is where you can get a good idea of your Lambda function’s average invocation time and whether you need to split the function up. In the case of this index function, it only returns the contents of an S3 file, so you can see the invocation time is quite low.

Looking at the rest of the graphs, next is an Errors and Availability Percentage graph, which lets you know how many errors your Lambda is throwing, a good metric to keep an eye on. The Throttles graph shows how many times a function execution was throttled. The IteratorAge graph applies to Lambdas that consume DynamoDB or Kinesis streams: it measures how long records sat in those streams before your Lambda was invoked, so it’s a good measure of the streams’ speed.
The last graph is for errors sending to the dead‑letter queue. A dead‑letter queue is either an SNS topic or an SQS queue to which Lambda sends events it cannot process, usually as a result of errors. This graph counts errors encountered while sending events to that queue, sort of an errors‑for‑errors graph. If this metric goes above 0, your Lambda is failing to process events and also failing to save them in the dead‑letter queue, essentially losing them completely, which is a very bad thing. We’ll go deeper into dead‑letter queues in a later post. And that’s it for Lambda metrics. In the next blog, we’ll talk about security and stability for your Lambda functions.
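As a small preview, the Serverless Framework can attach a dead‑letter destination with a single onError property per function, which at the time of writing accepts an SNS topic ARN; the handler name and ARN below are illustrative:

```yaml
# onError sends events the function fails to process to a
# dead-letter SNS topic (handler and ARN are illustrative).
functions:
  processor:
    handler: processor.handler
    onError: arn:aws:sns:us-east-1:123456789012:demo-dlq-topic
```

Subscribing to that topic then lets you inspect and replay failed events instead of losing them.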