Building the Foundations of Serverless Architecture in AWS
This blog, by Pletratech, introduces AWS Serverless Architecture. We are excited to introduce you to some of the most cutting-edge technologies on the market. AWS continues to pioneer new architecture patterns in cloud application monitoring and across the broader application lifecycle.
What Is Serverless?
When a term or technology becomes a buzzword, it can be difficult to get to the bottom of what it truly means and how it can impact you. Cloud is definitely one of those words: it dominated the technology consciousness for years as companies weighed what migrating to the cloud could do for them, to the point that memes sprang up about the cloud solving all the world's woes. The term serverless has now reached that same level of notoriety, so it's worth separating the facts from anything you may have heard that doesn't square with reality. In the early days of the web, companies needed to purchase their own infrastructure, such as servers and networking appliances, to run a web application. Then the cloud came along and disrupted that entire practice by relieving businesses of the burden of owning hardware.
Platforms like AWS virtualized the hardware required to run applications and let users control infrastructure through GUIs and APIs. That shift from on-premises to the cloud was world-changing, and many businesses are still working through extensive migrations to make the most of it. When AWS introduced its Lambda service in late 2014, it created the final piece of the serverless puzzle. While AWS offered some services that fit the serverless model before Lambda, Lambda made true serverless applications a reality in AWS. The advent of serverless meant that businesses not only didn't have to manage physical servers; they didn't have to manage virtual servers anymore either.
In addition, the cost model that cloud billing had already flipped on its head was revolutionized once again by serverless's pay-only-for-what-you-use model. I think the serverless revolution could be as big as the cloud revolution, and it has only just begun. The key aspects of serverless are the following:
A completely abstracted infrastructure. With serverless, you don't know which instances your code runs on, how it scales, or any of the other multitude of concerns that come with server management.

Pay for what you use. Since you're not managing the infrastructure, you don't pay for it when your application isn't using it. Say goodbye to idling servers eating up your wallet.

Stateless. With any serverless service, you won't store state in your compute layer. You have to rely on explicit persistence, such as a cache or a database, to preserve your state.

Event-based. Everything in a serverless application happens in response to events. Whether it's HTTP events, scheduler events, or messaging events, things happen from explicit events, and you'll need to think about your architecture with that communication model in mind.

These aspects will play a large role in how you architect and write code for your application. A serverless application is quite different from a regular web application running on a server. In the next section, let's take a look at how to architect a serverless application.
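The stateless, event-based aspects above can be sketched as a tiny Lambda handler. This is a minimal illustration, not code from the article: the event shape follows API Gateway's Lambda proxy integration, and the handler and field names are arbitrary choices for the example.

```python
import json

def handler(event, context):
    """A stateless, event-driven AWS Lambda handler sketch.

    All input arrives in the event object (here, an assumed API Gateway
    proxy-integration payload), and nothing is kept in module-level state:
    anything worth preserving would go to a database or cache instead.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    # Each invocation stands entirely alone -- no state survives between calls.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a pure function of its event, it can be exercised locally with a hand-built event dict before it is ever deployed.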
Architecting a Serverless Application
Serverless architecture is based on four major pillars found in most web applications.
Computing. Most of the time you'll need compute, but you may not: if your application is essentially static and just relies on flat files, you won't really need a computing aspect. If you do have compute, it will be separated into individual functions, so identify them as such in your architecture diagram. You may not have enough information at this early stage, but I try to sketch out as much as I know at the time.
Storage. Here I identify which S3 buckets, if any, I'll need. I try to be liberal with how many buckets I use so I'm not constraining my design or mixing up files unnecessarily.
Persistence. This could be a database, caching, or both. Caching often comes later in the development lifecycle as you identify hot spots in your application, so don't be too worried if you don't know where it fits yet. Many applications with user interaction require at least a database, and this is a good place to decide between NoSQL and SQL. DynamoDB has an API for reading and writing records, making it really simple to use with Lambda. All of the database engines in RDS require a connection to be made before any operations can be executed, so keep in mind that the overhead of creating connections should be factored into your compute execution time. AWS has recently released a new service called RDS Proxy, which mitigates this connection overhead by acting as a proxy and maintaining persistent connections to your database for you. Although there are additional costs associated with the service, it should drastically reduce much of the overhead involved in using RDS with Lambda.
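To show how simple DynamoDB's record API is to call from Lambda, here is a hedged sketch using boto3. The table name, the `pk` partition-key attribute, and the helper names are all assumptions for the example, not details from the article; boto3 is imported lazily inside the functions so the item-shaping logic can be read and run without the AWS SDK or credentials.

```python
def make_user_item(user_id, name):
    # Pure helper that shapes a DynamoDB item; "pk" is an assumed partition key.
    return {"pk": f"USER#{user_id}", "name": name}

def put_user(table_name, user_id, name):
    """Write a record with a single API call -- no connection setup required,
    unlike the RDS engines mentioned above."""
    import boto3  # lazy import: the sketch loads without the AWS SDK installed
    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=make_user_item(user_id, name))

def get_user(table_name, user_id):
    """Read the record back by its key; returns None if it doesn't exist."""
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    return table.get_item(Key={"pk": f"USER#{user_id}"}).get("Item")
```

The absence of any connection step is exactly what makes DynamoDB a comfortable fit for short-lived Lambda invocations.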
Eventing. A web application needs an API layer, or at least some way to access the application from a URL, in which case you'd use API Gateway. You may also use Kinesis or Simple Queue Service to communicate between Lambdas. You can invoke Lambdas directly as well, but the cost of both the call and the invoked Lambda should be considered: if your main communication pattern is direct invocation, you may find yourself telescoping through functions, with the earliest invocations incurring greater and greater costs. It's better to decouple your compute functions as much as possible by using a messaging layer. By going through these pillars, you should have a rough outline of the type of app you'll be building, and you can get to work quickly. Later considerations for your architecture might include Route 53 for routing or CloudFront for asset delivery; these can be added to your diagram once you've determined the need for them.
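The decoupling point above can be sketched with SQS: instead of one Lambda invoking the next directly (and paying for the whole chain's duration), the producer drops a message on a queue and returns. The queue URL, message schema, and function names below are placeholders for illustration; boto3 is imported lazily so the message-shaping logic runs without the AWS SDK.

```python
import json

def make_task_message(task_type, payload):
    # Pure helper: serialize a task so any consumer Lambda can handle it.
    return json.dumps({"type": task_type, "payload": payload})

def publish_task(queue_url, task_type, payload):
    """Hand work to the next function via SQS rather than invoking it directly,
    so the producer doesn't wait on -- or pay for -- the consumer's runtime."""
    import boto3  # lazy import: message shaping above works without the SDK
    boto3.client("sqs").send_message(
        QueueUrl=queue_url,
        MessageBody=make_task_message(task_type, payload),
    )
```

A consumer Lambda would then be triggered by the queue itself, so neither function needs to know the other exists.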