Serverless computing (also called Functions as a Service, or FaaS) is changing the IT industry by decomposing larger applications into cheap, on-demand, and scalable functions. However, there's more to it than cost and on-demand access: serverless computing addresses real software engineering pain points by introducing a new architecture.
Pain Points of Existing Development Paradigms
Building, deploying, and running software can be complex, as teams must consider cloud providers, deployment pipelines, logging and monitoring options, and cost-effectiveness. They must also sort through the myriad of languages, frameworks, and architectures available. In particular, complicated deployment strategies, as well as the requirement to pay for idle servers and VMs, make current development paradigms problematic.
Complicated Deployment Strategies
Deploying software is complicated, and while using a popular language or framework can make the process easier, it doesn’t eliminate the difficulties completely. Off-the-shelf platforms make it simple to deploy code, but users are still responsible for handling rollbacks and testing new releases. Microservices don’t make this any easier since more services mean more deployment problems. In addition, every application needs telemetry. This means obtaining metrics, as well as monitoring and logging, for every piece of deployed software—regardless of the language or runtime. Cloud providers help with some of this, but teams still need to fill in the gaps with their own bespoke solutions. These solutions require maintenance, upgrades, and new language integration—plus, they cost money to run.
Paying for Idle Servers or VMs
Running software on servers can be time-consuming and expensive. Whether using containers or VMs, the software runs 24/7 and costs money, both in the cloud and on-premises. Running software in the cloud may be cheaper, but that only goes so far, since constantly worrying about cost or relying on the Amazon EC2 Spot market is stressful. Some teams try to reduce costs with Amazon EC2 Auto Scaling, but then they're responsible for implementing and managing the solution. Even Amazon EC2 Auto Scaling has ramp-up times, which means infrastructure may have to keep extra capacity on standby for sudden spikes.
So What’s the Solution?
Addressing the difficulties with existing development paradigms requires changing your approach to building and shipping software. One option is to use a microservices model, which splits functionality into smaller units. Working with smaller units makes software easier to write and support, and a vibrant infrastructure ecosystem has grown up around the model. Another option is to go serverless.
The Benefits of Serverless
Going serverless can lower cost, since serverless functions are billed per use instead of for constant capacity. This especially benefits infrequently used applications. Serverless also scales up to the highest-capacity applications, since functions are executed on demand. This eliminates the need to configure Auto Scaling groups, set minimum capacities, select the right VM specs, or bet on the EC2 Spot price.
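A rough back-of-envelope comparison makes the billing difference concrete. The prices below are illustrative assumptions, not current list prices:

```python
# Back-of-envelope cost comparison: per-use billing vs. an always-on server.
# All prices are illustrative assumptions, not current provider list prices.

LAMBDA_PER_MILLION_REQUESTS = 0.20   # USD per 1M invocations (assumed)
LAMBDA_PER_GB_SECOND = 0.0000166667  # USD per GB-second of compute (assumed)
EC2_INSTANCE_PER_HOUR = 0.0116       # USD for a small always-on VM (assumed)

def monthly_serverless_cost(invocations, avg_duration_s=0.1, memory_gb=0.128):
    """Cost of running `invocations` function calls in a month."""
    request_cost = invocations / 1_000_000 * LAMBDA_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * LAMBDA_PER_GB_SECOND
    return request_cost + compute_cost

def monthly_server_cost(hours=730):
    """Cost of one VM running 24/7 for a month, busy or idle."""
    return hours * EC2_INSTANCE_PER_HOUR

# An infrequently used app (100k requests/month) costs cents when billed
# per use, while the idle VM bills for every hour regardless of traffic.
print(f"serverless:   ${monthly_serverless_cost(100_000):.4f}")
print(f"always-on VM: ${monthly_server_cost():.2f}")
```

Under these assumed prices, the per-use model is orders of magnitude cheaper for low-traffic workloads; the crossover point depends on invocation volume, duration, and memory.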
Serverless also improves production operations by offering telemetry data, such as latency, invocation counts, and return values, plus real-time logging. Telemetry is available at the function level, so data for individual functions, such as “sign up,” “send notifications,” or “contact,” is accessible by design. You simply won’t get data this granular straight out of the box by placing a web server behind a load balancer.
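To illustrate that granularity, the sketch below emulates per-function telemetry with a decorator; on a real serverless platform the equivalent data is collected for you, and the `sign_up` function here is a hypothetical stand-in:

```python
import time
from collections import defaultdict
from functools import wraps

# Emulation of the per-function telemetry a serverless platform records:
# invocation counts and latency, keyed by function name. On a real
# platform this collection is automatic; here a decorator stands in.
metrics = defaultdict(lambda: {"invocations": 0, "total_latency_s": 0.0})

def telemetry(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        m = metrics[fn.__name__]
        m["invocations"] += 1
        m["total_latency_s"] += elapsed
        return result
    return wrapper

@telemetry
def sign_up(user):  # hypothetical function, one metric stream per function
    return {"status": "created", "user": user}

sign_up("alice")
sign_up("bob")
print(metrics["sign_up"]["invocations"])  # 2
```

Because each function gets its own metric stream, questions like “what is the p99 latency of sign up?” need no extra instrumentation.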
Serverless architecture also eliminates operational toil around deployed code, since the servers running serverless applications are not part of team members’ day-to-day operational responsibilities. Simply put, there’s no need for SSH, configuration management, or many of the other activities required to operate traditional applications.
Also, it’s far easier to deploy applications when there are no servers to worry about. Users package software into an archive and call the deployment APIs. This process is the same regardless of language. Serverless platforms also maintain previous versions, so building UAT/staging/production pipelines is possible, in addition to rollbacks.
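As a sketch of that deploy step, the archive can be built in a few lines of Python; the provider call is shown against AWS Lambda’s `update_function_code` API and is commented out because it requires real credentials and an existing function (the function name is a placeholder):

```python
import io
import zipfile

# "Package software into an archive and call the deployment API":
# build the zip in memory, then hand the bytes to the provider's API.
def package(handler_source, filename="handler.py"):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, handler_source)
    return buf.getvalue()

archive = package("def handler(event, context):\n    return {'ok': True}\n")

# On AWS, deployment is then a single API call (needs credentials and an
# existing function; "my-function" is a placeholder name):
# import boto3
# boto3.client("lambda").update_function_code(
#     FunctionName="my-function", ZipFile=archive, Publish=True)
```

The same package-then-upload flow applies regardless of language; only the archive contents change.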
Building Serverless Applications
Serverless architecture is one of the most exciting developments in the IT world today. Open-source tooling is constantly improving, and platforms are competing to win over the market. Google Cloud Platform, Microsoft Azure, and Amazon Web Services all provide serverless features. Companies such as Binaris improve the serverless experience, and open-source frameworks tie the entire package together. Clearly, now is the best time to build serverless applications.
To build a serverless application, start by choosing a provider. Binaris, Google, Microsoft, and Amazon all offer serverless products with similar functionalities: a deployment API, integrated metrics, logging, and HTTP endpoints. The providers vary in cost, supported languages, and integrations across their platforms.
Teams adopting serverless tend to select the cloud provider they’re most familiar with. However, choosing a different cloud provider for small serverless applications is a great way to evaluate and learn about other options. Google, Microsoft, and Amazon are in heavy competition with each other regarding pricing and features, which opens the market to other competitors, such as Binaris, who strive to build the fastest and cheapest platforms possible.
It’s easy to deploy a function that handles an HTTP request or processes a message from a queue. The friction begins when you deploy applications composed of multiple functions. AWS Lambda users must configure an Amazon API Gateway that maps requests to AWS Lambda functions, but there’s no bundled support for deploying the API Gateway and associated Lambdas together. Considering that the cloud provider is the lowest layer, look to the ecosystem to fill in the gaps.
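For context, here is roughly what a single function behind Amazon API Gateway’s Lambda proxy integration looks like: the request arrives as a JSON event, and the response must carry an HTTP status code and a string body. The handler can be exercised locally with a hand-built event, no gateway required:

```python
import json

# An AWS Lambda handler behind Amazon API Gateway (proxy integration).
# The event carries the HTTP request; the return value must include a
# statusCode and a string body.
def handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Invoked locally with a sample event for unit testing.
response = handler({"queryStringParameters": {"name": "serverless"}}, None)
print(response["statusCode"])  # 200
```

Writing one such handler is easy; the friction the text describes comes from mapping many routes to many functions, which is exactly the gap frameworks fill.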
Frameworks handle the grunt work of configuring, deploying, and composing applications and are useful when building applications with multiple functions. They keep applications portable by abstracting providers into consistent APIs. They also handle wiring up calling mechanisms, such as HTTP requests, or async triggers, such as queues. Right now, the Serverless Framework is the best option. It deploys entire sets of functions with rollback and versioning support. It also handles configurations, such as environment variables and secrets, so there’s a clear separation between code and data. Apex is a lighter-weight solution with similar features if you want something closer to the metal.
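A minimal `serverless.yml` sketch shows the kind of wiring the Serverless Framework handles; the service and function names here are placeholders for illustration:

```yaml
# Minimal Serverless Framework config (service/function names are
# placeholders). One deploy command provisions the function, the HTTP
# endpoint, and the environment configuration together.
service: notifications

provider:
  name: aws
  runtime: python3.9
  environment:
    TABLE_NAME: ${env:TABLE_NAME}   # config/secrets kept out of code

functions:
  signup:
    handler: handler.sign_up
    events:
      - http:            # wires an HTTP endpoint to the function
          path: signup
          method: post
```

Running `serverless deploy` from this directory creates the function and its API Gateway route in one step, with versioning and rollback handled by the framework.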
Teams can deploy a new serverless application in under an hour, so it’s easy to break a chunk off the monolith or build a chatbot. Quick wins like these may capture developers’ attention, but they aren’t what makes serverless compelling in the long term. Serverless computing provides two long-term architectural benefits: improved composition and durability.
Well-thought-out serverless applications are better factored than monolithic architectures. Layers, such as authentication, authorization, data, and business logic, are separate. This provides teams with more control and the flexibility to adapt to future requirements. Individual functions of a serverless application can shift from async to sync or from HTTP request handling to stream processing, all without drastic re-architecting.
Today, software engineering is trending toward building distributed systems, which requires a different mindset regarding availability and durability. Since serverless architecture promotes durability, it’s a good option. Also, for better or worse, applications are going global, and popular web services have to deal with higher traffic than ever before. Serverless architecture provides a solution here too. Independent functions can be deployed to separate geographical regions, communicating via a shared global data store, such as Amazon DynamoDB. Although this setup isn’t a necessity now, serverless platforms will continue to improve until a globally distributed and durable application is the default, not the exception.
Serverless has already affected other technologies, pushing them to meet the industry’s demands for cheaper and more elastic platforms. For example, AWS Fargate applies the serverless paradigm to containers. Teams can build a containerized application but run and bill it as if it were a serverless function—bridging two technologies.
Serverless computing is relatively new, and best practices and established solutions are still emerging from the ecosystem. Adapting to building applications as discrete functions is challenging for teams used to building monoliths. Successful transitions require a mental shift to approach architecture differently and the confidence to tackle issues as they arise in the evolving landscape.
For the development of the serverless ecosystem, 2018 is an exciting year. Platforms such as Binaris are making serverless accessible to more teams with cheaper on-demand pricing and improved latency. Frameworks provide engineers with the tools to develop and deploy complex serverless applications across providers. Most importantly, this allows engineers to focus more on development and less on operations. So, why wait? Give serverless a try.