
Manage APIs with microservices in a Lambda architecture

When you use Serverless Framework in an AWS environment, you might run into resource limits. Chris Moyer explains how to use microservices to avoid this problem.

After I attended my first Serverlessconf in April 2017, I was convinced that the new Serverless Framework was a better fit for me than Apex, which I had previously used with Amazon API Gateway to manually configure and manage APIs. That decision proved to be a good one.

Serverless Framework provides a robust set of plug-ins, including ones that let you write applications in TypeScript and test them offline, and it helped me manage APIs as they grew in complexity. It also made it easy to add a new API with just a few lines of code.
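For example, a service registers plug-ins under a plugins block in serverless.yml. The names below are the common community packages for those features, shown here as a sketch rather than my exact setup:

plugins:
  - serverless-plugin-typescript   # compile TypeScript handlers at deploy time
  - serverless-offline             # emulate API Gateway and Lambda locally
  - serverless-domain-manager      # map services onto a custom API domain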

But one day, I added a new API endpoint, then attempted a new development deployment in AWS and hit a bizarre error message:

The CloudFormation template is invalid: Template format error: Number of resources, 201, is greater than maximum allowed, 200

I wasn't aware that CloudFormation templates had a limit on the number of resources, and I certainly didn't have more than 200 API endpoints. So, what caused the issue?

As it turns out, when you create a new API endpoint in Serverless Framework, the generated CloudFormation template includes several resources: a Lambda function, a new API endpoint in API Gateway and a few other items, all to manage just one new entry.
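To make that concrete, a single endpoint defined like this (a minimal sketch; the function and path names are illustrative):

functions:
  getCampaign:
    handler: index.get
    events:
      - http:
          path: campaigns/{id}
          method: get

expands in the generated template into a Lambda function, a Lambda version, a log group, a permission that lets API Gateway invoke the function, and the API Gateway resource and method entries. At roughly five or six resources per endpoint, the 200-resource ceiling arrives much sooner than you'd expect.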

Serverless Framework doesn't cause this limit, but it can help you work around it. Instead of one monolithic service, set up several smaller services, which you can later map onto one API hostname with the serverless-domain-manager plug-in.

Split the monolith into microservices

It's important to consider how you manage Serverless Framework code. Although I created separate AWS Lambda functions for each of my services, I didn't actually isolate the code from other services. Serverless Framework bundles the code for every function in a service into one zip file and uploads it as the source code for all of that service's Lambda functions. It then configures each function separately, using the shared code base, so that each function triggers a different part of your code.
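A sketch of what that looks like in a serverless.yml (the service name and handler paths here are hypothetical):

service: campaigns-service

functions:
  list:
    handler: campaigns/list.handler     # both handlers ship in the same zip artifact
  remove:
    handler: campaigns/remove.handler   # each function just points at a different entry point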

Additionally, Serverless Framework creates a single CloudFormation template for the entire service, which means developers can run into two limits: the Lambda deployment package growing too large, as well as the 200-resource limit in CloudFormation.

To get around this issue, I created several microservices to manage serverless functions, each with a specific purpose. Some microservices included multiple endpoints for the same resource type, such as GET and DELETE, but entirely separate endpoints didn't need to be in the same monolithic service. They don't even need to be in the same Git repository.

To merge these API endpoints, I added the serverless-domain-manager plug-in to each service and configured the base path of each service:

custom:
  stage: ${opt:stage, self:provider.stage}
  customDomain:
    domainName: ${file(../../env/${self:custom.stage}.yml):API_HOSTNAME}
    basePath: "campaigns"
    stage: "${self:custom.stage}"

With Serverless Framework configuration files, you can actually reference another YAML file, even one in another directory, as I did here with domainName.
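The referenced environment file might look something like this (hypothetical values; only the API_HOSTNAME key matters for the mapping above):

# env/dev.yml
API_HOSTNAME: api-dev.example.com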

But basePath cannot contain a forward slash character, so I could not add something like /campaigns/{id} as a separate service. Additionally, you can only map one service to any given basePath.

These two limitations mean any functions that need to exist under a given basePath must also exist under the same service. In my case, this meant that Create, Read, Update and Delete functions all needed to exist in this single Campaigns service.
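In practice, that means the Campaigns service holds all of those operations, along these lines (a sketch; the handler names are assumptions):

functions:
  create:
    handler: index.create
    events:
      - http:
          path: ""
          method: post
  get:
    handler: index.get
    events:
      - http:
          path: "{id}"
          method: get
  # update and delete follow the same pattern on the "{id}" path

Because the service is mapped to the campaigns basePath, the empty path resolves to /campaigns and "{id}" resolves to /campaigns/{id}.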

Share code between services

Of course, all of my services needed some shared code, which meant I had to move that code into a separate package rather than rely on a local file shared among my APIs.

You can specify local Node.js packages with the file: prefix in package.json. For code in the same repository as the current function, I was able to include a virtual package that references another directory:

  "dependencies": {

    "cnbapi": “file:../../shared"

Additionally, I created a shared Authorizer function to enable all of the API endpoints to share the same authentication method. I created a separate service for the Authorization function and then linked it in manually via an Amazon Resource Name:

functions:
  list:
    handler: index.list
    events:
      - http:
          path: ""
          method: get
          cors: true
          authorizer: "arn:aws:lambda:us-east-1:XXX:function:cnbapi-auth-${self:custom.stage}-default"

Unfortunately, this link also means that you can't use Serverless Offline to debug these functions. This increases the importance of unit testing when you create microservices, because it's much more difficult to perform offline testing of the entire application due to the cloud-based Authorizer function. Fortunately, Serverless Framework supports multiple stages, so it's easy to deploy to a development endpoint first. It's simple to manage changes in development, migrate them to staging, perform manual tests and then release them to production.
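One common way to wire that up (a sketch, not my exact configuration) is to give the provider block a default stage and override it at deploy time:

provider:
  name: aws
  stage: ${opt:stage, 'dev'}    # "serverless deploy --stage staging" or "--stage prod" overrides the default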

Isolation is crucial, and dividing the application into multiple services or microservices is a much better approach in the long run. While this approach does slow down deployment, because it takes time to deploy multiple services instead of just one, it's much more reliable overall, helps me manage APIs and ensures failures don't cascade across multiple endpoints.
