Serverless architecture with Lambda
A great learning experience with GraphQL, SAM, Lambda and Redis
Recently, we had the opportunity to set up an information website using Salesforce as the CMS and GraphQL + AWS Lambda functions as the serverless backend.
Why use a serverless architecture with FaaS (Function as a Service)?
- Cost saving: we pay only for the resources used, and only while the application uses them. The experience gained from piloting this small system on a serverless architecture would also help us shift our existing, more complex system to FaaS later and achieve greater cost savings.
- Scaling: AWS Lambda handles horizontal scaling for us automatically to deal with sudden spikes in usage.
Key Considerations:
Local Development & Continuous Integration
Before getting started with AWS Lambda, we wanted to make sure we could develop locally, to ensure a fast feedback loop, easy debugging and a good development experience. There were two main frameworks for building AWS serverless applications locally at the time: SAM and the Serverless Framework. We ended up using AWS SAM because it is developed and owned by AWS.
To use GraphQL on AWS, we had two options: AppSync or Lambda + API Gateway. Even though AppSync was the preferred option, we could not get AppSync working with SAM at the time of writing, so we went with the good old Lambda + API Gateway combination with SAM.
AWS SAM (Serverless Application Model)
SAM runs functions locally using Docker containers. A SAM template YAML file tells SAM how to package and deploy the application; an example of the fields in the YAML file can be found in the SAM GitHub HOWTO readme. CodeUri is a path to a folder that tells AWS which folder to zip up and upload to AWS, while Handler points to the exported function that we want to invoke on Lambda. The safest approach is to set the path to the root folder so that the whole project, including all of the dependencies the functions rely on, is packaged.
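Below is a minimal sketch of what such a template can look like; the function name, runtime and paths are illustrative placeholders, not our actual configuration.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  GraphqlFunction:                    # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./                     # zip up the whole project so all dependencies are included
      Handler: src/graphql.handler    # exported function to invoke on Lambda
      Runtime: nodejs12.x
      Events:
        GraphqlApi:
          Type: Api                   # exposes the function through API Gateway
          Properties:
            Path: /graphql
            Method: post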
Lambda talking to Lambda
One issue we faced with this architecture in the local development setup was getting the local Docker containers to communicate with each other the way real Lambda functions do. It was difficult to get the GraphQL function's Docker container to fetch data from the Article/Category function's Docker container, although this was not an issue on actual Lambda. A quick workaround was to create a local GraphQL server using apollo-server instead of apollo-server-lambda. In local development, we use the local server built with apollo-server to avoid the Docker-to-Docker communication issue, and swap back to the apollo-server-lambda server when deploying to AWS.
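A rough sketch of how this swap can be wired up is shown below; the schema module and the LOCAL_DEV environment variable are illustrative assumptions rather than our actual code.

// server.js - sketch of switching between apollo-server (local) and apollo-server-lambda (AWS)
const { typeDefs, resolvers } = require('./schema'); // hypothetical schema module

if (process.env.LOCAL_DEV === 'true') {
  // Local development: run a plain HTTP GraphQL server, no Docker-to-Docker calls needed
  const { ApolloServer } = require('apollo-server');
  const server = new ApolloServer({ typeDefs, resolvers });
  server.listen({ port: 4000 }).then(({ url }) => console.log(`GraphQL ready at ${url}`));
} else {
  // Deployed to AWS: wrap the same schema in a Lambda handler behind API Gateway
  const { ApolloServer } = require('apollo-server-lambda');
  const server = new ApolloServer({ typeDefs, resolvers });
  exports.handler = server.createHandler();
}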
Deploying with SAM
Locally: sam local start-lambda
This starts a local endpoint that emulates the AWS Lambda invoke endpoint. SAM also allows us to invoke Lambda functions and start API Gateway locally; refer to the SAM CLI docs for more information. In our case, we also start the local GraphQL server and a local Redis server when running locally.
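For example (the function name below is a placeholder, and the port is SAM's default), the emulated endpoint can be exercised with the AWS CLI or with SAM directly:

# Start the local Lambda invoke endpoint (defaults to http://127.0.0.1:3001)
> sam local start-lambda

# Invoke a function against the local endpoint with the AWS CLI
> aws lambda invoke \
    --function-name "GraphqlFunction" \
    --endpoint-url "http://127.0.0.1:3001" \
    out.json

# Or invoke a single function / start a local API Gateway with SAM itself
> sam local invoke "GraphqlFunction"
> sam local start-api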
Remotely: Create an S3 bucket on AWS first, as SAM packages the code artifacts and uploads them to that bucket. Then run sam package --template-file sam.yaml --s3-bucket mybucket --output-template-file packaged.yaml followed by sam deploy --template-file ./packaged.yaml --stack-name mystack --capabilities CAPABILITY_IAM
SAM creates packaged.yaml and deploys the stack under the given stack name. The deployed Lambda functions will then be visible in the AWS console.
Deploying without SAM
In some cases, we may want to deploy the Lambda functions manually, e.g. configuring the deployment of each function as a parallel job in the CI pipeline so that a failure in one deployment does not affect the other function deployments.
// To deploy the functions without SAM

// 1. Zip the whole function folder, including dependencies, and upload it to S3
> cd "$ROOT_DIRECTORY"
> zip -rq "$FUNCTION_NAME.zip" ./
> aws s3 cp \
    "$ROOT_DIRECTORY/$FUNCTION_NAME.zip" \
    "s3://my-s3-bucket/$FUNCTION_NAME.zip"

// 2. Create a corresponding Lambda for each function on AWS; a deployment then only requires an update to the existing function
> aws s3 cp \
    "s3://my-s3-bucket/$FUNCTION_NAME.zip" \
    "$DEPLOYMENT_DIR/$FUNCTION_NAME.zip"
> aws lambda update-function-code \
    --function-name "$LAMBDA_NAME" \
    --zip-file "fileb://$DEPLOYMENT_DIR/$FUNCTION_NAME.zip" \
    --region ap-southeast-2 \
    --publish
> aws lambda tag-resource \
    --resource "arn:aws:lambda:ap-southeast-2:371404392502:function:$LAMBDA_NAME" \
    --tags "VersionHash=$SHA1" \
    --region ap-southeast-2
Troubleshooting
// Problem
Waiting for changeset to be created..
Waiting for stack create/update to complete
Failed to create/update the stack. Run the following command
to fetch the list of events leading up to the failure
aws cloudformation describe-stack-events --stack-name sherrystack

// Solution
=> Region might be set incorrectly! Set the region to be the correct region
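For instance (ap-southeast-2 is the region we deploy to; substitute your own), the region can be checked and set for the AWS CLI, or passed explicitly to SAM:

# Check which region the AWS CLI is currently configured to use
> aws configure get region

# Set the default region
> aws configure set region ap-southeast-2

# Or pass the region explicitly when deploying
> sam deploy \
    --template-file ./packaged.yaml \
    --stack-name mystack \
    --capabilities CAPABILITY_IAM \
    --region ap-southeast-2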