## 💡 Building a Practical Serverless API with SAM: Leveraging Nested Stacks for Organization

This example demonstrates how to build and deploy a simple serverless API using AWS Lambda, API Gateway, and AWS SAM (Serverless Application Model). The core focus is on leveraging modularity and nested stacks to create a well-organized and scalable project structure.

### Project Structure

Here’s a quick overview of the project’s key components:

- `api.yaml`: This SAM file defines the API Gateway. It is responsible for linking HTTP paths (like `/orders` and `/logs`) to the appropriate Lambda functions. Crucially, it imports the API definition from `openapi.yaml`.
- `log.yaml`: This SAM file contains the definition for the Lambda function responsible for fetching logs (`GetLogs`).
- `order.yaml`: This SAM file contains the definition for the Lambda function responsible for fetching orders (`GetOrders`).
- `openapi.yaml`: The OpenAPI (Swagger) file that specifies the API paths (e.g., `/orders`, `/logs`) and how they integrate with the Lambda functions. Here, we define the `x-amazon-apigateway-integration` that links each path to the ARN of the relevant Lambda function.
- `template.yaml`: The main SAM template for your project. It acts as an orchestrator for all the sub-stacks (`OrderStack`, `LogStack`, `ApiStack`). Each sub-stack points to its respective YAML file using the `Location` property of `AWS::Serverless::Application`.

### Why Modularity and Nested Stacks?

**Modularity & Organization:** When you split your project into separate files like `order.yaml`, `log.yaml`, and `api.yaml`, your project becomes easier to organize and understand. Each file focuses on one part of the app, so it’s easier to work on and update, especially when the project gets bigger.

...
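As a concrete illustration, the root `template.yaml` described above might look like the following minimal sketch. The parameter and output names (`GetOrdersFunctionArn`, `GetLogsFunctionArn`) are illustrative assumptions, not the repo’s actual identifiers:

```yaml
# template.yaml — root stack orchestrating the sub-stacks (sketch)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  OrderStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: order.yaml

  LogStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: log.yaml

  ApiStack:
    Type: AWS::Serverless::Application
    Properties:
      Location: api.yaml
      Parameters:
        # Assumed outputs of the function sub-stacks, wired into the API stack
        GetOrdersFunctionArn: !GetAtt OrderStack.Outputs.GetOrdersFunctionArn
        GetLogsFunctionArn: !GetAtt LogStack.Outputs.GetLogsFunctionArn
```

Because the API stack receives the function ARNs as parameters, the OpenAPI integration URIs can be filled in without hard-coding anything.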
## 🚀 Deploy a Python App to AWS with Elastic Beanstalk & Terraform

This example shows how to build and deploy a simple Python application using AWS Elastic Beanstalk for managed hosting and Terraform for infrastructure as code. The `deploy.sh` script automates the application packaging and deployment process.

### Project Structure

Here’s a quick overview of the project’s key components:

- `app/`: Contains the Python application, `main.py`, along with a `Procfile` for defining the web server and `requirements.txt` for Python dependencies.
- `terraform/`: Holds all the Terraform configuration files (`.tf`) that define the AWS infrastructure:
  - `main.tf`: Defines the core AWS resources like the S3 bucket for application versions, IAM roles for Elastic Beanstalk, and the Elastic Beanstalk application and environment themselves.
  - `variables.tf`: Declares input variables for customizability (e.g., AWS region, instance type).
  - `outputs.tf`: Exports important values like the S3 bucket name and the Elastic Beanstalk environment URL.
- `deploy.sh`: A shell script that orchestrates the deployment process:
  1. Reads outputs from Terraform to get dynamic values.
  2. Zips the `app/` directory.
  3. Uploads the zipped application to the designated S3 bucket.
  4. Creates a new Elastic Beanstalk application version.
  5. Updates the Elastic Beanstalk environment to use the new version.

### How It Works

First, Terraform provisions all necessary AWS resources for Elastic Beanstalk. Then, the `deploy.sh` script automates the rest: it fetches deployment details from Terraform, zips your Python application, uploads it to S3, and finally updates your Elastic Beanstalk environment with the new application version.

...
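The five deployment steps above can be sketched as a minimal `deploy.sh`. The application name and the Terraform output names (`bucket_name`, `environment_name`) are illustrative assumptions, not the repo’s actual values; the `DRY_RUN` guard (on by default here) prints each Terraform/AWS command instead of executing it, so the flow can be inspected without an AWS account:

```shell
#!/usr/bin/env bash
set -eu

# DRY_RUN=1 (default here) echoes commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

APP_NAME="python-eb-app"        # assumption: Elastic Beanstalk application name
VERSION_LABEL="v-$(date +%s)"   # unique label per deployment

# 1. Read dynamic values from Terraform outputs
BUCKET=$(run terraform -chdir=terraform output -raw bucket_name)
ENV_NAME=$(run terraform -chdir=terraform output -raw environment_name)

# 2. Zip the application source
run zip -r app.zip app/

# 3. Upload the bundle to S3
run aws s3 cp app.zip "s3://${BUCKET}/${VERSION_LABEL}.zip"

# 4. Register a new Elastic Beanstalk application version
run aws elasticbeanstalk create-application-version \
  --application-name "$APP_NAME" \
  --version-label "$VERSION_LABEL" \
  --source-bundle "S3Bucket=${BUCKET},S3Key=${VERSION_LABEL}.zip"

# 5. Point the environment at the new version
run aws elasticbeanstalk update-environment \
  --environment-name "$ENV_NAME" \
  --version-label "$VERSION_LABEL"
```

Using a timestamped version label keeps every deployment distinct, so rolling back is just another `update-environment` call with an older label.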
## ⚙️ Serverless Image Processing with AWS SAM: Uploads, Thumbnails & S3 Events

This example demonstrates a robust serverless architecture on AWS for handling file uploads and automated thumbnail generation. It leverages AWS SAM (Serverless Application Model), Lambda functions, API Gateway, and S3 event notifications to create a streamlined and scalable image processing pipeline.

### Project Structure

Here’s a quick overview of the project’s key components:

- `functions/`: Contains the core Lambda function code:
  - `event.mjs`: Handles S3 event notifications, triggering the thumbnail generation process.
  - `upload.mjs`: Manages the initial file upload through API Gateway.
- `layers/nodejs/`: A Lambda layer containing shared code and dependencies for the functions:
  - `lib/utils.mjs`: Common utility functions.
  - `package.json` & `package-lock.json`: Node.js dependencies for the layer.
- `template.yaml`: The AWS SAM template defining the serverless resources: Lambda functions, API Gateway endpoints, S3 buckets, and their respective permissions and event triggers.

### How It Works

The workflow is straightforward and efficient:

...
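The upload-and-thumbnail wiring described above can be sketched in `template.yaml` roughly like this (resource names, paths, and runtime are illustrative assumptions):

```yaml
Resources:
  UploadFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/
      Handler: upload.handler
      Runtime: nodejs18.x
      Events:
        UploadApi:                  # exposed through API Gateway
          Type: Api
          Properties:
            Path: /upload
            Method: post

  ThumbnailFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/
      Handler: event.handler
      Runtime: nodejs18.x
      Events:
        ImageUploaded:              # fires on every new object in the bucket
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket
            Events: s3:ObjectCreated:*

  UploadBucket:
    Type: AWS::S3::Bucket
```

Note that for SAM’s `S3` event type, the bucket must be declared in the same template, which is why `UploadBucket` lives alongside the functions here.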
## ⚡️ Build a Simple API on AWS with Lambda, API Gateway, and SAM

This example shows how to build and deploy a simple serverless API using AWS Lambda, API Gateway, and AWS SAM (Serverless Application Model). The API fetches a list of todos from a public endpoint and returns them via an HTTP GET request.

### Project Structure

Here’s a quick overview of the project’s key components:

- `functions/`: Contains the Lambda function `ListTodos.mjs`, which fetches todo items from a public API using Axios.
- `tests/`: Contains a Jest test to ensure the function returns items with a 200 status code.
- `layers/`: Optional shared layer for common dependencies (e.g., `axios`), defined under `nodejs/package.json`.
- `template.yaml`: AWS SAM template that defines the infrastructure:
  - Lambda function (`ListTodosFunction`)
  - API Gateway (`/posts`)
  - Shared layer

### How It Works

When a client makes a GET request to `/todos`, the Lambda function is triggered, fetches data from `https://jsonplaceholder.typicode.com/todos?_limit=3`, and returns it as JSON.

...
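A hypothetical version of `ListTodos.mjs` might look like the sketch below. The structure is an assumption: the HTTP client is injected so the handler can be exercised offline (which is also what makes the Jest test above straightforward), whereas the repo calls Axios directly:

```javascript
// Sketch of functions/ListTodos.mjs (shape is an assumption, not the repo's code).
// With axios, the injected client would be: (url) => axios.get(url).then((r) => r.data)
const TODOS_URL = "https://jsonplaceholder.typicode.com/todos?_limit=3";

const makeHandler = (getJson) => async () => {
  try {
    const todos = await getJson(TODOS_URL);
    return {
      statusCode: 200,
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(todos),
    };
  } catch (err) {
    // Surface upstream failures as a 500 instead of crashing the invocation
    return { statusCode: 500, body: JSON.stringify({ message: err.message }) };
  }
};

// In the repo this would be exported for API Gateway, e.g.:
// export const handler = makeHandler(defaultClient);
```

Factoring the handler this way keeps the network dependency at the edge, so tests only need a stub client rather than mocking the module system.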
## 🚀 Host Static Website on AWS with Terraform, S3, and CloudFront

This example outlines a method for deploying static websites on Amazon Web Services (AWS) using Terraform for infrastructure as code, Amazon S3 for content storage, and Amazon CloudFront for global content delivery. The deployment process is fully automated.

### Project Structure

Below is an overview of the project’s key directories and files:

- `public/`: Contains the static website assets, such as `index.html` and `404.html`. These are the files that will be served to users.
- `terraform/`: This directory holds all the Terraform configuration files (`.tf`) responsible for provisioning and managing AWS resources:
  - `main.tf`: Defines the core AWS resources, including the S3 bucket and CloudFront distribution.
  - `variables.tf`: Contains input variables to parameterize the Terraform configuration (e.g., bucket names, domain names).
  - `outputs.tf`: Specifies output values that are useful after Terraform applies the configuration, such as the CloudFront distribution domain name.
- `deploy.sh`: A shell script to execute the Terraform plan and apply the infrastructure, including uploading static files to S3.
- `destroy.sh`: A shell script to tear down all provisioned AWS resources.

Project Repository: https://github.com/OmarMakled/aws-terraform-s3

...
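A minimal sketch of what the `main.tf` described above could contain. Resource names and settings are assumptions, and the repo’s actual file may differ (for instance, it may add an origin access identity or a custom domain):

```hcl
# terraform/main.tf — sketch, not the repo's actual configuration
resource "aws_s3_bucket" "site" {
  bucket = var.bucket_name
}

resource "aws_cloudfront_distribution" "cdn" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id   = "s3-site"
    s3_origin_config {
      origin_access_identity = ""   # assumption: public bucket; use an OAI/OAC otherwise
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }

  custom_error_response {
    error_code         = 404
    response_code      = 404
    response_page_path = "/404.html"   # serves the 404.html from public/
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

The `custom_error_response` block is what maps missing objects to the project’s `404.html` page.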
## 🔔 Customizing Cognito Emails with AWS SAM and Lambda

This example shows how to use AWS SAM to create a user sign-up system with Amazon Cognito and customize the automatic emails it sends, such as account verification and password reset messages. This is a great approach when you want your application’s messages to match your brand’s style.

### Project Components

The project has two main files:

- `cognito-email/index.js`: Contains a simple Lambda function whose job is to rewrite the content of the emails.
- `template.yaml`: Tells AWS what to build. It includes:
  - **Cognito User Pool**: The place where user data is saved.
  - **Lambda Function**: A simple function that runs when needed.
  - **Permissions**: These allow Cognito to call the Lambda function.

### How It Works

This setup customizes the email messages that Cognito sends in different user flows, such as:

...
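A hypothetical version of the function in `cognito-email/index.js` might look like this. Cognito invokes a `CustomMessage` trigger with the event shape shown; the subjects and bodies below are invented placeholders, not the repo’s actual wording:

```javascript
// Sketch of a Cognito "CustomMessage" Lambda trigger (wording is illustrative).
// Cognito passes the event in, the function mutates event.response, and
// Cognito uses the returned subject/message when sending the email.
const handler = async (event) => {
  // "{####}" placeholder that Cognito substitutes with the real code
  const code = event.request.codeParameter;

  if (event.triggerSource === "CustomMessage_SignUp") {
    event.response.emailSubject = "Welcome! Please verify your account";
    event.response.emailMessage =
      `<h1>Thanks for signing up</h1><p>Your verification code is ${code}.</p>`;
  } else if (event.triggerSource === "CustomMessage_ForgotPassword") {
    event.response.emailSubject = "Reset your password";
    event.response.emailMessage =
      `<p>Your password reset code is ${code}.</p>`;
  }

  return event; // Cognito reads the mutated response back
};

// In the repo this would be exported, e.g.: exports.handler = handler;
```

Branching on `event.triggerSource` is how one function covers several flows (sign-up verification, forgotten password, and so on) without separate Lambdas.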
## 🔔 Serverless API Monitoring with AWS SAM: Lambda, API Gateway & CloudWatch Alarms

This example demonstrates how to build a lightweight, serverless API using AWS SAM (Serverless Application Model) while integrating real-time error monitoring and email alerts using CloudWatch Alarms and SNS. This pattern is ideal for production-ready serverless applications that need observability without heavy tooling.

### Project Structure

The project includes the following core components:

- `functions/`: Contains the Lambda function that handles HTTP requests:
  - `app.mjs`: Returns a simple response (e.g., “Hello World”).
- `template.yaml`: The SAM template defining resources, including:
  - Lambda function (`hello` handler)
  - API Gateway (to expose the function via HTTP)
  - CloudWatch alarm (monitors API 5XX errors)
  - SNS topic & subscription (sends email alerts)

### How It Works

This architecture sets up an API and monitors its availability in real time:

...
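The alarm-and-notification part of such a template can be sketched as follows. The threshold, email address, and resource names are illustrative assumptions; for the implicit REST API that SAM creates, the `ApiName` dimension matches the stack name:

```yaml
Resources:
  AlertTopic:
    Type: AWS::SNS::Topic

  AlertSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref AlertTopic
      Protocol: email
      Endpoint: you@example.com      # placeholder address

  Api5xxAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Alert when the API returns 5XX errors
      Namespace: AWS/ApiGateway
      MetricName: 5XXError
      Dimensions:
        - Name: ApiName
          Value: !Ref AWS::StackName  # SAM names the implicit API after the stack
      Statistic: Sum
      Period: 60
      EvaluationPeriods: 1
      Threshold: 1                    # assumption: alert on the first error
      ComparisonOperator: GreaterThanOrEqualToThreshold
      AlarmActions:
        - !Ref AlertTopic
```

The email subscription must be confirmed once (SNS sends a confirmation link) before alerts start arriving.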
## 🧪 Organizing Serverless Code with AWS SAM: Lambda Layers and Testing with Jest

This example demonstrates how to build clean, testable, and modular serverless applications on AWS using AWS SAM, Lambda Layers, and Jest. It focuses on two essential practices for scalable serverless development:

- Using Lambda Layers to share code across functions
- Using Jest to properly test each function and its dependencies

By combining these two patterns, you get a powerful structure that’s maintainable, reusable, and confidently testable.

### Project Overview: Shared Logic + Layer for Node Modules + Testing

- `functions/` – Your Lambda function handlers:

...
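One common way to make layered code testable with Jest is a `moduleNameMapper` entry, since Lambda serves layer code from `/opt/nodejs` at runtime while Jest runs against the local source tree. The mapping below is an assumption about this project’s layout, shown as a sketch:

```javascript
// jest.config.js — sketch (paths are assumptions about the repo layout)
module.exports = {
  testEnvironment: "node",
  moduleNameMapper: {
    // At runtime, Lambda mounts the layer at /opt/nodejs; in tests,
    // resolve those imports to the local layer source instead.
    "^/opt/nodejs/(.*)$": "<rootDir>/layers/nodejs/$1",
  },
};
```

With this in place, a handler that does `import { format } from "/opt/nodejs/lib/utils.mjs"` works unchanged both on Lambda and under Jest.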
## 📦 Organize AWS Lambda Projects with Nested Stacks and Layers in SAM

This example shows how to build a modular AWS Lambda project using SAM (Serverless Application Model) with nested stacks and a shared layer. This helps keep your project clean and organized by separating logic and shared dependencies.

### How It Works

- `layer.yaml` creates a shared Lambda layer that contains Node.js packages (like `axios`).
- `order.yaml` and `log.yaml` are two Lambda functions that use this shared layer.
- The root template (`template.yaml`):
  - Creates the layer stack.
  - Passes the layer reference to the other stacks as a parameter.

Build and deploy with:

```shell
sam build
sam deploy --guided
```

Project Repository: https://github.com/OmarMakled/sam-nested-layer

...
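The layer hand-off can be sketched as two files, shown here as one YAML stream; the output and parameter names (`SharedLayerArn`) are illustrative assumptions:

```yaml
# layer.yaml (sketch): publish the shared layer and expose its ARN
Resources:
  SharedLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: layers/nodejs/
      CompatibleRuntimes:
        - nodejs18.x
Outputs:
  SharedLayerArn:
    Value: !Ref SharedLayer   # !Ref on a LayerVersion resolves to its ARN
---
# order.yaml (sketch): accept the ARN as a parameter and attach the layer
Parameters:
  SharedLayerArn:
    Type: String
Resources:
  OrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functions/
      Handler: order.handler
      Runtime: nodejs18.x
      Layers:
        - !Ref SharedLayerArn
```

The root template then wires the two together with `!GetAtt LayerStack.Outputs.SharedLayerArn` in the child stack’s `Parameters`, so the layer is built once and reused by every function stack.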
## 🔄 Using Parameters vs. Export/Import in AWS SAM: What’s Better?

When you build serverless applications with AWS SAM or CloudFormation, sometimes you need to share resources between stacks. For example, you may have:

- A Cognito User Pool in one stack (for user authentication)
- An API Gateway in another stack that needs to use that Cognito pool

There are two main ways to connect these stacks:

### Option 1: Using Parameters

You can pass values like the User Pool ARN or ID as a parameter when you deploy the second stack.

...
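Option 1 might look like this in the consuming stack’s template (the parameter and authorizer names are illustrative assumptions):

```yaml
# Sketch of the API stack that consumes the pool via a parameter
Parameters:
  UserPoolArn:
    Type: String   # supplied at deploy time by the caller

Resources:
  Api:
    Type: AWS::Serverless::Api
    Properties:
      StageName: prod
      Auth:
        DefaultAuthorizer: CognitoAuth
        Authorizers:
          CognitoAuth:
            UserPoolArn: !Ref UserPoolArn   # wire the external pool in
```

The value is then supplied at deploy time, for example with `sam deploy --parameter-overrides UserPoolArn=<arn-from-the-auth-stack>`.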