AWS Lambda

AWS Lambda Deep Dive: A Serverless Design Guide Learned Through Comparing Google Cloud Functions, Cloud Run, and Azure Functions

Introduction (Key Summary)

  • The main subject of this article is AWS Lambda, AWS’s serverless execution platform. Lambda is a function execution service that lets you deploy code as “functions” without thinking about server counts, OS, or middleware, and it runs automatically in response to events.
  • Pricing is based on request count and execution time × memory, using a pay-as-you-go model where you don’t pay for idle time. Execution time is measured in milliseconds, and the free tier includes 400,000 GB-seconds of compute time plus 1 million requests per month.
  • Google’s comparable services are mainly Cloud Functions (the 2nd generation is now integrated into Cloud Run as Cloud Run functions), and Azure’s counterpart is Azure Functions. All share the same core concept: “deploy only code, run it in response to events, and get freed from server management.”
  • The intended readers include:
    • Backend engineers who want to move APIs or batch jobs to serverless
    • Tech leads/architects who want to decide architecture while considering Lambda vs ECS/EKS, and differences across GCP/Azure
    • Small teams and startups aiming for scalable systems with low operational overhead
  • By the end, the goal is that you can explain in your own words:
    • “This workload is (or isn’t) a good fit for Lambda.”
    • “How Lambda differs from Cloud Functions and Azure Functions.”
    • “Which serverless platform(s) should realistically be combined for your project.”

1. What is AWS Lambda? Think of it as “a small server that reacts to events”

1.1 Lambda’s core concept

AWS Lambda is AWS’s serverless computing service. Without provisioning or managing servers, you deploy a function (a small unit of code) and have it execute automatically in response to various events.

In short:

  • You don’t see servers (though they are, of course, running behind the scenes)
  • You specify only code and configuration (memory, timeout, env vars, etc.)
  • Events like S3, API Gateway, EventBridge, SQS, SNS can trigger execution
  • You pay only for the time it actually runs

Instead of “launching an EC2 instance just for one small job,” the idea becomes: “upload the code for that job and let it execute as many times as needed in response to events.”

1.2 Supported languages, execution time, and resource limits

Lambda supports multiple runtimes, and as of 2025 the officially supported runtimes include Node.js / Python / Ruby / Java / .NET / Go, etc. With a custom runtime, other languages can also be run.

Typical limits:

  • Max execution time: 15 minutes (900 seconds)
  • Memory: 128MB to 10,240MB, in 1MB increments
  • Ephemeral storage: 512MB by default, expandable up to 10GB
  • Concurrency: account-level limits exist; default is around 1,000 (can be increased via support request)

GCP Cloud Functions and Azure Functions have similar constraints: max execution time, memory limits, concurrency controls, and so on.
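Because of the 15-minute hard limit, long batch-style functions often check how much time remains and stop gracefully instead of being killed mid-work. A minimal sketch (the event’s items array and the work itself are placeholder assumptions; context.getRemainingTimeInMillis() is part of the standard Node.js Lambda context object):

```javascript
// Hypothetical batch handler that stops early instead of hitting the
// 15-minute hard timeout.
const handler = async (event, context) => {
  const processed = [];
  for (const item of event.items ?? []) {
    // Leave a safety margin (here 10s) to flush results before timeout.
    if (context.getRemainingTimeInMillis() < 10_000) {
      console.log("Approaching timeout, stopping early");
      break;
    }
    processed.push(item); // real per-item work would go here
  }
  return { processedCount: processed.length };
};
```

A real function would persist progress (e.g. to DynamoDB or SQS) before returning, so a follow-up invocation can resume where this one stopped.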


2. Comparing Google Cloud Functions / Cloud Run functions and Azure Functions

2.1 Shared traits as “serverless functions”

AWS Lambda, Google Cloud Functions (v2 as Cloud Run functions), and Azure Functions all share:

  • Deploy code without managing servers
  • Event-driven (HTTP requests, storage, messaging, schedulers, etc.)
  • Auto-scaling, near-zero cost while idle
  • Pay-as-you-go (requests + execution time + memory)

2.2 Each platform’s “home turf”

A helpful high-level framing is:

  • AWS Lambda
    • Rich integrations with AWS services like S3 / DynamoDB / API Gateway / EventBridge / SQS
    • The most natural choice if your architecture is already AWS-centered
  • Google Cloud Functions / Cloud Run functions
    • Smooth integration with GCP services like Cloud Storage / Firestore / Pub/Sub / Cloud Run
    • Well-suited for HTTP microservices and small workflows
  • Azure Functions
    • Tight integration with Azure Storage / Cosmos DB / Event Hubs / Service Bus
    • Strong affinity with the .NET ecosystem; widely adopted in enterprise scenarios

Rather than “which is best,” selection is usually driven by practical factors such as:

  • Integration with the cloud services you already use
  • Team-familiar languages and toolchains
  • Organizational preference for a specific cloud provider

2.3 Scaling and pricing model differences (roughly)

All are “auto-scale + pay-as-you-go,” but behavior differs in details:

  • Lambda
    • Automatically adds execution environments in response to demand; has per-function burst behavior and account-level concurrency limits
    • Pricing is requests + GB-seconds. Recently, pricing can differ between ARM (Graviton) and x86; a free tier is available.
  • Cloud Functions / Cloud Run functions
    • Auto-scales for HTTP/events, and allows configuration of instance counts and concurrency
    • Pricing is requests + execution time + memory/CPU, with free tier options
  • Azure Functions
    • Multiple execution plans: Consumption (fully pay-as-you-go), Premium / App Service plans (more stability/flexibility), etc., chosen by use case

Pricing changes over time, so for real architecture decisions you should always estimate using official pricing pages and calculators.


3. AWS Lambda Architecture: Events, Triggers, and the Execution Environment

3.1 What “starts” a Lambda: trigger types

Lambda’s appeal includes the range of triggers. Common examples:

  • HTTP requests via API Gateway or ALB
  • S3 object upload/delete events
  • DynamoDB Streams change events
  • SQS messages arriving in a queue
  • SNS topic notifications
  • EventBridge schedules and event rules
  • Events from Cognito, CodeCommit, CloudWatch Logs, and more

This “everything can be an event” feel is also true for GCP Cloud Functions (Cloud Storage / Pub/Sub / Firebase) and Azure Functions (Storage / Event Hubs / Service Bus). Serverless functions often act as the “glue” connecting cloud services.

3.2 Lifecycle of the execution environment and cold starts

Internally, Lambda runs functions in an isolated environment that behaves like a container.

  • On first invocation or after a period of inactivity, a new environment must be created—this is a cold start.
  • Once created, the environment may be reused for some time, so multiple requests can be handled within the same container (enabling optimizations like caching in global variables).

This pattern is similar across Cloud Functions and Azure Functions: “the first call can be slower; subsequent calls are faster.”
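The “caching in global variables” optimization mentioned above looks like this in practice: code outside the handler runs once per cold start, and warm invocations reuse whatever it built. A sketch (loadConfig’s contents are placeholders; real functions typically cache SDK clients, DB connections, or fetched parameters):

```javascript
let cachedConfig = null; // survives across warm invocations
let initCount = 0;       // incremented only when we (re)initialize

function loadConfig() {
  initCount += 1;
  // In a real function this might fetch from SSM Parameter Store or S3.
  return { greeting: "hello", loadedAt: Date.now() };
}

const handler = async (event) => {
  if (cachedConfig === null) {
    cachedConfig = loadConfig(); // runs only on the first (cold) call
  }
  return { greeting: cachedConfig.greeting, initCount };
};
```

Because the environment can be discarded at any time, this cache must always be treated as an optimization, never as the source of truth.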

3.3 Example: the simplest Node.js Lambda function

A minimal HTTP handler example:

// index.mjs (Node.js 20, etc.)
export const handler = async (event, context) => {
  console.log("Request ID:", context.awsRequestId);
  console.log("Event:", JSON.stringify(event));

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message: "Hello from Lambda!",
      input: event,
    }),
  };
};

If you integrate this with API Gateway (HTTP API or REST API), you can quickly create a serverless Web API.


4. Typical Real-World Use Cases (and equivalents on other clouds)

4.1 Building Web APIs and backends

A very common pattern is Lambda + API Gateway for Web APIs or BFFs (Backend for Frontend).

  • Frontend (SPA / mobile app) → API Gateway → Lambda
  • Lambda accesses RDS/Aurora, DynamoDB, S3, etc.
  • Authentication via Cognito or OIDC (Cognito / Auth0 / Azure AD, etc.)

On GCP, similar architectures use Cloud Run / Cloud Functions + API Gateway. On Azure, Functions + Azure API Management is a common pairing.

4.2 Batch processing and scheduled jobs

  • Implement scheduled tasks with EventBridge (schedule) → Lambda
  • Example: “Aggregate reports every day at midnight, output CSV to S3, then notify via SNS”

GCP: Cloud Scheduler + Cloud Functions / Cloud Run
Azure: Timer-triggered Functions
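The EventBridge (schedule) → Lambda pattern above can be sketched as follows. The report rows, CSV layout, and the S3/SNS steps are placeholders; only the event.time field reflects the actual scheduled-event payload, which carries an ISO-8601 timestamp:

```javascript
// Hypothetical nightly report job triggered by an EventBridge schedule
// rule (e.g. cron(0 0 * * ? *)).
function buildReport(rows) {
  // Turn rows into CSV; a real job would query a data store first.
  const header = "date,total";
  const body = rows.map((r) => `${r.date},${r.total}`).join("\n");
  return `${header}\n${body}`;
}

const handler = async (event) => {
  console.log("Triggered at:", event.time); // ISO-8601 from EventBridge
  const csv = buildReport([{ date: "2025-01-01", total: 42 }]);
  // A real function would PutObject the CSV to S3 and publish to SNS here.
  return { lines: csv.split("\n").length };
};
```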

4.3 File processing: image conversion, thumbnail generation

  • When an image is uploaded to S3, Lambda generates thumbnails and stores them in another bucket
  • Similar patterns for PDF conversion or metadata extraction

GCP: Cloud Storage + Cloud Functions
Azure: Blob Storage + Functions (Blob trigger)
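The S3-trigger pattern can be sketched like this. The resizing step is stubbed out (a real function might use an image library such as sharp); the event shape (Records[].s3.bucket.name and object.key, with the key URL-encoded) matches S3 event notifications:

```javascript
// Extract bucket/key pairs from an S3 event notification.
function parseS3Event(event) {
  // S3 keys arrive URL-encoded, with spaces as "+".
  return event.Records.map((r) => ({
    bucket: r.s3.bucket.name,
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, " ")),
  }));
}

const handler = async (event) => {
  const objects = parseS3Event(event);
  for (const { bucket, key } of objects) {
    console.log(`Would thumbnail s3://${bucket}/${key}`);
    // Real code: GetObject -> resize -> PutObject to the destination bucket.
  }
  return { count: objects.length };
};
```

Note that writing thumbnails back to the *same* bucket and prefix that triggers the function creates an infinite loop; using a separate destination bucket (as the bullet above suggests) avoids this.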

4.4 Messaging integrations (SQS / SNS / Kinesis)

Lambda is also excellent with messaging services:

  • SQS queue → Lambda for background jobs
  • SNS topic → Lambda for email notifications or third-party integrations
  • Kinesis stream → Lambda for near-real-time processing (log analysis, metric aggregation)

GCP: Pub/Sub + Cloud Functions
Azure: Service Bus / Event Hubs + Functions
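For the SQS → Lambda pattern, a handler that reports partial batch failures (so only the failed messages are retried, not the whole batch) might look like the sketch below. It assumes ReportBatchItemFailures is enabled on the event source mapping and that message bodies are JSON job payloads:

```javascript
// Process one SQS message body; throws on malformed input.
async function processMessage(body) {
  const job = JSON.parse(body); // assumes JSON payloads
  if (!job.id) throw new Error("invalid job");
  // real work would go here
}

const handler = async (event) => {
  const batchItemFailures = [];
  for (const record of event.Records) {
    try {
      await processMessage(record.body);
    } catch (err) {
      console.error("Failed:", record.messageId, err.message);
      // Reporting only the failed IDs lets SQS retry just those messages.
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }
  return { batchItemFailures };
};
```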


5. Infrastructure Definition and Deployment: SAM, Serverless Framework, Terraform

5.1 Defining with AWS SAM (sample)

For production, Infrastructure as Code (IaC) is essentially required. AWS’s official option is AWS SAM (Serverless Application Model).

A simple SAM template for HTTP API + Lambda:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample Lambda + HTTP API

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: index.handler
      Runtime: nodejs20.x
      MemorySize: 512
      Timeout: 10
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        HttpApi:
          Type: HttpApi
          Properties:
            Path: /hello
            Method: GET

With sam build followed by sam deploy, you deploy the Lambda function, IAM role, and HTTP API as a set.

5.2 Comparing other tools

  • Serverless Framework
    • Supports deployments not only to AWS but also to Azure Functions and Google Cloud Functions, making it useful for multi-cloud serverless development.
  • Terraform / Pulumi
    • Often chosen when you want to manage “the entire cloud” in code, including VPC, ECS/EKS, etc., not just Lambda.

On GCP, Cloud Deployment Manager and Terraform are common. On Azure, ARM/Bicep and Terraform are widely used. Across clouds, defining functions in code (not manual clicking) is increasingly the norm.


6. Cost and Performance: How to Avoid Making Lambda Expensive

6.1 A concrete sense of the pricing model

Lambda pricing roughly looks like:

  • Request pricing
    • About $0.20 per 1M requests (varies by region)
  • Execution time (GB-seconds) pricing
    • Memory size × execution duration (measured in milliseconds)
    • Free tier includes 400,000 GB-seconds per month

For example, if you run a Lambda with 1GB memory and average 500ms duration 1M times per month:

  • Request cost: ~$0.20
  • Compute time: 0.5s × 1GB × 1,000,000 = 500,000 GB-seconds → billed after subtracting the free tier

(Use real unit prices and free tier details to estimate precisely.)
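The arithmetic above can be checked with a small helper. The unit prices here are illustrative assumptions ($0.20 per 1M requests and $0.0000166667 per GB-second, roughly the x86 rate in many regions); always confirm against the official pricing page:

```javascript
// Rough monthly Lambda cost estimate. Unit prices are illustrative;
// check the official pricing page for current, region-specific numbers.
function estimateMonthlyCost({ invocations, avgDurationSec, memoryGb }) {
  const requestCost = (invocations / 1_000_000) * 0.2;
  const gbSeconds = invocations * avgDurationSec * memoryGb;
  const freeTierGbSeconds = 400_000; // monthly compute free tier
  const billableGbSeconds = Math.max(0, gbSeconds - freeTierGbSeconds);
  const computeCost = billableGbSeconds * 0.0000166667;
  return { gbSeconds, billableGbSeconds, requestCost, computeCost };
}

const est = estimateMonthlyCost({
  invocations: 1_000_000,
  avgDurationSec: 0.5,
  memoryGb: 1,
});
console.log(est); // 500,000 GB-s total, 100,000 billable after free tier
```

With these assumed rates, the example works out to roughly $0.20 for requests plus under $2 for compute, which is why small workloads often cost almost nothing on Lambda.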

6.2 Cost optimization tips

  • Balance memory size and runtime
    • Increasing memory also increases CPU; execution may become faster enough that total cost decreases.
  • Avoid API designs that trigger too many calls
    • Rather than calling Lambda 10 times per page, using a BFF to consolidate into 1–2 calls often improves both cost and latency.
  • Don’t over-log
    • CloudWatch Logs costs add up; avoid flooding logs with DEBUG-level output.

The same principles apply to serverless functions on GCP and Azure: optimize request count, duration, and resource allocation.


7. Common Pitfalls and How to Avoid Them

7.1 Packing long-running or CPU-heavy jobs into Lambda

  • With a 15-minute max, trying to cram long ETL or video processing into one Lambda risks timeouts and exploding costs.
  • Common mitigations:
    • Split workflows with Step Functions
    • Use ECS/EKS, AWS Batch, or other long-running platforms

Similarly, on GCP you often shift long jobs to Cloud Run / GKE; on Azure, to Container Apps / AKS, etc.

7.2 Storing state inside Lambda

  • Because environments can be terminated at any time, keeping important state in local files or memory can disappear suddenly.
  • Store state in external systems (DynamoDB, RDS, ElastiCache, S3, etc.) and keep functions essentially stateless.

This is equally true for Cloud Functions and Azure Functions: “functions are stateless; state lives in external storage” is a core rule.

7.3 Ignoring cold starts

  • For latency-sensitive APIs (e.g., real-time systems needing responses within tens of milliseconds), ignoring cold starts can harm UX.
  • Common mitigations:
    • Use Provisioned Concurrency to keep a number of execution environments warm
    • Put the most latency-critical parts on always-on workloads (ECS/EKS or other always-running services)

GCP and Azure have similar approaches (minimum instances / pre-warm), and the same mindset applies.


8. Who Benefits and How (by reader persona)

8.1 Backend engineers

  • Moving small APIs/batches/utilities from EC2 or containers to Lambda can greatly reduce operational burdens such as:
    • OS updates, scaling, health checks, auto-healing
  • Understanding the shared model across GCP and Azure helps you reuse the same “event-driven function composition” thinking even if you change clouds.

8.2 SRE / platform engineers

  • Correctly understanding concurrency limits, Provisioned Concurrency, and account limits helps with:
    • Load control, throttling design, and reducing blast radius on downstream services
  • With CloudWatch Logs / X-Ray / CloudTrail, you can observe “which event calls which Lambda how many times,” enabling cost monitoring and performance tuning.

8.3 Tech leads / architects / CTOs

  • Understanding Lambda / Cloud Functions / Azure Functions side-by-side helps you judge:
    • “Which workloads are good enough for serverless”
    • “Where to switch to containers or traditional servers”
  • Encouraging “reviewable/testable code at function granularity” promotes modular design and cleaner CI/CD pipelines.

8.4 Small teams / startups

  • Without building complex container orchestration up front, you can launch solid products with something like:
    • API Gateway + Lambda + DynamoDB + S3
  • As traffic grows, you can adopt ECS/EKS or cross-cloud integration as needed, keeping early investment low while preserving future options.

9. Three steps you can take starting today

  1. Pick one small function to start with
    • Example: a simple HTTP API that returns a basic response, or a function that logs CSV line counts when a file is uploaded to S3.
  2. Create one Lambda (even in the console) and trigger it with an event
    • Experience the behavior: logs, CloudWatch metrics, and how cold starts feel.
  3. Then codify the same Lambda in SAM or Serverless Framework
    • Doing IaC early clarifies the “production-ready serverless workflow” end-to-end.

Trying the same experiment on GCP Cloud Functions / Cloud Run functions or Azure Functions helps you learn that “the thinking stays the same even if the cloud changes”—a strong foundation for multi-cloud skills.


10. Conclusion: Lambda is the place for “small, lightweight, event-reactive workloads”

As we’ve seen, AWS Lambda is:

  • A serverless execution platform where you can deploy code with minimal server-management concerns, and
  • An “event glue” that integrates deeply with AWS services like S3, DynamoDB, API Gateway, EventBridge, SQS, and SNS.

At the same time, it has characteristics such as:

  • Max execution time of 15 minutes
  • Concurrency and cold-start constraints
  • Cases where long-running batch jobs or ultra-low-latency workloads are not a good fit

So it’s not that “everything belongs in Lambda.”

What matters is calmly assessing:

  • Workload characteristics (frequency, latency sensitivity, execution time)
  • Your team’s skills
  • Your existing cloud assets

…and then cleanly dividing responsibilities:

From here to here: Lambda / Cloud Functions / Azure Functions
Beyond that: containers or traditional servers

Start by picking one everyday task and asking: “Could this be written as one event-driven function?”
Try it on Lambda, lightly. That small step is often the first step toward becoming strong at serverless and multi-cloud architecture.



Note: During actual adoption, versions, limits, and pricing may have changed; always check the latest official documentation.

By greeden
