Backend

Building Serverless Functions

Mayur Dabhi
May 8, 2026
14 min read

Serverless computing has fundamentally changed how developers build and deploy backend logic. Instead of provisioning servers, configuring load balancers, and managing OS updates, you write a function and let the cloud provider handle everything else — scaling, fault tolerance, and infrastructure. AWS Lambda alone executes trillions of function invocations per month, and platforms like Vercel and Cloudflare Workers have brought serverless to the frontend developer's workflow. This guide walks through building, deploying, and optimizing serverless functions in production.

The Serverless Promise

Serverless doesn't mean "no servers" — it means you never think about them. You pay only for the exact compute time your function uses (billed in milliseconds), and the platform auto-scales from zero to millions of requests without any configuration from you.
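To make the per-millisecond billing concrete, here is a rough cost sketch. The rates are assumptions based on published x86 Lambda pricing ($0.0000166667 per GB-second of compute plus $0.20 per million requests) — check current pricing for your region before relying on the numbers:

```javascript
// Illustrative Lambda cost model — rates are assumed, not authoritative
function monthlyLambdaCost({ invocations, avgMs, memoryMb }) {
  // Compute is billed in GB-seconds: duration × allocated memory
  const gbSeconds = invocations * (avgMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * 0.0000166667; // per GB-second (x86)
  const requestCost = (invocations / 1_000_000) * 0.20; // per 1M requests
  return computeCost + requestCost;
}

// 1M requests/month at 256MB averaging 100ms costs well under a dollar
const cost = monthlyLambdaCost({ invocations: 1_000_000, avgMs: 100, memoryMb: 256 });
console.log(cost.toFixed(2)); // ≈ 0.62 (before the free tier is applied)
```

The takeaway: for spiky or low-volume workloads, pay-per-invocation is dramatically cheaper than an always-on server.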

What Is a Serverless Function?

A serverless function, also called Function-as-a-Service (FaaS), is a stateless unit of execution that runs in response to an event. The cloud provider spins up a container, executes your code, returns the result, and later tears down the environment — with each invocation typically completing in milliseconds once the environment is warm.

The key characteristics that distinguish serverless from traditional servers:

  • Event-driven — functions run in response to triggers: HTTP requests, queue messages, file uploads, schedules
  • Stateless — nothing survives between invocations; persist state to a database or object store
  • Ephemeral — execution environments are created and destroyed by the platform
  • Pay-per-use — billed per invocation and per millisecond of compute
  • Auto-scaling — concurrency scales from zero to thousands of instances with no configuration

[Architecture diagram: Client HTTP request → API Gateway (routing, auth, rate limiting) → Lambda functions (Node.js/Python, 128MB–10GB RAM; e.g. an API handler, an auth/email background job) → managed services: DynamoDB (database), S3 (file storage), SQS (messaging), SES/SNS (notifications). Serverless Architecture — Functions connect to managed AWS services]

Each function handles a single concern and integrates with managed cloud services

Serverless Platforms Compared

Multiple cloud providers offer FaaS products with different trade-offs. Here's a practical comparison for developers choosing a platform:

Platform                | Free Tier     | Max Timeout  | Cold Start          | Best For
AWS Lambda              | 1M req/month  | 15 minutes   | ~100–500ms          | Enterprise, complex workflows
Vercel Functions        | 100GB-hours   | 60 seconds   | ~50–200ms           | Next.js, frontend-adjacent APIs
Cloudflare Workers      | 100K req/day  | 30 seconds   | <5ms (V8 isolates)  | Edge computing, low latency
Google Cloud Functions  | 2M req/month  | 60 minutes   | ~100–400ms          | GCP ecosystem integrations
Azure Functions         | 1M req/month  | Unlimited*   | ~200–600ms          | .NET, Azure ecosystem

*On Premium and Dedicated plans; the Consumption plan caps executions at 10 minutes.

Choosing a Platform

For most Node.js developers: AWS Lambda for complex enterprise workloads, Vercel for API routes alongside a Next.js frontend, and Cloudflare Workers when sub-10ms cold starts are critical. The choice often follows your existing cloud ecosystem.

Your First AWS Lambda Function

AWS Lambda is the most widely used FaaS platform. Functions can be written in Node.js, Python, Go, Java, Ruby, or custom runtimes. Let's build a production-ready function from scratch.

Project Setup with the Serverless Framework

The Serverless Framework (or AWS SAM) gives you infrastructure-as-code for Lambda deployments. It handles packaging, IAM roles, API Gateway wiring, and environment variables.

Step 1 — Install the Serverless Framework

Install globally and configure your AWS credentials before proceeding.

Terminal
# Install Serverless Framework globally
npm install -g serverless

# Configure AWS credentials (one-time setup)
aws configure
# Enter: AWS Access Key ID, Secret, Region (e.g. us-east-1), Output format (json)

# Create a new serverless project
serverless create --template aws-nodejs --path my-service
cd my-service
npm init -y
Step 2 — Define Your Service in serverless.yml

This file declares your functions, events, IAM permissions, and environment configuration.

serverless.yml
service: my-api-service

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  memorySize: 256       # MB — increase for CPU-heavy tasks
  timeout: 10           # seconds — max 900 for Lambda
  environment:
    DB_HOST: ${env:DB_HOST}
    JWT_SECRET: ${env:JWT_SECRET}
    NODE_ENV: ${opt:stage, 'dev'}
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:UpdateItem
            - dynamodb:DeleteItem
            - dynamodb:Query
          Resource:
            - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/Users
            - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/Users/index/*

functions:
  getUser:
    handler: src/users/get.handler
    events:
      - httpApi:
          path: /users/{id}
          method: GET

  createUser:
    handler: src/users/create.handler
    events:
      - httpApi:
          path: /users
          method: POST

  processQueue:
    handler: src/queue/processor.handler
    events:
      - sqs:
          arn: arn:aws:sqs:us-east-1:123456789:my-queue
          batchSize: 10
          functionResponseType: ReportBatchItemFailures  # enable partial-batch retries

  scheduledTask:
    handler: src/tasks/cleanup.handler
    events:
      - schedule:
          rate: rate(1 hour)
          enabled: true

plugins:
  - serverless-offline           # local development
  - serverless-dotenv-plugin     # .env support
Step 3 — Write the Handler Function

Each handler receives an event (the trigger payload) and a context (Lambda runtime info), and returns a response object.

src/users/get.js
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

// Initialize outside the handler — reused across warm invocations
const client = new DynamoDBClient({ region: process.env.AWS_REGION });
const docClient = DynamoDBDocumentClient.from(client);

const headers = {
  'Content-Type': 'application/json',
  'Access-Control-Allow-Origin': '*',
};

exports.handler = async (event, context) => {
  // Tell Lambda not to wait for empty event loop
  context.callbackWaitsForEmptyEventLoop = false;

  try {
    const { id } = event.pathParameters;

    if (!id) {
      return {
        statusCode: 400,
        headers,
        body: JSON.stringify({ error: 'User ID is required' }),
      };
    }

    const result = await docClient.send(new GetCommand({
      TableName: 'Users',
      Key: { userId: id },
    }));

    if (!result.Item) {
      return {
        statusCode: 404,
        headers,
        body: JSON.stringify({ error: 'User not found' }),
      };
    }

    return {
      statusCode: 200,
      headers,
      body: JSON.stringify(result.Item),
    };
  } catch (error) {
    console.error('GetUser error:', error);
    return {
      statusCode: 500,
      headers,
      body: JSON.stringify({ error: 'Internal server error' }),
    };
  }
};
Step 4 — Deploy to AWS

The framework packages your code, uploads it to S3, and creates/updates Lambda functions and API Gateway automatically.

Terminal
# Deploy to dev stage
serverless deploy --stage dev

# Deploy to production
serverless deploy --stage prod

# Deploy only a single function (faster for iteration)
serverless deploy function --function getUser

# Test locally with serverless-offline
serverless offline start

# View logs for a function
serverless logs --function getUser --tail

# Remove all resources
serverless remove --stage dev

Understanding Cold Starts

A cold start occurs when Lambda spins up a fresh execution environment for your function — downloading the code package, initializing the runtime, and running your module-level code. Warm starts reuse an existing container and skip all of this.

[Diagram: Cold start — download code → init runtime → init module → run handler, ~200–600ms total. Warm start skips straight to the handler, ~5–50ms total; skipped steps shown dashed because the container is already warm]

Cold vs warm start — module-level code runs only once on cold start

Strategies to Reduce Cold Start Impact

Minimize the cold start by reducing initialization work:

// BAD: Heavy imports at module level delay cold start
const aws = require('aws-sdk');             // 40MB+
const moment = require('moment');           // brings locale data
const _ = require('lodash');

// GOOD: Import only what you need
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { format } = require('date-fns');    // tree-shakeable

// GOOD: Lazy-load heavy dependencies inside handler
exports.handler = async (event) => {
  if (event.requiresPdf) {
    const { PDFDocument } = await import('pdf-lib');
    // ...
  }
};

// GOOD: Initialize SDK clients outside handler (reused on warm)
const db = new DynamoDBClient({ region: 'us-east-1' });

exports.handler = async (event) => {
  // db client already initialized — reused on warm invocation
};

Provisioned Concurrency keeps N containers permanently warm:

# serverless.yml — provision 5 warm containers for production
functions:
  getUser:
    handler: src/users/get.handler
    provisionedConcurrency: 5     # always-warm containers
    events:
      - httpApi:
          path: /users/{id}
          method: GET

# OR use Application Auto Scaling for time-based provisioning
# (scale up to 20 before 9am, down to 2 after midnight)
resources:
  Resources:
    GetUserScalableTarget:
      Type: AWS::ApplicationAutoScaling::ScalableTarget
      Properties:
        ServiceNamespace: lambda
        ScalableDimension: lambda:function:ProvisionedConcurrency
        MinCapacity: 2
        MaxCapacity: 20
        # ResourceId (function:name:alias) and RoleARN are also required —
        # omitted here for brevity

Lambda SnapStart (Java/Python 3.12+) takes a snapshot after init:

# serverless.yml — enable SnapStart for Java functions
functions:
  javaHandler:
    handler: com.example.Handler::handleRequest
    runtime: java21
    snapStart: true    # snapshot taken after init phase
    events:
      - httpApi:
          path: /api/java
          method: POST

# SnapStart restores from snapshot instead of re-initializing
# Typical cold start reduction: 90%+ for JVM runtimes
# Not available for Node.js (use Provisioned Concurrency instead)

Event Sources and Triggers

Lambda's power comes from the breadth of AWS services that can trigger it. Here are the most common patterns developers use in production:

HTTP API via API Gateway

src/users/create.js — HTTP POST handler
const { DynamoDBDocumentClient, PutCommand } = require('@aws-sdk/lib-dynamodb');
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { randomUUID } = require('crypto');

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

exports.handler = async (event) => {
  const body = JSON.parse(event.body || '{}');

  // Validate input
  if (!body.email || !body.name) {
    return {
      statusCode: 422,
      body: JSON.stringify({ error: 'email and name are required' }),
    };
  }

  const user = {
    userId: randomUUID(),
    email: body.email.toLowerCase().trim(),
    name: body.name.trim(),
    createdAt: new Date().toISOString(),
    updatedAt: new Date().toISOString(),
  };

  // With a freshly generated userId this condition only guards against an
  // unlikely key collision — enforcing unique emails needs a GSI lookup or
  // a separate uniqueness item
  await docClient.send(new PutCommand({
    TableName: 'Users',
    Item: user,
    ConditionExpression: 'attribute_not_exists(userId)',
  }));

  return {
    statusCode: 201,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(user),
  };
};

S3 Event Trigger

Process files the moment they land in an S3 bucket — resize images, parse CSVs, run virus scans:

src/images/resize.js — S3 trigger
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');
const sharp = require('sharp');   // image processing (Lambda layer or bundled)

const s3 = new S3Client({});
const SIZES = [{ w: 400, suffix: 'thumb' }, { w: 1200, suffix: 'large' }];

exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Skip already-processed images (avoid infinite loop!)
    if (key.includes('/processed/')) continue;

    const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const imageBuffer = Buffer.from(await Body.transformToByteArray());

    await Promise.all(SIZES.map(async ({ w, suffix }) => {
      const resized = await sharp(imageBuffer)
        .resize(w, null, { withoutEnlargement: true })
        .webp({ quality: 85 })
        .toBuffer();

      const outputKey = key.replace('uploads/', `processed/${suffix}/`)
        .replace(/\.[^.]+$/, '.webp');

      await s3.send(new PutObjectCommand({
        Bucket: bucket,
        Key: outputKey,
        Body: resized,
        ContentType: 'image/webp',
        CacheControl: 'max-age=31536000',
      }));
    }));

    console.log(`Processed ${key} → ${SIZES.length} variants`);
  }
};

SQS Queue Processor

src/queue/processor.js — SQS batch handler
const { SESClient, SendEmailCommand } = require('@aws-sdk/client-ses');

const ses = new SESClient({ region: 'us-east-1' });

exports.handler = async (event) => {
  // Lambda delivers up to batchSize messages at once
  const results = await Promise.allSettled(
    event.Records.map(record => processMessage(JSON.parse(record.body)))
  );

  // Return failed message IDs so Lambda retries only those — requires
  // functionResponseType: ReportBatchItemFailures on the SQS event mapping
  const failures = results
    .map((result, i) => ({ result, record: event.Records[i] }))
    .filter(({ result }) => result.status === 'rejected')
    .map(({ record }) => ({ itemIdentifier: record.messageId }));

  return { batchItemFailures: failures };
};

async function processMessage(message) {
  const { to, subject, body } = message;

  await ses.send(new SendEmailCommand({
    Source: 'noreply@example.com',
    Destination: { ToAddresses: [to] },
    Message: {
      Subject: { Data: subject },
      Body: { Html: { Data: body } },
    },
  }));

  console.log(`Email sent to ${to}`);
}
Idempotency Is Critical

SQS delivers messages at least once, meaning your function may receive the same message multiple times due to network issues or retries. Always design handlers to be idempotent — processing the same message twice should produce the same outcome. Use DynamoDB conditional writes or a deduplication key to enforce this.
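A minimal sketch of the dedup-key idea, using a hypothetical dedupId field on the message. The in-memory Set below stands in for what would be a DynamoDB conditional write (attribute_not_exists) in production, since in-memory state doesn't survive across containers:

```javascript
// Sketch only: a durable store (DynamoDB conditional write) replaces
// this Set in production — in-memory state is per-container
const processed = new Set();

async function handleOnce(message, work) {
  if (processed.has(message.dedupId)) {
    return 'skipped'; // duplicate delivery — side effect already happened
  }
  processed.add(message.dedupId);
  await work(message); // e.g. send the email, charge the card
  return 'processed';
}
```

Delivering the same message twice now triggers the side effect exactly once — the second call short-circuits on the dedup key.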

IAM Roles and the Principle of Least Privilege

Every Lambda function runs with an execution role that grants it AWS permissions. This is where many developers make critical security mistakes — giving functions too broad permissions like AdministratorAccess or AmazonDynamoDBFullAccess.

serverless.yml — Scoped IAM permissions per function
provider:
  name: aws
  runtime: nodejs20.x
  # Remove global IAM — use per-function roles instead

functions:
  getUser:
    handler: src/users/get.handler
    role: GetUserRole   # custom role with minimal permissions
    events:
      - httpApi:
          path: /users/{id}
          method: GET

  processImages:
    handler: src/images/resize.handler
    role: ImageProcessorRole

resources:
  Resources:
    # Read-only DynamoDB access for getUser
    GetUserRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service: lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: GetUserPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: '*'
                - Effect: Allow
                  Action:
                    - dynamodb:GetItem
                  Resource:
                    - arn:aws:dynamodb:us-east-1:*:table/Users

    # S3 read + write for image processor
    ImageProcessorRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service: lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: ImageProcessorPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: '*'
                - Effect: Allow
                  Action:
                    - s3:GetObject
                    - s3:PutObject
                  Resource: arn:aws:s3:::my-image-bucket/*

Vercel Serverless Functions

Vercel offers a simpler serverless experience — especially for Next.js developers. Any file in the /api directory (or /app/api in App Router) becomes a serverless function automatically. No YAML configuration, no IAM roles, no deployment commands beyond git push.

pages/api/users/[id].ts — Next.js API Route
import type { NextApiRequest, NextApiResponse } from 'next';
import { db } from '@/lib/database';

type User = {
  id: string;
  name: string;
  email: string;
};

type ErrorResponse = {
  error: string;
};

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<User | ErrorResponse>
) {
  const { id } = req.query;

  if (req.method !== 'GET') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  try {
    const user = await db.user.findUnique({
      where: { id: String(id) },
      select: { id: true, name: true, email: true },
    });

    if (!user) {
      return res.status(404).json({ error: 'User not found' });
    }

    // Cache for 60s at edge, stale-while-revalidate for 5 minutes
    res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate=300');
    return res.status(200).json(user);
  } catch (error) {
    console.error('Database error:', error);
    return res.status(500).json({ error: 'Internal server error' });
  }
}

Vercel Edge Functions

For ultra-low latency, Vercel Edge Functions run in V8 isolates distributed across a global edge network — effectively no cold starts, but limited to the Web API surface (no Node.js APIs like fs or net):

middleware.ts — Vercel Edge Middleware
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

// Runs at the edge — before any serverless function
export function middleware(request: NextRequest) {
  const token = request.cookies.get('auth-token')?.value;

  // Protect /dashboard routes
  if (request.nextUrl.pathname.startsWith('/dashboard')) {
    if (!token) {
      return NextResponse.redirect(new URL('/login', request.url));
    }
  }

  // Geolocation-based routing
  const country = request.geo?.country || 'US';
  if (country === 'GB') {
    return NextResponse.rewrite(new URL('/uk' + request.nextUrl.pathname, request.url));
  }

  return NextResponse.next();
}

export const config = {
  matcher: ['/dashboard/:path*', '/((?!api|_next/static|favicon.ico).*)'],
};

Production Best Practices

Getting serverless into production requires thinking carefully about observability, error handling, and cost optimization. Here's what matters most in real-world deployments:

Observability — Logging, Tracing, and Metrics

Use structured logging (JSON) so CloudWatch Logs Insights can query your logs like a database:

// Structured logger utility
const log = {
  info: (msg, data = {}) => console.log(JSON.stringify({
    level: 'INFO', message: msg, timestamp: new Date().toISOString(), ...data
  })),
  error: (msg, error, data = {}) => console.error(JSON.stringify({
    level: 'ERROR', message: msg,
    error: { name: error.name, message: error.message, stack: error.stack },
    timestamp: new Date().toISOString(), ...data
  })),
};

exports.handler = async (event) => {
  const requestId = event.requestContext?.requestId;
  log.info('Handler invoked', { requestId, path: event.path });

  try {
    const result = await processRequest(event);
    log.info('Handler success', { requestId, statusCode: 200 });
    return result;
  } catch (error) {
    log.error('Handler failed', error, { requestId });
    throw error;  // Let Lambda mark invocation as failed
  }
};
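With logs emitted in this JSON shape, CloudWatch Logs Insights can filter and sort on the fields directly — an illustrative query pulling the most recent errors:

```
fields @timestamp, message, requestId
| filter level = "ERROR"
| sort @timestamp desc
| limit 20
```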

Enable AWS X-Ray for distributed tracing across Lambda → DynamoDB → SQS chains by turning on tracing in your serverless.yml provider block (this sets the function's TracingConfig mode to Active).
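In serverless.yml the provider-level tracing block looks roughly like this (the framework translates it into each function's TracingConfig):

```yaml
provider:
  name: aws
  runtime: nodejs20.x
  tracing:
    lambda: true        # X-Ray tracing for Lambda invocations
    apiGateway: true    # trace incoming API Gateway requests too
```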

Cost Optimization

Technique               | Impact        | Effort
Right-size memory       | High          | Low — test with AWS Lambda Power Tuning
Reduce package size     | Medium        | Low — use esbuild bundling, tree-shaking
Use Graviton2 ARM       | ~20% cheaper  | Low — add architecture: arm64
Async where possible    | High          | Medium — SQS vs synchronous HTTP
Set concurrency limits  | Safety        | Low — prevent runaway cost spikes
# Graviton2 ARM (20% cheaper, 19% faster for many workloads)
provider:
  name: aws
  runtime: nodejs20.x
  architecture: arm64    # Graviton2

# Reserve concurrency to prevent cost surprises
functions:
  getUser:
    handler: src/users/get.handler
    reservedConcurrency: 100   # max 100 simultaneous instances

VPC and Database Connections

Placing Lambda inside a VPC to reach an RDS database historically added significant cold start delay for ENI attachment (largely mitigated since AWS's 2019 Hyperplane ENI rollout) — and, more critically, every scaled-out container opens its own database connection. Use RDS Proxy to pool and share those connections:

# serverless.yml — Lambda in a VPC, connecting through RDS Proxy
functions:
  getUser:
    handler: src/users/get.handler
    vpc:
      securityGroupIds:
        - sg-xxxxxxxxx
      subnetIds:
        - subnet-xxxxxxxx
        - subnet-yyyyyyyy

# Connect to RDS Proxy endpoint instead of RDS directly
# handler.js
const { Pool } = require('pg');

// Initialized once — reused across warm invocations
const pool = new Pool({
  host: process.env.DB_PROXY_ENDPOINT,  // RDS Proxy endpoint
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 1,         // each Lambda container handles one request at a time
  idleTimeoutMillis: 120000,
});

Local Development and Testing

Iterating locally is critical for developer velocity. The serverless-offline plugin emulates API Gateway and Lambda on your machine, while AWS SAM provides Docker-based Lambda emulation for more accurate environment parity.
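The SAM flow uses Docker to replicate the Lambda runtime — a typical session looks like this (the function name and event file are illustrative):

```shell
# Build the deployment artifact, then invoke one function locally
# inside a Lambda-like Docker container
sam build
sam local invoke GetUserFunction --event events/get-user.json

# Emulate the API Gateway + Lambda stack on localhost:3000
sam local start-api
```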

Testing a Lambda handler with Jest
// tests/users/get.test.js
// Names prefixed "mock" may be referenced inside jest.mock factories
// (Jest's hoisting exemption), so the shared send mock is declared first
const mockSend = jest.fn();

jest.mock('@aws-sdk/client-dynamodb');
jest.mock('@aws-sdk/lib-dynamodb', () => ({
  DynamoDBDocumentClient: {
    from: jest.fn(() => ({ send: mockSend })),
  },
  GetCommand: jest.fn(),
}));

// Require the handler AFTER the mocks — its module-level
// DynamoDBDocumentClient.from() call then receives the mocked client
const { handler } = require('../../src/users/get');

describe('GET /users/:id', () => {
  beforeEach(() => {
    mockSend.mockReset();
  });

  it('returns 200 with user data', async () => {
    mockSend.mockResolvedValue({
      Item: { userId: '123', name: 'Alice', email: 'alice@example.com' },
    });

    const event = { pathParameters: { id: '123' } };
    const context = { callbackWaitsForEmptyEventLoop: false };

    const response = await handler(event, context);

    expect(response.statusCode).toBe(200);
    expect(JSON.parse(response.body).name).toBe('Alice');
  });

  it('returns 404 when user not found', async () => {
    mockSend.mockResolvedValue({ Item: undefined });

    const event = { pathParameters: { id: 'nonexistent' } };
    const response = await handler(event, {});

    expect(response.statusCode).toBe(404);
  });

  it('returns 400 when id is missing', async () => {
    const event = { pathParameters: {} };
    const response = await handler(event, {});

    expect(response.statusCode).toBe(400);
  });
});

Key Takeaways

Serverless functions are a powerful primitive, but they require a different mental model than traditional servers. Here's what to internalize before going to production:

Production Serverless Checklist

  • Initialize SDK clients outside handlers — they persist across warm invocations and save ~50ms per call
  • Design for idempotency — every trigger source (SQS, S3, EventBridge) can deliver events more than once
  • Use the principle of least privilege — give each function only the exact IAM permissions it needs
  • Set reservedConcurrency — prevent runaway cost from bugs causing infinite loops
  • Use structured (JSON) logging — enables powerful CloudWatch Logs Insights queries in production
  • Bundle with esbuild or webpack — smaller packages mean faster cold starts
  • Test cold start latency — use AWS Lambda Power Tuning to find the memory/speed/cost sweet spot
  • Avoid VPC unless required — ENI attachment adds ~500ms to cold starts; use RDS Proxy instead
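The bundling item above can be sketched with the serverless-esbuild plugin — a minimal, assumed configuration (verify the option names against the plugin's documentation):

```yaml
plugins:
  - serverless-esbuild

custom:
  esbuild:
    bundle: true
    minify: true
    target: node20
    exclude:
      - '@aws-sdk/*'   # already provided by the nodejs18.x+ runtime
```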
"The best architecture for serverless isn't about replacing your server — it's about identifying which parts of your system benefit most from event-driven, scale-to-zero execution."

Serverless functions shine for event-driven processing, scheduled jobs, API endpoints with variable traffic, and background tasks. They're less suited for long-running processes, stateful operations, or workloads that require sub-10ms cold starts everywhere. Knowing when to use serverless — and when to keep a long-running process — is the mark of a mature cloud architecture.

Serverless Lambda Functions AWS Vercel FaaS Cloud
Mayur Dabhi

Full Stack Developer with 5+ years of experience building scalable web applications with Laravel, React, and Node.js.