
AWS for Beginners: Getting Started

Mayur Dabhi
April 5, 2026
14 min read

Every time you stream a movie on Netflix, book an Airbnb, or scroll through Pinterest, there is a good chance the infrastructure underneath is running on Amazon Web Services. AWS commands roughly 31% of the global cloud market — the largest share of any single provider, well ahead of Microsoft Azure and Google Cloud. It powers startups burning through runway, Fortune 500 enterprises migrating legacy systems, and solo developers shipping side projects at zero upfront cost.

If you are a developer, DevOps engineer, or even a technically curious person who has heard "the cloud" mentioned in every second meeting and finally wants to know what it actually means in practice, this guide is your starting point. We will go from creating an account all the way to deploying real infrastructure, with actual CLI commands you can run today.

"You do not need to understand all 200+ AWS services to be productive. You need to understand about a dozen well."

What is AWS?

Amazon Web Services launched in 2006 with just three services: S3 (object storage), SQS (message queuing), and EC2 (virtual machines). The idea was simple: Amazon had built a massive internal infrastructure for its own e-commerce operations. Why not rent out spare capacity to developers? The bet paid off spectacularly. Today AWS offers over 200 services spanning compute, storage, databases, networking, machine learning, security, IoT, and more.

At its core, AWS is a collection of data centers distributed across the globe that you can rent by the hour, the second, or the request. Instead of buying a physical server (which takes weeks to provision, costs thousands upfront, and sits idle at 3 AM), you spin up a virtual machine in under 60 seconds and pay only for the time it runs.

Global Infrastructure: Regions and Availability Zones

AWS infrastructure is organized into a hierarchy of Regions, Availability Zones (AZs), and data centers. A Region is a geographic location (e.g., us-east-1 in Northern Virginia, ap-south-1 in Mumbai). Each Region contains at least two, usually three or more, Availability Zones. An AZ is one or more discrete data centers with independent power, cooling, and networking — physically separated by meaningful distances to protect against localized failures.


AWS Global Infrastructure: one Region containing three Availability Zones, each with multiple isolated data centers.

The Shared Responsibility Model

Before writing a single line of infrastructure code, understand this foundational concept. AWS secures the infrastructure (physical hardware, facilities, hypervisor, managed service software). You are responsible for everything running on that infrastructure: your operating system patches, application code, IAM permissions, data encryption, network rules, and security group configurations. A misconfigured S3 bucket that exposes private data is your responsibility, not Amazon's. This distinction shapes every security decision you will ever make on AWS.
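A concrete example of "your side" of the model: S3 Block Public Access can be enforced per bucket from the CLI. A minimal sketch, with the bucket name as a placeholder:

```shell
# Enforce Block Public Access on a bucket (bucket name is a placeholder)
aws s3api put-public-access-block \
  --bucket my-example-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Verify the settings took effect
aws s3api get-public-access-block --bucket my-example-bucket
```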

Setting Up Your AWS Account

Getting started with AWS takes about 15 minutes. The free tier gives you 12 months of limited access to core services and a handful of always-free offerings — more than enough to learn and build real projects.

Step 1: Create Your AWS Account

Go to aws.amazon.com and click "Create an AWS Account." You will need a valid email address, a phone number for verification, and a credit card (you will not be charged unless you exceed free tier limits). Choose the "Basic Support" plan — it is free and sufficient for learning.

Step 2: Enable MFA on the Root Account

Your root account has unrestricted access to everything, including closing your account and changing billing details. Immediately after creating it, go to IAM → Security credentials → Assign MFA and set up a virtual MFA device (Google Authenticator or Authy work well). Then stop using the root account for day-to-day tasks.

Step 3: Create an IAM User for Daily Use

Navigate to IAM → Users → Create User. Give it a name (e.g., mayur-admin), attach the AdministratorAccess managed policy for now (you can tighten permissions later), and enable console access with a password. Download the credentials CSV. Enable MFA on this user too.

Step 4: Set Up Billing Alerts

Go to Billing → Budgets → Create Budget. Create a "Zero spend budget" that emails you the moment any charge appears. Also create a monthly cost budget of $10–$20. This gives you early warning before a forgotten resource costs you real money.
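If you prefer the CLI, roughly the same zero-spend budget can be created with `aws budgets create-budget`. The budget name and email address below are placeholders; treat this as a sketch of the shape of the call rather than a copy-paste recipe:

```shell
# Budget that emails you as soon as any actual spend ($0.01+) appears
aws budgets create-budget \
  --account-id "$(aws sts get-caller-identity --query Account --output text)" \
  --budget '{
    "BudgetName": "zero-spend",
    "BudgetLimit": {"Amount": "1", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 0.01,
      "ThresholdType": "ABSOLUTE_VALUE"
    },
    "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "you@example.com"}]
  }]'
```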

Free Tier Gotchas — Read This Before Clicking

The AWS Free Tier has two layers: 12-month free (e.g., 750 hrs/month of t2.micro EC2, 5 GB S3 storage) and always free (e.g., 1 million Lambda requests/month). Exceeding these limits — even slightly — incurs charges at standard rates. Common surprises include: leaving an EC2 instance running past 750 hours, running instances in multiple regions (their hours all draw from the same 750-hour pool), NAT Gateways (not free tier eligible), and data transfer OUT charges. Always check the AWS Pricing Calculator before deploying anything new, and enable Cost Explorer from day one.

Core Services Every Developer Should Know

AWS has over 200 services, but you can accomplish the vast majority of real-world use cases with fewer than fifteen. Here are the ones you will reach for first, almost regardless of what you are building.

| Service | Type | Primary Use Case | Free Tier |
|---|---|---|---|
| EC2 | Compute | Virtual machines — web servers, build agents, anything that needs a full OS | 750 hrs/mo t2.micro (12 mo) |
| S3 | Storage | Object storage — static files, backups, data lakes, hosting SPAs | 5 GB storage, 20K GET, 2K PUT (12 mo) |
| RDS | Database | Managed relational databases: MySQL, PostgreSQL, MariaDB, SQL Server | 750 hrs/mo db.t3.micro (12 mo) |
| Lambda | Serverless Compute | Event-driven functions — API backends, webhooks, scheduled jobs | 1M requests + 400K GB-sec/mo (always free) |
| VPC | Networking | Isolated virtual network — subnets, routing, security groups, NACLs | Free (VPC itself); NAT Gateway is not |
| CloudFront | CDN | Content delivery network — cache S3 / origin responses at edge locations worldwide | 1 TB data transfer out + 10M HTTP requests/mo (always free) |
| IAM | Security | Identity and access management — users, roles, policies for all AWS resources | Always free |
| CloudWatch | Monitoring | Metrics, logs, alarms, dashboards for AWS resources and custom applications | 10 custom metrics, 5 GB logs/mo (always free) |

VPC deserves special mention even though it is often treated as background plumbing. Every resource you create lives inside a VPC. Understanding subnets (public vs. private), route tables, internet gateways, and security groups is the difference between an architecture that is accidentally exposed to the internet and one that is deliberately, correctly secured.
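A quick way to make this concrete is to inspect the default VPC every new account ships with. A sketch:

```shell
# Find the default VPC in your currently configured region
VPC_ID=$(aws ec2 describe-vpcs \
  --filters "Name=isDefault,Values=true" \
  --query 'Vpcs[0].VpcId' --output text)

# List its subnets and whether they auto-assign public IPs on launch
# (MapPublicIpOnLaunch=true is what makes a default subnet effectively public)
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=$VPC_ID" \
  --query 'Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,Public:MapPublicIpOnLaunch}' \
  --output table
```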

Your First EC2 Instance

EC2 (Elastic Compute Cloud) is the workhorse of AWS. An EC2 instance is essentially a virtual machine running in one of Amazon's data centers. You choose the operating system, the hardware specification (instance type), the storage, and the networking. You can SSH into it and treat it like any Linux server.

Install and Configure the AWS CLI

The AWS CLI is an indispensable tool. Almost everything you can do in the console you can script with the CLI, which makes it automatable, reproducible, and version-controllable. Install it first.

bash — Install AWS CLI v2 (Linux/macOS)
# macOS (using Homebrew)
brew install awscli

# Linux (x86_64)
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verify installation
aws --version
# aws-cli/2.x.x Python/3.x.x Linux/...

Next, configure the CLI with your IAM user's access keys. Go to IAM → Users → your user → Security credentials → Create access key, choose "CLI" as the use case, and download the key pair.

bash — Configure AWS credentials
aws configure

# You will be prompted for:
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json

# Verify it works
aws sts get-caller-identity
# {
#   "UserId": "AIDAIOSFODNN7EXAMPLE",
#   "Account": "123456789012",
#   "Arn": "arn:aws:iam::123456789012:user/mayur-admin"
# }

Launch and Connect to an EC2 Instance

Before launching, you need a key pair (for SSH access) and a security group (acts as a firewall). The commands below create all prerequisites and launch a free-tier eligible Amazon Linux 2023 instance.

bash — Launch an EC2 instance via CLI
# Create an SSH key pair and save the private key
aws ec2 create-key-pair \
  --key-name my-first-key \
  --query 'KeyMaterial' \
  --output text > my-first-key.pem

chmod 400 my-first-key.pem

# Create a security group that allows SSH from your IP
aws ec2 create-security-group \
  --group-name my-sg \
  --description "My first security group"

# Allow SSH (port 22) from your current IP only
MY_IP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-name my-sg \
  --protocol tcp \
  --port 22 \
  --cidr "${MY_IP}/32"

# Get the latest standard Amazon Linux 2023 AMI ID for the configured region
# (the 2023* pattern skips the "minimal" image variants)
AMI_ID=$(aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=al2023-ami-2023*-x86_64" \
             "Name=state,Values=available" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text)

# Launch a t2.micro instance (free tier eligible)
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type t2.micro \
  --key-name my-first-key \
  --security-groups my-sg \
  --query 'Instances[0].InstanceId' \
  --output text)

echo "Launched instance: $INSTANCE_ID"

# Wait for it to be running, then get the public IP
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
PUBLIC_IP=$(aws ec2 describe-instances \
  --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)

# SSH into your instance
ssh -i my-first-key.pem ec2-user@"$PUBLIC_IP"

Stop Instances When Not In Use

A stopped EC2 instance does not consume compute hours (so it does not eat your free tier allocation or incur hourly charges), but its attached EBS volume still costs money. For short-lived experiments, terminate the instance entirely: aws ec2 terminate-instances --instance-ids $INSTANCE_ID. Terminated instances and their root volumes are deleted automatically.
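The full stop/start/terminate lifecycle looks like this, reusing the $INSTANCE_ID variable from the launch block above:

```shell
# Stop: compute billing pauses, but the EBS root volume is still billed
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# Start again later (note: the public IP usually changes on restart)
aws ec2 start-instances --instance-ids "$INSTANCE_ID"

# Terminate when you are done: the instance and its root volume are deleted
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-terminated --instance-ids "$INSTANCE_ID"
```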

S3 for Storage

Amazon S3 (Simple Storage Service) is one of the oldest and most used AWS services. It stores objects (files) inside buckets (top-level containers). There is no directory structure in a traditional sense — the / character in object keys is just a naming convention that the console renders as folders. S3 scales to virtually unlimited storage without any provisioning on your part.

S3 excels at: hosting static websites and Single Page Applications, storing user-uploaded files, serving as the source for CloudFront CDN distributions, archiving application logs, holding Terraform state files, and as a staging area for data pipelines.

bash — Common S3 CLI operations
# Bucket names must be globally unique across all AWS accounts
BUCKET_NAME="my-project-assets-$(date +%s)"

# Create a bucket (us-east-1 doesn't need LocationConstraint)
aws s3api create-bucket --bucket "$BUCKET_NAME" --region us-east-1

# For other regions, add the LocationConstraint:
# aws s3api create-bucket \
#   --bucket "$BUCKET_NAME" \
#   --region ap-south-1 \
#   --create-bucket-configuration LocationConstraint=ap-south-1

# Upload a single file
aws s3 cp ./index.html s3://"$BUCKET_NAME"/index.html

# Upload an entire directory recursively
aws s3 sync ./dist s3://"$BUCKET_NAME"/ --delete

# List objects in a bucket
aws s3 ls s3://"$BUCKET_NAME"/ --recursive --human-readable

# Download a file from S3
aws s3 cp s3://"$BUCKET_NAME"/index.html ./downloaded-index.html

# Generate a pre-signed URL (expires in 1 hour) for private objects
aws s3 presign s3://"$BUCKET_NAME"/index.html --expires-in 3600

# Enable static website hosting
aws s3 website s3://"$BUCKET_NAME"/ \
  --index-document index.html \
  --error-document 404.html

# Delete all objects and remove the bucket
aws s3 rm s3://"$BUCKET_NAME"/ --recursive
aws s3api delete-bucket --bucket "$BUCKET_NAME"

S3 Pricing Tips

S3 charges for storage (GB-months), requests (GET, PUT, LIST), and data transfer OUT to the internet. Data transfer IN is always free. The most common mistake is forgetting to clean up old versions when versioning is enabled — every version of every object is billed as storage. Use S3 Lifecycle Policies to automatically transition infrequently accessed data to S3-IA or S3 Glacier after a set number of days, reducing costs by 40–90% on cold data.
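A lifecycle policy of the kind described above can be attached with `aws s3api put-bucket-lifecycle-configuration`. The prefix, day counts, and rule ID below are illustrative placeholders:

```shell
# Illustrative rule: objects under logs/ move to Standard-IA at 30 days,
# Glacier Instant Retrieval at 90 days, and are deleted after a year
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-then-expire",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},
    "Transitions": [
      {"Days": 30, "StorageClass": "STANDARD_IA"},
      {"Days": 90, "StorageClass": "GLACIER_IR"}
    ],
    "Expiration": {"Days": 365}
  }]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket "$BUCKET_NAME" \
  --lifecycle-configuration file://lifecycle.json
```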

Lambda: Serverless Functions

AWS Lambda lets you run code without managing any servers. You upload a function, define what triggers it, and AWS handles everything else: provisioning machines, scaling (from zero requests to thousands per second automatically), patching the runtime, and billing you only for the compute time consumed — measured in 1-millisecond increments.

"Serverless" does not mean there are no servers — it means you do not manage them. This model is ideal for event-driven workloads: an API endpoint that handles variable traffic, a webhook processor that runs on each GitHub commit, a scheduled job that runs every night to generate reports, or a function triggered when a file is uploaded to S3.

A Simple Lambda Function in Node.js

javascript — handler.js (Lambda function)
// handler.js
// Lambda receives an "event" object (shape depends on trigger)
// and a "context" object with runtime info.

export const handler = async (event, context) => {
  console.log('Event received:', JSON.stringify(event, null, 2));

  // Example: parsing a body from an API Gateway event
  const body = event.body ? JSON.parse(event.body) : {};
  const name = body.name || 'World';

  // Simulate async work (e.g., DB query, external API call)
  const greeting = await buildGreeting(name);

  return {
    statusCode: 200,
    headers: {
      'Content-Type': 'application/json',
      'Access-Control-Allow-Origin': '*',
    },
    body: JSON.stringify({ message: greeting }),
  };
};

async function buildGreeting(name) {
  // In a real function, this might query DynamoDB or call an API
  return `Hello, ${name}! Deployed via AWS Lambda.`;
}

bash — Deploy a Lambda function via CLI
# Zip the function code
zip function.zip handler.js

# Create an IAM execution role for Lambda (one-time setup)
aws iam create-role \
  --role-name lambda-basic-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name lambda-basic-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Get the role ARN for the create-function call below
ROLE_ARN=$(aws iam get-role \
  --role-name lambda-basic-role \
  --query 'Role.Arn' --output text)

# New IAM roles can take a few seconds to propagate before Lambda can assume them
sleep 10

# Create the Lambda function
aws lambda create-function \
  --function-name my-greeting-api \
  --runtime nodejs20.x \
  --handler handler.handler \
  --role "$ROLE_ARN" \
  --zip-file fileb://function.zip

# Invoke it directly from the CLI to test
aws lambda invoke \
  --function-name my-greeting-api \
  --payload '{"body":"{\"name\":\"Mayur\"}"}' \
  --cli-binary-format raw-in-base64-out \
  output.json && cat output.json

Lambda Trigger Flow


A typical Lambda trigger flow: API Gateway receives an HTTP request, invokes the Lambda function, which reads or writes data to DynamoDB.

Best Practices & Cost Optimization

Building on AWS without a cost strategy is a common beginner mistake that can turn a side project into an unexpected monthly bill. Equally important is building in a way that is secure, reliable, and maintainable. Here are the practices that matter most.

Right-Sizing Your Instances

The biggest driver of unnecessary EC2 cost is over-provisioned instances. Start with a small instance type (t3.micro or t3.small), enable CloudWatch detailed monitoring, and observe actual CPU, memory, and network utilization for a week before sizing up. AWS Cost Explorer has a built-in Right Sizing Recommendations feature that analyzes your usage and suggests downgrades.

Reserved Instances and Savings Plans

If you know a workload will run continuously for a year or more, Reserved Instances can save 40–60% over On-Demand pricing. You commit to a specific instance type and region for 1 or 3 years. Compute Savings Plans are more flexible — you commit to a dollar amount of compute usage per hour, and the discount applies automatically across EC2, Lambda, and Fargate.

Spot Instances for Tolerant Workloads

Spot Instances use spare AWS capacity at discounts up to 90% off On-Demand prices. The catch: AWS can reclaim them with a 2-minute warning when capacity is needed elsewhere. This makes Spot Instances perfect for batch processing jobs, CI/CD build runners, data science notebooks, and other interruptible workloads — not for your primary production web server.
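Requesting a Spot Instance is a small variation on the run-instances call used earlier. A sketch, assuming the $AMI_ID, key pair, and security group from the EC2 section:

```shell
# Same launch as before, but on Spot capacity
aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type t3.micro \
  --key-name my-first-key \
  --security-groups my-sg \
  --instance-market-options '{
    "MarketType": "spot",
    "SpotOptions": {
      "SpotInstanceType": "one-time",
      "InstanceInterruptionBehavior": "terminate"
    }
  }'
```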

S3 Lifecycle Policies

Data that starts in S3 Standard often does not need to stay there. Define lifecycle rules that automatically transition objects to cheaper storage classes as they age. A typical policy might move objects to S3 Standard-IA after 30 days, to S3 Glacier Instant Retrieval after 90 days, and delete them after 365 days.

CloudWatch Alarms and Anomaly Detection

Set billing alarms (covered in setup) and also create CloudWatch alarms on your key infrastructure metrics: EC2 CPU utilization above 80%, Lambda error rate above 1%, RDS connection count approaching the limit. AWS Cost Anomaly Detection can automatically flag unusual spending patterns and send you an email before a runaway process becomes a runaway bill.
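The CPU alarm mentioned above takes one CLI call. The SNS topic ARN here is a placeholder you would replace with a topic subscribed to your email:

```shell
# Alarm when average CPU stays above 80% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic
```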

Key Takeaways

  • Always enable MFA on both root and IAM users — this is non-negotiable.
  • Use IAM roles (not access keys) whenever possible, especially on EC2 and Lambda. Rotate keys regularly if you must use them.
  • Keep resources in a VPC with private subnets for anything that should not be publicly accessible (databases, internal APIs).
  • Tag every resource with at minimum Project, Environment (dev/staging/prod), and Owner tags — this makes cost allocation and cleanup dramatically easier.
  • Use infrastructure-as-code tools (AWS CloudFormation or Terraform) from day one. Clicking through the console to create resources is fine for learning, but not for anything you need to reproduce or audit.
  • Enable AWS Config and CloudTrail in every region you use to get an audit trail of all configuration changes and API calls.
  • Prefer managed services (RDS over self-managed MySQL on EC2, ElastiCache over self-managed Redis) — the operational overhead savings are worth the slightly higher sticker price.
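The tagging convention from the list above takes one command per resource. The tag values here are placeholders:

```shell
# Tag an EC2 instance (the same call works for most EC2-family resources)
aws ec2 create-tags \
  --resources "$INSTANCE_ID" \
  --tags Key=Project,Value=my-project \
         Key=Environment,Value=dev \
         Key=Owner,Value=mayur
```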

AWS has a steep learning curve, but it rewards patience and systematic exploration. The best way to learn is by doing: spin up an EC2 instance, host a static site on S3, write a Lambda function that talks to DynamoDB. Start small, keep billing alerts on, and incrementally add services as your projects demand them. The free tier is genuinely generous enough to build and launch real applications.

In future posts, we will go deeper into VPC networking, Infrastructure as Code with Terraform, containerized deployments with ECS and Fargate, and setting up a production-ready CI/CD pipeline with CodePipeline. Stay tuned.

Tags: AWS, Cloud, Beginners, DevOps, EC2, S3, Lambda
Mayur Dabhi

Full-stack developer and DevOps enthusiast with a passion for cloud infrastructure, clean code, and making complex topics approachable. Writing about AWS, Laravel, Vue.js, and everything in between.