Complete Guide: Working with Amazon S3 Using AWS CLI
This guide explains how to create and manage S3 buckets and objects using the AWS CLI.
Step 0 — Prerequisites
- AWS Account
- AWS CLI Installed: https://aws.amazon.com/cli/
- Configure AWS CLI:
```bash
aws configure
```
- Enter your Access Key ID, Secret Access Key, default region, and default output format (for example, `json`).
- Optional: Python installed if you want to script S3 operations (the `boto3` sketches below assume it).
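If you take the Python route, a quick way to confirm that `boto3` picks up the credentials written by `aws configure` is a short identity check; a minimal sketch:
```python
import boto3

# boto3 reads the same credentials that `aws configure` writes
sts = boto3.client('sts')
print('Authenticated as:', sts.get_caller_identity()['Arn'])
```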
Step 1 — Create an S3 Bucket
```bash
aws s3 mb s3://my-unique-bucket-name --region us-east-1
```
- `my-unique-bucket-name` must be globally unique.
- Verify creation:
```bash
aws s3 ls
```
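As an alternative to the CLI, the same bucket can be created from Python with `boto3`; a minimal sketch, with the bucket name and region as placeholders (note the `LocationConstraint` quirk):
```python
import boto3

bucket = 'my-unique-bucket-name'  # placeholder
region = 'eu-west-1'              # placeholder

s3 = boto3.client('s3', region_name=region)
# Outside us-east-1 the region must also be passed as a LocationConstraint;
# for us-east-1, omit CreateBucketConfiguration entirely.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={'LocationConstraint': region},
)
```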
Step 2 — Upload Files to S3
Upload a single file
```bash
aws s3 cp local_file.txt s3://my-unique-bucket-name/
```
Upload a folder recursively
```bash
aws s3 cp ./local-folder s3://my-unique-bucket-name/ --recursive
```
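The same uploads can be scripted with `boto3`; a sketch with placeholder file, folder, and bucket names, including a rough equivalent of `--recursive`:
```python
import pathlib
import boto3

s3 = boto3.client('s3')
bucket = 'my-unique-bucket-name'  # placeholder

# Single file; upload_file switches to multipart uploads for large files
s3.upload_file('local_file.txt', bucket, 'local_file.txt')

# Rough equivalent of --recursive: walk the folder and upload each file
root = pathlib.Path('local-folder')
for path in root.rglob('*'):
    if path.is_file():
        s3.upload_file(str(path), bucket, path.relative_to(root).as_posix())
```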
Step 3 — List Objects in a Bucket
```bash
aws s3 ls s3://my-unique-bucket-name/
```
- To list recursively:
```bash
aws s3 ls s3://my-unique-bucket-name/ --recursive
```
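For scripted listings, note that the underlying `ListObjectsV2` API returns at most 1,000 keys per call, so `boto3` code should use a paginator; a minimal sketch with a placeholder bucket name:
```python
import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# The paginator follows continuation tokens for you
for page in paginator.paginate(Bucket='my-unique-bucket-name'):
    for obj in page.get('Contents', []):
        print(obj['Key'], obj['Size'])
```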
Step 4 — Download Files from S3
Download a single file
```bash
aws s3 cp s3://my-unique-bucket-name/remote_file.txt ./local_file.txt
```
Download an entire folder
```bash
aws s3 sync s3://my-unique-bucket-name/ ./local-folder
```
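The `boto3` counterpart of a single-file download is `download_file`; a minimal sketch with placeholder names:
```python
import boto3

s3 = boto3.client('s3')
# Arguments are (bucket, key, local_path)
s3.download_file('my-unique-bucket-name', 'remote_file.txt', 'local_file.txt')
```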
Step 5 — Copy or Move Objects
Copy an object within a bucket
```bash
aws s3 cp s3://my-unique-bucket-name/file1.txt s3://my-unique-bucket-name/file2.txt
```
Move an object
```bash
aws s3 mv s3://my-unique-bucket-name/file1.txt s3://my-unique-bucket-name/file2.txt
```
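In `boto3` the same operations look like the sketch below (placeholder names); it also makes explicit that S3 has no native move, only copy plus delete:
```python
import boto3

s3 = boto3.client('s3')
bucket = 'my-unique-bucket-name'  # placeholder

# Server-side copy; data never passes through the client.
# (copy_object is limited to 5 GB per object; s3.copy() handles larger ones.)
s3.copy_object(
    CopySource={'Bucket': bucket, 'Key': 'file1.txt'},
    Bucket=bucket,
    Key='file2.txt',
)

# S3 has no native rename/move: a "move" is a copy followed by a delete
s3.delete_object(Bucket=bucket, Key='file1.txt')
```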
Step 6 — Delete Objects or Buckets
Delete a single object
```bash
aws s3 rm s3://my-unique-bucket-name/file1.txt
```
Delete all objects in a bucket
```bash
aws s3 rm s3://my-unique-bucket-name/ --recursive
```
Delete the bucket (must be empty first, or pass `--force` to empty and delete it in one step)
```bash
aws s3 rb s3://my-unique-bucket-name
```
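Scripted cleanup with `boto3` follows the same order, empty first, then remove; a sketch with a placeholder bucket name:
```python
import boto3

s3 = boto3.client('s3')
bucket = 'my-unique-bucket-name'  # placeholder

# Empty the bucket first. delete_objects accepts up to 1,000 keys per call,
# which matches the paginator's default page size.
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    contents = page.get('Contents', [])
    if contents:
        s3.delete_objects(
            Bucket=bucket,
            Delete={'Objects': [{'Key': obj['Key']} for obj in contents]},
        )

# On versioned buckets, old object versions must be deleted as well
s3.delete_bucket(Bucket=bucket)
```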
Step 7 — Advanced Tips
- Sync Local Folder to S3
```bash
aws s3 sync ./local-folder s3://my-unique-bucket-name/ --delete
```
`--delete` removes files from S3 that are not in the local folder.
- Make Objects Public
```bash
aws s3 cp s3://my-unique-bucket-name/file.txt s3://my-unique-bucket-name/file.txt --acl public-read
```
This copies the object onto itself with a new ACL. Note that newly created buckets disable ACLs and block public access by default, so this only works if the bucket's Object Ownership and Block Public Access settings permit public ACLs.
- Generate Pre-signed URL (temporary access; a boto3 equivalent is sketched after this list)
```bash
aws s3 presign s3://my-unique-bucket-name/file.txt --expires-in 3600
```
- Check Bucket Policies
```bash
aws s3api get-bucket-policy --bucket my-unique-bucket-name
```
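For programmatic temporary access, `boto3` offers `generate_presigned_url`, matching the `presign` command above; a minimal sketch with placeholder names:
```python
import boto3

s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-unique-bucket-name', 'Key': 'file.txt'},
    ExpiresIn=3600,  # seconds, matching --expires-in 3600 above
)
print(url)
```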
Step 8 — Using S3 with Lambda (Optional)
- You can trigger Lambda functions on object creation events.
- Example Python snippet for Lambda triggered by S3 upload:
```python
import urllib.parse

def lambda_handler(event, context):
    # Each record describes one S3 event (e.g., an object upload)
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded in the notification payload
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print(f'Uploaded file: {key} in bucket: {bucket}')
```
- Configure the S3 trigger in the Lambda console or in a CloudFormation/SAM template. A quick local smoke test is sketched below.
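To sanity-check the handler before wiring up a real trigger, it can be invoked locally with a hand-built event that mimics the S3 notification shape (assuming `lambda_handler` from the snippet above is in scope; the bucket and key are placeholders):
```python
# Trimmed-down fake event; real S3 notifications carry many more fields
test_event = {
    'Records': [
        {'s3': {'bucket': {'name': 'my-unique-bucket-name'},
                'object': {'key': 'uploads/local_file.txt'}}}
    ]
}
lambda_handler(test_event, None)
```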
Step 9 — Summary Workflow
- `aws s3 mb` → Create bucket
- `aws s3 cp` / `aws s3 sync` → Upload files
- `aws s3 ls` → List objects
- `aws s3 cp` / `aws s3 sync` → Download files
- `aws s3 rm` / `aws s3 rb` → Delete objects/buckets
- Optional: Configure triggers for Lambda integration
✅ Tips
- Bucket names must be globally unique.
- Use `--region` to control bucket location.
- `sync` is safer for folder mirroring than multiple `cp` commands.
- Be careful with `--delete`, as it removes files from S3.