
Complete Guide: Working with Amazon S3 Using AWS CLI

This guide explains how to create and manage S3 buckets and objects using the AWS CLI.


Step 0 — Prerequisites

  1. AWS Account
  2. AWS CLI Installed: https://aws.amazon.com/cli/
  3. Configure AWS CLI:
```bash
aws configure
```
  • Enter your Access Key ID, Secret Access Key, default region name, and default output format (json)
  4. Optional: Python installed if you want to use scripts.

Step 1 — Create an S3 Bucket

```bash
aws s3 mb s3://my-unique-bucket-name --region us-east-1
```
  • my-unique-bucket-name must be globally unique.
  • Verify creation:
```bash
aws s3 ls
```
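Besides being globally unique, bucket names must follow S3's naming rules: 3–63 characters; only lowercase letters, digits, hyphens, and dots; starting and ending with a letter or digit; and not formatted like an IP address. A quick local check can catch an invalid name before `mb` fails — a minimal sketch of the core rules, not an exhaustive validator:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check the core S3 bucket naming rules locally (not exhaustive)."""
    # 3-63 chars; lowercase letters, digits, dots, hyphens;
    # must start and end with a letter or digit
    if not re.fullmatch(r'[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]', name):
        return False
    # Must not look like an IP address (e.g. 192.168.0.1)
    if re.fullmatch(r'(\d{1,3}\.){3}\d{1,3}', name):
        return False
    return True

print(is_valid_bucket_name('my-unique-bucket-name'))  # True
print(is_valid_bucket_name('My_Bucket'))              # False: uppercase and underscore
```

Uniqueness still has to be checked against AWS itself; this only filters out names S3 would reject outright.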

Step 2 — Upload Files to S3

Upload a single file

```bash
aws s3 cp local_file.txt s3://my-unique-bucket-name/
```

Upload a folder recursively

```bash
aws s3 cp ./local-folder s3://my-unique-bucket-name/ --recursive
```

Step 3 — List Objects in a Bucket

```bash
aws s3 ls s3://my-unique-bucket-name/
```
  • To list recursively:
```bash
aws s3 ls s3://my-unique-bucket-name/ --recursive
```
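Each object line in `aws s3 ls` output has the form date, time, size in bytes, then key (non-recursive listings also show `PRE prefix/` lines for folders). If you want to post-process a listing in a script, a small parser is enough — the sample lines below are illustrative, not real output:

```python
def parse_s3_ls(output: str):
    """Parse object lines from `aws s3 ls` output into (size, key) tuples."""
    objects = []
    for line in output.splitlines():
        parts = line.split(None, 3)  # date, time, size, key
        # Skip blank lines and common-prefix lines like "PRE folder/"
        if len(parts) == 4 and parts[2].isdigit():
            objects.append((int(parts[2]), parts[3]))
    return objects

sample = """\
2023-05-01 10:30:00       1024 reports/jan.csv
2023-05-02 11:00:00        512 reports/feb.csv"""

print(parse_s3_ls(sample))  # [(1024, 'reports/jan.csv'), (512, 'reports/feb.csv')]
```

Using `maxsplit=3` keeps keys that contain spaces intact. For anything more robust, prefer `aws s3api list-objects-v2` with `--output json` and parse the JSON instead.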

Step 4 — Download Files from S3

Download a single file

```bash
aws s3 cp s3://my-unique-bucket-name/remote_file.txt ./local_file.txt
```

Download an entire folder

```bash
aws s3 sync s3://my-unique-bucket-name/ ./local-folder
```

Step 5 — Copy or Move Objects

Copy an object within a bucket

```bash
aws s3 cp s3://my-unique-bucket-name/file1.txt s3://my-unique-bucket-name/file2.txt
```

Move an object

```bash
aws s3 mv s3://my-unique-bucket-name/file1.txt s3://my-unique-bucket-name/file2.txt
```

Step 6 — Delete Objects or Buckets

Delete a single object

```bash
aws s3 rm s3://my-unique-bucket-name/file1.txt
```

Delete all objects in a bucket

```bash
aws s3 rm s3://my-unique-bucket-name/ --recursive
```

Delete the bucket (must be empty first)

```bash
aws s3 rb s3://my-unique-bucket-name
```
  • Add --force to delete all objects and then the bucket in one command.

Step 7 — Advanced Tips

  1. Sync Local Folder to S3
```bash
aws s3 sync ./local-folder s3://my-unique-bucket-name/ --delete
```
  • --delete removes files from S3 that are not in the local folder.
  2. Make Objects Public
```bash
aws s3api put-object-acl --bucket my-unique-bucket-name --key file.txt --acl public-read
```
  • Note: this fails if the bucket's Block Public Access settings (enabled by default on new buckets) disallow public ACLs.
  3. Generate Pre-signed URL (temporary access)
```bash
aws s3 presign s3://my-unique-bucket-name/file.txt --expires-in 3600
```
  • --expires-in is in seconds (3600 = 1 hour).
  4. Check Bucket Policies
```bash
aws s3api get-bucket-policy --bucket my-unique-bucket-name
```
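The sync command in tip 1 transfers only what differs between source and destination; with `--delete` it also removes destination objects missing from the source. A toy model of that decision, using plain sets of keys (the real sync also compares size and modification time, which this sketch ignores):

```python
def plan_sync(local_keys: set, remote_keys: set, delete: bool = False):
    """Decide what a key-level sync would do (toy model: presence only)."""
    to_upload = local_keys - remote_keys  # present locally, missing in S3
    # Only --delete removes remote objects absent from the source
    to_delete = (remote_keys - local_keys) if delete else set()
    return to_upload, to_delete

up, rm = plan_sync({'a.txt', 'b.txt'}, {'b.txt', 'old.txt'}, delete=True)
print(up, rm)  # {'a.txt'} {'old.txt'}
```

This makes the danger of `--delete` concrete: anything in the bucket but not in the local folder is slated for removal, so always double-check which directory you are syncing from.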

Step 8 — Using S3 with Lambda (Optional)

  • You can trigger Lambda functions on object creation events.
  • Example Python snippet for Lambda triggered by S3 upload:
```python
import urllib.parse

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys in S3 event notifications are URL-encoded
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print(f'Uploaded file: {key} in bucket: {bucket}')
```
  • Configure the S3 trigger in the Lambda console or in a CloudFormation/SAM template.
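The handler above can be exercised locally with a hand-built event that mimics the S3 notification shape (`Records` → `s3` → `bucket`/`object`). The event below is a minimal fabricated sample, not a real notification, and the handler is a variant that returns what it saw so the result is easy to inspect:

```python
import urllib.parse

def lambda_handler(event, context):
    """Collect (bucket, key) for each record instead of printing."""
    seen = []
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Keys arrive URL-encoded; '+' stands for a space
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        seen.append((bucket, key))
    return seen

# Minimal fabricated event mimicking an s3:ObjectCreated:* notification
event = {'Records': [
    {'s3': {'bucket': {'name': 'my-unique-bucket-name'},
            'object': {'key': 'uploads/report+1.txt'}}}
]}
print(lambda_handler(event, None))  # [('my-unique-bucket-name', 'uploads/report 1.txt')]
```

Testing with a synthetic event like this catches key-decoding bugs before the function is ever wired to a real bucket.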

Step 9 — Summary Workflow

  1. aws s3 mb → Create bucket
  2. aws s3 cp / aws s3 sync → Upload files
  3. aws s3 ls → List objects
  4. aws s3 cp / aws s3 sync → Download files
  5. aws s3 rm / aws s3 rb → Delete objects/buckets
  6. Optional: Configure triggers for Lambda integration

Tips

  • Bucket names must be globally unique.
  • Use --region to control bucket location.
  • sync is safer for folder mirroring than multiple cp commands.
  • Be careful with --delete, as it removes files from S3.