AWS Data Pipeline for DynamoDB Backup to S3 — a Tiny Demonstration

Step 1: Create a DynamoDB Table and Populate it

region=us-east-1
aws dynamodb --region $region create-table \
--table-name DataPipeLineDemo \
--attribute-definitions \
AttributeName=Artist,AttributeType=S \
--key-schema AttributeName=Artist,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
aws dynamodb --region $region wait table-exists --table-name DataPipeLineDemo
aws dynamodb --region $region put-item \
--table-name DataPipeLineDemo \
--item '{ "Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"} }' \
--return-consumed-capacity TOTAL
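
As an optional sanity check (not part of the original walkthrough), you can read the item back by its partition key to confirm it was written:

aws dynamodb --region $region get-item \
--table-name DataPipeLineDemo \
--key '{ "Artist": {"S": "Acme Band"} }'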

Step 2: Create an S3 Bucket

aws s3 --region $region mb s3://datapipelinedemo-sree
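
If you want to confirm the bucket exists before pointing the pipeline at it, a quick listing is enough (an optional check, not in the original post):

aws s3 ls s3://datapipelinedemo-sree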

Cleaning Up
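
When you are done experimenting, the demo resources can be torn down with the commands below. This is a minimal sketch covering only the table and bucket created above; remember to also delete any pipeline you created so it stops running.

# Delete the demo table and wait until it is gone
aws dynamodb --region $region delete-table --table-name DataPipeLineDemo
aws dynamodb --region $region wait table-not-exists --table-name DataPipeLineDemo
# Remove the bucket along with any backup output stored in it
aws s3 rb s3://datapipelinedemo-sree --force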
