Is ditching your own infrastructure really safe?

This question may sound a bit outdated. We already dropped our own metal boxes sitting in a cupboard at the other end of the office. First we moved to dedicated servers in gated data centers, then to virtual hosts running on machines sealed even tighter. Later, instead of putting things on a server, we decided to wrap them in a sealed container and deliver it to an execution environment somewhere in the cloud. And yet there is one more step to take: kill the infrastructure, operating system included, entirely.

Let’s be realistic. Even the biggest players on the internet don’t keep their data in their own bunkers. They pay providers like AWS, Azure, DigitalOcean or Google to run their servers and expect full security. And, by and large, they get it. The simplest, like DigitalOcean, ensure at least that the machines are kept away from the public, in secure buildings, under guard, with proper failover and monitoring. The more sophisticated, like AWS, build multi-level firewalls with Security Groups or detach your stuff from the network completely with VPNs. Still, there is always an open window. That window is your instance, say an EC2 instance or a Droplet, listening to the network. It listens to the good stuff, but to the bad stuff too. That is your weak link. That is where you get caught.

If you are careless, you likely left FTP running and open, or you use a weak password for your root SSH account. Even if you do things as you should, with access secured by RSA keys and root login disabled, there are still places that can fail. Like your HAProxy, Nginx or Apache. Each of them will show some weakness from time to time, often under the pressure of a DDoS attack or through failures in your PHP/Node/Python/Ruby/Java application.
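
Doing it "as you should" mostly comes down to a few lines in sshd_config. A minimal hardening sketch, assuming a stock OpenSSH daemon:

```
# /etc/ssh/sshd_config (minimal hardening sketch, stock OpenSSH)
PermitRootLogin no          # no direct root logins
PasswordAuthentication no   # key pairs only, nothing guessable
PubkeyAuthentication yes    # RSA (or stronger) key pairs
```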

I will write about the real cost of taking care of all that next time. For now, just believe me: it’s expensive. And often it’s something you would rather not have to care about. You don’t want to worry about what happens in the case of an attack, or when load outruns your servers. What if you succeed (traffic), and then fail because of that success (downtime)?

So now think about a much simpler world. One where you don’t run a big WordPress or Meteor installation. Your application is a single-page JavaScript app, built with whatever you like, served from an S3 bucket. It makes requests to your API, which is mostly API Gateway fronting a set of AWS Lambda functions. Each function is a small program running on request, in a sealed container. You decide which functions have the right to send emails with SES, or to access DynamoDB for reads, writes or both. You don’t have to do all the heavy work the moment a user hits your page. You can generate thumbnails with a delay. A user uploads an image to their profile and sees it; it’s saved; they can move on. In the meantime, the S3 bucket has emitted an event telling some Lambda function that thumbnails are needed. That function fires a few child processes, each creating one high-quality thumbnail and saving it to the bucket. No one can log in to the servers, not even you. What servers? It’s a cloud. No physical form. You log in to your master account with an email, a password and a code from your token or phone. Your colleagues have only limited access to resources; they can’t touch the live database.
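
To make that thumbnail flow concrete, here is a minimal sketch of such a function in Node.js. The `sharp` imaging library, the bucket layout and the widths are my assumptions rather than anything prescribed above, and I parallelize with promises instead of the child processes just described, to keep the sketch short:

```javascript
// Minimal sketch of an S3-triggered thumbnail function (Node.js runtime).
// Assumptions: the aws-sdk v2 that Lambda bundles, plus the `sharp` imaging
// library packaged with the function. Bucket layout and widths are invented.
const AWS = require('aws-sdk');
const sharp = require('sharp');

const s3 = new AWS.S3();
const WIDTHS = [100, 300, 600]; // arbitrary example sizes

exports.handler = async (event) => {
  // S3 delivers one record per uploaded object.
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Fetch the original upload.
    const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

    // Produce one high-quality thumbnail per width, back into the bucket.
    await Promise.all(WIDTHS.map(async (width) => {
      const resized = await sharp(original.Body).resize(width).toBuffer();
      return s3.putObject({
        Bucket: bucket,
        Key: `thumbnails/${width}/${key}`,
        Body: resized,
        ContentType: original.ContentType,
      }).promise();
    }));
  }
};
```

Wire it up by pointing the bucket’s ObjectCreated event notification at the function, and the pipeline runs itself.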

Now let’s see. Is it a safe world?

S3, DynamoDB, Lambda, API Gateway — instanceless, serverless, infinitely scalable

Q: How much data can I store?
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
Q: Is there a limit to how much data I can store in Amazon DynamoDB?
No. There is no limit to the amount of data you can store in an Amazon DynamoDB table. As the size of your data set grows, Amazon DynamoDB will automatically spread your data over sufficient machine resources to meet your storage requirements.
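
In practice you rarely drive Multipart Upload by hand. The SDK’s managed uploader switches to it automatically once the body is large enough; a minimal Node.js sketch, with hypothetical bucket and file names:

```javascript
// Minimal sketch: uploading a large object with the aws-sdk v2 managed
// uploader, which performs Multipart Upload under the hood for big bodies.
// Bucket, key and file names are hypothetical.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();

s3.upload(
  {
    Bucket: 'my-app-assets',            // hypothetical bucket
    Key: 'videos/launch-event.mp4',     // hypothetical key
    Body: fs.createReadStream('./launch-event.mp4'),
  },
  { partSize: 10 * 1024 * 1024, queueSize: 4 }, // 10 MB parts, 4 in parallel
  (err, data) => {
    if (err) throw err;
    console.log('Uploaded to', data.Location);
  }
);
```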

API Gateway throttles traffic; S3, DynamoDB and Lambda handle load perfectly — a DDoS will fail

Q: How can I address or prevent API threats or abuse?
Amazon API Gateway supports throttling settings for each method in your APIs. You can set a standard rate limit and a burst rate limit per second for each method in your REST APIs. Further, Amazon API Gateway automatically protects your backend systems from distributed denial-of-service (DDoS) attacks, whether attacked with counterfeit requests (Layer 7) or SYN floods (Layer 3).
Q: What happens if traffic from my application suddenly spikes?
Amazon S3 was designed from the ground up to handle traffic for any Internet application. Pay-as-you-go pricing and unlimited capacity ensures that your incremental costs don’t change and that your service is not interrupted. Amazon S3’s massive scale enables us to spread load evenly, so that no individual application is affected by traffic spikes.
Q: Is there a limit to how much throughput I can get out of a single table?
No, you can increase the throughput you have provisioned for your table using UpdateTable API or in the AWS Management Console. DynamoDB is able to operate at massive scale and there is no theoretical limit on the maximum throughput you can achieve. DynamoDB automatically divides your table across multiple partitions, where each partition is an independent parallel computation unit. DynamoDB can achieve increasingly high throughput rates by adding more partitions.
If you wish to exceed throughput rates of 10,000 writes/second or 10,000 reads/second, you must first contact Amazon through this online form.
Q: How do I scale an AWS Lambda function?
You do not have to scale your Lambda functions — AWS Lambda scales them automatically on your behalf. Every time an event notification is received for your function, AWS Lambda quickly locates free capacity within its compute fleet and runs your code. Since your code is stateless, AWS Lambda can start as many copies of your function as needed without lengthy deployment and configuration delays. There are no fundamental limits to scaling a function. AWS Lambda will dynamically allocate capacity to match the rate of incoming events.
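
The UpdateTable call mentioned in the DynamoDB answer above is a one-liner from the SDK. A minimal Node.js sketch; the table name and capacity units are hypothetical:

```javascript
// Minimal sketch: raising provisioned throughput on a DynamoDB table via
// the UpdateTable API (aws-sdk v2). Table name and units are hypothetical.
const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB();

dynamodb.updateTable(
  {
    TableName: 'user-profiles',    // hypothetical table
    ProvisionedThroughput: {
      ReadCapacityUnits: 500,      // arbitrary example values
      WriteCapacityUnits: 200,
    },
  },
  (err, data) => {
    if (err) throw err;
    console.log('Table status:', data.TableDescription.TableStatus);
  }
);
```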

Do I really have to say more?

But remember: it all depends on how you write your function code and how you set up roles and permissions. Still, that leaves you far less to take care of.
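
For instance, the thumbnail function sketched earlier needs nothing beyond read and write on a single bucket. Here is a minimal sketch of attaching such a least-privilege inline policy to its execution role; the role, policy and bucket names are hypothetical:

```javascript
// Minimal sketch: a least-privilege inline policy for a Lambda execution
// role (aws-sdk v2). Role, policy and bucket names are hypothetical.
const AWS = require('aws-sdk');

const iam = new AWS.IAM();

const policy = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['s3:GetObject', 's3:PutObject'], // read originals, write thumbnails
      Resource: 'arn:aws:s3:::my-app-assets/*', // one bucket, nothing else
    },
  ],
};

iam.putRolePolicy(
  {
    RoleName: 'thumbnail-lambda-role', // hypothetical execution role
    PolicyName: 'thumbnail-s3-access',
    PolicyDocument: JSON.stringify(policy),
  },
  (err) => {
    if (err) throw err;
    console.log('Policy attached');
  }
);
```

If a function never asks for DynamoDB or SES, don’t grant them; a leaked function credential then buys an attacker almost nothing.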

Sources
 API Gateway https://aws.amazon.com/api-gateway/faqs/
 DynamoDB https://aws.amazon.com/dynamodb/faqs/
 Lambda https://aws.amazon.com/lambda/faqs/
 S3 https://aws.amazon.com/s3/faqs/
