How to delete a company's entire progress with one "rm" command in an AWS S3 bucket

This could have been a comprehensive overview of electric skateboards.

BUT, something went wrong…

It all started quite simply. I read a story about an interesting vulnerability in an AWS S3 bucket.
That story: How I pwned a million dollar company (hey Sriram)
Everything in that story was clear and detailed, but a small misunderstanding kept sitting in my head. As someone said, "Until you try it, you will not understand."
I was lucky enough to test my knowledge on my office project. The project passed all the tests: the access levels were set up correctly. But that was not enough for me. I wanted to find something really interesting.
Well, I think you can guess that I did find it. Just in another project.

A couple of months ago, I was thinking about buying a new electric skateboard. While deciding, I started researching the various applications that different companies write for these boards. That research led almost nowhere: several companies had serious problems, but all my messages were ignored. In the end, I never did buy an electric skateboard.

And then I discovered that one of the applications used an AWS S3 bucket. At that point, I still did not know how dangerous a misconfigured bucket could be.
I dug up my old logs, found the name of the S3 bucket I was interested in, and went hacking.

The bucket in question was open for listing, i.e. you could walk through its directories and view the contents. That might seem harmless if you only keep static assets for your application or web page (logos, pictures, text…). But what if you keep firmware updates for a high-speed vehicle like an electric skateboard there? It looks strange. After all, it really is strange to keep unstable update builds in public, complete with developer comments warning that a given update could be dangerous.
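For illustration, here is roughly what the listing check looks like with the AWS CLI. The bucket name is a hypothetical stand-in (the real one is withheld), and this sketch only prints the command instead of running it:

```shell
# Hypothetical bucket name -- the real one is withheld.
BUCKET="skate-updates"

# --no-sign-request sends the request anonymously, with no credentials at all.
# If the bucket policy allows public listing, this prints the whole directory tree.
LIST_CMD="aws s3 ls s3://$BUCKET/ --recursive --no-sign-request"
echo "$LIST_CMD"
```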

One complication was that S3 buckets can live in different regions, and to address a particular bucket you need to know not only its exact name but also its region. I managed to solve this by simply guessing the region until the listing worked. The list of regions was taken from here: http://docs.aws.amazon.com/general/latest/gr/rande.html
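The guessing itself can be sketched as a plain loop over the region list. The bucket name is hypothetical, the region list is abridged, and the commands are only printed here rather than executed:

```shell
# Hypothetical bucket name; region list abridged from the AWS endpoints page.
BUCKET="skate-updates"

for REGION in us-east-1 us-west-1 us-west-2 eu-west-1 eu-central-1 ap-southeast-1; do
  # The region whose listing succeeds (instead of returning a redirect error)
  # is the one the bucket actually lives in.
  echo "aws s3 ls s3://$BUCKET/ --region $REGION --no-sign-request"
done
```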

Anyway, let's get to the most dangerous part of this story. When I tried to write a new test file (with the cp command) into one of the project's directories, I received a 403 Forbidden response.

Access denied. Then I decided to try moving a file between directories. I did this with the copy command cp (but mv also worked), testing it on an ordinary Readme.txt that was sitting in one of the folders.
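In CLI terms, the two experiments looked roughly like this. The bucket and folder names are hypothetical stand-ins, the commands are printed rather than executed, and why the policy denied fresh uploads yet allowed copies within the bucket is only my guess, since I never saw the actual policy:

```shell
BUCKET="skate-updates"   # hypothetical name

# 1) Uploading a brand-new object was denied (403 Forbidden):
UPLOAD_CMD="aws s3 cp test.txt s3://$BUCKET/test.txt"

# 2) ...yet copying an existing object between folders went through
#    ("docs" and "backup" are made-up folder names):
COPY_CMD="aws s3 cp s3://$BUCKET/docs/Readme.txt s3://$BUCKET/backup/Readme.txt"

# mv behaves like cp but also removes the source object:
MOVE_CMD="aws s3 mv s3://$BUCKET/docs/Readme.txt s3://$BUCKET/backup/Readme.txt"

echo "$UPLOAD_CMD"
echo "$COPY_CMD"
echo "$MOVE_CMD"
```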

And just like that, there were two Readme.txt files in different folders. Wow! At that moment, you could already cause chaos and confusion simply by moving and copying files.
And what about deletion? I tried it with the rm command.

And it worked too! I was shocked. Readme.txt was deleted!
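This is what makes the title literal: the same rm that removed one Readme.txt takes a --recursive flag, and with it a single command can wipe the whole bucket. Again, the bucket name is hypothetical and the commands are only printed, not run:

```shell
BUCKET="skate-updates"   # hypothetical name

# Deleting one object:
RM_ONE="aws s3 rm s3://$BUCKET/Readme.txt"

# Deleting everything the bucket holds, in one command:
RM_ALL="aws s3 rm s3://$BUCKET/ --recursive"

echo "$RM_ONE"
echo "$RM_ALL"
```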

The company wrote a good native app. They built a good electric skateboard. And yet anyone could instantly take it all down with a few "good" commands in their S3 bucket.

It took a lot of effort to find and reach someone who would take note of this issue. Of course, there was no question of a reward. I am not made of money.

With this example, I managed to put my new knowledge into practice, and got an excellent result in the form of the vulnerability I found.
Clap your hands if you liked this story :)