Dale Bingham
Jan 6

In my first post, I documented how I used static files (HTML, JS, CSS) in an AWS S3 bucket fronted by AWS CloudFront to create our https://www.cingulara.com/ website. That was not good enough, as there were still some manual steps in there. And I work very hard to be lazy! (OK, automating is not laziness; it just removes non-value-added work so I can focus on harder problems. But it is work to set up.) So I used GitHub to store my files, a trigger in CodeShip so a push to my master branch automatically updates S3, "build started" and "build succeeded" notifications that post to a #website channel in my company Slack, and a separate AWS Lambda function linked to any change in S3 that emails me the file that changed.

Why an email as well as Slack? Because Slack is only updated when the correct process is followed through the automation. The Lambda function is triggered on ANY change in S3, so if someone (later on in life) tries to update S3 directly, a notification goes out for that as well. People like to circumvent automation when they think they can do it faster and better by hand. But I am a process guy, so I want to know about every change. That is why I am using AWS Lambda.

My initial setup for my company website

Do not get me wrong: I set up the S3 bucket and CloudFront first to ensure the website was working correctly. Then I did the automation to keep it updated, so I can do one thing and trigger every other thing the way it should be. Automation in any SDLC is our thing, so I had better be "dogfooding" this, right?! Right. So I am. This is what we did to automate our website updates with proper configuration management and a defined process. You can do it this way or a similar way (GitLab or Bitbucket, AWS-native automation tools, Jenkins, Travis CI, etc.). I just decided on this way because it was simple and worked. No need for extra complexity.

Step 1: Get your GitHub repo set up for your website

You can do this with any hosted Git repository, really. I have used GitHub and like it, so I put my corporate stuff there. I pay for the account, so I am using a private repo for the website code and another private repo for the Lambda emailer code. When you have an account, set up your repo and put your files in it. You can use whatever branching strategy you wish. I have a master branch that is the "source of truth" for the website files, and I trigger off a change to that master branch to make this automation happen. What I found out is that with CodeShip, if I put all the files in the root of my repo, then when I later copy all the files to S3 it includes the ".git" folder because I give a root-relative path. So what I did is make a "wwwroot" folder in my repo and put all the website files in there. Then I use that wwwroot path in my configuration (shown later) so only the website files get copied.

My simple repo with a wwwroot folder where all my main files are
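In plain text, that layout is essentially the following (the file names here are just examples, not my actual files):

```
my-website-repo/
  README.md
  wwwroot/
    index.html
    css/site.css
    js/site.js
    img/logo.png
```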

Step 2: Set up a user to connect to your S3 bucket from CodeShip

For security's sake you will want a least-privilege user in AWS to connect from CodeShip to S3, scoped to just your bucket and just to copying files into it. You can play with the settings once you get this working and tighten down the inline permissions for the user. I will copy the JSON of the policy I am using below to get you started, and you can tweak it as you like from there. If something does not work when you tighten security, loosen it a bit until it is right.

In the IAM section of AWS (I apologize for all these damn acronyms!!) create a new user just for this integration and name it accordingly, so you know it is only for this integration. You can do an inline policy just for this user or, if you think you will use it in other places, make a custom managed policy you can reuse. Add whatever tags you wish to this user as well. The policy will look similar to the one below.

My inline IAM policy for my integration user
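The screenshot above shows my inline policy. If you want text to start from, a policy along these lines is close to what I described; swap in your own bucket name for the placeholder, and remember to tighten it further once everything works:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingTheWebsiteBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::your-website-bucket"
    },
    {
      "Sid": "AllowCopyingFilesIntoTheBucket",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-website-bucket/*"
    }
  ]
}
```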

Now that you have those permissions, finish creating the user and make sure you download the credentials CSV with the access key and secret key you will need in the CodeShip step just below.

Step 3: Get your CodeShip project set up

So now that your repo is set up and your integration user is set up in AWS with least privilege, sign up for or log into CloudBees CodeShip (there is a free tier) so you can create a new project. Choose GitHub, Bitbucket, or GitLab, either as a SaaS setup or a self-hosted setup. If you do not see the one you use, you can contact their support engineers from that page as well. For this I am using GitHub. If you have not yet done so, install the CodeShip GitHub app and follow the directions to link your corporate or personal GitHub account to CodeShip. Once done, you can select your GitHub organization and the repository you wish to link.

CodeShip new project setup

Click the Connect button and then choose Basic unless you wish to pay for Pro. I would suggest getting started with Basic and bumping up to Pro when you want more control or specifics. The typical "open core" type of model, IMO. Once you select the Basic button you see your CodeShip project settings. You can specify tests, branches, environment variables, and notifications, as well as the repository settings. In the CodeShip Deploy settings I set the pipeline to run when "Branch is exactly" "master", so it only runs for the master branch. Set this to however you wish to use your automation. There is no wrong choice; over time you will see some best practices bubble up for you and your group.

I also set up my S3 interaction to copy files from the "wwwroot" folder of the GitHub master branch into my S3 bucket, using the access key and secret access key mentioned above in my deploy settings for S3. The Local Path is the path within the repo that I talked about earlier; if you do not set it, you may get the ".git" folder copied in there without some other settings or exclusions. Making the website files live in their own folder was simple and worked for me, so that is what I did (KISS method). The S3 bucket in this form is just the name that is listed when you list all your S3 buckets: the regular name, not the full URL or the ARN.

CodeShip S3 deploy settings
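To give you a feel for what that deploy step is doing, it is roughly equivalent to running this AWS CLI command yourself (the bucket name and region are placeholders, and CodeShip may not use this exact command under the hood):

```bash
# sync the wwwroot folder from the checked-out repo into the website bucket
aws s3 sync wwwroot/ s3://your-website-bucket/ --region us-east-1
```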

Step 4: Get your Slack integration with CodeShip set up

The other thing I did, on the Notifications tab of CodeShip, is link it into my Slack channel. On the CodeShip Notifications tab in the project settings you can add a New Notification and choose the Slack option. Follow the settings in there to approve and apply the integration and specify the branch, build events, and other information you wish. The Webhook URL from the Slack setup needs to be copied into your CodeShip settings when done. I chose a "#website" channel in my Slack setup so it is separate from the rest of the channels in there like #twitter, #work-schedule, #github-notifications, etc. Do what you want here; just get it set up how you wish.

Slack integration with CodeShip

When this is done you will see notifications in your Slack channel like the one below, letting you know when things happen. This may not be a big deal if you are a company of 1 to 5 people. When you get to delegating across 25+ people, it is easier to notify like this and keep things separate so you can keep it all straight!

Slack channel with info on CodeShip running successfully

If that is all you want, save your CodeShip project settings and go test it out! Edit your website code locally and do a push and a PR, or (just for testing purposes!!!) edit a file directly in GitHub and commit it with a good comment. Let it rip and see what you get! If something does not work, the Slack messages as well as the CodeShip logs and AWS should show you information you can trace. Otherwise you are good to go! This is not super complex, but it does have some moving parts. The great news is, once it is set up and working you can trust your automation! If, like me, you want a bit more interaction, you can also set up an AWS Lambda function that is triggered on any S3 change to send you an email. We will do that next. If you do not want to do that, you are done! Or you can just read on to get a feel for how Lambda can work for you.

Step 5: Get the AWS Lambda function for email generated and uploaded

OK, if you want to set up a Lambda function there are a few more things to do. One is to set up a local directory on your machine for the Node.js code. If you do not have Node.js, go install it, or read this and adapt it for your favorite Lambda language (Go, C#, etc.). Go into the directory and run "npm i nodemailer", as we use that module. Now create an index.js file and make it like the one below. I did not want to copy my exact text in here; I want you to get used to making these and understand how they work.

AWS Lambda index.js code for email
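That said, if you want a starting point, a minimal index.js for this kind of S3-triggered emailer could look something like the sketch below. This is not my exact code; the from/to addresses and the subject are placeholders, and it assumes the SES setup described in the note further down.

```javascript
// index.js - minimal sketch of an S3-triggered emailer using nodemailer + SES
const aws = require('aws-sdk');
const nodemailer = require('nodemailer');

// Send through SES; the Lambda role's SES permissions cover the actual send.
const transporter = nodemailer.createTransport({
  SES: new aws.SES({ apiVersion: '2010-12-01' })
});

exports.handler = async (event) => {
  // The S3 trigger passes the bucket and object key of the change in the event.
  const record = event.Records[0];
  const bucket = record.s3.bucket.name;
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

  const info = await transporter.sendMail({
    from: 'alerts@example.com', // placeholder - must be an SES-verified address
    to: 'me@example.com',       // placeholder - where you want the notification
    subject: `S3 change detected: ${bucket}/${key}`,
    text: `Event ${record.eventName} on ${bucket}/${key} at ${record.eventTime}`
  });

  console.log('Email sent:', info.messageId);
  return info.messageId;
};
```

If you want the changed file itself in the email, like I mentioned at the top, you can also pull the object with the S3 SDK and add it as an attachment in sendMail.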

When you have this set up and working locally, similar to how I did it, you need to go into the folder on your machine and zip up the contents. DO NOT zip up the folder itself. Go into the folder and zip up the contents, or this will not work! Been there. Done that. Burned the T-shirt. In the Lambda console, specify the Runtime as Node.js 8.10 and the Handler as index.handler.
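On macOS or Linux that looks roughly like this (the folder and zip file names are just examples):

```bash
cd my-lambda-emailer             # example folder containing index.js and node_modules
zip -r ../lambda-emailer.zip .   # zip the CONTENTS from inside the folder, not the folder itself
```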

You will need to specify the trigger (event) that kicks this off and choose S3. Find the S3 website bucket we used earlier for your code, specify the events you wish to trigger on (create, put, delete, etc.), and enable it. There are also a few permissions this Lambda function needs; I used AmazonS3FullAccess, AmazonSESFullAccess, and CloudWatchLogsFullAccess. I may also use SNS if I want notifications about this Lambda function itself, just in case it does not work. You can test and then restrict the access and permissions to tweak this toward a least-privilege user as well. I suggest you do that to (1) tighten down security and (2) learn how to do it in AWS. Experience is the best teacher.

Note: To send an email through AWS Lambda like this you must go to https://console.aws.amazon.com/ses/home and verify an email address or domain. Do this for sure. Also, if you want to use this as a real production step, you will want to request to get out of sandbox mode for AWS SES (Simple Email Service). The SES homepage has information on how to do that. I will not go into doing that; I just suggest you do it.
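If you prefer the CLI over the console for the verification step, the equivalent is a one-liner (the address is a placeholder):

```bash
aws ses verify-email-identity --email-address me@example.com
```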

You can test this by using a sample test payload. This page in the AWS Lambda docs is very helpful. Go to "To test the Lambda function" and see the payload. You can save it and test your function to see the logs as well as the email response once your SES is set up correctly. Pretty cool stuff when you see this working! I breezed through this step here, and I assure you it took me several hours over a few days to get it working the way I wanted it to. I will probably go back and revisit this, but for now it is working and good enough to share.
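For reference, a trimmed-down version of that S3 put test event looks something like this; the bucket and key are placeholders, and the full sample in the AWS docs has more fields:

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "eventTime": "2019-01-01T00:00:00.000Z",
      "s3": {
        "bucket": {
          "name": "your-website-bucket",
          "arn": "arn:aws:s3:::your-website-bucket"
        },
        "object": {
          "key": "index.html",
          "size": 1024
        }
      }
    }
  ]
}
```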

Step 6: The ending architecture (for now)

This is what we ended up with, compared to the initial setup at the top of this post. Congratulations! I hope you learned something. I certainly did.

Flow of automation with the corporate website

Dale Bingham

Written by

CTO of Cingulara. Software Geek by trade. Father of three daughters. Husband. Lover of newer tech where it fits. Follow at https://www.cingulara.com/ @cingulara
