Energetiq’s experience with Amazon’s quirky new machine-learning offering

A couple of other members of the Energetiq team and I recently made the long journey from Melbourne to Las Vegas to attend AWS re:Invent 2018. One odd little announcement worth spending some time on is AWS DeepRacer and the associated AWS DeepRacer League.

AWS DeepRacer, a new machine-learning offering for… fun?

AWS DeepRacer (“deep” being a reference, I imagine, to deep learning, the family of machine learning that includes modern computer vision) consists of an autonomous “toy” car and a collection of associated cloud tools for training models to drive it using simulations and reinforcement learning. Or as the product page puts it:

“AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and global racing league.” …


Using CloudWatch Events, Lambda, and SSM

Recently I have been experimenting with AWS CodeBuild as an alternative to our aging Jenkins-based CI/CD platform. Overall, it’s a fantastic platform; I’m a big fan. One glaring omission, however, is a built-in mechanism for a build number that automatically increments on each build. Jenkins exposes its build number as an environment variable, which is useful information to include in a versioning scheme.

Let’s build a similar system using the AWS ecosystem; it’ll look something like this:

An auto-incrementing build number for AWS CodeBuild using Parameter Store, Lambda, and CloudWatch

We’ll store our build numbers for each project in AWS Systems Manager Parameter Store (SSM), as CodeBuild has a built-in integration to auto-populate environment variables with values stored there. Then we’ll use CloudWatch Events to listen in on CodeBuild build events and invoke a Lambda function. Our Lambda will simply extract the project name from the build event, then look up our existing build number and increment it. On our next build, CodeBuild will pull down the new value. …
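On the CodeBuild side, the wiring is tiny. Here’s a minimal buildspec sketch, assuming a parameter named /build-numbers/my-project (the parameter name is my own convention, not something CodeBuild requires):

# buildspec.yml (sketch)
version: 0.2

env:
  parameter-store:
    # CodeBuild resolves this from SSM Parameter Store before the build starts
    BUILD_NUMBER: /build-numbers/my-project

phases:
  build:
    commands:
      # from here the build number is just another environment variable
      - echo "Building version 1.0.${BUILD_NUMBER}"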


Tips on integrating the AWS CLI when Ansible modules are letting you down

If you have picked up Ansible as a tool for managing your AWS cloud environments, then I know how it’s going. Things are going great. Ansible’s rich library of modules for AWS (159 at last count) is enabling you to bash out playbooks for bits of your stack at an alarming rate: EC2, DynamoDB, S3, Route 53, you’ve got it all. You are swimming in idempotent automation that makes your job a breeze. Life is good.

Here’s a quick example Ansible playbook to illustrate the idempotency of the s3_bucket module (a sketch: the bucket name and region are placeholders):
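# playbook.yml (sketch)
- hosts: localhost
  connection: local
  tasks:
    - name: Ensure our bucket exists
      s3_bucket:
        name: my-example-bucket
        region: ap-southeast-2
        state: present

Run it twice: the first run creates the bucket and reports “changed”, the second just reports “ok”.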
Lots of Ansible modules are idempotent, able to handle being run from whatever state things are in

That is, until you need to build something you don’t have a module for. For example: your team is building a new service that leverages Aurora clusters. Time for some more automation. You pull up your trusty list of Ansible modules… Hmm. …
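When the module list comes up empty, the approach I reach for is wrapping the AWS CLI in Ansible’s command module, with a describe call first so the play stays idempotent. A sketch, with placeholder identifiers, where the variables are assumed to come from your Ansible configuration:

- name: Check whether the Aurora cluster already exists
  command: aws rds describe-db-clusters --db-cluster-identifier my-aurora-cluster
  register: describe_cluster
  failed_when: false    # a non-zero exit here just means "not found"
  changed_when: false

- name: Create the Aurora cluster if it doesn't exist
  command: >
    aws rds create-db-cluster
    --db-cluster-identifier my-aurora-cluster
    --engine aurora-mysql
    --master-username "{{ aurora_username }}"
    --master-user-password "{{ aurora_password }}"
  when: describe_cluster.rc != 0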


Simple analysis of CSV data in S3 with serverless SQL queries

Edit: by request of Data Victoria, links to the data discussed in this article have been removed.

As a Melbourne resident and daily commuter on our Myki public transport fare system (no comment), I was intrigued when I heard the dataset for the Melbourne Datathon 2018 was to be large-scale, real-world Myki usage data. What cool insights can we glean about how our bustling city uses its public transport network? Let’s find out! Best of all, we’ll check it out without transforming it from CSV, or even moving it out of S3.

Here are a couple of quick stats I gleaned from this 1.8 billion row dataset, with SQL queries that run in seconds, for much less than the cost of a cup of…


Using Logentries’ REST API to avoid manually managing logsets and logs

In some Docker Compose-based services I administer, I use Logentries to aggregate the log output from our containers. The token for the Logentries log is provided to the agent on the command line from the environment, something like this:
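# docker-compose.yml (a sketch: I'm assuming the logentries/docker-logentries
# agent image here; service and variable names are illustrative)
version: "3"
services:
  logentries:
    image: logentries/docker-logentries
    # the agent reads container output from the Docker socket and forwards
    # it to the log identified by this token
    command: -t ${LOGENTRIES_VERY_IMPORTANT_SERVICE}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock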

The LOGENTRIES_VERY_IMPORTANT_SERVICE environment variable is then populated through some Ansible we have. This approach works quite nicely, but leaves us with the burden of creating and naming new logs in Logentries when we deploy new instances of services, as well as transcribing the tokens for these new logs into our Ansible configuration. Lame. 🙅

In this article we’ll put together some Ansible tasks that will leverage the Logentries REST API to create a defined logset and list of logs if they don’t exist, and/or retrieve the tokens for those logs — ready to be plugged into something like the Docker Compose environment situation described above. …
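As a taste, here’s a sketch of one such task using Ansible’s uri module. The endpoint and payload shape are assumptions drawn from the Logentries management API, so double-check them against the docs:

- name: Ensure our logset exists
  uri:
    url: https://rest.logentries.com/management/logsets
    method: POST
    headers:
      x-api-key: "{{ logentries_api_key }}"
    body_format: json
    body:
      logset:
        name: "{{ logset_name }}"
    status_code: 201
  register: create_logset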


Managing Amazon’s fully-managed relational database service

In this article we will use Ansible to automate the configuration of Amazon Aurora managed databases. If you’re not sure what you’re doing here, maybe peek at the introduction, and take note that the automation here builds in part on a previous article about building a VPC. The scope of the automation will handily build the following (a quick sketch follows the list):

  • Subnet and cluster parameter group: define the VPC subnets that our databases will live in, and a default MySQL 5.7 parameter group for our cluster to use
  • Aurora cluster: configure an Aurora cluster, with settings provided by Ansible…
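As a taste of the first item, here’s a sketch using the rds_subnet_group module (the names and subnet IDs are placeholders):

- name: Ensure the DB subnet group for our cluster exists
  rds_subnet_group:
    state: present
    name: my-aurora-subnets
    description: Private subnets for our Aurora cluster
    region: ap-southeast-2
    subnets:
      - "{{ private_subnet_a_id }}"
      - "{{ private_subnet_b_id }}"
      - "{{ private_subnet_c_id }}"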


No-fuss AWS-managed Elastic clusters

In this article we look at using Ansible for automating the configuration of AWS-managed Elasticsearch clusters in Amazon’s Elasticsearch Service. If you’re not sure what you’re doing here, maybe peek at the introduction, and take note that the automation here builds in part on the previous article about building a VPC. The scope of the automation will handily build the following (a quick sketch follows the list):

  • IAM Role for AWS ES: a service-linked IAM role that AWS Elasticsearch Service requires to operate
  • Elasticsearch cluster: configure the cluster itself, inside a VPC, with settings provided by Ansible configuration
  • Route 53 DNS entry for the cluster: configure a more friendly CNAME record that will point to our new cluster…
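For a taste of that last item, here’s a sketch of the friendly CNAME using the route53 module (the zone and record names are placeholders):

- name: Friendly CNAME pointing at our new Elasticsearch endpoint
  route53:
    state: present
    overwrite: true    # keep the record in sync if the endpoint changes
    zone: internal.example.com
    record: search.internal.example.com
    type: CNAME
    ttl: 300
    value: "{{ elasticsearch_domain_endpoint }}"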


Networking on AWS made easy!

In this article we’re looking at using Ansible for automating the configuration of cloud networking in an AWS VPC. If you’re not sure what you’re doing here, maybe peek at the introduction. The scope of the automation we will build will handily configure all of the following (a quick sketch follows the list):

  • VPC and Subnets: create the VPC itself, along with public and private subnets across three availability zones: a, b, and c
  • Internet Gateway: configure a gateway for our public resources to access the internet
  • NAT Gateway: configure a NAT gateway to allow our private resources to access the public internet
  • VPC Route Tables: define the routing to make our subnets public or private (route public traffic using either the public or NAT…
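As a taste of the first item, here’s a sketch using the ec2_vpc_net and ec2_vpc_subnet modules (the CIDRs, names, and region are placeholders):

- name: Ensure the VPC exists
  ec2_vpc_net:
    name: my-vpc
    cidr_block: 10.0.0.0/16
    region: ap-southeast-2
    state: present
  register: vpc

- name: Ensure a public subnet in availability zone a
  ec2_vpc_subnet:
    vpc_id: "{{ vpc.vpc.id }}"
    cidr: 10.0.0.0/24
    az: ap-southeast-2a
    region: ap-southeast-2
    state: present
    tags:
      Name: my-vpc-public-a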


4 benefits of automating, and 4 reasons to automate with Ansible

I love automation. This series of articles, Automation with Ansible, documents some of the Ansible bits ’n bobs that make my life easier when managing software infrastructure. This first article is just a little introduction: why I consider automation so important, and why I use Ansible when building automation for my team.

Check it all out on GitHub.

Why Automate?

I am a huge advocate for infrastructure automation on my team. I love automation. I don’t want to spend too much time convincing you why you should focus more time on automation (if you’re here you’re probably convinced already), but here is a little shortlist of the reasons I think automation is a critical part of any software product. …


For your Dell XPS 15 9560 or Nvidia Optimus notebook

Say, for a moment, that you’re like me in two particular respects: you’ve recently decided you’re going to take the leap and move over to Ubuntu full-time after a few years of administering Linux machines in the cloud, and you own a Dell XPS 15 9560 (or a similar Nvidia GPU-equipped laptop). You’ll probably have noticed one significant detail upon that squeaky-clean fresh install of Ubuntu 17.10:

The battery life sucks.

Like, one hour sucks.

There’s good news and bad news. Bad news: power efficiency just isn’t as good under Linux compared to running Windows or macOS. Your mileage may vary; I’ve heard from some that their battery life is just as good, but overall, for most hardware configs, you’re gonna lose out. But the good news: there is a lot we can do to improve on that measly single-lunch-break-spanning battery estimation we’re seeing. I usually see 6–8 hours on battery utilising the integrated Intel graphics unit, and have the ability to restart my laptop and use the high-powered Nvidia graphics when I’ve got power nearby. Note that I have a Dell XPS 15 9560 with the FHD display and smaller 56Whr battery; if you’re using the 4K-screen model with the larger 97Whr battery, I would expect your numbers to vary! …

About

Tom Wright

I like automation, productivity, team process, and watching the thing actually get out the door. @tomwwright on GitHub
