A couple of other members of the Energetiq team and I recently made the long journey from Melbourne to Las Vegas to attend AWS re:Invent 2018. One odd little announcement worth spending some time on is AWS DeepRacer and the associated AWS DeepRacer League.
AWS DeepRacer (“deep” being a reference, I imagine, to deep learning, the family of machine learning that includes modern computer vision) consists of an autonomous “toy” car and a collection of associated cloud tools for training models to drive it using simulations and reinforcement learning. Or, as the product page puts it:
“AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and global racing league.” …
Recently I have been experimenting with AWS CodeBuild as an alternative to our aging Jenkins-based CI/CD platform. Overall it’s a fantastic platform; I’m a big fan. One glaring omission, however, is a built-in mechanism for a build number that automatically increments on each build. Jenkins exposes such a build number as an environment variable, which is useful information to include in a versioning scheme.
Let’s build a similar system using the AWS ecosystem. It’ll look something like this:
We’ll store our build numbers for each project in AWS Systems Manager Parameter Store (SSM), as CodeBuild has a built-in integration to auto-populate environment variables from values stored there. Then we’ll use CloudWatch Events to listen in on CodeBuild build events and invoke a Lambda function. Our Lambda will simply extract the project name from the build event, then look up the existing build number and increment it. On the next build, CodeBuild will pull down the new value. …
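A minimal sketch of what that Lambda might look like in Python. The SSM parameter path (`/build-numbers/<project>`) and the helper names are my assumptions, not necessarily what the full article uses:

```python
def extract_project_name(event):
    """Pull the CodeBuild project name out of a CloudWatch Events build event."""
    return event["detail"]["project-name"]

def increment_build_number(project_name):
    """Read the current build number from SSM Parameter Store and write back +1."""
    import boto3  # imported lazily so the event-parsing logic is testable offline
    ssm = boto3.client("ssm")
    name = f"/build-numbers/{project_name}"  # hypothetical parameter naming scheme
    try:
        current = int(ssm.get_parameter(Name=name)["Parameter"]["Value"])
    except ssm.exceptions.ParameterNotFound:
        current = 0  # first build of a brand-new project
    ssm.put_parameter(Name=name, Value=str(current + 1),
                      Type="String", Overwrite=True)
    return current + 1

def handler(event, context):
    """Lambda entry point, invoked by the CloudWatch Events rule."""
    return increment_build_number(extract_project_name(event))
```

In practice you would probably also filter on the event’s build status so only completed builds bump the counter.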
If you have picked up Ansible as a tool for managing your AWS cloud environments, then I know how it’s going. Things are going great. Ansible’s rich library of modules for AWS (159 at last count) is enabling you to bash out playbooks for bits of your stack at an alarming rate: EC2, DynamoDB, S3, Route 53, you’ve got it all. You are swimming in idempotent automation that makes your job a breeze. Life is good.
That is, until you need to build something you don’t have a module for. For example: your team is building a new service that leverages Aurora clusters. Time for some more automation. You pull up your trusty list of Ansible modules… Hmm. …
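One common pattern when a native module is missing is to drive the AWS CLI from a task and handle idempotence yourself. This is a sketch of that general technique, not necessarily the approach the article lands on; the cluster name, engine, and variable names are my assumptions:

```yaml
# Fallback when no native Ansible module exists: call the AWS CLI directly,
# checking for the resource first so the play stays idempotent.
- name: Check whether the Aurora cluster already exists
  command: aws rds describe-db-clusters --db-cluster-identifier my-aurora-cluster
  register: cluster_check
  failed_when: false
  changed_when: false

- name: Create the Aurora cluster if it is missing
  command: >
    aws rds create-db-cluster
    --db-cluster-identifier my-aurora-cluster
    --engine aurora-mysql
    --master-username admin
    --master-user-password "{{ aurora_password }}"
  when: cluster_check.rc != 0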
Edit: by request of Data Victoria, links to the data discussed in this article have been removed.
As a Melbourne resident and daily commuter on our Myki public transport fare system (no comments), I was intrigued when I heard the dataset for the Melbourne Datathon 2018 was to be large scale, real world Myki usage data. What cool insights can we glean on how our bustling city uses its public transport network? Let’s find out! Best of all, we’ll check it out without transforming it from CSV, or even moving it out of S3.
Here are a couple of quick stats I gleaned from this 1.8 billion row dataset, with SQL queries that run in seconds, for much less than the cost of a cup of…
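Queries like these can run via Amazon Athena directly over the CSV files sitting in S3. As a flavour of the technique, here is the shape of such a query; the table layout, column names, and bucket are invented for illustration, not the real Datathon schema:

```sql
-- Hypothetical table over the raw CSVs in S3; the real schema will differ.
CREATE EXTERNAL TABLE touch_events (
    card_id    BIGINT,
    stop_id    INT,
    touch_time STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://my-datathon-bucket/myki/';

-- Busiest stops by touch-on count, scanned in place with no ETL.
SELECT stop_id, COUNT(*) AS touches
FROM touch_events
GROUP BY stop_id
ORDER BY touches DESC
LIMIT 10;
```

Athena bills per terabyte scanned, which is where the “less than a cup of coffee” economics come from.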
In some Docker Compose-based services I administer, I use Logentries to aggregate the log output from our containers. The token for the Logentries log is provided to the agent on the command line from the environment, something like this:
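A sketch of that kind of Compose service definition; the agent image and service name here are my assumptions, and the real setup may differ:

```yaml
# Sketch: a log-shipping agent fed its Logentries token from the environment.
services:
  logentries-agent:
    image: logentries/docker-logentries  # assumed agent image
    command: -t "${LOGENTRIES_VERY_IMPORTANT_SERVICE}"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```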
The LOGENTRIES_VERY_IMPORTANT_SERVICE environment variable is then populated through some Ansible we have. This approach works quite nicely, but leaves us with the burden of creating and naming new logs in Logentries whenever we deploy new instances of services, as well as transcribing the tokens for those new logs into our Ansible configuration. Lame. 🙅
In this article we’ll put together some Ansible tasks that will leverage the Logentries REST API to create a defined logset and list of logs if they don’t exist, and/or retrieve the tokens for those logs — ready to be plugged into something like the Docker Compose environment situation described above. …
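A sketch of the kind of task involved, using Ansible’s uri module against the Logentries management API; the logset id, log name, and variable names are my assumptions:

```yaml
# Sketch: create a token-based log via the Logentries REST API and
# capture the generated token for later use.
- name: Create log in Logentries
  uri:
    url: https://rest.logentries.com/management/logs
    method: POST
    headers:
      x-api-key: "{{ logentries_api_key }}"
    body_format: json
    body:
      log:
        name: "very-important-service"   # hypothetical log name
        logsets_info:
          - id: "{{ logset_id }}"
        source_type: token
    status_code: 201
  register: create_log_result

- name: Record the new log token
  set_fact:
    logentries_token: "{{ create_log_result.json.log.tokens[0] }}"
```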
Managing Amazon’s fully-managed relational database service
In this article we will use Ansible to automate the configuration of Amazon Aurora managed databases. If you’re not sure what you’re doing here, maybe peek at the introduction, and take note that the automation here builds in part on what was built in a previous article about building a VPC. The scope of the automation will handily build the following:
No-fuss AWS-managed Elastic clusters
In this article we look at using Ansible for automating the configuration of AWS-managed Elasticsearch clusters in Amazon’s Elasticsearch Service. If you’re not sure what you’re doing here, maybe peek at the introduction, and take note that the automation here builds in part on what was built in the previous article about building a VPC. The scope of the automation will handily build the following:
In this article we’re looking at using Ansible for automating the configuration of cloud networking in an AWS VPC. If you’re not sure what you’re doing here, maybe peek at the introduction. The scope of the automation we will build will handily configure all of the following:
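As a taste of the kind of tasks this involves, a VPC play might open like the sketch below. The module names are real Ansible AWS modules; the VPC name, CIDRs, and region are my assumptions:

```yaml
# Sketch: the opening tasks of a VPC play.
- name: Ensure the VPC exists
  ec2_vpc_net:
    name: my-app-vpc
    cidr_block: 10.0.0.0/16
    region: ap-southeast-2
  register: vpc

- name: Ensure a public subnet exists
  ec2_vpc_subnet:
    vpc_id: "{{ vpc.vpc.id }}"
    cidr: 10.0.1.0/24
    region: ap-southeast-2
  register: public_subnet
```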
I love automation. This series of articles, Automation with Ansible, documents some of the Ansible bits ‘n bobs that make my life easier when managing software infrastructure. This first article is just a little introduction: why I consider automation so important, and why I use Ansible when building automation for my team.
Check it all out on GitHub.
I am a huge advocate for infrastructure automation on my team. I love automation. I don’t want to spend too much time convincing you why you should focus more time on automation (if you’re here you’re probably convinced already), but here is a little shortlist of the reasons I think automation is a critical part of any software product. …
Say, for a moment, that you’re like me in two particular respects: you’ve recently decided you’re going to take the leap and move over to Ubuntu full-time after a few years of administering Linux machines in the cloud, and you own a Dell XPS 15 9560 (or a similar Nvidia GPU-equipped laptop). You’ll probably have noticed one significant detail upon that squeaky-clean fresh install of Ubuntu 17.10:
The battery life sucks.
There’s good news and bad news. Bad news first: power efficiency under Linux just isn’t as good as under Windows or macOS. Your mileage may vary (some have told me their battery life is just as good), but overall, for most hardware configs, you’re gonna lose out. The good news: there is a lot we can do to improve on that measly single-lunch-break-spanning battery estimation we’re seeing. I usually see 6–8 hours on battery using the integrated Intel graphics, and I can restart the laptop into the high-powered Nvidia graphics when I’ve got power nearby. Note that I have a Dell XPS 15 9560 with the FHD display and smaller 56Whr battery; if you’re using the 4K-screen model with the larger 97Whr battery, I would expect your numbers to vary! …