Components of an AWS Batch Job
ACM.6 Considering the composition of our batch jobs
This is a continuation of my series of posts on Automating Cybersecurity Metrics.
In the last two posts I explained how batch jobs can help cybersecurity:
How Batch Jobs Can Help Cybersecurity
ACM.4 Batch jobs for penetration testing, security metrics, incident response, and more
and why you might want to use MFA with batch jobs:
Creating an AWS Batch Job That Requires MFA
ACM.5 Series on my attempt to create an AWS Batch job that requires MFA
You’ll need to configure a number of things to create a batch job on AWS. I went through a quick tutorial to create a batch job manually in the AWS console just to get an idea of what I’d need to think about when constructing a batch job. Those components are listed on this page:
What Is AWS Batch?
AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers…
Compute environment: The AWS compute resources your batch job runs on, such as Fargate (containers) or EC2 (VMs). The configuration you choose for those resources affects both your batch job’s performance and its cost.
Job Definitions: A template that defines what will happen when a job runs. I provided a lot of examples of what you might want to do with a batch job in my initial post on batch jobs for cybersecurity:
How Batch Jobs Can Help Cybersecurity
Batch jobs for penetration testing, security metrics, incident response, and more
Jobs: The job definition is the template that defines what a job will do; a job is an actual execution of that template.
Job Queues: Job queues handle the scheduling and management of multiple jobs. You can associate multiple compute environments with a job queue and assign priorities to jobs. (There’s a rough sketch of how these pieces fit together right after this list.)
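To make those relationships more concrete, here’s a minimal sketch of how the pieces wire together using boto3 and Fargate. This isn’t from the AWS tutorial, and every name, subnet, and security group ID is a hypothetical placeholder. It also assumes a job definition called metrics-job-def has already been registered (a container-based version of that is sketched later in this post).

```python
import boto3

batch = boto3.client("batch")

# 1. Compute environment: the Fargate (or EC2) resources jobs run on.
#    The subnet and security group IDs are placeholders.
batch.create_compute_environment(
    computeEnvironmentName="metrics-fargate-env",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "FARGATE",
        "maxvCpus": 4,
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# 2. Job queue: schedules jobs onto one or more compute environments,
#    in priority order.
batch.create_job_queue(
    jobQueueName="metrics-job-queue",
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[
        {"order": 1, "computeEnvironment": "metrics-fargate-env"},
    ],
)

# 3. Job: an actual run of a job definition (the template), placed on a queue.
#    Assumes a job definition named "metrics-job-def" has been registered.
batch.submit_job(
    jobName="metrics-job-1",
    jobQueue="metrics-job-queue",
    jobDefinition="metrics-job-def",
)
```

In a real script you’d likely have to wait for the compute environment to finish creating before attaching the queue to it, but the sketch shows which component depends on which.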
To get a feel for these components, I followed the tutorial and manually created a job to see how it works.
Getting Started with AWS Batch
You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites…
Thinking through how to build the jobs the way I want is a lot more complex than that. Who can kick off jobs, and when? How will I know whether they succeeded or failed? Where will the logs end up? Who can view the data I pass into the batch jobs? How much will it cost? Getting something to work is not architecting a solution, but to understand the components of your architecture, you’ll need to start somewhere.
My experimentation with batch jobs is going to be a bit free-form, as time allows, and intertwined with other things I need to get done. Hopefully, if you follow along, you’ll understand where I’m going and how I think about securing things in the cloud.
As I mentioned in another blog post, I write my code in phases. The first phase is always quite rough: flesh out a concept, then improve it over time. I usually go back later to reduce the chance for errors and simplify my code, as I wrote about in this post (hopefully part of an upcoming software security book that’s also on my to-do list):
Every Line of Code is a Potential Bug
How to reduce the chances of a security flaw in your application with the principle of abstraction
The code I’ll present is not production-ready and can always use improvement; I’m figuring things out as I go. I already mentioned on Twitter that I wrote a JSON template parser to create reports, which I may share, but it’s very rudimentary and specific to my needs.
But before I can get to all that, we need to think through some issues related to how we will run our jobs. How will we assign permissions, protect data, and handle any network considerations?
AWS provides some sample batch jobs such as this one which grabs a script from an S3 bucket and runs it:
Example job definitions
The following example job definitions illustrate how to use common patterns such as environment variables, parameter…
Another option is to use a Docker container. I need to run things that are a bit more complex than a single script: I want my batch jobs to assume a role with limited permissions and to install some libraries to help carry out tasks. Right away, I decided that I will use containers, not a single script.
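As a rough sketch of what that looks like (the image URI, role ARNs, and names here are all hypothetical, and this isn’t the final design), a container-based job definition lets you point at your own image and hand the job a narrowly scoped IAM role via jobRoleArn:

```python
import boto3

batch = boto3.client("batch")

# Container-based job definition for Fargate. The ECR image and both role
# ARNs are placeholders. The job role is the limited-permission role the
# running container assumes; the execution role is what Fargate itself uses
# to pull the image and write logs.
batch.register_job_definition(
    jobDefinitionName="metrics-job-def",
    type="container",
    platformCapabilities=["FARGATE"],
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/metrics-job:latest",
        "command": ["python", "run_report.py"],
        "jobRoleArn": "arn:aws:iam::123456789012:role/batch-job-limited-role",
        "executionRoleArn": "arn:aws:iam::123456789012:role/batch-execution-role",
        "resourceRequirements": [
            {"type": "VCPU", "value": "0.25"},
            {"type": "MEMORY", "value": "512"},
        ],
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
    },
)
```

Keeping the job role separate from the execution role is what lets us scope the permissions the job’s code actually gets, which is where a lot of the later posts in this series are headed.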
The other benefit of a container is that there are various methods we can use to ensure the integrity of our code as it passes from dev to QA to prod, something I talk to clients at IANS Research about a lot. Failure to ensure code integrity was the underlying cause of the SolarWinds breach, so we’ll want to think pretty carefully about that if we are running sensitive batch jobs.
SolarWinds Hack: Retrospective
Part 2: What caused the breach and what does the malware do?
Docker containers it is! If you want to get a feel for AWS Batch and the elements involved, run through the tutorial above or run the sample CloudFormation templates. I’m going to focus on containers in upcoming posts because, before we can create a batch job with a container, we need a working container.
If you liked this story please clap and follow:
Medium: Teri Radichel or Email List: Teri Radichel
Twitter: @teriradichel or @2ndSightLab
Request services via LinkedIn: Teri Radichel or IANS Research
© 2nd Sight Lab 2022
All the posts in this series:
Automating Cybersecurity Metrics
A series of blog posts on cybersecurity metrics and security automation
Need Cloud Security Training? 2nd Sight Lab Cloud Security Training
Cybersecurity & Cloud Security Resources by Teri Radichel: Cybersecurity and Cloud security classes, articles, white papers, presentations, and podcasts