Thinking about a “serverless” API concept for PowerShell Tasks


“Serverless” usually implies that, to the user, there is no infrastructure to consider and no middleware to maintain: just a task to be queued and executed, and an exit code returned. Last year, I gave some thought to the vast potential that things like Windows Nano Server, and older but powerful tooling like PowerShell, could have for how we build new services.

PowerShell has historically been mostly an automation tool, but since hearing about Nano Server and Windows Containers, I’ve imagined it becoming a very good DSL for getting work done on container-based systems. PowerShell is now cross-platform, and running a container with an appropriate environment for executing tasks is easier than ever. This can be a huge productivity boost for Windows systems administrators, automation engineers, and others who use PowerShell regularly and want to leverage new technologies in a way that agrees with their existing environments.

So, let’s take a simple example: in another life, as an enterprise systems engineer, I had limited interaction with Windows systems, and wrote a simple script to download a file required for a scheduled task.

Basically, every time a new config bundle was created, a scheduled task would run on a central server (let’s call it the primary), and the output from that job would then be pushed down to the secondary, client servers. This was the extent of most configuration management needs for this fleet of servers, so a proper configuration management system seemed unnecessary at the time.

This seems like an ideal task to make more functional and less dependent on each individual system: something a sysadmin can toss, as a job, to the primary server (acting as the serverless endpoint), which executes it, pulls the bundle down from the object storage host (the URL for the file being the argument to the script), and distributes it.

So, let’s look at how the application managing this workflow might look, at a high level, if the target endpoint were just a single host running Docker as its container runtime to execute PowerShell scripts. The following won’t be at all production-ready, but it will demonstrate the workflow; it won’t, for example, cover how to monitor jobs or sanity-check correctness.

require 'sinatra'
require 'docker-api'
require 'httparty'
require 'json'

# Fetch the script body from the given URL.
def download_url(script)
  HTTParty.get(script).body
end

# Write the script to a uniquely named tempfile and return its path.
def write_tmp(text)
  name = HTTParty.get("http://name.generator.gourmet.yoga").body.strip
  yourfile = "/tmp/#{name}.ps1"
  File.open(yourfile, 'w') { |file| file.write(text) }
  yourfile
end

post "/run" do
  payload   = JSON.parse(request.body.read)
  script    = payload['jobUrl']
  data      = download_url(script)
  file_name = write_tmp(data)
  args      = payload['jobArgs']

  # Bind-mount /tmp so the container can see the tempfile, then run it
  # under PowerShell. Note: string interpolation requires double quotes
  # in Ruby, so the command array is built from the variables directly.
  container = Docker::Container.create(
    'Cmd'        => ['powershell', file_name, args],
    'Image'      => 'microsoft/powershell',
    'HostConfig' => { 'Binds' => ['/tmp:/tmp'] }
  )
  container.start

  resp = {
    containerInfo: container.json,
    jobUrl:        script,
    args:          args,
    pullResp:      data,
    tmpFile:       file_name
  }

  container.kill

  content_type :json
  resp.to_json
end
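As written, the handler trusts the payload completely. A hypothetical validation helper (my addition, not part of the original service) could reject malformed jobs before any download or container work happens:

```ruby
require 'json'

# Hypothetical helper: validate the /run payload before doing any work.
# Returns the parsed payload, or raises with a message suitable for a 400.
def parse_job(raw_body)
  payload = JSON.parse(raw_body)
  job_url = payload['jobUrl']
  raise ArgumentError, 'jobUrl is required' if job_url.nil? || job_url.empty?
  raise ArgumentError, 'jobUrl must be http(s)' unless job_url.start_with?('http://', 'https://')
  payload
end
```

In the route handler, this would replace the bare `JSON.parse(request.body.read)` call and let you return a 400 instead of failing mid-provisioning.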

It’s a very straightforward workflow that, to the user, relies on nothing but HTTP, and performs provisioning, execution, and clean-up.

The above app does the following:

  1. Receives a POST request with jobUrl and jobArgs parameters; jobUrl refers to a URL hosting the script (for the sake of simplicity in this example, this could be a Gist or a GitLab Snippet).
  2. Downloads the jobUrl data into a tempfile.
  3. Creates a container.
  4. Shares that tempfile with the container and executes it as the container’s CMD, with the jobArgs appended to the shared file path. If this were, for example, a commercial serverless offering, you might track whether the task in the container exited 0 or 1, along with execution times, memory usage, etc., to determine billing, but for our purposes, we’ll just worry about provisioning and cleanup for single task executions from an operations perspective.
  5. Kills the container. This completes the process, as it appears to the user.
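The billing idea in step 4 could be sketched as a small accounting function. The hash shape below mirrors what docker-api’s `Container#wait` returns (`{'StatusCode' => 0}`); the record field names are my own assumptions:

```ruby
# Hypothetical accounting record for a finished job. `wait_result` is shaped
# like the hash docker-api returns from Container#wait: {'StatusCode' => 0}.
def job_record(wait_result, started_at, finished_at)
  {
    succeeded: wait_result['StatusCode'] == 0,
    exit_code: wait_result['StatusCode'],
    seconds:   (finished_at - started_at).round(2)
  }
end
```

A commercial offering would persist records like these per user; here, they would at least let you log whether a task succeeded and how long it ran.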

Making a request to the above application to download a text file might look like:

curl -X POST -H 'Content-Type: application/json' -d '{"jobUrl": "https://gitlab.com/snippets/1667100/raw", "jobArgs": "https://gitlab.com/snippets/1667527/raw"}' http://localhost:4567/run
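The same request can be built from Ruby with the standard library’s Net::HTTP. The snippet below only constructs the request object; actually sending it (the commented-out line) assumes the service is running on localhost:4567:

```ruby
require 'net/http'
require 'json'

uri = URI('http://localhost:4567/run')
req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
req.body = {
  jobUrl:  'https://gitlab.com/snippets/1667100/raw',
  jobArgs: 'https://gitlab.com/snippets/1667527/raw'
}.to_json

# To actually send it:
# res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
```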

Running locally, it would just create a tempfile for your script (in this case, Download.ps1) and then download the file back to the shared directory. The “serverless” aspect brings a few enhancements to the script to mind:
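One small enhancement worth noting: the tempfile name in the app comes from an external name-generator service. If that dependency is unwelcome, a local fallback (my addition, not in the original) is trivial with SecureRandom:

```ruby
require 'securerandom'

# Generate a unique local path for a job script without calling out
# to an external name-generator service.
def tmp_script_path(dir = '/tmp')
  File.join(dir, "#{SecureRandom.hex(8)}.ps1")
end
```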

Because this is intended to enhance task execution, you can make the script upload the downloaded file to an object store like AWS S3, for example, or pass it to another service entirely. Whatever you want the task to be, the process will execute the same way (either succeeding or returning an error, like any other execution), so you can plan your jobs around this predictability.
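For the object-store idea, a predictable key scheme helps downstream consumers find results. The scheme below is entirely hypothetical, just to illustrate deriving a key from the job URL and a timestamp:

```ruby
require 'uri'

# Hypothetical key scheme: derive an object-store key from the job URL
# and a timestamp, so downstream consumers can locate results predictably.
def result_key(job_url, time)
  name = File.basename(URI(job_url).path)
  "results/#{time.strftime('%Y/%m/%d')}/#{name}"
end
```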

As a service, this application can itself be containerized and deployed to, for example, a Docker Swarm or Kubernetes cluster, and extended with more advanced queueing mechanisms, but the point is mostly to take a request from the user and facilitate its execution irrespective of the presence of a server to execute it against.
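A minimal Dockerfile for containerizing the service itself might look like the following. The base image tag and file names are assumptions; mounting the host’s Docker socket at run time (shown in the comment) is one common, if blunt, way to let the containerized app drive the host’s Docker daemon:

```dockerfile
FROM ruby:2.7-slim
WORKDIR /app
COPY Gemfile ./
RUN bundle install
COPY app.rb ./
EXPOSE 4567
# Run with the host Docker socket mounted, e.g.:
#   docker run -p 4567:4567 -v /var/run/docker.sock:/var/run/docker.sock app
CMD ["ruby", "app.rb", "-o", "0.0.0.0"]
```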

In my example, for instance, the task could have been achieved by having the script create a bundle, upload it to an object store, and send a notification to a service on the primary that a new bundle was available; the systems could take over from there. But this could be used for any PowerShell task that doesn’t depend on local execution. A more ideal use case might be technical, but non-IT, stakeholders, whose usage of resources can be made more focused and whose jobs’ executions can be made more manageable: execution of their task is more important than which host executes it, or whether other users are attempting similar operations at the same time, and these are all things that can be controlled as part of your serverless API.

Some Additional Reading

I’ve written a couple of other pieces that touch on these concepts a bit further: