sshuttle and AWS Systems Manager Session Manager
Airwalk was recently asked whether it would be possible to use sshuttle with Session Manager, a part of AWS Systems Manager, so I decided to find out.
What is sshuttle?
sshuttle is a lightweight VPN that can run anywhere you have an SSH connection. It is written in Python and you install it on your client machine, typically with pip. You then tell sshuttle which remote host you want to use and it will connect via SSH and run some more Python on the remote host, creating a tunnel across which you can access arbitrary ports on hosts on the remote network.
If you’re a network administrator or someone concerned with network security, you’re probably thinking this sounds terrible. If you’re an engineer, this probably sounds amazing. I have a foot in both camps.
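To give a flavour of how it’s used, a typical install and invocation looks something like this (the bastion hostname and remote subnet below are made up for illustration):
pip install sshuttle
sshuttle -r ec2-user@bastion.example.com 10.0.0.0/16   # hypothetical SSH host and remote subnet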
What is AWS Systems Manager Session Manager?
AWS Systems Manager (still known as “SSM” due to its former name of Simple Systems Manager — a rare naming inconsistency for AWS) is a set of tools that lets you view, control and patch your fleet of EC2 instances. One such tool is Session Manager.
Session Manager allows you to open a shell on your EC2 instances, either via the browser-based AWS Console or via the AWS CLI. While this sounds just like SSH, there are some great advantages:
- You do not need any firewall ports open or any bastion hosts. Session Manager only relies on your instance running the Systems Manager agent, a daemon that runs by default on Amazon Linux and is easily installed on other distributions.
- Access is controlled via IAM. You no longer have to manage SSH keys separately from your AWS policies. You can control who has access to which instances using IAM roles and policies (there’s a sketch of such a policy just after this list). No more cleaning up users’ keys when they move on from your company.
- Access is logged to CloudTrail. Your audit trail is all in one place.
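To illustrate the IAM point, here is a rough sketch of the sort of policy that grants session access to a single instance. It follows the pattern in the AWS documentation; the account ID is a placeholder and the instance ID is just the one used in the examples below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:StartSession",
      "Resource": [
        "arn:aws:ec2:eu-west-1:111122223333:instance/i-0456ac191f9f975ec",
        "arn:aws:ssm:eu-west-1::document/AWS-StartSSHSession"
      ]
    }
  ]
}
In practice you’d probably also grant ssm:TerminateSession on the user’s own sessions, as the AWS documentation suggests, but the above is enough to show the idea.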
Session Manager examples
Here we can see me starting a Session Manager session via the browser:
And here I am starting a Session Manager session from the AWS CLI in my terminal:
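For reference, the command being run there is along the lines of:
aws ssm start-session --target i-0456ac191f9f975ec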
Making Session Manager a little friendlier
While being able to connect to virtual terminals on remote Linux machines from your local command line is great, it would be a bit nicer if we didn’t have to learn a whole new way of invoking it. After all, we’ve been using SSH for years. The good news is, you can slot Session Manager right into your SSH configuration and have the ssh command transparently use it. As the AWS docs show, if you enter something like this into your ~/.ssh/config:
host i-* mi-*
ProxyCommand bash -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
…then you can connect to instances by typing something like this:
ssh ec2-user@i-0456ac191f9f975ec
In case you were wondering, this works transparently for scp as well.
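For example, copying a file up to the same instance is just (the local filename is made up):
scp ./notes.txt ec2-user@i-0456ac191f9f975ec:/tmp/   # hypothetical local file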
I should also mention that Session Manager can be used to allow RDP connections to Windows instances, but I won’t discuss that here.
Not all plain sailing between regions
Now there are some caveats, most notably the fact that EC2 instance IDs are unique only within a region, so Session Manager needs some concept of which region you’re operating in. If you only use AWS resources within one region, just set a value for region in your ~/.aws/config file or in the AWS_DEFAULT_REGION environment variable and you’re sorted:
export AWS_DEFAULT_REGION=eu-west-1
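The equivalent in ~/.aws/config looks something like this (shown here for the default profile):
[default]
region = eu-west-1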
However, if you use resources in multiple regions, you’re going to have a problem and the default error messages aren’t very helpful:
Airwalk-Jim-Lamb:~ Jim$ ssh ec2-user@i-0456ac191f9f975ec
kex_exchange_identification: Connection closed by remote host
It’s only when you use ssh -v that you can see the error from the AWS CLI:
[...]
debug1: Local version string SSH-2.0-OpenSSH_8.1
An error occurred (TargetNotConnected) when calling the StartSession operation: i-0456ac191f9f975ec is not connected.
kex_exchange_identification: Connection closed by remote host
Even this isn’t super-helpful, but it’s telling us that SSM doesn’t have an agent connection to an instance with that ID, which should make you start thinking about whether that instance (in the current account and region) is the instance you really want to connect to.
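A quick way to check what SSM can actually see is to list the instances with agent connections in the region you care about, for example:
aws ssm describe-instance-information --region eu-west-1 --query 'InstanceInformationList[].InstanceId'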
Making SSH with transparent Session Manager region-aware
To allow us to use SSH as normal, without having to set environment variables every time we switch regions, I thought it might be nice to allow a region suffix to be applied to the instance IDs that I type. So I added an additional block in my ~/.ssh/config, before my earlier one, to handle hostnames that look like instance IDs but with a . in them, giving me:
host i-*.* mi-*.*
ProxyCommand bash -c "aws ssm start-session --target $(echo %h|cut -d'.' -f1) --region $(echo %h|/usr/bin/cut -d'.' -f2) --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

host i-* mi-*
ProxyCommand bash -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Now I can add a region suffix to my instance ID, like this:
ssh ec2-user@i-0456ac191f9f975ec.eu-west-1
And when I don’t add the suffix, it will still fall back to using the standard ways of specifying a region to the AWS CLI, i.e. the AWS_DEFAULT_REGION environment variable or the value of region in the profile being used in your ~/.aws/config file.
Now I admit my config is a bit nasty, using subshells. If anyone has any better suggestions, please do let me know.
Finally, back to sshuttle
So after all that, I have a working SSH configuration that lets me connect to my instances via Session Manager while feeling just like I am connecting to them on port 22. That means I can invoke sshuttle, which in turn invokes ssh with my custom configuration, and boom: I have my poor man’s VPN over SSM!
sshuttle -r ec2-user@i-0456ac191f9f975ec.eu-west-1 192.168.9.0/24
Now I can connect to hosts within the remote network by specifying the IP address and port number.
curl https://192.168.9.13/donaldwheresyourtroosers
If I choose to add --dns to my sshuttle invocation, I can resolve DNS from the remote network and specify remote hosts by name.
sshuttle --dns -r ec2-user@i-0456ac191f9f975ec.eu-west-1 192.168.9.0/24
Closing thoughts
So there it is: sshuttle works just fine with AWS Systems Manager Session Manager. You can create a secure, auditable means for accessing your hosts and then blow a tunnel right through it.