Hardening SSH using AWS Bastion and MFA
What if your servers are open to a public network and unauthorized tunneling occurs? Or what if someone runs
rm -rf
and accidentally deletes your project root directory on production, and you have no clue what just happened?
You can prevent this with a few extra steps.
- Keep your production servers from being exposed to public networks.
- Use Multi-Factor Authentication (MFA).
- Log each and every activity performed by users on your servers.
- Define strong access policies.
- Set up alerts.
SSH is essential to server management. This post will walk you through some of the options available to harden OpenSSH. It will also help you understand how to define security and access policies for your production environments. The instructions may work on other flavors of Linux but are intended for Ubuntu 16.04 LTS.
Messing with SSH is fun; only a genius can manage to lock themselves out of their own server. 😈 ~ Sourabh
Step 1: Set up Linux Bastion Host
A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks. The computer generally hosts a single application, for example, a proxy server, and all other services are removed or limited to reduce the threat to the computer. It is hardened in this manner primarily due to its location and purpose, which is either on the outside of a firewall or in a demilitarized zone (DMZ) and usually involves access from untrusted networks or computers.
AWS provides a step-by-step guide to deploying a Linux Bastion Host in a VPC. You may refer to the following link:
Once deployed, the bastion host is the only instance you allow to be reached from the public network. All your production instances should sit in a private subnet and expose their SSH port only to the bastion host's IP. The bastion host can then track every activity happening on it and write it to a log.
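As a sketch of that network layout, the production instances' security group can admit SSH only from the bastion's security group. The group IDs below (sg-prod, sg-bastion) are placeholders for your own IDs:

```shell
# Hypothetical IDs: sg-bastion is the bastion host's security group,
# sg-prod belongs to the private production instances.

# Allow SSH (port 22) into production only from the bastion's security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-prod \
    --protocol tcp --port 22 \
    --source-group sg-bastion

# The bastion itself accepts SSH from the public internet
aws ec2 authorize-security-group-ingress \
    --group-id sg-bastion \
    --protocol tcp --port 22 \
    --cidr 0.0.0.0/0
```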
But the next big challenge is keeping track of the SSH sessions established through the bastion.
Step 2: Record SSH Sessions Established through Bastion Host
Recording SSH sessions enables auditing and can help in your efforts to comply with regulatory requirements.
SSH to your bastion host and create a new folder to store all logs.
mkdir /var/log/ssh-bastion
Create an ec2-user to own this log folder and its contents. No other user should be able to access or list the contents of this folder.
sudo adduser ec2-user
Set a password for ec2-user when prompted:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Allow only ec2-user to access this folder and its contents
chown ec2-user:ec2-user /var/log/ssh-bastion
chmod -R 770 /var/log/ssh-bastion
setfacl -Rdm other:0 /var/log/ssh-bastion
Make OpenSSH execute a custom script on each login
echo -e "\nForceCommand /usr/bin/bastion/shell" >> /etc/ssh/sshd_config
Create a directory for the custom script
mkdir /usr/bin/bastion
Add a custom bash script at /usr/bin/bastion/shell which is invoked on each login and writes a log file under /var/log/ssh-bastion
cat > /usr/bin/bastion/shell << 'EOF'
#!/bin/bash

# Check that the SSH client did not supply a command
if [[ -z $SSH_ORIGINAL_COMMAND ]]; then
# The format of log files is /var/log/ssh-bastion/YYYY-MM-DD_HH-MM-SS_user
LOG_FILE="`date --date="today" "+%Y-%m-%d_%H-%M-%S"`_`whoami`"
LOG_DIR="/var/log/ssh-bastion/"
# Print a welcome message
echo ""
echo "NOTE: This SSH session will be recorded"
echo "AUDIT KEY: $LOG_FILE"
echo ""
# I suffix the log file name with a random string. I explain why
# later on.
SUFFIX=`mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
# Wrap an interactive shell into "script" to record the SSH session
script -qf --timing=$LOG_DIR$LOG_FILE$SUFFIX.timing $LOG_DIR$LOG_FILE$SUFFIX.log --command=/bin/bash
else
# The "script" program could be circumvented with some commands
# (e.g. bash, nc). Therefore, I intentionally prevent users
# from supplying commands.
echo "This bastion supports interactive sessions only. Do not supply a command"
exit 1
fi
EOF
Make this script executable
chmod a+x /usr/bin/bastion/shell
Warning: A bastion host user could overwrite and tamper with an existing log file recorded by script if they knew its exact name. To prevent this, the log file name carries a random suffix, and users are prevented from viewing the log folder.
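To see why the suffix helps, note that mktemp -u only generates a name without creating a file; each call yields a fresh unpredictable string, so a user cannot derive the final log file name from the timestamp alone. A quick sketch:

```shell
# -u ("dry run") prints a unique name without creating the file;
# each X in the template is replaced by a random character.
A=$(mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
B=$(mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
echo "$A"
echo "$B"
```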
Once you are done, restart the SSH service to apply the /etc/ssh/sshd_config modifications
sudo service sshd restart
Also, to keep the log files for a long time, you can copy them at a regular interval to an Amazon S3 bucket. We used the s3 mv command from a logrotate configuration.
cat > /etc/logrotate.d/ssh-bastion << 'EOF'
# Move log files to S3 with server-side encryption enabled
"/var/log/ssh-bastion/*.log" {
su root root
daily
rotate 5
create 644 root ec2-user
missingok
compress
delaycompress
copytruncate
notifempty
sharedscripts
dateext
dateformat -SSH-01-%Y-%m-%d-%s
postrotate
aws s3 mv /var/log/ssh-bastion/ s3://bucket-name/logs/ --recursive --region region --sse AES256 --exclude '*' --include '*.gz'
endscript
}
EOF
At this point, Bastion is configured to record all SSH sessions and the log files are copied to Amazon S3.
Step 3: Managing user accounts and SSH public keys on the Bastion Host
To ease the management of user accounts, the SSH public key of each bastion host user is uploaded to an S3 bucket. At a regular interval, the bastion host retrieves the public keys available in this bucket. For each public key, a user account is created if it does not already exist, and the SSH public key is copied to the bastion host to allow the user to log in with this key pair.
For example, if the bastion host finds a file, john.pub, in the bucket, which is John’s SSH public key, it creates a user account, john, and the public key is copied to /home/john/.ssh/authorized_keys. If an SSH public key were to be removed from the S3 bucket, the bastion host would delete the related user account as well. Personal user account creations and deletions are logged in /var/log/ssh-bastion/users_changelog.txt.
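Deciding which accounts to delete comes down to a sorted set difference: keys that were installed earlier but are no longer in the bucket. A minimal sketch of that comm step, with made-up key names and throwaway files:

```shell
# comm -13 prints lines unique to the second file: here, keys that were
# installed on the bastion but have since disappeared from the S3 bucket.
printf 'public-keys/alice.pub\npublic-keys/bob.pub\n' | sort > /tmp/keys_retrieved
printf 'public-keys/alice.pub\npublic-keys/bob.pub\npublic-keys/carol.pub\n' | sort > /tmp/keys_installed
comm -13 /tmp/keys_retrieved /tmp/keys_installed   # prints: public-keys/carol.pub
```

Here carol.pub was installed but is no longer in the bucket, so her account is the one slated for removal.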
Use the following bash script, saved as /usr/bin/bastion/sync_users, to manage users and keys
cat > /usr/bin/bastion/sync_users << 'EOF'
#!/bin/bash

# The file will log user changes
LOG_FILE="/var/log/ssh-bastion/users_changelog.txt"
# The function returns the user name from the public key file name.
# Example: public-keys/sshuser.pub => sshuser
get_user_name () {
echo "$1" | sed -e 's/.*\///g' | sed -e 's/\.pub//g'
}
# For each public key available in the S3 bucket
aws s3api list-objects --bucket bucket-name --prefix public-keys/ --region region --output text --query 'Contents[?Size>`0`].Key' | sed -e 'y/\t/\n/' > ~/keys_retrieved_from_s3
while read line; do
USER_NAME="`get_user_name "$line"`"
# Make sure the user name is alphanumeric
if [[ "$USER_NAME" =~ ^[a-z][-a-z0-9]*$ ]]; then
# Create a user account if it does not already exist
cut -d: -f1 /etc/passwd | grep -qx $USER_NAME
if [ $? -eq 1 ]; then
/usr/sbin/adduser --disabled-password --gecos "" $USER_NAME && \
mkdir -m 700 /home/$USER_NAME/.ssh && \
chown $USER_NAME:$USER_NAME /home/$USER_NAME/.ssh && \
echo "$line" >> ~/keys_installed && \
echo "`date --date="today" "+%Y-%m-%d %H-%M-%S"`: Creating user account for $USER_NAME ($line)" >> $LOG_FILE
# Restrict the user from accessing other users' home directories
chmod 750 /home/$USER_NAME
fi
# Copy the public key from S3, if a user account was created
# from this key
if [ -f ~/keys_installed ]; then
grep -qx "$line" ~/keys_installed
if [ $? -eq 0 ]; then
aws s3 cp s3://bucket-name/$line /home/$USER_NAME/.ssh/authorized_keys --region region
chmod 600 /home/$USER_NAME/.ssh/authorized_keys
chown $USER_NAME:$USER_NAME /home/$USER_NAME/.ssh/authorized_keys
fi
fi
fi
done < ~/keys_retrieved_from_s3
# Remove user accounts whose public key was deleted from S3
if [ -f ~/keys_installed ]; then
sort -uo ~/keys_installed ~/keys_installed
sort -uo ~/keys_retrieved_from_s3 ~/keys_retrieved_from_s3
comm -13 ~/keys_retrieved_from_s3 ~/keys_installed | sed "s/\t//g" > ~/keys_to_remove
while read line; do
USER_NAME="`get_user_name "$line"`"
echo "`date --date="today" "+%Y-%m-%d %H-%M-%S"`: Removing user account for $USER_NAME ($line)" >> $LOG_FILE
/usr/sbin/userdel -r -f $USER_NAME
done < ~/keys_to_remove
comm -3 ~/keys_installed ~/keys_to_remove | sed "s/\t//g" > ~/tmp && mv ~/tmp ~/keys_installed
fi
EOF
Make the script executable
chmod 700 /usr/bin/bastion/sync_users
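The username extraction in sync_users is just two sed substitutions: strip everything up to the last slash, then drop the .pub extension. The snippet below mirrors that helper so you can try it on a hypothetical key path:

```shell
# Mirror of the get_user_name helper in sync_users:
# "public-keys/<name>.pub" => "<name>"
get_user_name () {
    echo "$1" | sed -e 's/.*\///g' | sed -e 's/\.pub//g'
}

get_user_name "public-keys/sshuser.pub"   # prints: sshuser
```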
Create a cron job to sync users every 5 minutes. Add the following line to your crontab,
*/5 * * * * /usr/bin/bastion/sync_users
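One optional refinement (not part of the original setup): if a sync run ever takes longer than five minutes, overlapping runs could race on ~/keys_installed. Wrapping the job in flock makes cron skip a run while the previous one still holds the lock:

```shell
# flock -n exits immediately instead of queueing if the lock is already held
*/5 * * * * flock -n /tmp/sync_users.lock /usr/bin/bastion/sync_users
```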
Step 4: Manage MFA for SSH to the bastion host
An authentication factor is a single piece of information used to prove you have the rights to perform an action, like logging into a system. An authentication channel is the way an authentication system delivers a factor to the user or requires the user to reply. Passwords and security tokens are examples of authentication factors; computers and phones are examples of channels.
Here, we use an OATH-TOTP app such as Google Authenticator. OATH-TOTP (Open Authentication Time-Based One-Time Password) is an open protocol that generates a one-time-use password, commonly a 6-digit number that rotates every 30 seconds.
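Under the hood, TOTP is just HOTP (RFC 4226) keyed by the current 30-second interval. The sketch below reimplements it in plain bash with openssl, purely to illustrate what the authenticator app computes; the secret shown is a made-up example, not one you should use:

```shell
#!/bin/bash
# Sketch of RFC 6238 TOTP in bash + openssl (illustration only)
totp() {
    local secret="$1" step="$2"
    # Decode the base32 secret to hex (this is the key the app stores)
    local key_hex
    key_hex=$(printf '%s' "$secret" | base32 -d | od -v -An -tx1 | tr -d ' \n')
    # 8-byte big-endian counter: number of 30-second steps since the epoch
    local counter_hex hmac offset
    counter_hex=$(printf '%016x' "$step")
    # HMAC-SHA1(key, counter), hex-encoded
    hmac=$(printf "$(printf '%s' "$counter_hex" | sed 's/../\\x&/g')" |
        openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key_hex" | awk '{print $NF}')
    # Dynamic truncation (RFC 4226): the low nibble of the last byte selects
    # a 4-byte window; mask the sign bit and keep 6 decimal digits
    offset=$(( 16#${hmac:39:1} * 2 ))
    printf '%06d\n' $(( (16#${hmac:offset:8} & 0x7fffffff) % 1000000 ))
}

# Current code for a hypothetical secret; changes every 30 seconds
totp "JBSWY3DPEHPK3PXP" $(( $(date +%s) / 30 ))
```

With the RFC 4226 test key (base32 GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ) and counters 0 and 1, this sketch reproduces the published test values 755224 and 287082.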
Follow the steps below to set up MFA on your bastion host,
1. Install Google's PAM Authenticator
sudo apt-get update
sudo apt-get install libpam-google-authenticator
2. Configuring OpenSSH
Because we’ll be making SSH changes over SSH, it’s important to never close your initial SSH connection. Instead, open a second SSH session to do testing. This is to avoid locking yourself out of your server if there was a mistake in your SSH configuration. Once everything works, then you can safely close any sessions.
To begin, open up the PAM configuration file for sshd using vim or your favorite text editor,
sudo vi /etc/pam.d/sshd
Add the following line to the bottom of the file.
/etc/pam.d/sshd
. . .
# Standard Un*x password updating.
auth required pam_google_authenticator.so nullok
The nullok option at the end of the last line tells PAM that this authentication method is optional.
Find the line @include common-auth and comment it out by adding a # character as the first character on the line. This tells PAM not to prompt for a password.
/etc/pam.d/sshd
. . .
# Standard Un*x authentication.
#@include common-auth
. . .
Save and close the file.
Next, we’ll configure SSH to support this kind of authentication. Open the SSH configuration file for editing.
sudo vi /etc/ssh/sshd_config
Look for ChallengeResponseAuthentication and set its value to yes.
/etc/ssh/sshd_config
. . .
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication yes
Making SSH aware of MFA
Add the following line at the bottom of the file. It tells SSH which authentication methods are required: each space-separated group is an allowed combination, and every comma-separated method within a group must succeed. Here, a user needs a public key plus either a password or a verification code.
/etc/ssh/sshd_config
. . .
AuthenticationMethods publickey,password publickey,keyboard-interactive
Save and close the file, then restart SSH to reload the configuration files. Restarting the sshd service won't close open connections, so you won't risk locking yourself out with this command.
sudo systemctl restart sshd.service
3. Allow the first login and auto-run google-authenticator for each new user
Now, we need a way for users to log in once before setting up google-authenticator. Below is a script that checks whether a user has run google-authenticator yet; if not, it runs google-authenticator and then prevents that user from logging in again without either a verification code or an SSH public key. To set up this script, do the following,
A. Create a group and add all of the necessary users to this group.
groupadd google-auth
gpasswd -M bob,joe,smith google-auth
B. Now create the authusers file, set its group to google-auth, and allow write access by users in that group.
mkdir /google-auth/
touch /google-auth/authusers
chgrp google-auth /google-auth/authusers
chmod ug=rwx,o= /google-auth/authusers
C. Now add the necessary users to /google-auth/authusers
bob
joe
smith
D. Install the script. Add the following lines to an .sh file in /usr/local/bin, or copy it over if you did a git pull. I created my script with vim /usr/local/bin/google-auth-check.sh
#!/bin/bash
if [ ! -f ~/.google_authenticator ]; then
google-authenticator
if [ -f ~/.google_authenticator ]; then
sed -i "/^${USER}$/d" /google-auth/authusers
fi
fi
And then give it execute permissions.
chmod +x /usr/local/bin/google-auth-check.sh
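The sed call in the script deletes exactly the line matching the current user name from the authusers file once enrollment succeeds. A quick illustration with throwaway data in /tmp:

```shell
# Simulate the one-time-login list and remove user "joe" after enrollment
printf 'bob\njoe\nsmith\n' > /tmp/authusers
USER=joe
# ^ and $ anchor the match, so "joe" is removed but a user "joey" would survive
sed -i "/^${USER}$/d" /tmp/authusers
cat /tmp/authusers   # prints bob and smith; joe is gone
```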
Then we need to add this script to the ~/.bashrc file in each user's home directory.
find /home/ -name ".bashrc" -print0 | xargs --verbose -0 -I{} sh -c "echo 'sh /usr/local/bin/google-auth-check.sh' >> {}"
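One caveat with the find/xargs approach above: running it twice appends the hook line twice. A sketch of an idempotent variant, demonstrated against a throwaway home directory so it is safe to try anywhere:

```shell
# Append the hook only if it is not already present (grep -qxF matches
# the exact whole line as a fixed string)
add_hook() {
    local rc="$1"
    grep -qxF 'sh /usr/local/bin/google-auth-check.sh' "$rc" ||
        echo 'sh /usr/local/bin/google-auth-check.sh' >> "$rc"
}

# Demo with a temporary .bashrc: two runs still leave a single hook line
mkdir -p /tmp/demo-home && : > /tmp/demo-home/.bashrc
add_hook /tmp/demo-home/.bashrc
add_hook /tmp/demo-home/.bashrc
grep -c 'google-auth-check' /tmp/demo-home/.bashrc   # prints: 1
```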
So when you copy your public keys into the S3 bucket, the bastion will automatically sync those keys and create a user account for each key.
For more info about creating and deleting users, go back to Step 3 above.
Try to SSH to the bastion host using a newly created user.
It will ask: Do you want authentication tokens to be time-based (y/n) — answer yes. It will then print a QR code and ask whether you want to update your .google_authenticator file. We do.
Scan that code into the Google Authenticator app on your phone and save those emergency codes!
The next time you SSH to the bastion, it will ask you to key in the 6-digit number shown in your Google Authenticator app.
Conclusion:
With the help of a Linux bastion host and MFA you can add multiple layers of security. It also keeps your servers from being exposed to the external world. Tracking each user's activity becomes easier, so you never need to ask that one question,
Who did that ?? 😱