In this series we’ll be working through these tutorials on the basics of assembly. We’ll be using the Netwide Assembler (NASM) (https://www.nasm.us/) to write assembly programs on a 32-bit i686 Ubuntu Linux virtual machine.
The goal of this series is to document my journey as I learn how to program in assembly. I’m eager to improve my command over assembly language because I want to learn more about exploit development and low-level programming. Therefore, there may be asides or references to matters related to cyber-security / exploitation throughout the series.
Finally, this series is not completely original and is based directly on the tutorial created by https://asmtutor.com/. Most of the assembly code examples are pulled from that tutorial series. …
This blog post is not original. All exploits examined here are taken directly from Fu11pwnops’ Windows Exploitation Pathway series. It merely covers my experience — what I learned and what confused me while completing this tutorial:
This exploit pops a calculator: a malicious HEAD request sent to the vulnerable application triggers an SEH overflow.
Our target application is the IntraSRV web server, which contains a buffer overflow vulnerability in its handling of HTTP HEAD requests. I completed this tutorial on a Windows 7 64-bit virtual machine.
You can download IntraSRV here:
Let’s start by running IntraSrv and attaching it to the Immunity…
(This blog post is not original. It covers my experience and the challenges I encountered while replicating this exploit: https://fullpwnops.com/local-seh-overflow/ in Fu11Shade’s “Windows exploitation pathway” sequence. Unless otherwise stated, quoted text is from Fu11Shade’s blog post. To configure your system to follow along with this tutorial complete: https://fullpwnops.com/immunity-windbg-mona/)
In this tutorial we’ll be exploiting an overflow that occurs when Millennium MP3 Studio 2.0 (install here: https://www.exploit-db.com/exploits/10240) attempts to open files with certain extensions. I completed this tutorial on a Windows 7 64-bit virtual machine.
Structured Exception Handler (SEH) based overflows work in many different ways. …
What happens under the hood when a function calls another function?
Firstly, it’s important to understand stack frames. Each function has a stack frame. A stack frame is a “collection of data associated with a subprogram call,” (www.cs.uwm.edu). The size and structure of the stack frame vary greatly and are determined at compile time. We call this collection of data a “stack frame” because we organize each subprogram call in a stack data structure. This means that the first subprogram call (collection of data) pushed onto the call stack will be the last stack frame to resolve (terminate). …
I thought I understood Linux file permissions — but it turned out there were a few conceptual gaps. This topic, while not the most exciting, is quite important. So read on and see if you have a solid grasp.
Firstly, any file or directory in Linux has different categories of permissions — user, group, and other. The different permissions set for these groups determine who can do what with a given resource. If we inspect a file in our system we’ll see the following:
Let’s dissect a weird bit flag program that took me a second to understand. In doing so, we’ll hopefully gain a more robust understanding of how bit masks and bitwise OR logic can manipulate values effectively.
Specifically, the program we’ll look at prints the binary representation of different access mode flags that are used in the open() function (included in the <fcntl.h> header). The access mode flags specify how the file being opened or created may be accessed. The flags have “values that correspond to single bits”, (Hacking: The Art of Exploitation, Jon Erickson) and consequently, the flags can be combined (using the bitwise OR operator) to create new behavior. …
Often, there is more than one algorithm that can solve a problem. So, we need metrics to help us determine which algorithm is more effective at solving the problem at hand. Time and space complexity are the two metrics we use most consistently to compare algorithms. This article aims to provide some intuition for a very important concept that can sometimes seem overly vague or confusing.
The time complexity of an algorithm represents the rate of growth of time as a function of the length of the input. Since each operation the computer completes takes approximately constant time c, we are mostly concerned with how the number of operations (and therefore the running time) grows as the length of the input increases. …
I did a practice interview on Pramp this week. It didn’t go super well — which is frustrating because in retrospect it wasn’t that difficult of a question. Let’s jump in.
I was asked a question called “Sentence Reverse.” You are given an array of characters “s” that “consists of sequences of characters separated by space characters. Each space-delimited sequence of characters defines a word.” So, you are given something like this:
s = “perfect makes practice”
You are asked to “implement a function ‘reverse_words’ that reverses the order of the words in the array in the most efficient manner.” …
So we’ve all compiled programs before, but do you know how your computer divides up and stores the different parts of a program? Be patient, this kind of overwhelmed me at first. Let’s jump in.
A compiled program is broken into five segments: text, data, bss, heap, and stack.
The text segment is where the machine language instructions of the program are located. When a program begins executing, the instruction pointer (RIP on x86-64, EIP on 32-bit x86), the register that points to the next instruction to be executed, is set to the first machine language instruction in the text segment. …