A Quick Primer On Big O Notation

Every blog post on “How to Become a Software Engineer” talks about how vital Big O Notation is, but what exactly is it? Here’s a five-minute introduction for those not in the know.

Maxwell Harvey Croy
DataSeries


You might be taking coding tutorials online, diligently working through labs in a boot camp, or maybe you've just started researching what a transition into Software Engineering looks like, and everything is starting to click into place.

Do you know the difference between an IDE and a text editor? Check. Can you build a CRUD app in your language of choice? Check. Can you pin down the runtime of an algorithm relative to the size of its input?

Uh, hold up. What do you mean by runtime? And what exactly is an algorithm again? In a cold sweat, you Google “Runtime Analysis,” and there it is — Big O Notation.

In programming, we start our learning journey by focusing on the end goal. "If it works, I'm happy," is a phrase uttered by many a junior developer. But as you work on larger and more complex projects, understanding how your code scales becomes essential to designing elegant, efficient solutions to the problems you face.

This is where Big O Notation comes into play. Big O Notation lets us describe the complexity of our code algebraically, so we can estimate how fast or slow it will run as the size of its input grows.
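To make that idea concrete, here's a small sketch (my own illustration, not from the article) contrasting two ways to solve the same problem, checking a list for duplicates, one with O(n²) runtime and one with O(n):

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements, so doubling the
    # input roughly quadruples the work.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    # O(n): a single pass, using a set for constant-time
    # average-case membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both functions return the same answers, but Big O tells you how each will behave as the list grows: the first slows down dramatically on large inputs, while the second scales roughly linearly.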

Maxwell Harvey Croy
DataSeries

Music Fanatic, Software Engineer, and Cheeseburger Enthusiast. I enjoy writing about music I like, programming, and other things of interest.