Programming In Academia Vs Industry

Riddhiman Das
Nov 10, 2011


I’ve programmed both in the academic setting and in industry. In college, I’ve worked on research projects and directed readings with professors, developed apps and an autograder, and had a great experience doing so. I’ve also programmed in industry, working on products that have actually shipped to users. Most of my work in industry has been either embedded systems programming or enterprise application programming.

A lot of people seem to think that programming in academia isn’t real-world programming. While I agree that four years of programming in academia, solving theoretical problems in C or MATLAB, isn’t going to make you a rockstar at implementing RESTful web services in Java or .NET, I do think the core programming skills and principles carry over nonetheless.

In this post, I would like to highlight some of the differences I’ve noticed in my experiences with both so far.

One of the major differences in my experience between the two domains has been guidance. If I’m stuck on something in college, even if it’s something as trivial as figuring out best practices, I can bounce ideas off my professors, and they understand that I’m learning by doing and are usually accommodating. An elegant solution is preferred over a hacked-together one, and you are encouraged to learn best practices along the way.

Now, I’m not talking about a programming assignment for class here. I’m talking about projects done under the guidance of a faculty advisor.

My experience in industry tells me that the “some product is better than no product” rule takes precedence over everything else. If you cannot figure out the most elegant way to do something, hack something together so you have a working solution. If time and budget permit, you may have a chance to go back and fix it later, although sometimes you may not. You may have a mentor assigned to you, but he’s busy with his own work and will usually just point you toward something; if that doesn’t help, you’re on your own to figure it out. If you ask TOO MUCH, you’re probably a bad hire.

Another difference I’ve discovered is documentation (comments in code). While programming in academia, if you do not document your code, someone’s going to get upset, either because they don’t know the language you’re using and can’t figure out the algorithm, or simply because you’re not following what has been taught in your courses. In industry, regrettably, no one has the time for extensive source code documentation. Maybe it’s just that I’m working under experienced professionals who know what the code does by looking at ten lines of a 500-line class implementation, but such has been my experience. After all, were you hired to produce products and services or to write pre- and post-conditions for your methods?

I’m sure I’m not the only person who’s wished that this would change. For a developer, comments in source code are more helpful than external documentation any day.
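To illustrate the kind of in-source documentation I mean, here is a minimal sketch in Java, with the pre- and post-conditions spelled out right next to the code. The class and method names are hypothetical, purely for illustration:

```java
/**
 * Utility for applying discounts to order totals.
 * (Hypothetical example, purely for illustration.)
 */
public final class PricingUtil {

    /**
     * Applies a percentage discount to an order total.
     *
     * Pre-conditions:
     *   - total is non-negative
     *   - discountPercent is between 0 and 100 (inclusive)
     *
     * Post-conditions:
     *   - the returned value is non-negative
     *   - the returned value never exceeds the original total
     *
     * @param total           the order total before the discount
     * @param discountPercent the discount as a percentage (0-100)
     * @return the discounted total
     * @throws IllegalArgumentException if a pre-condition is violated
     */
    public static double applyDiscount(double total, double discountPercent) {
        if (total < 0 || discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("pre-condition violated");
        }
        return total * (1 - discountPercent / 100.0);
    }

    private PricingUtil() {
        // utility class, not meant to be instantiated
    }
}
```

A reader who has never seen this class can tell from the comments alone which inputs are legal and what the result is guaranteed to look like, without reading the method body.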

Academic programming also tends to have evolving designs and architectures. Trial and error is better than something set in stone. A professor would rather work on an interesting problem than have meetings to discuss an architecture document. After all, wasn’t I supposed to have taken a software engineering class?

Programming in industry will always have written architecture, design and other documentation. If not, it’s probably not a real project.

I will admit that I’ve programmed more in an academic setting than in industry, but so far, I think there are pros and cons to both, and both would do themselves a favor if they learned from each other.

UPDATE: I also posted this question on the Stack Exchange Programmers community, and I received some very interesting answers. Here are some of them:

In a traditional undergraduate computer science program you learn just programming. But industry doesn’t want people who are just programmers; industry wants real software engineers. I know many job descriptions don’t seem to know the difference, which only confuses the matter, but in the real world you need to be able to:

- Gather and analyze requirements, when they aren’t directly given to you
- Design and analyze architecture, with near endless possibilities
- Create test plans and act on them, to evaluate and improve the quality of a system
- Work collaboratively on a team of people with different backgrounds and experience levels
- Estimate and plan work, even if you don’t know exactly what to build
- Communicate effectively with stakeholders, who have different needs that don’t necessarily align
- Negotiate schedule, budget, quality, and features, without disappointing stakeholders

Oh yeah, and you also have to be able to write code too, but that’s, on average, only 40–60% of a software engineer’s time.

So, it’s not that freshly minted computer science undergrads don’t know how to program (many are, in fact, very good programmers); it’s that many of them don’t know how to do anything else!

University

(I call this scenario university, because programming as an actual computer scientist is also different from what you do while studying.)
Your teacher gives you:

- A well-defined, isolated problem, the solution of which can be provided within a short and well-defined time span and will be discarded afterward
- A well-defined set of tools that you were introduced to prior to the assignment
- A well-defined measure for the quality of your solution, with which you can easily determine whether your solution is good enough or not

“Real World”

In this scenario:

- The problem is blurry, complex, and embedded in context. It’s a set of contradictory requirements that change over time, and your solution must be flexible and robust enough for you to react to those changes in an acceptable time.
- The tools must be picked by you. Maybe there’s already something usable in your team’s 10-year-old codebase, maybe there’s some open source project or a commercial library, or maybe you will have to write it on your own.
- To determine whether the current iteration of your software is an improvement (because you’re almost never actually done in a software project), you need to do regression testing and usability testing, the latter of which usually means that the blurry, complex, contradictory, context-embedded requirements shift once again.
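To make the regression-testing point concrete, here is a minimal sketch of what such a test can look like, assuming JUnit 5 and the hypothetical PricingUtil class from the earlier example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Regression tests pin down behaviour the rest of the system already relies on,
// so a later "improvement" that accidentally changes it fails the build.
class PricingUtilRegressionTest {

    @Test
    void tenPercentDiscountStillComputedTheSameWay() {
        assertEquals(90.0, PricingUtil.applyDiscount(100.0, 10.0), 1e-9);
    }

    @Test
    void negativeTotalIsStillRejected() {
        assertThrows(IllegalArgumentException.class,
                () -> PricingUtil.applyDiscount(-1.0, 10.0));
    }
}
```

Every time the requirements shift and the code changes, a suite like this is what tells you whether the new iteration is still at least as correct as the old one.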

Conclusion

These things are inherently different to the point where there’s actually very little overlap. These two things work at two completely different levels of granularity. CS will prepare you for “real world” software development like athletics training would prepare an army for battle.

Academia is mainly focused on the “science of programming,” that is, studying how to make a particular algorithm efficient or developing languages tailored to make certain paradigms more expressive. Industry is mainly focused on producing things that have to be sold. It has to rely on “tools” that are not only the languages and the algorithms, but also the libraries, the frameworks, etc.

This difference in “focus” is what makes an academic master of C practically unable to write a Windows application (since the Windows API is not in the C99 standard!), and thus feel as if he is “unable to program.” But, in fact, he has all the capability to teach himself what he’s missing. That is something that, without proper academic study (not necessarily done in academia), is quite hard to find.

Originally published at www.rdas.co on November 10, 2011.
