Running Linux Apps on Windows (and other stupid human tricks) Part I

Stephen Walli
10 min read · Apr 13, 2016


At the Microsoft Build 2016 conference last week, Microsoft announced the ability to run Ubuntu Linux binaries on Windows 10. There’s a good article by Steven J. Vaughan-Nichols describing the announcement. So, yes, pigs really do fly.

There is, however, a history of a previous way of accomplishing something amazingly similar that has remained so unremarkable in its passing that even Microsoft appears to have forgotten its own history. It was important enough to a group of us that I wanted to set it all down as I best remember it.

The Idea

“I know where we can get the people,” I said.

Doug smiled and said, “I know where we can get the money.”

It’s Spring 1995. Windows NT was the new [real] operating system from Microsoft, set to replace the highly successful program-loader-with-a-GUI that was Windows 3.1. I had just come from being an expert witness in a U.S. government bid protest on whether or not the Windows NT POSIX subsystem fulfilled the U.S. government requirements for a POSIX operating system. I knew where its edges were, but I also now understood the potential to make it so much more useful.

Walli’s First Law of Applications Portability: Every useful application outlives the platform on which it was developed and deployed.

I coined the First (and Second) Laws of Applications Portability in the mid-1990s. I spent the 1980s rewriting working applications from platform to platform and architecture to architecture. There had to be a better way, and that’s what had attracted me to the IEEE POSIX and ANSI C standards in 1989.

Now, six years later, what if you could properly port all of your business-critical UNIX applications to Windows NT and have them behave with absolute fidelity? And by port, I mean type “make” at the command line and fiddle a bit in an afternoon, not spend months rewriting the application to Win32. What if you no longer had to buy and maintain outrageously priced hardware from the UNIX system vendors, but could buy PC-class hardware instead? Microsoft was on an explosive growth curve and Windows NT was a proper operating system. Linux was still very much in its infancy and a long way from being proven. The UNIX Systems Labs v. Berkeley Software Design lawsuit had put a chill over the BSD community.
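
To make that contrast concrete, here is a small illustrative sketch (not code from the product): a trivial program written to the POSIX interfaces that builds with a one-line make rule, while the comment notes the Win32 equivalent a rewrite would need at every such call site.

    /* list.c: the kind of code meant by "type make". It builds unchanged
     * with any conforming C compiler and POSIX headers:
     *     cc -o list list.c
     * A Win32 rewrite would instead use FindFirstFile/FindNextFile and
     * WIN32_FIND_DATA, and repeating that change at every call site in a
     * large application is what turns an afternoon into months.
     */
    #include <stdio.h>
    #include <dirent.h>

    int main(int argc, char *argv[])
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        DIR *dir = opendir(path);
        if (dir == NULL) {
            perror(path);
            return 1;
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL)
            printf("%s\n", entry->d_name);

        closedir(dir);
        return 0;
    }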

Doug and I had both recently left Mortice Kern Systems, where we built the UNIX tools for DOS and Windows and delivered a POSIX standard shell and utilities across diverse operating systems (e.g., VMS, MVS, MPE/iX, CTOS, … and Solaris). I was consulting at the time and had just finished writing a book for X/Open on porting UNIX applications to the newly minted Single UNIX Specification, a kitchen-sink superset of the IEEE/ISO POSIX standards.

Doug and I met at a reception at the Spring 1995 Uniforum trade show in Dallas. The back-of-a-napkin plan still looked coherent in the cold light of dawn and over the next few months we turned it into a real plan for a startup to allow us to port UNIX applications to Windows NT. I’d been part of the IEEE and ISO POSIX standards community for about six years. It was time to put my money where my mouth was.

By September, we had been through the grinder of pitching investors to no avail. We still believed in the idea, so six of us bootstrapped the company along with a proper “friends-and-family Series A”. We signed a rather unusual source license agreement with Microsoft, and we were away!

The Architecture

The Windows NT architecture was essentially a kernel, a collection of functional subsystems (I/O, Security, etc.), and a set of environment subsystems (Win32, POSIX, and originally an OS/2 one). At the time it ran on Intel (IA-32), MIPS, DEC Alpha, and PPC.

An old OpenNT/Interix architecture diagram (c. 1998)

Every application process ran as a client of an environment subsystem that provided its system services interface. The OS services were then provided by the underlying functional subsystems via a fast local message-passing facility provided by the kernel. Each environment subsystem was its own executable and ran as a separate process. The NT kernel interface had sufficient granularity that an environment subsystem could present almost any operating system personality. A user would see a desktop full of applications without needing to understand which app was running on which environment subsystem. NTFS could provide UNIX file system semantics, so there was one coherent file system under the covers (i.e., no pretending to be UNIX with a Win32 view of the file system, and no large Win32 file “container” pretending to be a UNIX file system).
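
As a rough conceptual model of that flow (invented for illustration; the real NT message formats and interfaces were nothing this simple), an application’s POSIX call was marshalled by a client-side stub, passed as a message to the subsystem process, and satisfied there on top of the native NT services:

    /* Conceptual model only: the names and structures here are made up to
     * show the shape of the client/subsystem message flow, not the actual
     * NT local message-passing API.
     */
    #include <stdio.h>
    #include <string.h>

    enum psx_call { PSX_OPEN, PSX_WRITE, PSX_FORK };

    struct psx_request {            /* marshalled by the client-side stub */
        enum psx_call call;
        char          path[256];
        int           flags;
    };

    struct psx_reply {              /* returned over the same message port */
        int result;                 /* e.g. a file descriptor */
        int err;                    /* errno value on failure */
    };

    /* Stand-in for the subsystem process: it owns the UNIX semantics
     * (descriptors, fork, signals, ...) and maps them onto NT services. */
    static struct psx_reply subsystem_dispatch(const struct psx_request *req)
    {
        struct psx_reply reply = { -1, 0 };
        if (req->call == PSX_OPEN) {
            printf("[subsystem] open(\"%s\") mapped onto NT file services\n",
                   req->path);
            reply.result = 3;       /* pretend descriptor */
        } else {
            reply.err = 1;          /* pretend "not implemented" */
        }
        return reply;
    }

    /* Stand-in for the stub linked into every client application: it
     * marshals the call into a message and waits on the reply. */
    static int psx_open(const char *path, int flags)
    {
        struct psx_request req = { PSX_OPEN, "", flags };
        strncpy(req.path, path, sizeof(req.path) - 1);
        return subsystem_dispatch(&req).result;
    }

    int main(void)
    {
        int fd = psx_open("/etc/passwd", 0);
        printf("[application] got descriptor %d\n", fd);
        return 0;
    }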

OpenNT (our product) was a replacement POSIX environment subsystem, providing much more of the system service interface of a UNIX system than was captured in the original Microsoft POSIX subsystem. The original was designed to survive the U.S. government certification process. We started with the 4.4BSD-Lite distribution as our porting base, because that was the source code USL said was free of their taint. (We of course used the CD distribution created by O’Reilly & Associates.)

The Early Startup

We launched OpenNT 1.0 at the Spring 1996 Uniforum trade show at Moscone in San Francisco. We had good press. (Here’s an old GCN article from our launch.) We had running product. We could demonstrate early porting capabilities. A number of the investors who had previously told us “we weren’t going to make anyone rich” called us back, and we did our first round of formal VC money by May 1996. (US$2.2M was a reasonable amount of money in 1996.)

We could add staff now that we had a bit of funding. We could be a real company instead of six people running as fast as we could in our respective home offices, most of us still consulting on the side.

As fast as we added new kernel functionality, we would port the next level of utilities to the product. BSD sockets were the first big addition, and it wasn’t terribly long before we had the Apache web server up and running our company website. (The Win32 version of Apache was still a slow, awkward rewrite.) One of the fun things we discovered in those early days was that the Win32 subsystem could crash while our subsystem was still happily ticking away, running the company website. Our performance numbers were also solid, as our subsystem wasn’t carrying around the weight of the Win32 subsystem.
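
For a sense of what the BSD sockets addition meant in practice, here is a minimal illustrative sketch (not our code) of the kind of 4.4BSD-style server loop that could then compile and run on the subsystem without change:

    /* echo_once.c: an illustrative 4.4BSD-style TCP server sketch. Once
     * the subsystem exposed BSD sockets, code like this built and ran
     * unmodified, which is what made porting a server such as Apache a
     * recompile rather than a rewrite.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        if (listener < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                /* arbitrary example port */

        if (bind(listener, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(listener, 5) < 0) {
            perror("bind/listen");
            return 1;
        }

        int client = accept(listener, NULL, NULL);  /* serve one connection */
        if (client >= 0) {
            const char msg[] = "hello from the POSIX subsystem\r\n";
            write(client, msg, sizeof(msg) - 1);
            close(client);
        }
        close(listener);
        return 0;
    }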

We entered our first acquisition discussion a few months after the first round of investment. Citrix was interested. (This was in the WinFrame days.) Our new board thought we should let this play out a little longer, as we were just getting going. We walked away. We even thought we had dodged a bullet for a while, as Citrix stock collapsed in value to roughly its cash reserves not long after the discussions fell apart. Rumors of their demise were apparently exaggerated.

“Open Source” Before the Definition

This was all taking place in a world without the definition of “open source software.” As a set of developers, we were essentially responsible for our own legal due diligence. There just weren’t that many “open source” savvy lawyers in the mid-to-late 1990s. Every OpenNT developer understood the ramifications of the GPL. While über-conservative lawyers at large software companies might well fret about the legal ambiguity of some clauses, the average developer could read and understand the collaborative intent and reciprocity of the license pretty quickly.

Every OpenNT developer knew that before bringing software from outside the company into the product, they needed to get approval from management, which meant me and the core senior developers who had lived in this space for a while. We always took the most conservative interpretation. We had already seen what USL did to our friends at BSDi with a lawsuit.

We did check licenses from time to time with our external counsel. But developers need a passing familiarity with the licenses and copyright anyway, so we could do a lot of the work from an architectural point of view before involving lawyers, which both expedited the conversation and saved on legal costs. Few startups have the legal exposure of a Microsoft, or require that level of legal due diligence before developers touch code.

Mortice Kern Systems was just up the road from us in Waterloo, and although we considered our product to be in a different space from theirs (we were about applications migration and they were the UNIX tools on Windows), we didn’t want claims of source code infringement, as many of us had worked there. We were rigorous in our source code management so that we could demonstrate code pedigree and cleanliness should the need arise.

We also had the Free Software Foundation looking over our shoulder. They would have loved for us to mistakenly introduce software covered by the GPL into the Windows code base. Our survival as a start-up depended upon the Microsoft license. Once inside Microsoft after the acquisition, we were even more careful, because we then had the ultra-conservative Microsoft legal team looking over our shoulders as well. We had email discussions twice with the FSF when we received notification that we hadn’t yet published our changes. There was always a latency in making the code available while we were trying to get the product out the door. (In the late 1990s the FSF still required that we supply CD-ROMs with the source. Simply publishing on the web was not sufficient.)

The FSF maintained the inbound contribution assignments for gcc. I signed these assignments twice for the work of my compiler developer, once as vice president of R&D at Softway Systems (our company), and once as the product unit manager at Microsoft. You can imagine the legal oversight I had the second time from Microsoft Legal and Corporate Affairs. They even proposed a small change to the assignment, which the FSF accepted.

The Business Economics of Open Source (with a gcc example)

I still don’t believe there’s an “open source software” business model per se, but the benefits of using liberally licensed, collaboratively developed software in our software business were clear. We gained a huge advantage in time to market. The code was robust; it had stood the test of time and real-world honing across a host of architectures. The engineering economics of collaboration were compelling. And we were never unclear about our product value proposition in relation to the software projects we used and to which we contributed.

We didn’t just take from the community; we engaged with different communities appropriately. We worked a lot of standards-related changes back to the pdksh team, had a developer who still maintained the ex/ed editors, and worked with the gcc community to contribute our changes and bug fixes back. This is obviously not the same level of business engagement as MySQL running its community around its own projects and products, or Red Hat and its engagement with the Linux community. We had different goals and requirements at the time. As a small company, we made small contributions.

gcc is an interesting example of our use of open source from a community engagement and economics perspective, as well as how to think about technical architecture versus license obligations. First let’s look at the community economics.

When you download gcc from the web, it’s a bundle of the compilers, linker, binary formats library, assembler, and debugger all in one tidy package. Our compiler developer was an 18-year veteran in compilers and operating systems (formerly of HP), and they made a coherent set of changes across the tools to get them to properly build debuggable executables for the OpenNT subsystem. When we began to contribute changes back, we discovered FIVE different communities hiding behind that single download, each with varying degrees of interest in accepting our changes. It was quite the negotiation.

In the end, we tried to hire Cygnus (the gcc experts of the day) to make or facilitate the changes, but in the late ’90s this would have cost US$100K+ and they couldn’t start for at least 14 months, as they were so backed up with work. (This was prior to their acquisition by Red Hat.) We finally hired Ada Core Technologies instead, as they too employed a primary committer on parts of gcc who could best facilitate a set of changes across the tool set back into the core. It was considerably less expensive and they could begin immediately.

Second, the gcc compiler presented challenges of technical architecture versus license obligation that might be subtle for some. We used gcc to build our own world (with the exception of the subsystem itself), because the gcc compiler in those days was better than the Microsoft compiler, and we needed to use gcc if we were to use gdb. (An artifact of the environment subsystem world in the late 1990s meant we could not use Visual Studio and its debugger.) Because using the gcc headers and libraries would have effectively attached the GPL to our own programs, we used the gcc compiler in conjunction with our own libraries (derived from the Microsoft C library) and our own headers. (We had a lot of experience building standards-based portability headers.)
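
A rough sketch of that arrangement follows. The include and library paths, the startup object, and the library name are purely illustrative, but -nostdinc and -nostdlib are the standard gcc switches for keeping the compiler’s default headers and runtime out of a build:

    /* hello.c: illustrative only. Programs were compiled with gcc but
     * against our own headers and our own C library, never the GNU
     * headers and runtime, so the GPL did not attach to the result.
     *
     * A build line in that spirit (paths, startup object, and library
     * name here are hypothetical):
     *
     *   gcc -nostdinc -I/opennt/usr/include -nostdlib \
     *       -o hello /opennt/usr/lib/crt0.o hello.c \
     *       -L/opennt/usr/lib -lpsxc
     */
    #include <stdio.h>      /* resolved from the product's own include tree */

    int main(void)
    {
        printf("hello from OpenNT\n");
        return 0;
    }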

Our willingness to work with the GPL, and to use GPL-licensed code in conjunction with our own code, demanded caution. We had the Microsoft asset we were required to protect, and we also wanted any code we contributed to the community to be deliberate and planned, not simply compelled by license obligations.

Recognize that none of these contributions back to the community were made out of altruism, especially considering the cost in engineering time, both our own and that of the ACT developers we hired. This was a deliberate business and engineering decision. We wanted the engineering expediency of working in a collaborative world. Despite the initial time-to-market advantage of using open source, if we had continued to live on our own code fork, our team would have needed months of changes every time a new version of gcc came out, rather than the few weeks of changes required once the OpenNT-related contributions were accepted.

This is the economic strength of the open source collaborative development world: you distribute the cost of maintenance and development across a number of players, while improving the software’s robustness through a test bed of users stretching the software in new ways. Everybody wins. Compared to developing and maintaining a compiler from scratch, which was neither a core competency (despite our talent) nor the primary value proposition to our customers, our investment was still cheap. Anything we created would be years behind gcc. To us, this was no different than IBM’s investment in the Apache web server.

In Part II, I talk about the end game for the startup, the acquisition, and some final thoughts.

I would very much like to thank the brilliant editors who made these two posts so much more readable: @vmbrasseur, @rikkiends, and Amy Ilyse Rosenthal.
