Why Choose the Apache-2.0 License?

A Carefully Weighed Decision

Here’s why I’ve come to think Apache-2.0 is generally the preferred default license for most projects:

  • Is permissive (same as MIT, BSD-2)
  • Is wordier, so there is less room for misinterpretation
  • Does not force hardware vendors to do anything
  • Contains explicit patent language
  • Has the Apache Foundation behind it

I have had to read about open source licenses for most of my professional life. It is not something I particularly enjoy, but I have come to understand it has great importance for any developer (or writer, for that matter). Now that The KnowledgeNet Foundation is a thing, I want to pick a license that will encourage adoption while protecting the foundation, contributors, and anyone who chooses to use what we create, without overreaching. The Apache-2.0 license seems to fit that need best. No wonder Swift and Kubernetes use it.

Not the First That Comes to Mind

“Isn’t Apache that old web server?” is often the response from some. “Why would I use a license from the Apache organization?”

No one who has been keeping up on licensing news lately would really say that, especially after the Apache Foundation took on Facebook (not literally, of course) over the highly controversial BSD+Patents terms attached to React, GraphQL, and other “open source” contributions. As you will remember, Facebook said it wanted to protect itself from frivolous patent litigation, so it reserved the right to revoke its patent grant from anyone who sued it over any patent claim, whether or not the claim had anything to do with the software.

Many of us (myself included) blogged about companies like VMWare rejecting React because of Facebook’s weird license, but it was not until the Apache Foundation itself effectively said “no Facebook libraries” in its own projects that Facebook and the world took note. The Foundation stuck to its policy, and Facebook relented and moved to one of the more traditional licenses. It is no wonder Facebook went with MIT instead of Apache-2.0 given the circumstances of their bludgeoning. But there may be another reason.

Apache-2.0 Explicitly Grants Patent Rights (MIT Does Not)

Ariel Reinitz (a patent lawyer) writes a great summary of why this explicit grant of patent rights matters. In short, MIT’s silence on patents leaves legal wiggle room that has not been tested in court. That makes me (and others) uncomfortable. From my experience at Nike and IBM, uncomfortable is a thing.

Uncomfortable can block adopting a technology, hiring a developer, or going public with a contribution. If there is a comfortable place to go, people go there instead. Apache-2.0, with its long, explicit language spelling out what most people assume MIT already says, is that more comfortable place, even if it is harder on the eyes and carries a lot more legalese.

Some might even argue that Facebook liked MIT because of that patent wiggle room. We will never know until someone sues Facebook, or Facebook chooses to surprise the world and sues people over the stealth GraphQL patents some suggest it holds.

Apache Sticks with Software

There are a lot of great things about the GPLv3 license (an explicit patent grant like Apache’s, grace periods for compliance), but one of them is not the requirement that hardware manufacturers who ship GPLv3 software make their hardware user-upgradeable. That alone makes GPLv3 absolutely unusable to me, and to the entire Linux Foundation. It is really too bad.

The intention is well taken. The biggest denial-of-service attack the world has yet seen was launched by a bunch of toasters and set-top boxes running software that could not even be updated without buying a new device. But this concern for “freedom” removes the freedom of hardware manufacturers to decide what is secure, and to protect themselves from litigation, or worse, from hackers reverse engineering the contents of their ROM chips to find zero-day exploits. Both sides have the same intent, just different means. The security argument can go two ways:

  1. The Internet is more secure when the chip in any device has to be unsoldered from the device intact and then reverse engineered to extract the binary kernel and applications.
  2. The Internet is more secure when hardware device makers make blundering-idiot mistakes that allow a default root password to be used and have no way to update the thousands of devices that most people do not even know exist.

After much conversation and consideration, I agree that the first approach is more secure. Even if device manufacturers were forced to allow their devices to be upgraded, as GPLv3 requires, how many of them would have the capacity to push such an upgrade in the event of a catastrophic security failure (like the one behind the Dyn DNS attack)? I’m thinking not many, if any. So forcing the issue does not make us more secure; the alternative is a better fail-safe. In fact, it would likely be cheaper for a company to pay to replace all the failed devices than to put a system in place to securely and dynamically update all those devices over the Internet.

Then there is the question of delivering the update. Such devices would have to be able to connect to the Internet to receive an upgrade, which means that many more thousands of devices that can potentially be attacked remotely. Better to let manufacturers embed their chips as securely as possible and give the devices no connectivity at all.

Price is an issue as well. In the world of embedded electronics, resources matter even more. Giving a $50 router all the software and hardware needed to be upgraded (over the Internet or by an owner) would likely double the price of the device. Like I said, it is cheaper to pay for everyone to replace an insecure device than to build the possibility of remote upgrade into every device.

In summary, the GPLv3 anti-“tivoization” language is based on a sincere desire, but it is just wrong from a practical perspective. It is like Stallman and the gang saw “DRM” and immediately ran (as they did from the W3C, though I agree with a lot of their points, hence KnowledgeNet). That is not how I see software and hardware playing out in the real world.

By the way, bash is GPLv3. Does that mean no bash command line on locked-down devices that stick with GPLv2 Linux? It wouldn’t fit anyway. All the more reason to stick with a POSIX (Bourne) shell and avoid learning and using bash-isms (which was always a bad idea).

GPL is Out

The only reason I would ever pick a GPL license, with all of this going on, is if I were writing software specifically for Linux. I’m not. I suppose we could release our C reference library under GPLv2 “or other” and sidestep the problem, but then others who might want to use the C version in closed, commercial software could not do so without open-sourcing their own code. So Apache-2.0 fills the need perfectly.

Say a company wants to use a C BaseML stream-handling library. Under Apache-2.0 it could: Linux would still be covered under its own terms, and the company’s app could be under whatever licensing it wants. The company could modify the library to fit its needs and include it without any penalty, making it far more likely to adopt (and contribute back to) what I hope will become something of an Internet standard (with an ABNF grammar, an RFC, and all that good stuff).
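To make that concrete, here is a minimal sketch of what such an integration could look like in C. Everything BaseML-specific in it (the baseml_stream type and the baseml_open and baseml_next calls) is hypothetical, invented purely for illustration since no such library exists yet. The point is only the licensing shape of it: the permissively licensed library code and the company’s proprietary code sit side by side, and compliance amounts to keeping the library’s Apache-2.0 headers, LICENSE, and NOTICE attributions with the distribution.

    /* app.c -- a proprietary application using a (hypothetical) Apache-2.0
     * licensed BaseML stream-handling library. The application code can stay
     * under any terms; the obligation is to keep the library's license text,
     * file headers, and NOTICE attributions with the distribution. */
    #include <stdio.h>
    #include <string.h>

    /* --- Hypothetical BaseML library surface, stubbed inline for illustration.
     *     In reality this would come from the library's own Apache-2.0 licensed
     *     headers, each carrying the standard Apache-2.0 header comment. --- */
    typedef struct {
        const char *cursor;          /* current position in the raw BaseML text */
    } baseml_stream;

    static void baseml_open(baseml_stream *s, const char *text)
    {
        s->cursor = text;
    }

    /* Copy the next newline-delimited node into out; return 0 at end of stream. */
    static int baseml_next(baseml_stream *s, char *out, size_t outsize)
    {
        if (s->cursor == NULL || *s->cursor == '\0')
            return 0;
        size_t linelen = strcspn(s->cursor, "\n");   /* length of this node */
        size_t copylen = linelen < outsize ? linelen : outsize - 1;
        memcpy(out, s->cursor, copylen);
        out[copylen] = '\0';
        s->cursor += linelen;
        if (*s->cursor == '\n')
            s->cursor++;                             /* skip the delimiter */
        return 1;
    }

    /* --- Proprietary application code: free to modify the library above and
     *     ship the combined result under its own commercial terms. --- */
    int main(void)
    {
        baseml_stream s;
        char node[128];

        baseml_open(&s, "title: Why Apache-2.0\nbody: permissive, with patents\n");
        while (baseml_next(&s, node, sizeof node))
            printf("node: %s\n", node);
        return 0;
    }

Under GPLv2 the proprietary half of this file could not stay proprietary once distributed; under Apache-2.0 it can, which is exactly the adoption argument above.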

Apache Foundation is Just Awesome

The Apache Foundation became something of a cult hero to me when it stood up to Facebook during the well-intended but poorly executed “+patents” fiasco. It had the sense and the courage to do the right thing while still honoring the needs of corporations, without fully demonizing them the way the GPL camp (and the EFF) tend to do.