A sprinkling of 2016 tech predictions

Joe Walnes
18 min read · Jan 26, 2016


Larry the Squirrel contemplating 2016

I’ve just finished reviewing my 2015 technology predictions. I got four spot on correct, one sort-of-correct-ish, and two totally and utterly wrong.

Roll on 2016.

Just like last year, when first reading some of these ideas, it may sound like I’m a little crazy (I won’t deny that). But please take a moment to read the why. If you have ideas, or see something inaccurate, please leave a comment.

There’s a lot here. Don’t read the entire article, just jump to the prediction that enrages you the most:

Microsoft will open source core Windows OS

I thought I’d lead with a whammy. Microsoft to open source Windows!

Sort of.

Not the full Windows we think of running on desktops and servers. Rather, a bare-bones operating system capable of running headless services such as data-stores and web-apps. We’re talking about a subset of the kernel, a minimal set of drivers, core user-land services and some remote administration tools.

Why? Well, Microsoft are losing ground in the hosting game. The lines between enterprise IT (where Microsoft does very well) and the magical elastic inter-cloud (where Linux does very well) are starting to blur. Microsoft need the tech community to treat the Windows platform as a valid choice to build the next Twitter or YouTube on.

Even with a free core Windows, Microsoft has plenty of opportunity to make money: cluster management tools, monitoring, increased sales of Windows on developer laptops, performance add-ons, and anything that runs on top of Windows.

Perhaps the biggest challenge in open sourcing the core Windows kernel is unwinding Microsoft’s code from third-party licensed intellectual property. With just the kernel, core services and very few drivers, this becomes feasible.

Android will start a major shift away from Java

When Android was announced, there was a lot of excitement about them using Java (sort of) as the core language.

Let’s go back to 2008…

Apple had just opened up iOS for third party apps, forcing developers to learn Objective-C, a weird quirky language that was little known to all but a small community of OS X developers and NeXTSTEP developers from the 90s. To many frontend developers who had previously been exposed to C# or Java, the leap to Objective-C was tough.

Then Android appeared and there was much relief. A huge community of developers could relate to Java. It was a popular language taught in university computer science classes, had significant traction in the enterprise, was already established in the low-end phone market, and was close enough to C# to draw in Windows developers. There was a familiar IDE, OO components, a rich ecosystem of libraries — this was much more appealing than Objective-C. Sure, the Dalvik VM wasn’t really a JVM, but it looked close enough.

As expected, hordes of developers jumped on to Android.

There were some complex relationships around licensing and IP, but Java was owned by Sun, a company that had historically been pretty easy to work with.

Life was good. And then Sun was acquired by Oracle.

Oracle and Google have been at each other’s throats for years now, tussling over Java in the Android platform. This isn’t good for either of them. Meanwhile, iOS continues to thunder forward, and with Apple’s introduction of Swift, we can’t even complain about the weird language anymore.

In February 2015, a mysterious commit appeared in the Android codebase bringing in what appears to be the entire official OpenJDK. No comment from either Google or Oracle. Eventually in December, a Google spokesperson came out with:

“As an open-source platform, Android is built upon the collaboration of the open-source community” … “In our upcoming release of Android, we plan to move Android’s Java language libraries to an OpenJDK-based approach, creating a common code base for developers to build apps and services. Google has long worked with and contributed to the OpenJDK community, and we look forward to making even more contributions to the OpenJDK project in the future.”
(source)

Wow. What a load of substance-less jargon. A “common code base for developers to build apps and services”. What does that even mean? Let’s look at some of the source now in the Android code base:

  • Java Swing GUI toolkit, including bindings for Windows, GTK and Motif
  • Java AWT GUI toolkit — something not even Java developers have used since the 90s
  • Enterprise authentication services including Windows NT domains, Kerberos, and LDAP
  • OS X file system support
  • Native C bindings for running on Solaris OS
  • Applet frameworks (remember that?), printing services, Java Management consoles…

How does any of this benefit the Android community? It doesn’t.

If I were skeptical (I am), I’d guess that the only reason this is here is as some kind of compromise between Oracle and Google. Oracle may be happy with the result, but this does not benefit Google or the Android community.

Java is continually growing baggage that holds Google back, slowing innovation, restricting each move, and costing a lot of time in lawsuits and damages.

If Google could go back in time, I’m sure they wouldn’t pick Java as their platform. While the initial traction benefitted them, the future is looking bleak.

So if not Java, then what?

I don’t know.

Google certainly have the in-house expertise to build their own language. They’ve produced Go, Dart, the V8 JavaScript engine, and of course Dalvik.

There are also JVM-compatible languages that don’t have ties to Oracle, such as Groovy (which the Android community already has exposure to through the Gradle build system recommended by Google), Scala, Clojure and Kotlin. If Google were going to pick one of these languages they’d have to find a way to isolate them from the underlying shared JDK libraries.

Google could move more towards web-based installable applications. We’ve seen attempts to push these, including HTML5 offline web-apps that can be installed on a home screen, WebOS, PhoneGap/Cordova, and React Native. If the performance is good enough and the hooks are in place, this could be a compelling solution.

And… there’s Swift. Although it’s an Apple product (and Google has had their fair share of lawsuits with Apple), it’s open source and has a license that isn’t going to screw Google. It’s clean, fast, modern, appealing, pragmatic, but most importantly has a huge existing mobile developer community. Mobile developers everywhere would be happy if Android and iOS shared the same language.

Whatever happens — the hurdle to overcome is the transition path. When Apple introduced Swift, they did a great job of allowing near-seamless interoperability with Objective-C. Developers could incrementally introduce it to their projects and mix and match Objective-C and Swift libraries.

With Java it’s trickier — Google need to get away from it altogether. The core libraries, right down to splitting a string, would have to be different. I imagine Google would allow a mixed mode (both allowed) as a transition period, with the intent of eventually moving to a pure (no Java libs) world, which will take many years.

Strong backlash and caution against SaaS shutdowns

We’ve seen the cycle so many times now. That email sitting in your inbox with the three words we dread to read: “our incredible journey”. It seemed like only yesterday you read an email from the same company explaining how they’re excited to be acquired and nothing will change.

You trusted that service as it promised to revolutionize the way you capture personal notes, curate playlists, suggest reading, generate diagrams, arrange thoughts, publish websites, organize your email, share your photos, sell your products, track your client invoices, curate recipes, and collaborate with your team.

And just like that it’s gone.

It’s easy to suggest never using the cloud, but let’s look at the advantages it brings…

  • We should be able to use software without having to become sys-admins. If I start a business selling home-made dog hats, I shouldn’t have to learn about setting up SQL databases, securing web servers, configuring web-apps or requesting certificates. I just want to click a button.
  • We’re not an island. We need multiple people around the world to be able to access our data. It can’t just sit on our laptop.
  • We shouldn’t have to lie awake at night worrying about the latest zero-day SSL vulnerability or previously unthought-of X-jacking approach a researcher has just discovered. We want to know that there’s a team of experts defending us.
  • We want to know that our services are continually getting better. Faster, more functionality, improved experience, more value. And we’ll take advantage of this for our own work.
  • We want things faster and cheaper. Software development teams invest significant resources working on installers, upgraders, remote diagnostics tools, platform portability, etc. A SaaS model eliminates a great deal of that, which means we the customers get it faster and cheaper.

Ok, so what alternative models are there? And why aren’t they working?

  1. More open data standards and data liberation. Most services now offer some kind of export functionality. Unfortunately, it’s useless for the vast majority of users: unless you’re a programmer, you can’t actually do anything with the exported data.
    I recently spent three weeks going back and forth with the Google Apps for Work support team, trying to migrate a folder of documents from my personal account to a company one — it was basically impossible without document degradation (lost metadata, reduced image quality, broken formatting, spreadsheet formulas that stopped working). If Google can’t even effectively move a user’s data around within their own product, what hope is there for us to liberate data across different services?
    Open standards can fix this, but it’s a long path. And standards are inherently at odds with innovation and service uniqueness — there’s no industry standard for that unique feature added last week.
  2. Pay more, for less. While it’s not foolproof, if you’re using a service that’s making a sustainable profit from its users and not desperately seeking acquisition, your chances may be higher of this thing staying around. If you’re getting something for free, well, you may be getting what you paid for in the long run.
    Now take a look at Posthaven’s pledge. Their business model is you pay for the blog and they are committed to keeping it running forever and never being acquired. Will it work? I don’t know.
  3. Run the software yourself. Back to the old model of ensuring you own the software and can keep it running yourself. This certainly protects you from losing everything when the service shuts down, but you lose all the benefits above. To many, the skills required to do this make it a non-starter. Some may pay other people to be their “IT person”. Either way it takes time and/or money, which defeats the whole purpose of SaaS scalability.
    There may be some hope though. Containerization technology like Docker can simplify packaging and distribution of services. Even more interesting is Sandstorm, which takes containerization to the next level, making it functional and simple enough that non-technical end users can point and click to seamlessly deploy and configure services in a cloud of their choice, even on-premise.
  4. Layered services. Services built on services. Lower level services are stable and lower risk. Higher level services are innovating more but at more risk of shutdown. In the event a higher level service shuts down, users can fall back to the next level down in the stack until another higher level one appears to take its place.
    Okay, here’s a concrete example… Sharing photo albums. A lower level service such as Dropbox can provide storage of photos, and a higher level service can provide a beautiful photo gallery on top of Dropbox. In the event that the higher level service is shut down, your photos aren’t going anywhere and another high level service can fill its place (a rough sketch of this layering follows after the list).
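
To make the layering idea concrete, here’s a small hypothetical TypeScript sketch. PhotoStore is a made-up interface (not Dropbox’s real API); the point is simply that the higher level gallery only depends on a generic lower level storage contract, so the photos outlive any particular gallery service:

```typescript
// Hypothetical sketch of the layering idea. "PhotoStore" is a made-up
// interface, not Dropbox's real API: the gallery (the higher level,
// riskier service) only ever talks to a generic lower level storage
// contract, so if the gallery disappears the photos survive and another
// gallery can be pointed at the same store.
interface PhotoStore {
  listAlbums(): Promise<string[]>;
  listPhotos(album: string): Promise<string[]>; // returns photo URLs
}

// A higher level service: renders a gallery page from whatever store it's given.
async function renderGallery(store: PhotoStore, album: string): Promise<string> {
  const photos = await store.listPhotos(album);
  return photos.map(url => `<img src="${url}" alt="">`).join("\n");
}

// Swapping the lower level provider (Dropbox today, something else tomorrow)
// is just a matter of supplying a different PhotoStore implementation.
const inMemoryStore: PhotoStore = {
  listAlbums: async () => ["holiday-2015"],
  listPhotos: async () => ["https://example.com/p/1.jpg", "https://example.com/p/2.jpg"],
};

renderGallery(inMemoryStore, "holiday-2015").then(html => console.log(html));
```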

In 2016, I hope to see a lot more caution around selecting services and discussion around how to make these more sustainable.

p.s. Yet here I am, posting this article on medium.com. Do I never learn?

React.js evolves into browser standards

Every few years we see the next-new-hotness in web technology frameworks. No doubt, React is the current hotness.

But over the years, these frameworks have mostly been based on the same set of UI design patterns. React introduced a fundamentally different programming model to web-apps — the Virtual DOM. This allows apps to simply re-render a lightweight representation of the DOM tree whenever anything changes and the React framework code will take care of the nasty business of applying incremental changes to the displayed UI.
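
To make the model concrete, here’s a deliberately naive TypeScript sketch of the idea. The h, view and patch names are my own, and this is nothing like React’s actual reconciliation code; it’s just enough to show the shape of “describe the whole tree, let the library apply the diffs”:

```typescript
// Toy sketch of the Virtual DOM idea (not React's real internals): the app
// re-renders a cheap tree of plain objects on every change, and a diff/patch
// step applies only the differences to the real DOM.
type VNode = { tag: string; props: Record<string, string>; children: (VNode | string)[] };

const h = (tag: string, props: Record<string, string> = {}, ...children: (VNode | string)[]): VNode =>
  ({ tag, props, children });

function create(node: VNode | string): Node {
  if (typeof node === "string") return document.createTextNode(node);
  const el = document.createElement(node.tag);
  for (const [k, v] of Object.entries(node.props)) el.setAttribute(k, v);
  node.children.forEach(c => el.appendChild(create(c)));
  return el;
}

// Naive diff: replace a node when its type or tag changed, otherwise update
// attributes in place and recurse into children. (A real implementation also
// removes surplus children, keys lists, handles events, and so on.)
function patch(parent: Node, dom: Node | null, prev: VNode | string | null, next: VNode | string): void {
  if (dom === null || prev === null) { parent.appendChild(create(next)); return; }
  const replaced =
    typeof prev !== typeof next ||
    (typeof next === "string" ? prev !== next : (prev as VNode).tag !== next.tag);
  if (replaced) { parent.replaceChild(create(next), dom); return; }
  if (typeof next === "string") return;
  const prevNode = prev as VNode;
  const el = dom as Element;
  for (const [k, v] of Object.entries(next.props)) {
    if (prevNode.props[k] !== v) el.setAttribute(k, v);
  }
  next.children.forEach((child, i) =>
    patch(el, el.childNodes[i] ?? null, prevNode.children[i] ?? null, child));
}

// The app just describes what the UI should look like for the current state.
const view = (count: number): VNode =>
  h("div", { class: "counter" },
    h("span", {}, `Clicked ${count} times`),
    h("button", { id: "inc" }, "+1"));

// Assumes an otherwise empty <body>.
let count = 0;
let prevTree: VNode | null = null;

function update(): void {
  const nextTree = view(count);
  patch(document.body, document.body.firstChild, prevTree, nextTree);
  prevTree = nextTree;
}

update(); // initial render
document.addEventListener("click", e => {
  if ((e.target as Element).id === "inc") { count++; update(); }
});
```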

Will React be the framework of choice in 5 years time? Who knows.

But I will bet on the Virtual DOM concept being around for a long time and becoming the preferred model for the next generation of frameworks. We’re already seeing many non-React frameworks adopt the approach (e.g. Elm, Mercury) and even higher level based-on-React-but-hides-React frameworks (e.g. Om).

The issue is, the work required to update the physical DOM based on the Virtual DOM is still quite involved. There are performance considerations, dirtying of the real DOM (look in the web inspector on a React-based page and you’ll see additional IDs scattered all over the place), and effective debugging really requires a separate browser extension to map the physical DOM back to the Virtual DOM, which further slows things down.

It makes sense for browser vendors to take on this work. Browser vendors have low level control of the DOM and can perform optimizations deep in the data structures and native code to allow fast resolution of Virtual DOM to physical DOM. And they could provide native developer tools for inspection, debugging and manipulation. The interface could be thin, maybe even a single method on the DOM Element interface to apply a data structure. And a polyfill could exist that would allow graceful fallback on older browsers.
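
Speculatively, the thin interface might look something like the sketch below. To be clear, applyVirtualTree is entirely made up (no browser implements it and no spec proposes it); it just illustrates what a single Element method accepting a data structure could feel like:

```typescript
// Hypothetical only: no browser implements applyVirtualTree and no spec
// proposes it under this name. The point is that the page hands the engine
// a plain data structure and the engine does the diffing natively.
interface VirtualNode {
  tag: string;
  attributes?: Record<string, string>;
  children?: (VirtualNode | string)[];
}

declare global {
  interface Element {
    // Imagined extension point: browser-native Virtual DOM reconciliation.
    applyVirtualTree(tree: VirtualNode): void;
  }
}

// A framework (React or otherwise) would then only need to build trees and
// hand them over, instead of shipping its own reconciliation code.
const root = document.getElementById("app");
root?.applyVirtualTree({
  tag: "ul",
  attributes: { class: "todo-list" },
  children: [
    { tag: "li", children: ["Buy milk"] },
    { tag: "li", children: ["Walk the dog"] },
  ],
});

export {}; // keeps this file a module so the `declare global` block is valid
```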

At this point, React would still exist — it would just provide the component framework. One of many frameworks that could be backed by a Virtual DOM.

Virtual DOM could be a standard developed by the WHATWG community.

There’s another reason it would be appealing to become a standard — it may just save the Web Components spec. For a few years this has appeared to be floundering, with slow progress and adoption. In the years we’ve been talking about Web Components, we’ve seen React appear, grow up, and gain huge adoption. And React is just, well, err, better. Much better.

But all is not lost for Web Components. Okay, the bar has been raised by other web frameworks, but as Dion points out, Web Components have the potential to provide interoperability between components built with different technologies. If I want to include a third party color picker in my web-app, I shouldn’t be limited to the React-only ones. Web Components provide that seam — the technology used by each component should be an implementation detail of that component. Except that doesn’t work when you span the Virtual and browser DOMs. You can’t include a Polymer-based widget in the middle of a React Virtual DOM tree.

So, the Web Components and Virtual DOM communities need to figure out how this interoperability should work. Because if they don’t, Web Components won’t be able to play in a Virtual DOM world, and that will surely be the death of Web Components.

The Virtual DOM also brings some other interesting possibilities. For example, it would allow a web-app to run its UI code on a background worker, passing the Virtual DOM over a message channel when it changes. The advantage of this is the UI code is no longer on the browser rendering thread, which eliminates UI freezes.
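
As a rough sketch (the file names and message shapes are my own inventions, not an existing framework API), the split could look like this: the worker owns the state and the render function, and only plain serializable virtual-tree objects ever cross the thread boundary:

```typescript
// ---- ui-worker.ts (runs inside a Web Worker, off the rendering thread) ----
type VNode = { tag: string; props?: Record<string, string>; children?: (VNode | string)[] };

let count = 0;

function view(): VNode {
  return { tag: "button", props: { id: "inc" }, children: [`Clicked ${count} times`] };
}

// Virtual trees are plain objects, so postMessage can structured-clone them.
const workerScope = self as unknown as Worker;
workerScope.onmessage = (event: MessageEvent<string>) => {
  if (event.data === "increment") count++;
  workerScope.postMessage(view());
};

// ---- main.ts (runs on the UI thread, stays tiny) ----
// const worker = new Worker("ui-worker.js");
// worker.onmessage = (event) => applyToRealDom(document.body, event.data); // diff + patch happens here
// document.addEventListener("click", () => worker.postMessage("increment"));
```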

Related to all of this is the syntax required to build a Virtual DOM. JSX is syntactic sugar added to JavaScript to enable this. The logical step here is to propose this to the TC39 community as an ECMAScript standard. Stranger things have happened.
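
For anyone who hasn’t looked under the covers, JSX is a thin layer. The snippet below uses React’s names, since that’s what today’s compilers target, and shows a JSX expression alongside roughly what it compiles down to; a TC39 standard would essentially bless syntax like the first form at the language level:

```tsx
import React from "react";

const name = "world";

// With JSX, the sugar most React developers write:
const withJsx = (
  <div className="greeting">
    <h1>Hello, {name}</h1>
  </div>
);

// Without JSX: roughly what the compiler emits today.
const withoutJsx = React.createElement(
  "div",
  { className: "greeting" },
  React.createElement("h1", null, "Hello, ", name)
);
```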

Apple will release self-hosted enterprise iCloud alternative

Remember when the iPhone came out and everyone thought it would never be able to compete with Blackberry in the enterprise sector? Well, Apple adapted, bringing in a whole host of enterprise management tools. Each year we see fewer Blackberrys and more iPhones being used in the workplace.

Over the past few years, iCloud has been fighting to gain traction over alternatives like Google Drive (Docs), Dropbox, Microsoft OneDrive, and others. Tight integration with iOS / OS X and simplicity for app developer adoption help, but they need a bigger trick up their sleeve.

And they found one — privacy.

Unlike much of the competition, Apple are going out of their way to architect their system to keep your data private, including from government agencies waving subpoenas. Apple are moving towards an architecture where even if they wanted to look at your data, they can’t.

Just look at their privacy policy. It’s refreshingly readable, reassuring, and transparent. Right down to how they handle government information requests.

But still…

You know what makes companies feel even safer than privacy policies about their cloud hosted data? Keeping the data themselves on premise. And if you want to do that, you don’t turn to Apple, you turn to Microsoft.

The cloud is growing, but the need for on-site data is still alive and kicking. Not just for privacy, but also for regional laws, regulatory compliance and mission-critical reliability.

So, I predict Apple will release an on-premise alternative to iCloud for corporate IT departments. iCloud-enabled apps (not just Apple’s, but third party too) will be allowed to store data on site with minimal effort.

Perhaps the biggest challenge is providing a platform to run it on. Their attempt at building a data center friendly server was a flop and was discontinued. OS X Server has minimal usage and is unlikely to ever be taken seriously as long as it has to run on funny-looking hardware like cylinders and slices of toast that lack rack mountability, serial-port management consoles, redundant power connectivity, etc.

If Apple were smart (and they are), I would expect to see the first incarnation of this as a virtual image that can be deployed to existing physical infrastructure and plugged into existing enterprise storage solutions.

Microsoft will open source EdgeHTML rendering engine

Microsoft will continue their open sourcing streak by releasing the core EdgeHTML rendering engine, to compete against Mozilla’s Gecko, Chrome’s Blink and Safari’s WebKit.

The latest indicator that this will happen is that Microsoft recently open sourced ChakraCore — the JavaScript engine that powers Edge.

So why do we care? Well, we already have many other rendering engines available, and the others are cross-platform. Edge can only really compete on standards compliance and performance. The result is it may raise the bar for all the engines.

Lawsuit against SourceForge

SourceForge. Once the shining light on the hill for the developer community. A home for the open source community to collaborate on projects and distribute to the world. It seems like such a long time ago now. How the mighty have fallen. As the rest of the internet moved forward, SourceForge became stale.

Just try downloading something from SourceForge. The downloads page has become infested with misleading ads masquerading as download buttons, just ready to install the latest crapware on an unsuspecting visitor. Sure, the more savvy knew which was the legit download button (hint: usually the smallest and hardest to find), but the savvy weren’t their target.

Slowly SourceForge drove the users away.

Oh, they were well aware of the issue. In 2013 they acknowledged “from time to time, a few confusing ads show up”. They claimed progress (1, 2).

Let’s see that progress shall we…

Here’s a selection of buttons I grabbed from the SourceForge downloads page today! Not one of these is a genuine download link.

Okay, so we understand why credible open source projects want to get away from SourceForge.

But then what happened?

After the highly popular GIMP project had enough and walked away from SourceForge… something else happened:

The situation became worse recently when SourceForge started to wrap its downloader/installer around the GIMP project binaries. That SourceForge installer put other software apart from GIMP on our users' systems. This was done without our knowledge and permission, and we would never have permitted it.
(from GIMP developers mailing list)

Yep, SourceForge started modifying the installers to include their own revenue generating tools on unsuspecting end users’ machines. Ouch!

SourceForge’s response: the project wasn’t hijacked, it was abandoned. They stepped in and took control of the project, locking out the original authors, and modified the installers. How kind!

So, if you dare to leave SourceForge, you now know what might happen.

A few weeks later, they pretty much stood by their message, and then, after a backlash, conceded that maybe this wasn’t the best policy.

Here’s a better account of the full GIMP story.

And it wasn’t just GIMP. A similar series of events happened to VLC media player (previously one of SourceForge’s most popular projects).

And NMAP (a frickin’ security diagnostics tool!).

It got so bad that Google Chrome even started blocking SourceForge pages.

Over the years, SourceForge has just continued to anger the community. As it approaches boiling point, I’m expecting someone to flip their shit and take it to court. We’ll see…

GitHub Pages: Free and seamless HTTPS for all

I host nearly all my websites on GitHub Pages.

It’s easy.

It’s free.

It supports custom domains.

It fits my workflow (git push!).

It’s reliable.

It’s easy to undo mistakes, because all config and content is stored in git.

It’s backed by a global CDN resulting in faster load times across the world and resilience to DoS attacks.

It provides full transparency of service outages, which are rare.

For these reasons, I believe GitHub is on track to become one of the world’s most excellent static web hosts.

But… enabling HTTPS on your GitHub pages site is painful.

We’re living in an era where HTTPS isn’t just a nice-to-have or something critical for credit card sites; it’s essential. We can’t trust our connections any more — there are all kinds of nasties just waiting to snoop on and manipulate your content, from governments, to your ISP, to the malicious wifi access point you just accidentally connected to in the coffee shop.

Beyond that, HTTPS enables us to do more. Many features are coming in HTTP/2, which browsers only support over encrypted connections. New browser features like Service Workers, and potentially others, will require HTTPS. Even search engine rankings are influenced by HTTPS.

GitHub Pages provides the essentials for web hosting, and HTTPS is the missing essential.

It’s possible to make your site look like it’s using HTTPS using CloudFlare. It’s free, but it’s awkward to set up and doesn’t provide true end-to-end encryption — it’s still possible for somebody to snoop or man-in-the-middle between CloudFlare and GitHub’s CDN.

Ok, so how could GitHub do this?

There are two key things that make this possible:

  1. Server Name Indication (SNI): this allows an HTTPS server to serve multiple virtual hosts, each with its own certificate, from a single IP (a minimal sketch follows after this list)
  2. Let’s Encrypt: a free and automated certificate authority (CA). This would allow GitHub to obtain a new certificate automatically whenever a new GitHub Pages site is activated.
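
To make the SNI half concrete, here’s a minimal Node.js sketch in TypeScript. The domains and certificate paths are placeholders, and this is certainly not how GitHub or Fastly would actually wire it up; it just shows a single IP serving different certificates depending on the hostname the client asks for:

```typescript
// Minimal sketch: one HTTPS server, many custom domains, each with its own
// certificate (e.g. issued automatically by Let's Encrypt). The file paths
// and domains below are placeholders.
import * as tls from "tls";
import * as https from "https";
import * as fs from "fs";

const certs: Record<string, tls.SecureContext> = {
  "example.com": tls.createSecureContext({
    key: fs.readFileSync("certs/example.com.key"),
    cert: fs.readFileSync("certs/example.com.crt"),
  }),
  "blog.example.org": tls.createSecureContext({
    key: fs.readFileSync("certs/blog.example.org.key"),
    cert: fs.readFileSync("certs/blog.example.org.crt"),
  }),
};

const server = https.createServer(
  {
    // Called with the hostname from the TLS ClientHello; return the matching context.
    SNICallback: (servername, cb) => {
      const ctx = certs[servername];
      cb(ctx ? null : new Error(`unknown host ${servername}`), ctx);
    },
  },
  (req, res) => {
    res.end(`static content for ${req.headers.host}\n`);
  }
);

server.listen(443);
```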

Note: GitHub actually outsources the content hosting to an external CDN provider — Fastly. This was a good move for GitHub as it allows them to focus on what they’re good at, but it also ties their hands a little. This would have to be done as a collaboration between the two companies. I’m not underestimating the work — it’s complicated.

But does it make business sense to GitHub? I don’t know. GitHub has the chance to become the premier hosting provider — if that’s what they want to be, this is essential. But maybe that’s not their business model.

They also have the opportunity to monetize this, for example by making it a feature of paid plans only.

SQL Server on Linux, with free edition and NoSQL features

That’s free as in beer, not as in speech.

According to Hal Berenson, back in the ’90s Microsoft seriously considered porting SQL Server to UNIX platforms such as Solaris. At the time it didn’t make business sense.

So what’s changed?

Well… again… the cloud. While Windows still has considerable market share in corporate IT environments, it has relatively little traction in internet-facing data centers. The tight coupling between Windows and SQL Server that used to work in Microsoft’s favor is now a hindrance. On one front SQL Server is still fighting an age-old battle against Oracle (which supports Linux, by the way). On another front SQL Server is competing with open source relational engines like PostgreSQL and MySQL (or whatever fork of MySQL is trendy these days). And now there’s the specialized NoSQL battle with Riak, Cassandra, Redis, MongoDB, Couchbase, ElasticSearch, Neo4j, etc. This is a tough fight for Microsoft outside the corporate IT world.

In 2015, Microsoft put everything on the line to reinvent .NET as an appealing platform to develop your next SaaS startup on. They open sourced pretty much all of .NET, brought the platform to Linux and OS X, embraced technology like Node, Docker and NGINX, and even open sourced the latest incarnation of Visual Studio (which, by the way, is built on open source technology like Node and Electron, and runs beautifully on Linux and OS X). What’s missing from this picture? Yeah, SQL Server.

Ok, so how would this work technically?

SQL Server is itself a large suite of tools including the core relational engine, management tools, clustering / high availability, reporting services, management / performance / monitoring dashboards, OLAP engine, enterprise integration, business intelligence, ETL pipelining tools, etc. It’s big.

But guess what? The typical cloud deployment doesn’t care about most of this stuff. It cares about a rock-solid core storage engine, high query and transactional throughput, ease of use, and the ability to easily integrate into an existing automated deployment scenario (whether that’s Docker containers, Ansible playbooks, Ubuntu packages, AWS CloudFormation templates, whatever).

And at the core of SQL Server is the relational engine, which provides exactly that. Once you peel back all the enterprise layers, there’s a robust, mature and fast engine just begging to be used. And that engine is already available standalone in the form of the lean and mean SQL Server Express. Currently this is Windows only, but the task of porting just the core engine to Linux (and possibly OS X) is considerably smaller than porting the entire suite.

So, I expect we’ll see a Linux version of the core relational storage engine and enough wiggle room to fit it into existing deployment infrastructure. Like SQL Server Express, the basic version will be free, leaving room for the paid “Enterprise Cloud” features in the future.

And on top of that, Microsoft are likely to start attracting the NoSQL crowd by exposing lower level high performance storage APIs such as key-value storage.

In case you missed the many “follow” buttons, follow me on Twitter for no reason.

Thanks to Pexels for the squirrel.

