Some 2015 tech predictions

Joe Walnes
Dec 27, 2014

‘Tis the season to make irrational predictions about what we’ll see next year. Here are mine.

I appreciate that some of these ideas sound a bit crazy at first, but please go on to read the rationale behind each one. If after that you still think I’m on crack, it’s all good — we can still be friends. Also, if you see something that’s inaccurate, please leave a comment. You know the drill.

Update Jan 2016: Looking back on these, I got four pretty much spot on, one sorta right, and two totally wrong. Read on…

Apple open-sources Swift compiler, open-source server-side ecosystem emerges

Apple has already made significant contributions to the open-source community, most notably around WebKit and LLVM/Clang. These are BIG things, and Apple has shown years of dedication to them with well-run projects.

The architect of Swift is Chris Lattner. He created LLVM as a research project and was later hired by Apple, where he continued work on LLVM and released Clang. Although I’ve never met Chris, everything I’ve read that he’s written shows me that he “gets” open source and communities. Swift is his baby — I believe he’ll do everything in his power to open-source it.

But what about Apple trying to block this? Well, there’s no reason for them NOT to open-source it. There is little competitive advantage in keeping something like Swift proprietary. Apple are heavy users of open source themselves, and the more community there is around Swift, the better it is for them.

So what exactly would Apple open-source? Well, just the language compiler and a few core runtime classes. It would be decoupled from the Objective-C runtime classes, iOS, OS X, Xcode and anything else Applish. It would look a lot like Clang: a standalone language frontend that parses source and emits IR, which is fed into the LLVM backend to create executable binaries or libraries.

What’s the use of being able to code in Swift if we can’t use all the libraries? Well, that’s where the open-source community pitches in. Just as we saw Google’s V8 runtime give life to Node.js (which not even the V8 team saw coming), I expect we’ll see frameworks, event loops and a broad ecosystem of Swift libraries appear at a rapid pace. Because there are no Apple dependencies, these will be easy to port to many platforms, including Linux, Windows, FreeBSD, even Android and Windows Phone.

But why do we care about another language? Because Swift is actually a pretty awesome language. It strikes a good balance: easy to use, friendly to IDEs and tooling, easy on the eyes, a blend of functional and OO paradigms, and it performs well. Sure, there are many things that aren’t as “pure” as in other languages, but it’s hard to fault the overall pragmatism. In addition, whether we like it or not, Swift will quickly gain a large following from iOS developers, and this gives them an easy path to places beyond the phone.
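
To give a flavor of that blend, here’s a tiny self-contained sketch (illustrative only, not from any real project) mixing value types, protocols and functional-style collection operations:

    // A value type conforming to a protocol, plus functional-style pipelines.
    struct Track {
        let title: String
        let durationSeconds: Int
    }

    protocol Playable {
        func play()
    }

    extension Track: Playable {
        func play() { print("Playing \(title)") }
    }

    let tracks = [
        Track(title: "Intro", durationSeconds: 90),
        Track(title: "Main Theme", durationSeconds: 240),
        Track(title: "Outro", durationSeconds: 45),
    ]

    // Filter, map and reduce over immutable values.
    let longTitles = tracks.filter { $0.durationSeconds > 60 }.map { $0.title }
    let totalSeconds = tracks.reduce(0) { $0 + $1.durationSeconds }

    print(longTitles)    // ["Intro", "Main Theme"]
    print(totalSeconds)  // 375
    tracks.forEach { $0.play() }

Nothing in it touches UIKit, Foundation or any other Apple framework, which is exactly the slice of Swift that makes sense to open-source.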

Update Jan 2016: Nailed it. At Apple’s WWDC conference in June 2015, Swift 2.0 was announced along with the intent to open source it. In December, Apple followed through by launching swift.org, and the Swift source appeared on GitHub.

We see large companies open sourcing projects like this all the time, but there’s a big difference between publishing the source code online under an open-source license and actually building an engaging open community. I have to admit I was expecting the former from Apple, but they blew us all away with how well they did it.

They published the entire source history (30,000+ commits), right back to Chris Lattner’s initial commit in 2010. They embraced the full GitHub community model, using GitHub issues and pull requests as the canonical way of contributing to the project. They genuinely opened it up for anyone to contribute, and we immediately saw new language proposals, APIs, performance improvements, etc, being collaborated on, and even pioneered, by the outside community. The code was well documented and easy to build with standard open-source toolchains — it felt familiar and welcoming.

Oh yeah, and out of the box it worked on Linux too, acting as a showcase of portability to other *nix-y platforms.

Within just a few weeks, we saw the server side app frameworks starting to emerge, such as Perfect, Taylor, and uhhh Tailor.

Internet Explorer starts using an open-source rendering engine (like WebKit)

Heh. This would be a serious WTF if it really happened. But really, it could…

Microsoft has traditionally had a “not invented here” mentality. If they didn’t write it, it didn’t exist.

But look at what “New New Microsoft” has done (or announced) recently:

  • Open-sourcing of Core .NET libraries, the CLR, ASP.NET and the C# compiler
  • Open-source contributions moved to GitHub under MIT license
  • Cross-platform .NET (Linux / OS X) now officially supported. Joined forces with Mono
  • Embracing tools like jQuery, Node.js, NGINX, libuv, Docker, Apache Cordova, OpenCV, etc, and in some cases making significant contributions back
  • Azure is as much about Linux as Windows these days
  • Now heavily involved in web standards committees, collaborating with counterparts at Mozilla/Google/Opera/Apple etc
  • Visual Studio 2015 to include Clang/LLVM support so it can build iOS and Android projects
  • Office suite releases for iOS and Android

The old days of Microsoft are over. Satya thinks different. Microsoft’s “not invented here” syndrome is no more.

So why do I think IE will replace its rendering engine (Trident, which has been used since IE4) with another engine (e.g. WebKit, Blink or Gecko)? Simple: because they want IE to be the best browser, and they’re no longer held back by the previous “not invented here” culture.

By switching to an existing rendering engine Microsoft can reduce development costs, focus more effort on the browser user experience, bring IE to more platforms, and comply with more web standards. This is the New New Microsoft. They know they’re the underdog these days and brute-force is no longer the answer — they need to think like a startup.

Update Jan 2016: Got this one so wrong. Something even better happened. Let’s go back a little…

It was the end of the road for Internet Explorer and the Trident rendering engine (at least I got that bit right). Microsoft continued with its reinvention, open-sourcing many more products and embracing core technology from outside Microsoft.

But they weren’t done with their browser and rendering engine yet. In April 2015 at Build, they announced their new browser, Microsoft Edge, to replace poor old Internet Explorer, and along with it EdgeHTML to replace the Trident rendering engine.

I gotta admit, when I heard this announcement I was initially a little skeptical. For years we’ve been watching Microsoft demos of how the next generation of their technology is going to save us, only to be disappointed when we try it for real. IE has been no exception. Was this going to be yet another disappointment cycle? It wasn’t.

The Edge user interface itself is fast, snappy and clean. If I regularly used Windows I would use this as my default browser. I look forward to this being available on other platforms one day.

The rendering engine, EdgeHTML, cleaned house. It’s designed for the modern web. It dropped decades of old quirks and IE-specific features (ActiveX, Silverlight!) and brought in many modern features supported by the competing engines. Although many features are still missing (it’s early days), the ones it does support it does incredibly well, and the performance can sometimes leave the competition behind.

Oh yeah, and one final whammy on this… In December 2015, Microsoft announced that the JavaScript engine in Edge will be open-sourced. And they followed through. This looks to me like a step toward open-sourcing the entire EdgeHTML rendering engine. I’ll be following this…

Google Code shutdown

Firstly, this is not about the backing source control system Mercurial being less popular than Git. Bear with me for a moment…

When Google Code added Mercurial support, Mercurial and Git were roughly equal in popularity. Git was more functional, but Mercurial was a lot simpler to use. In fact, almost everyone I spoke to at the time preferred Mercurial and honestly I thought it was going to be the winner (so let that be a lesson about listening to my predictions). Project hosting sites that had typically used centralized source control systems like CVS or SVN scrambled to add Git and Mercurial support (including Google Code).

Then GitHub happened. They realized that it’s not just the source control system that should be decentralized, but every aspect of the project. Projects could be forked with a single click, pull requests created and tracked, network graphs explored. It created an organic and discoverable open-source ecosystem, the likes of which we never saw on Google Code, SourceForge, etc. Anyone could explore ideas in existing projects without having to gain committer access. It was magical.

GitHub might just as easily have decided to bet on Mercurial instead. I believe that if that had happened, Mercurial would be the most widely used system today. Bitbucket did something similar for Mercurial and did pretty well, but GitHub always had the lead.

It was the project hosting sites that led the source control systems, not the other way round.

So, back to Google Code. It could have been something huge, and it could have made Mercurial the winner, but Google Code never grokked the importance of “social coding”. Even though the source code was decentralized, the projects themselves were still centralized. http://code.google.com itself looks like it hasn’t been updated in a loooooong time. Meanwhile, GitHub (and Bitbucket) have continually improved.

It was not Mercurial that damaged Google Code…
it was Google Code that damaged Mercurial.

Over the past two years we’ve seen Google release new open-source projects on GitHub, then existing projects starting to migrate. Recently, Go started migrating too — this is no casual move, because it affects the import paths used in a vast amount of user-created Go code and will cause build breakages. Yeah, the writing is on the wall for Google Code.

When SourceForge fell out of favor it was sold. It’s now filled with ads, especially deceptive ones on project download pages which try to trick users into downloading some malware-infested turd burner. In fact, for a while SourceForge was actively modifying genuine project releases to include spyware.

Google won’t do a SourceForge. If there’s anything we’ve learned from Google over the years, it’s that they’re not afraid of shutting down projects that don’t work out. By the way, I really respect Google for this — killing products takes guts.

Google Code — I salute you. You did well, but it’s now time to step down.

Update Jan 2016: Yep, this happened. Google handled it well and I respect them even more for making this decision. They even provided tools to migrate your project (including issues) to GitHub. A graceful shutdown.

Apple will NOT open up iPhone 6 NFC APIs

The iPhone 6 now has NFC hardware. But unlike Android, I don’t think we’ll be able to access it any time soon. Two reasons for this:

1. Revenue: Apple do not want to enable third-party payment apps that could take revenue away from Apple Pay. OK, this is a weak reason because they could enforce this in App Store guidelines and block any app that attempts it — so let’s move on to the next reason…

2. Security: There cannot be any chance of an app interfering with an Apple Pay transaction.

I believe Apple will eventually create APIs to enable use of the NFC hardware, but at a much higher level than you’d get with something like Android. It will most likely be an extension to Notification Center, where you can register for notifications of interactions with certain types of NFC terminals.
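
To make that concrete, here’s a purely hypothetical sketch of what such a high-level API could look like. None of these types exist in any Apple SDK; the names are invented only to illustrate the shape of the idea:

    import Foundation

    // HYPOTHETICAL: no such types exist in any Apple SDK.
    // The idea is that apps observe completed NFC interactions at a high level,
    // rather than driving the radio directly, keeping Apple Pay out of reach.

    enum NFCTerminalKind {
        case transitGate
        case paymentTerminal
        case doorLock
    }

    struct NFCTerminalInteraction {
        let kind: NFCTerminalKind
        let merchantName: String?
        let timestamp: Date
    }

    protocol NFCTerminalObserver {
        // Called only after the system has finished its own handling of the tap.
        func didInteract(with interaction: NFCTerminalInteraction)
    }

    final class TransitLogger: NFCTerminalObserver {
        func didInteract(with interaction: NFCTerminalInteraction) {
            guard interaction.kind == .transitGate else { return }
            print("Tapped a transit gate at \(interaction.timestamp)")
        }
    }

The point is the shape: the app is told about an interaction after the fact and never gets near the raw hardware or a payment in flight.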

(Of course, I should point out that my previous iPhone NFC prediction was completely wrong!)

Update Jan 2016: Got this right. They kept the NFC APIs to themselves and focused on higher level payment APIs. Smart move on Apple’s part. I believe they’ll continue to do this.

Google self-driving cars start a war in Washington

Just look at the shit Tesla have got from auto-dealer lobbyists.

Oh, and self-driving cars also open up areas for automated shared transportation, so that will bring in the transportation lobbyists that have been practicing on Uber.

Oh, and car safety will improve, so here come the insurance company lobbyists.

It’ll be a donkey show. Luckily, it’s Google they’ll be fighting, and Google won’t back down from a good fight. I’m rooting for Google.

Update Jan 2016: Sorta, maybe — this is going to play out over a much longer duration. No doubt, self-driving cars have been a huge topic this year. We’ve seen Tesla dip their toe in the field with a software update (wow!), and rumors that Google and Ford are collaborating. And of course, it’s been a legislation shit show, with California shunning these new cars, while Texas begins opening the door. Self driving cars are happening, but this is going to be a decade long transition. There’s a strong chance my kids may never need to learn to drive a car.

Apple Watch gives life to “intimate” social networks

I can’t remember who said this, or even the exact words, but it went something like:

“Every time a smartphone gets a new peripheral, it creates an entire industry of startups”.

This was true for accelerometers, GPS, compasses, Bluetooth LE, NFC, pedometers, etc.

Think of the Apple Watch not as a standalone device, but merely as an extension of your phone that’s better connected to your body.

With heart-rate monitoring, subtle glances at a screen, a touchscreen that detects force (I expect to see gesture recognizers for rubbing, tickling, poking, etc.), oh, and a vibrating iPhone in your pocket, you’ve got the perfect storm for the next generation of intimate social apps.
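
As a rough sketch of the raw material such apps would build on, here’s roughly how reading the latest heart-rate sample from HealthKit looks. This assumes the data is readable by a third-party app (as the update below notes, access was still limited at the time), and error handling is omitted:

    import HealthKit

    // Minimal sketch: fetch the most recent heart-rate sample from HealthKit.
    let healthStore = HKHealthStore()
    guard let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate) else {
        fatalError("Heart rate type unavailable")
    }

    healthStore.requestAuthorization(toShare: nil, read: [heartRateType]) { granted, _ in
        guard granted else { return }
        let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
        let query = HKSampleQuery(sampleType: heartRateType,
                                  predicate: nil,
                                  limit: 1,
                                  sortDescriptors: [newestFirst]) { _, samples, _ in
            guard let sample = samples?.first as? HKQuantitySample else { return }
            let bpm = sample.quantity.doubleValue(for: HKUnit.count().unitDivided(by: .minute()))
            print("Latest heart rate: \(bpm) bpm")
            // An intimate social app might share this with one close person,
            // who would feel it as a gentle tap on their wrist.
        }
        healthStore.execute(query)
    }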

We’ll see loads of them. One of them will be crazy popular for reasons we don’t yet know. Facebook will buy it for something on the order of billions of dollars.

Update Jan 2016: Got this wrong. I haven’t seen anything significant yet. There was the odd attempt, but little traction. Commenter Aaron Brager brought up a good point that WatchKit is still very limited, and while the hardware is capable of doing these things, it’s not available to app developers yet.

Uber package courier service

Over the past year or so we’ve seen Uber trying out new things, like delivering ice cream, or bringing puppies and cupcakes to your house.

At first glance this looks like some fun marketing techniques, but what if they’re testing the water for something else?

Uber have essentially built a sophisticated logistics-matching engine. Moving people around was the first step, but that can easily be extrapolated to moving objects around efficiently.

I tested this out once myself when I needed to get a product prototype across town. The easiest and quickest way I could think of was to call an Uber, leave the prototype with the driver, track the car’s progress across town so I could see when it was close to arriving, and have someone at the other end go and look for the car. It worked really well. This is a service that Uber could easily make inroads into.

There’s another big player in this space too: Amazon. But it’s a different kind of logistics. Amazon runs a hub-and-spoke model, where products ship out from centralized warehouses. Uber are smart enough not to try to compete with that — instead they can focus on peer-to-peer.

Update Jan 2016: Got it right. UberRUSH. (Okay, someone pointed out to me that this was actually announced before I published my predictions, but I honestly didn’t know about it. I’m still gonna take credit).

In case you missed the many “follow” buttons, follow me on Twitter for no reason.

Unrelated but serious looking cat photo from Pexels.
