The 4 key points to take away from this year’s TensorFlow 2018 Summit

Mark Bugeja
Apr 16, 2018 · 5 min read

In this post I am going to highlight some of the current difficulties in the industry and how the keynote addresses certain issues that affect us as developers and researchers.

Premise #1 — TensorFlow hasn’t been around for long; in fact, it started back in November of 2015. With a limited number of Artificial Intelligence (AI) developers, getting responses to even basic questions can be problematic. Furthermore, due to the frequent updates and changes in the base code of TensorFlow, a number of tutorials have not yet been updated to the latest version.

Keynote address #1 — The first speaker in this year’s keynote, Rajat Monga, TensorFlow Director at Google, presents a set of statistics on the number of developers and researchers who have downloaded and presumably used TensorFlow. Although this seems trivial, it has multiple implications. A stronger open-source community means further advances in the technology, as well as better forums where more members are able to answer questions that new or advanced users might have. It also implies a larger number of developers starting their journey into AI. From personal experience, we are now seeing more developers posting questions or contributing answers to questions on TensorFlow. Although the problem of blogs and tutorials not being updated to the latest TensorFlow version persists, it is very common to find comments on those blogs posted by other developers who have solved the issue. In most cases, simply googling the issue will turn up the solution.

Keynote TensorFlow Summit 2018

In addition, the TensorFlow team has released a number of resources for different levels of expertise in machine learning (ML) to get new and veteran TensorFlow developers going. The main resources mentioned include the new blog, the YouTube channel and a crash course on ML.

Premise #2 — When developing a system that uses ML to solve a task, we traditionally train the model and store it on some cloud service. Although this process works, it comes with a certain set of disadvantages. TensorFlow Lite is not a new product, but in this year’s summit the TensorFlow team has optimised it further to enhance usage with Internet of Things (IoT) devices.

Keynote address #2 — More optimisation of TensorFlow Lite means more IoT devices using ML. TensorFlow Lite converts a trained ML model into a compact, smaller file using FlatBuffers to optimise for speed and file size. This enables usage within Android and iOS apps, as well as support for the Raspberry Pi.
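As a rough sketch of that conversion step, assuming a TensorFlow 2.x environment (the converter API has moved around since the 2018 summit; today it lives under tf.lite, and the toy two-layer model here is just a stand-in for any trained Keras model):

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in for a trained Keras model (training omitted).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert the model into the compact FlatBuffer format used by TF Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Run inference with the lightweight interpreter, as a mobile or
# IoT device would — no server round-trip involved.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 8), dtype=np.float32))
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])
```

On a phone or Raspberry Pi, only the small `.tflite` file and the interpreter ship with the app, which is where the size and latency gains come from.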

TensorFlow Lite Architecture

The reason this is extremely desirable is the set of advantages it offers in terms of latency, availability, privacy and cost. When running ML models on the device, you reduce the need to be connected to a network in order to send requests. Consequently, this also reduces latency, as we no longer need to wait for a response, so we can provide better support for real-time data processing. Moreover, with the privacy issues we have been seeing for the past couple of days, by using TensorFlow Lite there is no need to pass information to a server anymore, as the ML model works on the device itself. Finally, this also implies that we do not need a server farm to process information with the ML model, thus reducing overall cost. Do these advantages come at a price? Well, the short answer is yes, albeit a different one depending on the case. At the end of the day, adding an ML model to your application implies that your application will have a larger file size. Having said that, hardware memory is getting cheaper, so is it really an issue?

Premise #3 — As a developer working in AI, if you were building an application using JavaScript and working with ML, one of the available solutions was to use deeplearn.js. This library offered a number of functionalities, such as the use of pre-built ML models. The only problem was that if you were porting a desktop application you would need to retrain your model or use a web-service solution.

Keynote address #3 — In this year’s summit the TensorFlow team announced the integration of JavaScript as one of the newly supported languages in TensorFlow. Its predecessor, deeplearn.js, already provided ML functionality and integration with front-end development; TensorFlow.js is its successor.

TensorFlow.js flow

This has huge implications, as we do not need to install any drivers or libraries to run ML in the browser. We can also take advantage of the support that TensorFlow.js offers for WebGL, which is used for 2D and 3D graphics rendering and can be further supported with GPU acceleration. With a number of pre-trained models available, as well as the ability to port saved models, TensorFlow.js facilitates the development of web applications that use ML and work both on desktop and in the browser. We are sure to see a number of unique and entertaining applications in the near future. For more in-depth details on how to start, I highly recommend you look at the following link.

Premise #4 — AI is not a new field; in fact, it has been around for decades. ML gained prominence as our hardware became more powerful, so we could handle more calculations and work with larger data sets. Since AI has only been gaining traction for the past couple of years, it is sometimes very difficult to find experts in the field. For a small business or developer looking to use ML to solve a problem, it can be daunting at times, especially if you do not find a pre-trained model that fits your expectations.

Keynote address #4 — AutoML has been around for some time. The idea behind it is the development of ML models that automatically configure their layers and structure to find the optimal architecture to solve a task. In the past years we have seen a breakthrough, with AutoML now achieving results that are better than human-constructed models in the field of computer vision.

Accuracies of NASNet and state-of-the-art, human-invented models at various model sizes on ImageNet image classification.

This means that developers looking to use ML to solve a problem can take advantage of this platform to auto-tune a model for their needs. Although it is currently limited to vision problems, we can easily expect this to change for the better in the near future.
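The NASNet architectures that came out of this AutoML research are also available off the shelf in tf.keras, so you can try them without the search step. A minimal sketch, assuming a TensorFlow 2.x environment (`weights=None` builds the untrained architecture; `weights="imagenet"` would additionally download the pretrained ImageNet checkpoint):

```python
import tensorflow as tf

# Instantiate the NASNet-Mobile architecture found by neural
# architecture search; it expects 224x224 RGB images and, with the
# default classifier head, predicts the 1000 ImageNet classes.
model = tf.keras.applications.NASNetMobile(weights=None)

print(model.input_shape)   # (None, 224, 224, 3)
print(model.output_shape)  # (None, 1000)
```

NASNetMobile is the smaller of the two published variants; `tf.keras.applications.NASNetLarge` trades size for the higher accuracy shown in the figure above.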

For me, these were the key areas and points that give us an insight into where Google and TensorFlow are going to expand their efforts in the next couple of months. I hope you enjoyed reading this blog post, and I look forward to any comments on the summit that you submit down below :)


Researcher at University of Malta working on #AI, #ML, #VR and Transport. Co-organiser at #GDGMalta @mkbugeja