The pictures of flowers above were generated by the StackGAN algorithm from text descriptions given as input.

Where will AI likely have an impact in the next 5 years?

One of the key areas on an exponential growth curve is artificial intelligence, particularly the kind known as “Deep Learning”. Since the rebirth of Deep Learning neural networks in 2012, the following advances have been made:

- Image recognition accuracy improved from a mere 70% to around 97%, a little higher than human accuracy at roughly 96%.

- Speech recognition accuracy has improved tremendously; commercial speech recognition is now approaching near-perfect performance.

- Applications of image recognition to medical imagery, from skin cancer diagnosis to diabetic retinopathy, have consistently shown huge potential, with accuracy matching or exceeding that of expert pathologists.

- Applications of Deep Learning to self-driving show promise across the core problems of autonomous cars, from lane detection and traffic-sign reading to accident prediction.

- Training and prediction can now be done in the cloud or on local mobile devices, thanks to advances in GPU engineering by Nvidia.

- Every major tech company that matters now has an AI group or is looking to acquire one.

- Every 3 months a new framework comes out that makes training an AI much easier.

- Humans lost to AlphaGo.

- The hottest topics in the AI research community have moved beyond image and pattern recognition, which are well understood and ready for commercial application. Generative Adversarial Networks (GANs) are the new black magic: you feed in a text description and the network will draw a photorealistic image. Speech synthesis is now reaching a level that is nearly indistinguishable from human speech. The idea that computers are limited to “uncreative work” may not hold true for much longer. (A minimal sketch of the adversarial training idea follows this list.)
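
To make the “black magic” a bit more concrete, here is a minimal sketch of the adversarial training loop behind GANs, written in PyTorch. It pits a tiny generator against a tiny discriminator on a toy one-dimensional distribution rather than on text-to-image data, which needs far larger models and paired text/image datasets; all layer sizes and hyperparameters here are illustrative assumptions, not values from any published system.

```python
# Minimal GAN sketch on a toy 1-D distribution (illustrative only).
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (probability in [0, 1]).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from N(4, 1), the distribution we want to imitate.
    real = torch.randn(64, 1) + 4.0
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

The same tug-of-war between generator and discriminator, at a much larger scale and conditioned on text embeddings, is what lets systems like StackGAN draw the flowers shown above.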

Given all the advances made in the very short period since Deep Learning was adopted into mainstream research and industry (barely 5 years), what can we expect to happen in the next 5 years?

The first area where we can expect a lot of impact is image recognition tasks of any kind. One field we are currently working on is facial recognition as a method of payment authorization, which has shown incredible improvement since the adoption of Deep Learning. On the standard face recognition benchmark, “Labeled Faces in the Wild”, state-of-the-art methods now stand at around 99.7% accuracy, compared to human accuracy of about 97.5%. Every month a new method beats the previous one, so in face recognition we really are approaching a “superhuman” level. There is no reason we cannot make the same advances with other biometric methods, such as fingerprint recognition, iris recognition and palm recognition. All the methods you might have thought were “finished” some time ago are probably waiting for a rebirth with Deep Learning, and would likely make quantum jumps in accuracy. This will allow unprecedented accuracy in 1:N person identification, especially when multiple biometric signatures are combined (this point is vital: combining multiple signals lowers the false acceptance rate by several orders of magnitude, as the sketch below illustrates).
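
A back-of-the-envelope sketch of why combining signals helps: if an impostor must independently pass every biometric check before a payment goes through, the combined false acceptance rate is roughly the product of the individual rates. The numbers below are made-up illustrative figures, and independence between modalities is an assumption.

```python
# Illustrative sketch: combining independent biometric checks.
# The individual rates are made-up example numbers, and the calculation
# assumes the modalities fail independently of each other.
far_face = 1e-4         # false acceptance rate of the face matcher
far_fingerprint = 1e-5  # false acceptance rate of the fingerprint matcher

# If an impostor must pass BOTH checks, the combined false acceptance
# rate is approximately the product of the individual rates.
far_combined = far_face * far_fingerprint
print(f"Combined FAR ~ {far_combined:.0e}")  # ~ 1e-09: several orders of magnitude lower
```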

The other area, which I am equally excited about but not yet working on, is medicine. Recent research in skin cancer classification, diabetic retinopathy and breast cancer has consistently shown super-expert ability at diagnosing from images. The field of pathology is about to be revolutionized, as is the entire field of radiology, which is based primarily on diagnosing from images. The technology may reach the level seen in many sci-fi films, where you simply lie in a general scanning machine that runs all kinds of diagnostic tests based on a combination of sensors and gives back the probabilities of having certain diseases. The key thing is that Deep Learning tends to work really well whenever it is emulating our senses, and there is no reason we will not have an AI-enabled, always-on stethoscope for people in heart disease risk groups. We are really just at the beginning.
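
For illustration, this is roughly what a first-cut diagnostic image classifier looks like in practice: take a network pretrained on ordinary photographs and fine-tune its final layer on labelled medical images. The two-class setup, the random stand-in data and the ResNet-18 backbone are assumptions for the sketch, not details of any of the studies mentioned above; a real clinical model needs curated data, careful validation and regulatory review.

```python
# Minimal transfer-learning sketch for an image-based diagnostic classifier.
# NOTE: the data here is random stand-in data so the example runs end to end;
# a real system would use a DataLoader over labelled scans and proper evaluation.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: e.g. benign vs. malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # train the head only
criterion = nn.CrossEntropyLoss()

# Stand-in batch of "scans" (random tensors) and labels, purely for illustration.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```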

The impact will not come directly from the technology itself, but from how the technology can be applied in ways that drastically alter how we do things. AI-enabled medical devices could mean you never really have to go to the doctor: your doctor is always with you. AI-enabled face and biometric identification does not just mean you will not need keys to get into your house; it means your house will know you, your desk will know you, your car will know you. Everything you interact with will know you and configure itself to your personal preferences. AI-enabled speech recognition is not going to end at “Siri, turn up the volume”; it will enable new kinds of interfaces in ways we have not yet seen (just imagine what a hotel reception might look like if speech and chat get just a little better).

We are entering an era in which, from now on, everything will either be intelligent or be put in a museum, alongside horse carriages, candles, typewriters and steam locomotives.