Machine learning will be the most exciting tech trend of 2018: Persistent Systems CTO

Persistent Systems
Persistent Insights
5 min read · Dec 21, 2017

Dr Siddhartha Chatterjee, Chief Technology Officer at Persistent Systems, believes that machine learning will be the dominant technology trend in the coming year, even as he remains cautious about the future of cryptocurrencies such as bitcoin.

Below are edited excerpts from an interview with Moneycontrol:

Of the 2017 technology trends that you predicted at the beginning of the year, which one, in your opinion, was the most widely adopted?

All of the trends that we predicted have generally been adopted widely over the course of the year. Based on our experience with clients and projects in 2017, I would say that machine intelligence (Machine Learning and Deep Learning) has been adopted into mainstream projects at a more accelerated pace than initially anticipated.

On the consumer front, we saw widespread adoption of voice-enabled smart devices like Amazon Alexa, Google Home, and Apple HomePod. The quality of voice interaction is now good enough for these devices to serve as virtual assistants in smart homes. The usage of bots linked to popular messaging platforms like Facebook and Slack has grown, with many businesses using them for better customer engagement.

On the flip side of the adoption story, the enterprise security lessons seem to not have been learned as well as we could hope. As the boundaries of the enterprise continue to blur, people remain the weakest link in the chain.

Which technology trend excites you the most as we move forward into 2018?

We are still finalising the technology trends we will call out for 2018, but it is pretty clear that machine learning is going to be the most exciting, dominant, and pervasive trend in both the consumer and enterprise spaces.

Cryptocurrencies, especially bitcoin, have certainly gained mainstream recognition this year, probably because of their sky-high valuations. How do you see this rise in value panning out in the coming year?

We are in the very early stages of cryptocurrency and blockchain-enabled transformation. Whether justified or not, the rise of valuations this year is largely unrelated to fundamentals or to the potential of any cryptocurrency. It is driven by a mix of diverse factors: fear of missing out, hedging against the decline of certain fiat currencies, misinformation about certain cryptocurrencies, etc. Over the next year we will see regulators step in to regulate the exchanges, but some of the volatility will still continue due to the introduction of futures trading.

In the long run, some of these currencies and tokens will continue to exist and enable new applications and others will not.

Human machine interactions have also gained momentum this year. Where does this stand right now?

Millennials are playing a key role in the rise of chatbots, with almost 60 percent of them saying they have used chatbots at least once, according to a study by Retale. Customer-facing industries such as retail, banking, telecom, and utilities are adopting this at a rapid pace.

While digital assistants like Siri (Apple) or Cortana (Microsoft) are primarily used as organisational tools, “external” smart speakers like Alexa and Google Home are primarily designed for consumer use. As with the first generation of smart watches, a lot of the interest in this technology is around the “cool factor”, and the real applications are few and far between.

While there are 15,000+ Alexa skills, it is unclear how many of them are actually of any use (I own one of each device, and their use tailed off considerably after the initial excitement wore off). This will follow a normal maturity curve as the platforms stabilise, consolidate, and improve in interoperability.

How do you see human-machine interaction working in India, with its diverse accents and languages and differences in connotation within different regions using similar words?

In general, this is a difficult problem. Let's divide it into its two natural pieces: speech recognition and synthesis (STT/TTS), and natural language processing (NLP) / natural language understanding (NLU). Speech recognition has largely been solved by the leading platform players such as Google, IBM, Microsoft, Apple, and Amazon; their engines are robust, with good support for major Indian languages. TTS engines are similarly robust.

The more difficult problem is that of NLP/NLU, primarily because of the large amount of context, semantics, and ambiguity inherent in human communication.

Consider a concrete example, where a user asks “दिल्ली का weather कैसा है?” to a conversational agent.

In order to answer this question, the system would presumably have to access weather information for Delhi from a source such as weather.com. Note (at least) two complicating factors: first, the smooth code-switching back and forth between Hindi and English; and second, a simple transliteration of दिल्ली would produce DILLI rather than DELHI. It is not that techniques for dealing with many of these problems are unknown; it is that, when it comes to Indian languages, the solutions do not exist in robust packages that are easy to consume (unlike, for example, in the case of European languages).
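The two complicating factors above can be sketched in a few lines of Python. This is a toy illustration, not a real system: the transliteration table is a deliberately naive subset, and the gazetteer is a hypothetical two-entry lookup standing in for a real place-name database.

```python
# Toy illustration of the code-switched query "दिल्ली का weather कैसा है?".
# NAIVE_TRANSLIT and GAZETTEER are illustrative assumptions, not real resources.

# A naive character-level Devanagari -> Latin table (toy subset).
NAIVE_TRANSLIT = {"द": "d", "ि": "i", "ल": "l", "्": "", "ी": "i"}

def naive_transliterate(word: str) -> str:
    """Transliterate character by character -- no knowledge of place names."""
    return "".join(NAIVE_TRANSLIT.get(ch, ch) for ch in word).upper()

# A gazetteer mapping Devanagari place names to conventional English spellings.
GAZETTEER = {"दिल्ली": "Delhi", "मुंबई": "Mumbai"}

def resolve_place(word: str) -> str:
    """Prefer the gazetteer; fall back to naive transliteration."""
    return GAZETTEER.get(word, naive_transliterate(word))

query = "दिल्ली का weather कैसा है?"
tokens = query.rstrip("?").split()
# Code-switch detection: tag each token by script (Devanagari block U+0900-U+097F).
langs = ["hi" if any("\u0900" <= c <= "\u097F" for c in t) else "en"
         for t in tokens]

print(langs)                          # mixed Hindi/English tokens
print(naive_transliterate("दिल्ली"))  # DILLI -- the wrong spelling
print(resolve_place("दिल्ली"))        # Delhi -- correct, via the gazetteer
```

The sketch shows why a robust pipeline needs more than transliteration: script detection handles the code-switching, and an entity lookup (gazetteer) recovers the conventional English spelling that naive transliteration misses.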

Despite these problems, two indicators suggest that the issues are surmountable if there is the will to do so. First, the example of Baidu in China has shown that the problem can be solved at scale. Second, startups such as liv.ai are beginning to address these problems in the Indian context, which is a ray of hope.

India saw quite a lot of discussion around privacy this year, linked primarily to its biometric ID database (Aadhaar). As that discussion matures, how do you see its effect on new emerging technologies? The context here is how cultural perceptions of privacy shape the understanding of the concept in the digital world.

The debate around Aadhaar has brought citizen data privacy to the forefront. Historically, this had not emerged as a pressing issue even through the e-commerce boom. We expect the Government of India to come out with a data privacy and security policy that applies to both government and private enterprises that handle consumer data and sensitive information.

We need a policy that governs the collection, storage, use, sharing, and deletion of such data. Also, the impending introduction of the EU's GDPR (General Data Protection Regulation) in 2018, while not specifically India-centric, will expose Indian citizens and enterprises to certain ideas around privacy (such as the "right to be forgotten") that will further shape the discussions, policies, and implementations around this topic.

Where do you see India contributing most in the emerging technologies of 2018?

India will contribute across the board in the emerging technologies of 2018. For a very interesting outside-in view of this trend, see Thomas Friedman’s recent op-ed piece in the New York Times.

As I have picked machine learning as the most exciting trend for 2018, let me talk about contributions from India in that space. We are seeing a two-fold contribution here: a significant number of start-ups working across domains including banking and finance, HMI, healthcare, retail, and insurance; and specialised ML/AI labs set up within larger corporations, which are rolling out general-purpose ML platforms such as Nia from Infosys and Ignio™ from TCS.

Smaller outfits that offer customised service solutions to clients with data-analytics requirements are also widely adopting new open-source technologies and platforms. Overall, the machine learning community in India is very vibrant and is contributing in multiple ways to widespread technology adoption.

Originally published at www.moneycontrol.com.
