Michael Nestor, Founder of LifeTracker, Explains the Nature of AI

Michael Nestor, founder of LifeTracker

AI is one of the hottest subjects these days, but what kind of technology should be classified as an AI project? Michael Nestor, founder of LifeTracker, an AI-based service that helps you plan your life more efficiently, explains how AI works in his blog on Medium. With Michael's permission, we republish the text of his blog post here.

Statistics and Neural Networks

The rise of AI is connected not so much to breakthroughs in science as to the increase in computing power per square millimeter. The algorithms used today in the field of machine learning existed long before. AI technology is based on two approaches:
1. Neural networks
2. Statistical methods
… and various combinations.

The statistical approach is often called Big Data. That is how recommendation engines work: right after you buy a book on Amazon, you see recommendations for your next purchase, based on similarities between your buying patterns and those of customers who made purchases before you. In other words, your profile is compared to other customer profiles.
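
To make the idea concrete, here is a minimal sketch of similarity-based recommendation in Python. The purchase data and the scoring rule are invented for illustration; production engines like Amazon's use far richer signals and models.

```python
from collections import Counter

# Toy purchase histories; user and item names are made up for illustration.
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_b", "book_d"},
    "carol": {"book_c", "book_e"},
}

def recommend(user, k=2):
    """Suggest items bought by the users most similar to this one."""
    mine = purchases[user]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # shared purchases = crude similarity
        for item in theirs - mine:     # items this user does not own yet
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("alice"))  # ['book_d', 'book_e'], weighted by profile overlap
```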

The same pattern holds for trends in general: on the basis of past behavior and data (for example, transactions), it is possible to make projections into the future. The same methods are often applied to develop patterns for making business decisions, usually via clustering or regression. Entire open source libraries have been created for these purposes (for Python, R, Matlab, etc.), and all you have to do is collect a large amount of useful data, find the right methods of processing and analysis, and configure them. Vast expertise in this type of analysis is the main asset of the recommendation engines of Amazon, Netflix and other large online retailers.
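
As a small illustration of those two workhorse methods, here is a toy example using scikit-learn, one of the open source Python libraries mentioned above. The transaction data is invented; in practice the real work is collecting the data and choosing and configuring the methods.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Toy transaction data per customer: [average purchase value, purchases/month].
X = np.array([[10, 1], [12, 2], [90, 8], [95, 9], [50, 4], [48, 5]])

# Clustering: group customers into segments without any labels.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)  # e.g. [0 0 1 1 0 0]: low spenders vs. high spenders

# Regression: project a trend (here, spend as a function of frequency).
model = LinearRegression().fit(X[:, 1:2], X[:, 0])
print(model.predict([[6]]))  # projected spend for 6 purchases per month
```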

The neural network works in a somewhat sexier way. Its principle of operation is similar (to put the reductionist theory crudely) to the work of neurons in the human brain, which is why this area is so exciting for the majority of tech professionals. The theory is available on Wikipedia or via Google. The most important thing about neural networks is that no one knows exactly what is happening inside them, yet anyone can teach them something; even you can do it! The bottom line is that you feed such a system a large number of labeled examples over a long time. For example, here is a picture with the digit nine, here is a picture with a nine handwritten slightly differently, and so on. The system divides the image of the nine into a certain number of areas ("cells", for easy understanding), and an artificial neuron is connected to each of these areas. Every time a neuron is activated at the right moment (for instance, when it "reads" a nine), the system remembers it, gradually adjusting the weights of the connections between neurons. As a result, a system with a large number of such elements can, through massive repetition, learn to remember any combination of stimuli.
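
To show how approachable this really is, here is a minimal sketch that trains a small neural network on labeled handwritten digits with scikit-learn. The dataset and the network size are chosen for brevity, not realism.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digits: each image is 64 "cells", one input per cell.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# One hidden layer of neurons; training adjusts the connection weights.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print(net.score(X_test, y_test))  # typically around 0.95 on this toy dataset
```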

All kinds of "recognition", whether of text, speech or images, are based on neural networks.

The new era of AI came with the concept of deep learning. If we strip away all the "mist", the esoterics and the speculation around deep learning, as well as the popularization of DeepMind and the ideas of Yann LeCun, deep learning is nothing more than neural networks arranged in several layers/stages.

For example, you need a separate network to recognize a cat's ear, a separate network to recognize a leg, other parts of the body, and so on. Together these networks operate on different levels of abstraction, starting from lines/patterns, through the details, up to identifying the object as a whole, and naturally they work better together than a single neural network. Of course there are plenty of other implementation details beyond the ideology, and, in addition, every company has plenty of its own developments, shaped by its specific tasks.
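
Here is a sketch of the "several layers" idea in PyTorch. Each stage stands in for one level of abstraction (lines, then parts, then the whole object); the layer sizes are arbitrary and the model is untrained, so this only illustrates the architecture, not a working cat detector.

```python
import torch
import torch.nn as nn

# Each convolutional stage learns features at a higher level of abstraction:
# edges/lines first, then parts (an ear, a leg), then the whole object.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level lines
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # final decision: cat or not
)

image = torch.randn(1, 3, 64, 64)  # one random 64x64 RGB "image"
print(model(image).shape)  # torch.Size([1, 2])
```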

What a person without a tech background needs to understand about this topic is that AI technology is applicable to narrow and very specific tasks (I do not agree with the admirers of the latest AlphaGo), such as recognition of speech, images, text and data patterns. A learning cycle for each class of object needs several hundred thousand cores (processors) and lasts several weeks. Of course, easy tasks will take a week or two with the help of your laptop, but in general, training neural networks demands great computing power and time. After that, you can connect different request interfaces to the already trained network (or its copies), and the network will give fairly accurate answers to your questions (if they were represented in the training material).
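
This split between expensive training and cheap querying can be sketched like so, reusing the toy scikit-learn digits network from the earlier example (the file name is arbitrary):

```python
import joblib
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Train once (the expensive part), then freeze the network to disk.
digits = load_digits()
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
net.fit(digits.data, digits.target)
joblib.dump(net, "digits_net.joblib")

# Any interface (web API, chatbot, desktop app) can now load a copy of the
# trained network and answer queries cheaply.
trained = joblib.load("digits_net.joblib")
print(trained.predict(digits.data[:1]))  # answers only what it was trained on
```
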
And here we approach the most interesting part of AI.

Artificial Intelligence is intelligence in the first place

To avoid complex academic explanations, we can define AI as a system capable of generating behavior to achieve its own goals. You can add to this the ability to set goals, though it would be more precise to call it the ability to set sub-goals, to do task decomposition. For instance, humans cannot choose whether to want to survive or to eat; neither can machines: they cannot make decisions beyond the bigger goals they are programmed for.

The neural network is categorically unsuitable for pure goal-setting. These methods work only for peripheral sensory perception. And you cannot upload the entire world, with its whole ontology, all the objects of the environment (each in tens of thousands of copies and marked with the appropriate tag), not to mention dynamic behavior. At the very least, no one could label all that data. Part of the problem is solved with the help of the Internet; that is how the famous IBM Watson learned. But only part of it, as there is still not enough computing power available to process most such tasks.

That concludes the part of the AI concept that can be found in any popular blog or leading technology outlet like TechCrunch. Now I would like to describe cognitive architectures and the codification of human behavior.

A cognitive architecture is an attempt to simulate the entire cognitive process: from perception, to modeling the world, to making decisions (according to the system's objectives, which are updated in real time), to generating behavior. For instance, drones are able to work autonomously using specific cognitive architectures. This is what potentially allows a system to make decisions that have no reference in past experience. Modeling the human decision-making process is even more difficult. Usually it is big research centers at large universities, and the military, that experiment with this type of modeling. The topic is too specific, and a bit too complex, for a small overview.
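
As a rough illustration only, the skeleton shared by many cognitive architectures is a perceive-model-decide-act loop. Everything in this sketch (the Agent class, the battery rule, the goal format) is an invented toy, not any specific architecture:

```python
# A minimal perceive-model-decide-act loop, the common skeleton of many
# cognitive architectures. All names and rules here are toy stand-ins.

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.world_model = {}  # the agent's internal picture of the world

    def perceive(self, observation):
        self.world_model.update(observation)  # update beliefs from sensors

    def decide(self):
        # Pick an action that, according to the model, serves the goal.
        if self.world_model.get("battery", 1.0) < 0.2:
            return "recharge"  # a sub-goal temporarily overrides the main goal
        return f"move_toward:{self.goal}"

    def act(self):
        return self.decide()

drone = Agent(goal="waypoint_7")
drone.perceive({"battery": 0.15, "position": (3, 4)})
print(drone.act())  # 'recharge': behavior depends on the modeled state
```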

There are not many publications on the second component of AI, because AI touches two different human emotions. The first component is very popular in the media: when you hear about the victory of a machine over a human in chess or a quiz show, you feel delighted. On the other hand, when you read about algorithms, for example those built into Facebook, that can predict your behavior and reactions even several months in advance, your enthusiasm dies down and you get scared. But the most typical reaction to this second component of AI is not fear but aggression. No one on earth wants to realize that he is not unique and that his behavior can be predicted for many days to come.

How AI is used to design products

In general, what is called traditional AI now? I think the media have influenced how AI is perceived by labeling as AI everything related to ML (machine learning): technologies such as NLP (natural language processing), neural networks and most of Big Data (when investors reduce their investments in Big Data, it starts being called AI). Also, any solution that can perform some human function is very often filed under AI. For example, this is how x.ai, a personal assistant that schedules meetings for you, works. Regarding that last example, ask yourself: what kind of behavior does the system generate? Does it have any unexpected ingredient? Does such a system have to make a decision? Wouldn't you like to criticize such "AI technology"? But formally, the system does generate some type of behavior. It accepts your request, looks through the data (in calendars), and informs you of a suitable time for the meeting. That is beneficial. But there are no neural networks or deep learning here... unless it speaks to you.
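
To see why, here is a sketch of how such a scheduler could work with plain rules and no ML at all. The calendars, working hours and slot logic are all invented for illustration; I have no knowledge of x.ai's actual implementation:

```python
from datetime import datetime, timedelta

# Toy calendars: lists of (start, end) busy intervals for each participant.
busy = {
    "you":   [(datetime(2024, 1, 8, 9), datetime(2024, 1, 8, 12))],
    "guest": [(datetime(2024, 1, 8, 13), datetime(2024, 1, 8, 14))],
}

def first_free_slot(day, duration=timedelta(hours=1)):
    """Scan working hours and return the first slot free for everyone."""
    slot = day.replace(hour=9)
    while slot + duration <= day.replace(hour=18):
        if all(not (start < slot + duration and slot < end)
               for intervals in busy.values() for start, end in intervals):
            return slot
        slot += timedelta(minutes=30)
    return None

print(first_free_slot(datetime(2024, 1, 8)))  # 2024-01-08 12:00:00
```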

This brings us to a fairly simple test for any "potentially AI" startup or product. Answer these questions: does the system generate new behavior that was not predesigned? Is the system able to recognize incoming data (from the environment, from a user), adapt to the user's behavior or environment, and learn (that is, change its behavior from day to day)? If the answer is "yes", it is potentially an AI project.


Symbiotic AI

LifeTracker is based on an understanding of AI technology's potential as a complement to a human, not a substitute: a system that can understand humans and learn from them, but does not act entirely on their behalf. We love to talk about Enhanced Intelligence, or Augmented Intelligence. There is no magic here. Your diary remembers more than you do. Does it have intelligence? Hardly. Does it have the properties of memory? Definitely. Your watch keeps time better than you do. Do you consider the watch more intelligent than you? Hardly. But there is one task, counting time, that it performs better than you. All these tools extend your capabilities. And while a watch is a pretty trivial device by today's standards, to manage more specific aspects of our lives we need new, more sophisticated tools. These tools will have the properties of symbiotic AI: they will complement you, understand you, and control the environment around you to your advantage.

Examples of the fields where AI/ML are used

Natural user interfaces (or hybrid interfaces) as an addition to existing services and devices, based on speech, text (chatbots), or a mix with the GUI. Neurointerfaces are even more interesting. Siri belongs to this category and is a clear example (though at this point I always like to say that Siri can do ten times less than it could 8 years ago; that is why the creators of Siri left Apple and created viv.ai with the support of the same investors that backed them with Siri).

Adaptation of the interface, remembering choices. This is a very uncommon and still undeveloped area. Just imagine that you use the same copy of Outlook and Word at your desk for five years without interruption. Five years later, the buttons and features that you never use are still visible in the interface, and the features you repeatedly have to hunt for in the menus still have not appeared as visible buttons, icons or shortcuts. I would buy a plug-in that adapts the interface for me, as I do not like configuring anything manually (I do not work on Linux like many developers; by the way, in 2015, according to Stack Overflow, macOS was named the most preferred OS among developers, ahead of Linux, for the first time).
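
Such a plug-in could start from something as simple as counting clicks. A minimal sketch (the command names and the visible/hidden split are invented):

```python
from collections import Counter

# Track how often each command is invoked; promote the most-used ones.
usage = Counter()

def record(command):
    usage[command] += 1

def adaptive_toolbar(all_commands, visible=3):
    """Show the most frequently used commands first; tuck the rest in a menu."""
    ranked = sorted(all_commands, key=lambda c: -usage[c])
    return ranked[:visible], ranked[visible:]

for cmd in ["paste", "paste", "find", "paste", "find", "bold"]:
    record(cmd)

toolbar, menu = adaptive_toolbar(["bold", "find", "paste", "macros", "mailmerge"])
print(toolbar)  # ['paste', 'find', 'bold']: the UI adapts to real usage
```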

Content recommendations. Curated content is very trendy right now, so if you figure out how to offer people something they are really interested in, you will make a significant impact on the world. This applies both to editorial content (created by professionals) and to user-generated content.

Product recommendations. Intelligent e-commerce solutions work in the following way: predict customer behaviour and suggest the relevant goods to the customer.
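
A minimal sketch of the "predict, then suggest" step, using logistic regression on invented customer features; a real system would use far more signals and data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per customer: [pages viewed, items in cart, past purchases].
X = np.array([[2, 0, 0], [15, 1, 2], [30, 3, 5], [5, 0, 1], [25, 2, 4]])
y = np.array([0, 1, 1, 0, 1])  # did the customer buy? (invented labels)

model = LogisticRegression().fit(X, y)

# Score a new visitor and decide whether to surface a recommendation.
new_visitor = np.array([[20, 2, 3]])
print(model.predict_proba(new_visitor)[0, 1])  # estimated purchase probability
```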

Proactive behavior of services and devices. For example, when Google Now or the iPhone's calendar tells you to leave for the next meeting half an hour early because of a traffic jam. You certainly input a lot of data into your devices (at least the time and place of the next meeting, as well as your current location), but they still generate a "wow effect" as well as real benefit to the user.
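
Stripped of the data-gathering magic, the core calculation can be as simple as this sketch (the traffic factor and safety margin are invented; real services pull live traffic data):

```python
from datetime import datetime, timedelta

def departure_alert(meeting_time, base_travel_min, traffic_factor):
    """Suggest when to leave, padding the usual travel time for traffic."""
    travel = timedelta(minutes=base_travel_min * traffic_factor)
    buffer = timedelta(minutes=5)  # small safety margin
    return meeting_time - travel - buffer

meeting = datetime(2024, 1, 8, 15, 0)
# Usual drive: 20 min; live traffic data (here invented) says 1.5x slower.
print(departure_alert(meeting, 20, 1.5))  # 2024-01-08 14:25:00
```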

Automation of support functions, call centers, sales. Replace people in monotonous functions.

Customer behaviour analysis. Sentiment analysis. Analysis of mentions. Analysis of real behavior and its causes, the search for correlations. This is a tricky area: it is easy to slip back into ordinary statistics, which big business has been doing for years without you.

Automation of research, data collection and analysis. Build the next Palantir, but even smarter, or a similar solution for a narrow niche: learn how to automatically process archives of sales data, illness or crime statistics to predict similar events from incomplete data patterns.

The above are only a few examples, not to mention robotics, standalone devices (drones, cars) and IoT (the same devices with sensors and the ability to generate behavior, for example, to turn on the lights or your favorite music).

Good luck!