Artificial Intelligence: The Past and Present of Deep Learning
Lei Feng Net (follow the WeChat public account "Lei Feng Net"): written by Rancho, this article covers 1) seven important stages in the development of artificial intelligence, 2) the role of deep learning in that development, and 3) the author's thoughts on the challenges and future of deep learning.
Dave Bowman: Hello, HAL, do you read me? HAL, do you see me?
HAL: Affirmative, Dave, I read you.
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
~ 2001: A Space Odyssey ~
In the past two bustling years, artificial intelligence has seen not only technical and product breakthroughs from the technology giants but also a flood of venture capital into startups; reports of new investments in the field appear almost weekly. Last spring, the match between AlphaGo, the Go-playing AI developed by Google DeepMind, and the South Korean professional Lee Sedol was the clearest signal: AlphaGo's decisive victory won new respect for AI and prompted reflection on how it will change our lives. In fact, since its birth in the 1940s, artificial intelligence has been through repeated cycles of prosperity and decline. Let us first look back at its half-century history.
| Seven chapters in the development of artificial intelligence
The origins of artificial intelligence: artificial intelligence was truly born in the 1940s and 1950s, when mathematicians, engineers, and computer scientists began exploring the possibility of an artificial brain and trying to define what machine intelligence is. Against this background, Alan Turing published his landmark 1950 paper asking whether machines can think, proposing the famous Turing test as a definition of machine intelligence: if at least 30% of human judges cannot tell the tested machine from a human after five minutes of conversation, the machine can be said to have passed the Turing test.
Figure 1: the Turing test; Alan Turing himself
The first golden age of artificial intelligence: the founding moment of AI as a field is now generally taken to be the Dartmouth Conference of 1956, where computer scientist John McCarthy persuaded the participants to adopt the term "Artificial Intelligence (AI)". The decade or so after Dartmouth was the first golden age: a large number of researchers poured into the new area, computers were applied to algebra word problems and geometric theorem proving, top universities established AI projects generously funded by ARPA and others, and some researchers even believed machines would soon be able to do all human work.
The first downturn of artificial intelligence: in the 1970s, because of bottlenecks in computer performance, the growing complexity of problems, and the lack of data, many promises went unfulfilled; applications such as computer vision, for example, could not find databases large enough or algorithms good enough to train intelligent behavior. Scholars later divided artificial intelligence into two types: strong AI, which is hard to achieve, and weak AI, which can be attempted. Strong AI would think for itself and handle general tasks; weak AI addresses single, well-defined problems, and we remain in the era of weak AI to this day. With many lines of research stagnating, project funding suffered, and AI fell into a downturn lasting several years.
The appearance of expert systems: after 1970, academia came to accept a new idea: artificial intelligence research should focus not only on problem-solving methods but also on knowledge. Thus the expert system was born. It uses digitized knowledge to reason, imitating a human expert in a particular field to tackle problems, and "knowledge engineering" became the focus of mainstream AI research. Inspired by the "knowledge engineering" theme of the 1977 International Joint Conference on Artificial Intelligence, Japan launched its Fifth Generation Computer programme, the United Kingdom its Alvey programme, Europe its Eureka programme, and the United States its Strategic Computing Initiative. Expert systems developed rapidly, producing Carnegie Mellon's XCON system and new companies such as Symbolics and IntelliCorp.
AI's second funding crisis: before the 1990s, most artificial intelligence projects were supported by government funding in research labs, so funding directly shaped the development of the field. In the mid-1980s, desktop computers from Apple and IBM began to outperform the specialized machines running expert systems; as the glory of the expert system faded, AI ran into its second funding crisis.
IBM's Deep Blue and Watson: after expert systems, machine learning became the focus of AI. Its purpose is to give machines the capability to learn automatically: algorithms let a machine learn from large amounts of historical data and make new judgments, recognitions, or predictions. At this stage IBM was undoubtedly a leader: in 1997 Deep Blue (based on exhaustive game-tree search) defeated world chess champion Garry Kasparov, and in 2011 Watson (rule-based) beat human players on the television quiz show Jeopardy!. The latter in particular touched on natural language understanding, which remains a hard problem to this day, and marked a milestone that took machines a step further in grasping human language.
The strong rise of deep learning: deep learning is the second wave of machine learning. In April 2013, MIT Technology Review listed deep learning first among its ten breakthrough technologies of 2013. In fact, deep learning is not new: it is a development of the traditional neural network (Neural Network). The two share a similar hierarchical structure; the difference is that deep learning uses different training mechanisms and has far stronger representational power. Traditional neural networks were once a popular direction in machine learning, but later faded from view because their parameters were hard to tune and training was slow.
But Geoffrey Hinton, an old professor at the University of Toronto, persisted in neural network research, and together with Yoshua Bengio and Yann LeCun (inventor of the convolutional neural network, CNN, now the most widely used deep learning model) put together a viable deep learning programme. The landmark event came in 2012, when Hinton's students greatly reduced the error rate in the ImageNet image-classification competition ("ImageNet Classification with Deep Convolutional Neural Networks"), beating the industry giant Google and causing an uproar in academia and industry alike. Beyond its academic significance, the result attracted large-scale industry investment in deep learning: in 2012, Google Brain used a 16,000-CPU-core computing platform to train a deep network with around a billion connections that learned to recognize "cat" without outside supervision; Hinton's DNN startup was acquired by Google and Hinton joined Google; another leading figure, Yann LeCun, joined Facebook as director of its AI Lab; and Baidu established its Institute of Deep Learning, headed by former Google Brain lead Andrew Ng. The technology giants stepped up their investment in AI, a large number of startups rode the deep learning wave, and artificial intelligence became a hot topic.
| The main engine of artificial intelligence: deep learning
Machine learning has gone through two stages: shallow learning (Shallow Learning), which took off in the 1980s, and deep learning (Deep Learning), which has caught fire only in recent years. The first shallow learning algorithm was the back-propagation algorithm (back propagation) for training neural networks. It is called shallow mainly because the model has only one hidden (middle) layer; the big weakness of shallow models is that their limited parameters and computing units restrict their ability to represent features.
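As a concrete picture of such a shallow model, here is a minimal sketch in pure Python (my own illustration, not from the original article): a network with a single hidden layer trained by back-propagation on the toy XOR task. The hidden size, learning rate, and epoch count are arbitrary choices.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: XOR, which no model without a hidden layer can solve.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

H = 8     # hidden units (arbitrary choice)
lr = 0.5  # learning rate (arbitrary choice)
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x1, x2):
    h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
    return sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

def loss():
    return sum((predict(*x) - y) ** 2 for x, y in zip(X, Y))

loss_before = loss()
for _ in range(5000):
    for (x1, x2), y in zip(X, Y):
        # forward pass through the single hidden layer
        h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(H)]
        o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
        # backward pass: propagate the output error to each layer
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        # gradient-descent updates
        for j in range(H):
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h[j] * x1
            w1[j][1] -= lr * d_h[j] * x2
            b1[j] -= lr * d_h[j]
        b2 -= lr * d_o
loss_after = loss()
print("loss: %.3f -> %.3f" % (loss_before, loss_after))
```

A few dozen weights suffice here, which is exactly the point: with only one hidden layer, the representational budget is small, and harder tasks quickly exhaust it.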
In the 1990s, academia proposed a series of shallow machine learning models, including the then all-the-rage Support Vector Machine (SVM) and Boosting. Compared with neural networks, these models offered real improvements in efficiency and accuracy; until around 2010 much university research still used the fashionable SVM algorithm, including my own (as a junior student of machine learning, I worked on automatic classification of Twitter text, using an SVM). The main reasons were that shallow models are simple to analyze and their training methods relatively easy to master. Neural networks were comparatively quiet during this period, and research based on neural network algorithms was hard to find at top academic conferences.
But people later found that even with more training data and more parameter tuning, recognition accuracy seemed to hit a ceiling, and much of the training data had to be labeled by hand at great human cost. Machine learning at the time involved five big steps: feature perception, image preprocessing, feature extraction, feature selection, and prediction/recognition, and the first four had to be designed by hand (after this hellish torture, I eventually decided to move on). Meanwhile, the dedicated old professor Hinton kept studying neural networks with many hidden layers; a multi-hidden-layer network is in fact the deep version of the shallow neural network, an attempt to use more neurons for richer expression. Why was this so hard to achieve? For three reasons:
1. in the BP algorithm, the back-propagated error attenuates as hidden layers are added, and the optimization often finds only a local optimum;
2. as the model grows, so does its demand for training data; without huge amounts of labeled data, a larger model only becomes overly complex;
3. with multiple hidden layers, many structural parameters, and large training sets, training requires a great deal of computing resources.
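Reason 1, the attenuation of the back-propagated error, can be shown with a back-of-the-envelope sketch (a simplification I am adding for illustration, not the article's): for sigmoid units, each layer multiplies the gradient by roughly w · σ′(z), and since σ′(z) = s(1 − s) ≤ 0.25, with weights near 1 the surviving gradient shrinks geometrically with depth.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def surviving_gradient(layers, w=1.0, z=0.0):
    """Gradient magnitude left after back-propagating through `layers`
    sigmoid units: each layer multiplies the error by w * sigmoid'(z),
    and sigmoid'(z) = s * (1 - s) is at most 0.25."""
    s = sigmoid(z)
    per_layer = abs(w) * s * (1.0 - s)
    return per_layer ** layers

# The deeper the chain, the less gradient reaches the early layers.
for n in (1, 5, 10, 20):
    print(n, surviving_gradient(n))
```

At 20 layers the surviving factor is below 10⁻¹², so the early layers of a deep sigmoid network barely learn at all, which is why naive BP training of deep networks stalled.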
Figure 2: a traditional neural network and a multi-hidden-layer neural network
In 2006, Hinton and his student R. R. Salakhutdinov published a paper in the journal Science ("Reducing the dimensionality of data with neural networks") that successfully trained a deep neural network and changed the whole landscape of machine learning; though only three pages long, every word of it now counts. The article makes two main points: 1) a neural network with more hidden layers has a stronger learning capacity and can express richer features to describe an object; 2) a deep neural network can be trained effectively through layer-wise dimensionality reduction (pre-training): the old professor's autoencoder network can quickly find a good region near the global optimum by first training each layer separately with unsupervised methods and then fine-tuning the whole network.
Figure 3: layer-wise pre-training: encoding → decoding → fine-tuning
As Figure 3 shows, pre-training a deep network proceeds layer by layer, with each layer's output feeding the next. An encoder and decoder are introduced, and each layer is trained on the error between the original input and its encoded-then-decoded reconstruction; all of this is an unsupervised training process. Finally, labeled samples are used for supervised fine-tuning. The benefit of training this way is that the model starts close to an optimal region, which gives better results.
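The per-layer unsupervised step described above can be sketched as a toy single-layer autoencoder in pure Python (the data, layer sizes, and learning rate are invented for illustration): the layer is trained only on the error between its input and the encoded-then-decoded reconstruction, so no labels are involved.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy one-layer autoencoder: 4 inputs -> 2 hidden -> 4 reconstructed.
# The training target is the input itself, so no labels are needed
# (this is the unsupervised "pre-training" step for one layer).
data = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
n_in, n_hid = 4, 2
W_enc = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
W_dec = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in)]
lr = 0.5

def forward(x):
    h = [sigmoid(sum(W_enc[j][i] * x[i] for i in range(n_in))) for j in range(n_hid)]
    r = [sigmoid(sum(W_dec[i][j] * h[j] for j in range(n_hid))) for i in range(n_in)]
    return h, r

def recon_error():
    return sum(sum((xi - ri) ** 2 for xi, ri in zip(x, forward(x)[1]))
               for x in data)

before = recon_error()
for _ in range(2000):
    for x in data:
        h, r = forward(x)
        # gradient of reconstruction error through decoder and encoder
        d_r = [(r[i] - x[i]) * r[i] * (1 - r[i]) for i in range(n_in)]
        d_h = [sum(d_r[i] * W_dec[i][j] for i in range(n_in)) * h[j] * (1 - h[j])
               for j in range(n_hid)]
        for i in range(n_in):
            for j in range(n_hid):
                W_dec[i][j] -= lr * d_r[i] * h[j]
        for j in range(n_hid):
            for i in range(n_in):
                W_enc[j][i] -= lr * d_h[j] * x[i]
after = recon_error()
print("reconstruction error: %.3f -> %.3f" % (before, after))
```

In the full procedure, each trained layer's hidden activations become the next layer's input, and the labeled data is used only at the final fine-tuning stage.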
That is the famous deep learning framework Hinton presented in 2006. When applying deep learning networks, one inevitably encounters the convolutional neural network (Convolutional Neural Network, CNN). The CNN's principle mimics the structure of neural excitation in the human visual system: individual nerve cells in the brain respond only to edges in a particular orientation, and that is exactly how CNNs extract features today. By analogy, when we hold a photo of a face extremely close to our eyes (suppose we could get very, very close), only some neurons are activated and we see only pixel-level dots; as we move the photo back, other neurons are activated and we successively observe lines → patterns → local parts → the whole face. The whole thing is a step-by-step process of obtaining higher-level features.
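The "neurons that react only to an edge in a particular orientation" idea is exactly what a convolution filter implements. A minimal sketch (my own illustration, not from the article): sliding a 3×3 vertical-edge kernel over a tiny image produces strong responses only where the left/right contrast changes.

```python
# Minimal 2D convolution (no padding, stride 1): each output unit sees
# only a local patch of the input, like a receptive field.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(image) - kh + 1):
        row = []
        for c in range(len(image[0]) - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# 5x5 image: dark left half, bright right half -> one vertical edge.
image = [[0, 0, 1, 1, 1]] * 5

# This kernel responds to left-to-right contrast (a vertical edge).
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]

for row in conv2d(image, vertical_edge):
    print(row)  # each row: [3, 3, 0] -- strong response at the edge only
```

A real CNN learns many such kernels instead of hand-designing them, and stacks convolution layers so that later layers combine edge responses into patterns, parts, and whole objects, just as in the face analogy above.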
Figure 4: the complete basic flow of deep learning
The advantages of the "deep" in deep learning (the many hidden layers) are obvious: strong feature-representation power, with the capacity to express large amounts of data; unsupervised pre-training, which saves enormous labeling effort; and, compared with a traditional neural network, layer-by-layer training that reduces the difficulty of training and eases problems such as signal attenuation. In many academic fields deep learning has delivered marked improvements over shallow learning algorithms, and researchers, like explorers who have discovered a new world, have flocked to the field; nowadays one is almost embarrassed to submit a paper that does not involve deep learning.
| Important application areas of deep learning
Deep learning first made considerable progress in image, sound, and semantic recognition, and in images and sound in particular it greatly improved recognition rates over traditional algorithms. This is easy to understand: deep learning is a brain-like algorithm for perceiving the outside world, and images and sounds are direct, natural external signals, unlike text, which is not.
Image processing: images were an early area for deep learning. Yann LeCun, another leading figure, began research on convolutional neural networks as early as 1989 and achieved results on smaller-scale (handwritten) image recognition, but breakthroughs on pixel-rich pictures had to wait until 2012, when Hinton and his students broke through on ImageNet and recognition accuracy took a great step forward. In 2014, the computer vision group led by Professor Tang Xiaoou at the Chinese University of Hong Kong developed a deep learning model called DeepID, which achieved a 99.15% recognition rate on LFW (Labeled Faces in the Wild, a very widely used face-recognition benchmark); the human-eye recognition rate on LFW is 97.52%, so at the level of academic research deep learning has surpassed human recognition.
Certainly, face recognition in real scenes is still far from satisfactory: unclear faces, lighting conditions, partial occlusion, and other factors all affect the recognition rate, so in practice it is more appropriate to combine machine recognition with human confirmation. Among domestic face-recognition companies, Zhongke Aosen, SenseTime, Face++, Linkface, and Feisou Technology are out in front; real-world use still depends on deep data accumulation in vertical segments. In emotion recognition built on face-recognition technology, ReadFace and Facethink (Facethink is an early investment of Angel Bay Ventures) are among the few startups to enter the field.
Speech recognition: speech recognition long relied on Gaussian mixture models, which monopolized acoustic modeling for a long time; despite gradual reductions in error rate, commercial-grade application remained difficult, because in practice environmental noise kept accuracy below a usable level. Only with the advent of deep learning did the recognition error rate fall by more than 30% relative to the previous best, reaching a commercially usable level. Microsoft's Dr. Dong Yu and Dr. Li Deng were the earliest practitioners of this breakthrough: working with Hinton, they were the first to introduce deep learning into speech recognition, with great success. Because speech-recognition algorithms have matured, the general recognition rates of iFlytek, Unisound, and AISpeech differ little; iFlytek is the forerunner in deployment, with broad coverage from military to civilian use, including the mobile internet, telematics, and the smart home.
Natural language processing (NLP): even now deep learning has not achieved in NLP the kind of results seen in image or speech recognition, and statistically based models remain the NLP mainstream. A typical system first extracts keywords through semantic analysis, matches those keywords, uses an algorithm to judge the function of the sentence (finding the prepared, labeled sentence closest to it), and finally returns a result to the user from a pre-built database. Obviously this is not intelligent: it only implements a search function and lacks real language ability. Apple's Siri, Microsoft's Xiaoice, Turing Robot, Baidu's Duer, and other giants' products are all pushing into intelligent chatbots, and in China the main scenario is customer service (even though customers hate machine agents and hope to reach a human straight away); in my view, no highly mature product has yet appeared on the market. The competition around Xiaoice is also quite interesting: her vision is "just talk to me", while the other competitors focus on vertical segments, which still require general chat. Personally I think that after several years of data accumulation and algorithm improvement, Xiaoice will have a certain advantage and may stand out.
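The retrieval pipeline described above (extract keywords, find the closest prepared sentence, return its canned answer) can be sketched in a few lines of Python; all the questions and answers below are invented for illustration.

```python
# Toy retrieval "chatbot": match the user's sentence against a prepared
# question bank by word overlap and return the canned answer of the
# closest match. The bank entries are made up for this example.
bank = {
    "what is deep learning": "Deep learning is a multi-layer neural network method.",
    "who beat lee sedol at go": "AlphaGo, developed by DeepMind.",
    "how does speech recognition work": "Modern systems use deep neural networks.",
}

def tokens(s):
    return set(s.lower().split())

def reply(query):
    # score each stored question by how many words it shares with the query
    best = max(bank, key=lambda q: len(tokens(q) & tokens(query)))
    return bank[best]

print(reply("tell me what deep learning is"))
```

Exactly as the paragraph says, this only implements search: it has no notion of meaning, so a paraphrase that shares no words with the stored questions would fail outright.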
Why has deep learning progressed so slowly in NLP? For voice and images, the constituent elements (contours, lines, voice frames) clearly reflect the underlying entities or phonemes without any preprocessing and can simply be fed into a neural network for recognition. Semantic recognition is very different. First, a written word is already the product of the brain's processing; it is not a natural signal. Second, similarity between words does not imply similarity of meaning, and simple phrase combinations can be ambiguous (especially in Chinese: a phrase like "absolutely never expected" could mean that a person named "Absolutely" never expected it, or simply "totally unexpected", or be the title of a film). Third, dialogue requires understanding of context, and machines need reasoning ability; human language is flexible, and much communication depends on background knowledge. It is very interesting that deep learning imitates the recognition mechanism of the human brain, yet on text, a signal already processed by the human brain, its results are far from satisfactory. Fundamentally, today's algorithms are still weak artificial intelligence: they can help humanity automate (recognition) quickly, but they still cannot understand the matter itself.
| Challenges and future development of deep learning
Thanks to the increase in computing power and the emergence of big data, deep learning has achieved remarkable results in computer vision and speech recognition, but we also see some of its limitations and problems still to be solved:
1. Deep learning has achieved good results in academia, but its help to real business activity has so far been limited, because deep learning is a mapping process: it learns a mapping from input a to output b. In a business setting, if we already have such a → b pairs, why would we need machine learning to predict them? Letting the machine discover such pairing relationships, or make predictions, from raw data by itself remains a challenge.
2. The lack of theoretical foundations, a question that troubles researchers. For example, AlphaGo won its match, but it is very hard to understand how it won or what its strategy was. Deep learning is a black box in this sense, and the training process is a black box too: how many hidden layers a network needs, how many parameters are effective, and so on have no good theory to explain them. I believe many researchers working on multi-layer neural networks still spend a great deal of time on tedious parameter tuning.
3. Deep learning requires a lot of training samples. Because of its multi-layer structure and strong feature-expression ability, a deep model has many parameters; if the sample set is too small it is hard to train, and huge amounts of labeled data are needed to avoid overfitting, where the model fails to represent the full data distribution.
4. As mentioned in the section on deep learning for NLP, current models still lack the ability to understand and reason.
The next trends in deep learning will therefore involve solving these problems. Hinton, LeCun, and Bengio, the three leaders of AI, mention the following at the end of their jointly authored paper ("Deep Learning"):
1. Unsupervised learning. Although in deep learning the success of supervised training has overshadowed unsupervised learning, learning in humans and animals is unsupervised: we perceive the world through our own observation. To move closer to a model of the human brain, unsupervised learning needs to be better developed.
2. Reinforcement learning. Reinforcement learning means learning a mapping from the external environment to behavior through trial and error, using a reward function to find the optimal behavior. In practice, as data volumes keep growing, the ability to learn from valid new data and correct the model becomes very important; deep learning plus the reward-feedback mechanism of reinforcement learning can drive a machine to study independently (the classic case being AlphaGo).
3. Understanding natural language. In the old professor's words: let machines learn to read language!
4. Transfer learning. This means migrating a model trained on large data so that it works effectively on a task with only a small amount of data, that is, applying what has been learned to a distinct but related domain. This sounds very attractive, but a well-trained model carries biases from its original training data, so efficient algorithms are needed to eliminate those errors during migration. Fundamentally, the goal is to give machines the human ability to learn new knowledge quickly.
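The trial-and-error-plus-reward loop of point 2 can be sketched with classic tabular Q-learning (a minimal toy of my own, vastly simpler than the deep reinforcement learning behind AlphaGo): an agent in a five-state corridor learns, purely from reward feedback, to walk right toward the goal.

```python
import random

random.seed(0)

# Tiny corridor: states 0..4, reward 1.0 only on reaching state 4.
# Actions: 0 = left, 1 = right. The agent learns by trial and error.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

for _ in range(300):  # episodes of trial and error
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        nxt, r = step(s, a)
        # Q-learning update: move Q(s, a) toward
        # reward + discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

policy = [0 if q0 > q1 else 1 for q0, q1 in Q[:GOAL]]
print(policy)  # the learned policy: "go right" (1) in every state
```

The reward is given only at the very end, yet the update rule propagates its discounted value back through the states the agent visited; this credit-assignment-from-delayed-reward property is what makes the mechanism suitable for games like Go.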
In the short span of less than ten years since Hinton's deep learning paper appeared in Science, the technique has brought revolutionary progress to vision and speech and set off a boom in artificial intelligence. Although it still has many shortcomings and remains a great distance from strong artificial intelligence, deep learning is the algorithm closest to how the human brain works. I believe that in the future, with improved algorithms, accumulated data, and perhaps even hardware that mimics the neurons of the human brain, deep learning will take intelligent machines even further.
Finally, let us conclude this article with Hinton's words: "It has been obvious since the 1980s that backpropagation through deep autoencoders would be very effective for nonlinear dimensionality reduction, provided that computers were fast enough, data sets were big enough, and the initial weights were close enough to a good solution. All three conditions are now satisfied."
Lei Feng Net note: this article was written by the author for Lei Feng Net; please contact the original author for reprints. Rancho is from Angel Bay Ventures, focusing on investment in artificial intelligence and robotics. He worked in Japan for ten years, studies AI and robots in depth, and likes cutting-edge technology; AI and robotics entrepreneurs of all kinds are welcome to get in touch (WeChat: hongguangko-Sir).
Originally published at pirmotion.wordpress.com on August 11, 2016.