Latest in AI
Like Mammals, Like Machines, Brain Imaging and More!
“AI is the science and engineering of making intelligent machines, especially intelligent computer programs”
— John McCarthy, Father of Artificial Intelligence
AI encompasses an extensive body of computer science concerned with understanding, reasoning and learning. It contributes to a wide variety of fields, including philosophy, computer science, psychology, mathematics and biology, and has become dominant in areas such as gaming, Natural Language Processing (NLP), speech recognition and expert systems.
Here is our pick of the top 5 research developments in Artificial Intelligence over the last few weeks, presented crisply and concisely.
Like Mammals, Like Machines:
Have you ever seen a machine that moves as freely as an animal? Mammals navigate the world with such effortless agility that the intricacy of the underlying processes is easy to underestimate. Spatial navigation is still an enormous challenge for robots and artificial agents, whose capability in this area is far surpassed by that of mammals. Research on mammalian grid cells by researchers at DeepMind, however, has changed this perception.
A grid cell is a type of neuron present in the brains of many species that allows them to understand their position in space. The ability to mimic the navigational capabilities of animals could boost the technology behind a lot of systems, from drones to self-driving cars. These results are evidence that imitating brain-like algorithms could lead to more powerful machine learning tools. This could mean that more lifelike AI systems are much closer on the horizon than once thought.
Shirts and watches that track health issues:
A group of researchers from the University of Waterloo has found a futuristic way to tackle health problems. They merged AI with wearable technology (such as sensor-equipped shirts) in a system called the Hexoskin, which uses data from wearable sensors to evaluate changes in aerobic responses and detect the onset of health issues such as respiratory or cardiovascular disease. They found that health-related benchmarks can be predicted very easily this way, making it possible to continuously monitor a person’s health for discrepancies even before they realize they need help. The research created a means to process biological signals and defined parameters to determine fitness. This study is the first to exploit AI with wearable sensors during independent activities of daily living, and it could have a considerable impact on the well-being of the population. It is a prominent example of multi-faceted research into how AI can be pivotal in the health industry by predicting health issues in an individual at an early stage of a condition. The team plans to test Hexoskin on mixed age groups as well as on individuals with existing health issues to further examine how wearing it may help in diagnosis.
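To make the idea of monitoring aerobic responses concrete, here is a minimal sketch of how continuous sensor data might be screened for discrepancies. This is purely illustrative: the function, thresholds and sample readings are invented for demonstration and are not taken from the Hexoskin study.

```python
# Hypothetical sketch: flagging unusual readings in a wearable heart-rate
# stream by comparing each reading to a rolling-average baseline.
# Window size and threshold are invented values, not from the study.

def flag_anomalies(heart_rates, window=5, threshold=15.0):
    """Return indices of readings that deviate from the rolling baseline."""
    anomalies = []
    for i in range(window, len(heart_rates)):
        baseline = sum(heart_rates[i - window:i]) / window
        if abs(heart_rates[i] - baseline) > threshold:
            anomalies.append(i)
    return anomalies

readings = [72, 74, 71, 73, 72, 75, 110, 74, 73]  # one sudden spike
print(flag_anomalies(readings))  # -> [6], the index of the spike
```

A real system would of course account for context (exercise, sleep) before flagging a reading, but the core idea of comparing signals to a personal baseline is the same.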
Likelihood of dropping out of college:
Jade Software data scientists have built a machine learning tool to predict the likelihood that a student will drop out of university, using 15 years of student data, including the distance of a student’s residence from the university, age, grade point average, mode of payment for their study, means of enrollment, and so on. It has been tested in New Zealand (one of the countries with the lowest course-completion rates) and Australia, where it was found to be 92% accurate. The tool can play a major role in cautioning universities that a particular student has a considerable chance of dropping out, so that the concerned authorities can intervene. While this is a new application of predictive analytics, the tool is fundamentally the same as those used by companies such as Flipkart and Amazon to predict consumers’ buying habits, or by Netflix to predict the kind of shows viewers are likely to watch. One significant drawback of this model is that the parameters on which students are evaluated may gradually change over time, so a model trained in a preceding year may not carry over to the current year. The findings can be passed on to the university to analyze potential drop-outs at an individual level, helping it offer appropriate support to those students.
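The kind of model described above can be sketched as a simple classifier over student features. The real Jade Software tool and its exact features are not public, so everything below is an assumption for illustration: a tiny hand-rolled logistic regression trained on invented data using three of the feature types the article mentions.

```python
import math

# Purely illustrative: a toy logistic-regression "dropout risk" model.
# Feature names and all numbers are invented, not from the actual tool.
# Each row: (distance_km, age, grade_point) -> 1 if the student dropped out
raw = [
    ((50.0, 19, 2.1), 1), ((5.0, 20, 3.6), 0),
    ((80.0, 24, 2.4), 1), ((10.0, 18, 3.2), 0),
    ((60.0, 22, 2.0), 1), ((8.0, 21, 3.8), 0),
]

def norm(x):
    d, a, g = x
    return (d / 100.0, (a - 18) / 10.0, g / 4.0)  # scale to roughly [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, lr=0.1, epochs=2000):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in rows:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

rows = [(norm(x), y) for x, y in raw]
w, b = train(rows)

def dropout_risk(features):
    x = norm(features)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(round(dropout_risk((70.0, 23, 2.2)), 2))  # far from campus, low GPA
print(round(dropout_risk((6.0, 19, 3.7)), 2))   # close to campus, high GPA
```

On this toy data the far-from-campus, low-GPA student scores a much higher risk than the nearby high-GPA one, which is the kind of ranking a university could use to prioritize outreach.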
Life Expectancy after heart failure:
A team of researchers from the University of California, Los Angeles (UCLA) has built an algorithm that can foresee which heart patients will survive a heart transplant and estimate their life expectancy, allowing doctors to make more personalized assessments and potentially reduce health care costs through more efficient use of resources. The algorithm, named “Tree of Predictors”, uses data parameters such as blood group, age and Body Mass Index (BMI), and was trained on 30 years of data. It was found to provide better predictions than algorithms developed by other research teams. The technique is modeled on human thinking: its salient feature is that numerous alternative models are worked out for the same problem, taking into account the variability of each patient. The Tree of Predictors approach can be applied broadly to observations from medical and other complex databases, to recognize handwriting, to detect fraudulent credit card usage, and to predict the popularity of news items.
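The core intuition, fitting different predictors to different patient subgroups rather than one global model, can be sketched very loosely as follows. This is only a caricature of the idea: the grouping rule, features and outcome numbers are invented, and the published algorithm is considerably more sophisticated.

```python
# Loose, illustrative sketch of per-subgroup prediction: route each
# patient to a subgroup, then use that subgroup's own simple predictor.
# All data and the age split are made up for demonstration.

# (age, bmi) -> years survived after transplant (invented data)
patients = [
    ((35, 22.0), 14.0), ((40, 24.0), 12.5), ((38, 23.0), 13.0),
    ((62, 29.0), 6.0),  ((68, 31.0), 4.5),  ((65, 30.0), 5.0),
]

def subgroup(features):
    """Route a patient to a subgroup; here, a single age split."""
    age, _bmi = features
    return "younger" if age < 50 else "older"

def fit(patients):
    """Per-subgroup predictor: the mean outcome within each subgroup."""
    sums, counts = {}, {}
    for x, y in patients:
        g = subgroup(x)
        sums[g] = sums.get(g, 0.0) + y
        counts[g] = counts.get(g, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

model = fit(patients)
print(model["younger"], model["older"])  # younger subgroup predicted to survive longer
```

The published method grows a tree of such splits and fits a full predictive model at each node, but even this two-group version shows why tailoring predictors to patient subgroups can beat a single one-size-fits-all model.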
Ultrasound technology to produce real-time images of the brain:
Brett Byram and his research partners at Vanderbilt University have used ultrasound technology to produce real-time images of the brain showing which areas are stimulated by particular thoughts and feelings, creating an effective way for people to control robotics just by thinking. Scientists have anticipated such advancements for decades, but they were unattainable until recently because ultrasound beams tend to rebound around inside the skull. Current methodologies for imaging the brain are rudimentary: electroencephalography, which measures the electrical activity in the brain, cannot look deep into the brain and therefore produces only surface-level images. Byram says he wants to incorporate machine learning into electroencephalogram technology so that doctors can visualize not only brain perfusion (how blood flow corresponds to changes in thought) but also areas of stimulation corresponding to movement and emotion. The applications of this research are boundless. At a fundamental level, it could allow for images clearer than those doctors are used to seeing (of the heart, brain or womb, for example). Beyond studying brain activity, the researchers hope the system can eventually replicate brain signals and thus be unified with software, artificial limbs and other types of robotics, turning ideas into actions.
That’s all for this edition. The advancements are so rapid that progress seems to move at the speed of light. Stay tuned for the next article in this series!
(This Article was authored by Research Nest’s technical writer, Nivedha Jayaseelan)