IBM teams up with MIT to create AI that can hear and see the way humans do

When we hear a sound nearby or watch a scene unfold, we can easily understand what is happening and predict what will come next. That is a trivial task for humans, but not for autonomous robots driven by artificial intelligence: today's AI systems struggle to make sense of the world around them in this way. The good news is that IBM and MIT have launched a multi-year partnership aimed at creating artificial intelligence that understands audio and visual data the way humans do.

MIT's Department of Brain and Cognitive Sciences and IBM Research are forming the IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C), which will focus specifically on the problems of computer vision and machine audition. MIT will conduct the research, while IBM will provide technology and expertise from its Watson cognitive platform.

“In a world where humans and machines are working together in increasingly collaborative relationships, breakthroughs in the field of machine vision will potentially help us live healthier, more productive lives,” said Guru Banavar, Chief Scientist, Cognitive Computing and VP at IBM Research. “By bringing together brain researchers and computer scientists to solve this complex technical challenge, we will advance the state-of-the-art in AI with our collaborators at MIT.”

As IBM notes, it is virtually impossible today for a computer to understand what happened in an event and predict what will follow, so pattern recognition and prediction will be among the lab's biggest challenges. An AI that can quickly summarize and anticipate events could be useful in a wide range of settings, from a mechanic repairing a machine to a health care worker caring for an elderly patient.
