Neural network output and explainability

I’ve got a background in recommender systems, a field largely predicated on making data sexy enough to sell goods, and one of its major principles is the explainability of the generated results.

Techniques such as collaborative filtering lend themselves naturally to telling the story of why certain products are being suggested to a user. Not so with neural networks: by design they’re black boxes, built to crunch massive datasets into predictions or classifications. We never really see how the sausage is made; we just run the algorithms and trust in their wisdom, even though the inner workings were never meant for human eyes.
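To make that contrast concrete, here’s a minimal sketch of item-based collaborative filtering producing a “because you liked…” explanation. The toy ratings matrix, item names, and the like_threshold parameter are all illustrative, not taken from any real system.

```python
import numpy as np

# Toy user-item ratings matrix (rows = users, columns = items, 0 = unrated).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 5, 0],
    [1, 0, 1, 5],
    [0, 1, 0, 4],
], dtype=float)
items = ["Running shoes", "Running socks", "Sports watch", "Cast-iron pan"]

def cosine_sim(a, b):
    """Cosine similarity between two item rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def recommend_with_reason(user_idx, like_threshold=4):
    """Pick the unrated item most similar to something the user rated highly,
    and return that liked item as the human-readable explanation."""
    user = ratings[user_idx]
    best = None  # (similarity, candidate item, liked item)
    for j in np.where(user == 0)[0]:                   # items the user hasn't rated
        for k in np.where(user >= like_threshold)[0]:  # items the user liked
            sim = cosine_sim(ratings[:, j], ratings[:, k])
            if best is None or sim > best[0]:
                best = (sim, j, k)
    sim, j, k = best
    return (f"Suggest '{items[j]}' because you liked '{items[k]}' "
            f"(item-item similarity {sim:.2f})")

print(recommend_with_reason(0))
# -> Suggest 'Sports watch' because you liked 'Running socks' (item-item similarity 0.76)
```

The recommendation and its justification fall out of the same computation: the nearest rated neighbour that drives the suggestion is also the story you tell the user. A neural net trained on the same data offers no comparably direct path from its weights to a sentence like that.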

This is why I find MIT’s The Car Can Explain! project so intriguing. The ambition is to make sense of the neural net computations inside self-driving cars by analyzing the on-board log files generated during trips.

I’m really keen on seeing where the team goes with this.
