Preview: 15 data science experiments coming to NYC Media Lab’s 2017 Summit

NYCML’17 will include the latest thinking in AI, machine learning, and computer vision from the City’s universities, media corporations and accelerators.

From NYC Media Lab’s Machines + Media Conference, hosted by Bloomberg. April 25th, 2017.

Data science is one of the key technology areas reshaping the way media is produced, distributed, monetized and consumed. NYC Media Lab’s annual Summit, coming up on Thursday, September 28th at The New School, will host several demos, startups and workshops related to data, pointing to key trends in AI, machine learning and computer vision.

In the 2016–17 season, NYC Media Lab has advanced thought leadership in this area. The Lab has produced interviews with leading data-driven executives; has managed several prototyping projects around data with Member Companies such as Verizon, Bloomberg, MLB Advanced Media and A+E Networks; and publishes weekly topic newsletters to bring the extended community the latest news, R&D and events in and around the data industry.

Earlier this year, in collaboration with Bloomberg’s Office of the CTO, NYC Media Lab produced Machines + Media, a conference focused entirely on data science applications in the media. Pulling highlights and insights from the event, the Lab published a white paper that takes a detailed look at how data science is impacting the economics, revenue and organizational structure of the media business.

Below, please preview some of the top data-related projects, prototypes, startups and workshops to be featured at the Summit. These stem from rapid prototyping experiments, the Combine accelerator, and relationships across NYC Media Lab’s university consortium and the NYC innovation community at large.

Registration for NYCML’17 is open here. We hope you will join us.

Prototypes, Startups & Demos


Geopipe

Christopher Mitchell and Thomas Dickerson; NYU Courant Institute and Brown University.

Geopipe builds immersive virtual copies of the real world automatically for architects and beyond. The platform gives users detailed models in an instant. Simply select an area in its web interface, choose the level of detail you need for your application, and download your model in industry-standard file formats.


RAICH

Catherine Schmitz; Parsons School of Design.

RAICH, short for Robust AI Chat, is an artificially intelligent chatbot who loves to chat about the history of artificial intelligence. The bot is quirky, kind of pushy, and very excited to share fun facts with anyone who is curious enough to ask her a question. She also knows that artificial intelligence is a big topic and is happy to explain terms and give definitions in addition to sharing her facts. This bot lives on several devices.


EyeStyle

Jie Feng and Svebor Karaman; Columbia SEAS.

EyeStyle turns inspiring visuals into shoppable fashion products. It leverages advanced computer vision and machine learning to power a platform that automatically extracts the visual characteristics of an image and matches them against a product database in seconds. Customers can use the app to discover products, and merchants can use its API service to add their inventory and surface it to the right customers at the right time.
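The general pattern behind this kind of visual search can be sketched in a few lines. This is illustration only, not EyeStyle's code: a real system would produce embeddings with a trained convolutional network, while here random vectors stand in for image features, and matching is a cosine-similarity nearest-neighbor lookup.

```python
import numpy as np

# Hypothetical catalog embeddings: in a real system these would come from a
# convolutional network applied to product photos; random vectors stand in here.
rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 128))            # 1000 products, 128-d features
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)

def match(query_embedding, top_k=5):
    """Return indices of the top_k catalog items most similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = catalog @ q                          # cosine similarity per item
    return np.argsort(scores)[::-1][:top_k]      # highest-scoring items first

query = rng.normal(size=128)                      # embedding of the query image
print(match(query))
```

At production scale the exhaustive dot product would typically be replaced by an approximate nearest-neighbor index, but the interface stays the same: an image embedding in, a ranked list of products out.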

Ability Project

Claire Kearney-Volpe, Serena Parr, Gabriella Cammarata, and Paul Myers; NYU Tandon and the NYU Ability Lab.

Following a commissioned NYU Tandon student research and development effort around accessibility, home entertainment and new media, the team will present inclusive UX design and prototypes for the future of TV for consumers with disabilities.

Searching the Web with Neural Networks

Rodrigo Frassetto Nogueira and Kyunghyun Cho; NYU Courant Institute and the NYU Center for Data Science.

Immersion of 3D objects inside 2D video

Bill Marino and Nicolas Thein of Uru; Cornell Tech.

Uru breaks down the walls between augmented reality and normal video. It takes in a 2D video with no depth data and uses computer vision technology, including deep learning, to automatically detect the planes inside each frame. The platform can then realistically immerse 3D models inside the video.
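To give a flavor of the geometric step involved, here is a minimal sketch of fitting a dominant plane to noisy 3D points with RANSAC. This is not Uru's method (their pipeline applies deep learning to 2D video with no depth data); it only illustrates what "detecting a plane" means on synthetic data where the answer is known.

```python
import numpy as np

# Synthetic scene: 200 points on the z = 0 plane (with small noise)
# plus 50 outlier points scattered in the volume.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
points = np.vstack([plane_pts + rng.normal(0, 0.01, plane_pts.shape),
                    rng.uniform(-1, 1, (50, 3))])

def ransac_plane(pts, iters=100, thresh=0.05):
    """Fit the plane with the most inliers by repeated 3-point sampling."""
    best_inliers, best_normal, best_d = 0, None, 0.0
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]         # plane: normal . x + d = 0
        inliers = np.sum(np.abs(pts @ normal + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_normal, best_d = inliers, normal, d
    return best_normal, best_d, best_inliers

normal, d, inliers = ransac_plane(points)
# For this synthetic data the recovered normal is close to (0, 0, ±1).
print(normal, inliers)
```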

Beam Me Up and Down, Scotty: A Two-Way Full-Duplex Wireless Link Through Beamforming

Mahmood Baraani Dastjerdi, Tingjun Chen, Gil Zussman and Harish Krishnaswamy; Columbia University, Electrical Engineering.

This demo shows the first full-duplex wireless link over long distances by employing phased-array beamforming for self-interference cancellation.

Wearable Self

Jiyeon Kang; Parsons School of Design.

Wearable Self is a jewelry collection generated from personal data, an attempt to make quantified-self metrics more meaningful for individuals who use wearable devices. Through personalization and customization, it creates new opportunities for users to interact with their data, resulting in meaningful objects that they can wear.


Lively

Jonah Brucker-Cohen and Brad Mehl; CUNY Lehman College.

Lively is an interactive web-based toolkit for developing and deploying dialogues at live public events through input from mobile phones. Lively turns responses into an interactive group experience with animation, video and image recognition, allowing hosts to get more people involved and to gain valuable insights about attendees.

Entity Mapper

Anne Luther; The New School Center for Data Arts.

The Entity Mapper is an open source web application for visualizing qualitative data as an interactive node-link diagram. By replacing the time-consuming process of constructing a visualization manually with an instant upload, the tool lets researchers focus on deriving insights from their data.
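The core transformation behind any node-link view of qualitative data can be sketched briefly. This is a hypothetical example, not the Entity Mapper's code: documents tagged with qualitative codes become a nodes-and-links structure of the kind a web front end could render as an interactive diagram.

```python
import json

# Hypothetical coded data: each document maps to the codes assigned to it.
coded_data = {
    "interview_1": ["trust", "privacy"],
    "interview_2": ["privacy", "adoption"],
}

def to_node_link(data):
    """Convert document -> codes mappings into a node-link graph."""
    nodes = [{"id": doc, "type": "document"} for doc in data]
    nodes += [{"id": code, "type": "code"}
              for code in sorted({c for codes in data.values() for c in codes})]
    links = [{"source": doc, "target": code}
             for doc, codes in data.items() for code in codes]
    return {"nodes": nodes, "links": links}

graph = to_node_link(coded_data)
print(json.dumps(graph, indent=2))
```

Shared codes (here, "privacy") become nodes linked to multiple documents, which is what makes cross-document patterns visible in the rendered diagram.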

Touch-Less Music

Brandon Kader; NYU ITP.

A touch-less gesture system that performs music and visuals simultaneously using computer vision.

Data-driven workshops. Happening on Friday, September 29th at partner locations.

SEARCH: Past, Present, Future

Led by Yael Elmatad, Lead Data Scientist at GIPHY.

GIPHY started as a way to catalogue animated GIFs and has since evolved into a platform for creation and discovery. Fundamentally, surfacing content has been a core challenge. We have a team dedicated to providing the best search experience for animated GIFs. During this workshop we’ll take you through the history of search algorithms, from pre-digital days to Web 2.0.

Creative Process with AI

Led by Marc Maleh, Global Director; and Marc Blanchard, Global Head of Experience Design at Havas Cognitive.

This hands-on workshop will explore how to make AI part of your creative process without being an expert in coding or data science. We’ll start by breaking down the most common myths about AI, followed by test-driving a new ideation process to help you create radically innovative products and customer experiences powered by AI, technology and data.

Immersive Recommendation

Led by Deborah Estrin, Professor; and Longqi Yang, PhD Student in Computer Science at Cornell Tech.

This workshop will present research on immersive recommendation that enables richer experiences for every user across diverse platforms. Estrin and Yang use deep learning to extract personalized preferences from varied digital traces, and have demonstrated this approach across a range of recommendation domains (creative art, food, news, and events) using diverse digital trace modalities (text, image and unstructured data streams). Recent trends in recommendation and user modeling will be discussed.

Leveraging Emerging Enterprise Technology in Media

Led by Kelley Mac, Corporate Engagement at Work-Bench.

The technology that shapes and powers how the media industry captures and retains its customers is changing at a fast clip, driven by advances in areas such as AI, cloud infrastructure, mobility, and VR/AR. Work-Bench’s workshop will delve into how we’re looking at the future of enterprise technology in media, how enterprises can start to reap its rewards, and why NYC is the headquarters for innovation in this space.

Register for NYCML’17 here.

For more information regarding the NYCML’17 program, contact Alexis Avedisian,