Using Google’s APIs to Augment Creativity

This week’s reading, “Adobe says it wants AI to amplify human creativity and intelligence” by Frederic Lardinois, was particularly interesting because it is the first thing we have read this semester that really speaks to how cognitive systems can help people like us: designers. I don’t think I am the only one who was considering a project along these lines, trying to figure out how AI could augment creativity (before we knew we could not be our own user). And Adobe’s ideas for Sensei are good ones: automatically scanning photos for particular attributes or tags, finding stock images based on a sketch, or removing image backgrounds.

However, after watching the Introducing Awareness APIs video, I couldn’t help but feel that the engineers at Google were thinking a bit bigger, or further down the road, than those at Adobe. The video introduced some new ideas to me about how AI could be used, like geofencing, activity monitoring, and automatically connecting to other devices based on proximity. This got me wondering whether, in addition to the ideas Adobe already has, any of these Google APIs could also be used to help augment creativity. I don’t know the answer, but a few things came to mind:

- What if, when you got near the office every day, your device automatically put creative apps that might be useful during your workday front and center, and moved apps that might be distracting to the background?
- What if, when you traveled to another city for a conference, it tailored activity recommendations to your creative side (museums, exhibits, performances, etc.)?
- What if, when you went to present an idea to a client, you could easily throw your presentation up on any of their devices simply because you were in close proximity to them?

These are just a few ideas off the top of my head. I wonder if anyone else has any ideas?
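To make the first idea a bit more concrete, here is a toy sketch of the geofencing logic behind it. This is not Google’s actual Awareness API (which handles fences for you on-device); it is just a minimal simulation, and the office coordinates, fence radius, and app names are all made up for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical office location and geofence radius (made-up values).
OFFICE = (40.7411, -74.0027)
FENCE_RADIUS_M = 200

CREATIVE_APPS = ["Sketch", "Illustrator", "Figma"]
OTHER_APPS = ["News", "Games"]

def arrange_home_screen(lat, lon):
    """Put creative apps front and center when inside the office geofence,
    otherwise leave the usual mix in place."""
    inside = haversine_m(lat, lon, *OFFICE) <= FENCE_RADIUS_M
    return CREATIVE_APPS if inside else OTHER_APPS + CREATIVE_APPS
```

In the real Awareness API this distance check would be replaced by a registered location fence that fires a callback when you enter or dwell in the region, but the triggering idea is the same.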

This is not to say that Adobe’s ideas for Sensei aren’t creative enough; it’s more an observation that the scenarios Google described for their new APIs felt a bit more ambitious than what I read about Adobe Sensei. Maybe Google is just thinking a bit further down the road, but I wanted to see if any interesting ideas would emerge if we applied that type of thinking to try and boost creativity.
