Hacking About With AI and Watson

What is a hack day at Bloom & Wild?

Code & Wild · Jul 26, 2018

Here at Bloom & Wild, we recently had our 3rd hack day. Just like our first two, the hack day is for everyone at the company, not just the technology team. The teams are formed from a mix of people from different departments, each with one developer (the developers tend to have a huge weight on their shoulders to build something!).

Each hack day has a theme, and each team meets up before the big day to decide what they’re going to tackle. On the hack day, we’re given the full day to build/design something that we present at the demos at the end of the day. B&W hack days have been great fun: I get to work with people who I don’t normally work with on a daily basis AND, being a techie, I get to explore (play) with a different technology.

Our third hack day was probably my favourite, where we explored AI (Artificial Intelligence). The challenge was to see if we could use artificial intelligence across the company; it was kept broad so it could be used to create better experiences for our customers or simply to improve internal processes.

Our challenge…

Our team started to brainstorm ideas, but we gravitated towards ideas that hopefully other teams weren’t thinking about (it’s all about the w̶i̶n̶n̶i̶n̶g̶, ahem, taking part). Eventually we came across this problem…

Our bouquets are seasonal, meaning we change our range of bouquets all the time, and one of the tasks involved is to upload new images and tag them. Whereas tagging on social media means adding the names of friends in pictures, for us it means tagging the attributes of the bouquets. For example:

Collection Type: Letterbox (fit through letterbox), hand-tied (too big for letterbox)

Colours: Red, pink, yellow etc

Stems: Rose, peony, lily, sunflower etc

We decided to tackle this area to see if we could use AI to automatically tag new bouquets. Having not done AI since my university days (let’s just say it was a while ago), this felt like a substantial challenge to accomplish in a day :)

What we found online

Starting our research on image recognition, we found plenty of services out there that could help us, such as Google’s Cloud Vision API, Clarifai and Amazon Rekognition. Google & Amazon were a bit limited as they had their own classifications, i.e. you couldn’t create a custom model to say letterbox or hand-tied. Clarifai looked like an excellent service, but as the title of this blog suggests, we decided to give IBM’s Watson a try.

For those who have not heard of Watson, Watson is famous! (at least in the techie world). Watson is a supercomputer developed by IBM that rose to fame in 2011 when it challenged the two best players of the popular US quiz show Jeopardy and won.

Back to the hack

We felt confident that we could create a simple model that would demonstrate Watson’s capabilities. It also helped that in 2016 IBM launched its Visual Recognition API, allowing developers to explore (play with) it or use it for real.

We created 2 custom models:

  1. To classify the collection type, i.e. is it a letterbox bouquet or not
  2. To detect whether the bouquet has a yellow colour

This is a letterbox bouquet (see the flat box behind the bouquet, it fits through a letterbox!)
This is a hand-tied bouquet (see the big box, it doesn’t fit through a letterbox)

Step 1: Training. For the model to learn what a letterbox bouquet is, it needs example images of letterbox bouquets, and similarly it needs example images of yellow bouquets. The model also needs negative images, which for the letterbox classification means images of hand-tied bouquets, and for the yellow classification means non-yellow bouquets, so bouquets with purple, red or pink would suffice. Here are the 10 positive & negative images we uploaded for the letterbox classification:
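For a rough idea of what this step looks like in practice, here’s a minimal sketch of creating a custom classifier via Watson’s Visual Recognition API. The API key, zip file names, classifier name and version date are all placeholders, and IBM has changed the endpoint and authentication details over time, so treat this as illustrative rather than exact:

```python
import requests

# NOTE: endpoint, auth scheme and version date are illustrative placeholders;
# check IBM's current docs before using them.
API_KEY = "YOUR_API_KEY"
BASE_URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3"

# Two zip archives of example images (hypothetical file names): letterbox
# bouquets as positives, hand-tied bouquets as negatives.
with open("letterbox_examples.zip", "rb") as positives, \
     open("hand_tied_examples.zip", "rb") as negatives:
    response = requests.post(
        f"{BASE_URL}/classifiers",
        params={"api_key": API_KEY, "version": "2016-05-20"},
        data={"name": "bouquet-collection-type"},
        files={
            # Positive examples follow the field name pattern <class>_positive_examples
            "letterbox_positive_examples": positives,
            "negative_examples": negatives,
        },
    )

# The response contains a classifier_id and an initial status of "training".
print(response.json())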

Step 2: Wait for the model to build. That didn’t take too long, and we could check the API for status updates.
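A simple way to wait (again a sketch, with the classifier id being the placeholder returned when the classifier was created) is to poll the classifier’s status until it leaves the “training” state:

```python
import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
BASE_URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3"
CLASSIFIER_ID = "bouquetcollectiontype_1234567890"  # hypothetical id

# Poll until the model has finished building.
while True:
    status = requests.get(
        f"{BASE_URL}/classifiers/{CLASSIFIER_ID}",
        params={"api_key": API_KEY, "version": "2016-05-20"},
    ).json()["status"]
    if status != "training":  # becomes "ready" (or "failed") once training ends
        break
    time.sleep(30)

print(f"Classifier status: {status}")
```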

Step 3: Classify a new image. We uploaded new images and the scores were mind-blowing (at least to us, as we had no idea if we were doing it right!)

Probability of letterbox: 0.012
Probability of letterbox: 0.896
Probability of letterbox: 0.92
Probability of letterbox: 0.06
Probability of letterbox: 0.629 (different image to training images)
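The scoring itself is a single call to the classify endpoint with the new image attached. Another hedged sketch: the file name and classifier id are placeholders, and depending on the API version the classifier_ids and threshold fields may need to be wrapped in a separate parameters JSON part rather than sent as plain form fields:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
BASE_URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3"
CLASSIFIER_ID = "bouquetcollectiontype_1234567890"  # hypothetical id

with open("new_bouquet.jpg", "rb") as image:
    result = requests.post(
        f"{BASE_URL}/classify",
        params={"api_key": API_KEY, "version": "2016-05-20"},
        data={
            "classifier_ids": CLASSIFIER_ID,
            "threshold": "0.0",  # return scores even when confidence is low
        },
        files={"images_file": image},
    ).json()

# Each class in the custom model comes back with a 0-1 score,
# e.g. {"class": "letterbox", "score": 0.896}
for cls in result["images"][0]["classifiers"][0]["classes"]:
    print(cls["class"], cls["score"])
```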

It wasn’t that easy

All of the above didn’t sound difficult and we felt like we were racing ahead, but just like an episode of Top Gear, there were twists that almost derailed us, specifically the many errors we hit with IBM’s services…


The Demo

As the developer in the team, my area of `expertise` was the Bloom & Wild iPhone app (I’ve been developing it for over 3 years). The app has grown over the years, but at its core is the ability for customers to swipe through a carousel of bouquets or plants and purchase one or more, which can then be shipped to the UK, Ireland, France or Germany. Naturally, for the demo we hooked up the app to Watson. We wanted real-time scoring of our current bouquet range; here are some examples that used our new bouquet imagery, which wasn’t included in the model training:

Did you notice the score for the Ines bouquet? It’s the one with the green background selling for €39. You may have spotted the lack of box in the background, which confused the model, as all of the training images had either a small box or a big box in the background. As a result, the letterbox probability was 0.15 & hand-tied was 0.016. We thought this was a fair output from the model, given the image broke the pattern present in all of its training images.
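Under the hood, the real-time scoring boils down to the same classify call, this time pointed at the URL of a hosted bouquet image. The sketch below is in Python for brevity (the app itself makes the equivalent request from iOS), and the image URL and classifier id are placeholders:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
BASE_URL = "https://gateway.watsonplatform.net/visual-recognition/api/v3"

# Classify a hosted image by URL rather than uploading a file.
result = requests.get(
    f"{BASE_URL}/classify",
    params={
        "api_key": API_KEY,
        "version": "2016-05-20",
        "url": "https://example.com/bouquets/ines.jpg",  # hypothetical image URL
        "classifier_ids": "bouquetcollectiontype_1234567890",  # hypothetical id
        "threshold": "0.0",
    },
).json()

for cls in result["images"][0]["classifiers"][0]["classes"]:
    print(cls["class"], cls["score"])
```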

Results and next steps

How did we get on? Well, we won!! That’s not quite true, but to us (and no disrespect to the other teams in the hack day), we got an AI model built and hooked up to the app in a day, so we felt like we won :) Plus our Data Team were itching to know how it worked!

Next steps are to gather a larger set of images (we have them somewhere), build a model and see if the scores we got were a fluke (maybe the model found a different pattern in the images that wasn’t the box). With that said, developing a model that could recognise different stems sounds far off, but an iterative approach, starting off with letterbox/hand-tied, then colours and eventually different stems, could get us there. I’m sure we’d also learn a lot along the way (insert sarcastic laugh here). If we do make progress with this, we’ll definitely share our story.

Sites we found helpful

For those interested in playing with Watson’s Visual Recognition, here’s the API documentation we used to connect the model to our app.

Watson has several ways of building custom models; the easiest way we found (if you’re not a big user of Terminal) was their API Explorer:

Written by Adam Francis, Head of Mobile
