UX + Computer Vision: AI-suggested composition images in real time

Personally, I love food photography.

One day I asked myself: how can UX help beginner photographers take pictures like a pro?

Last year I dreamed of building an app for myself that would suggest how to improve a composition in real time.

While studying machine learning basics at DataRoot University, this idea started to seem possible and within reach.

You only really acquire new skills when you practice like never before.

What if my camera could be my personal coach, so I wouldn't waste time learning composition theory and could start taking better pictures faster?

Cruel reality always kicks in when it comes to bringing an idea like this to life.

As a UX designer, I started to think about who would use it besides me.

Idea validation

Only practitioners and experts could tell me whether the idea was good enough to keep moving forward.

One way to validate it was to interview professional food photographers: they know best what beginners lack.

After hours of interviews, one insight suddenly caught my attention.

People spend a lot of time on Pinterest, Instagram, and Google searching for inspiration for a certain type of scene.

This is tedious work that should be automated.

Why not instantly show similar high-quality, well-composed images based on what the camera “sees”?

My plan to realize the idea was the following:

— Find computer vision engineers to explain how it should work

— Pick one example to test on. We chose apples.

— Build a dataset to train our neural network

— Push toward the highest possible accuracy score

— Demonstrate the result and iterate further
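At its core, the plan above boils down to a supervised binary classification problem: given a photo, predict "well-composed" or "poorly composed". A heavily reduced sketch of that idea, where synthetic feature vectors stand in for real photos and plain logistic regression stands in for the CNN the team actually used:

```python
import numpy as np

# Hypothetical sketch: composition scoring as binary classification.
# In the real project a CNN would learn features from pixels; here we
# fake two separable clusters of 4-dimensional feature vectors.
rng = np.random.default_rng(0)
n = 200
good = rng.normal(loc=1.0, scale=0.5, size=(n, 4))   # "well-composed"
bad = rng.normal(loc=-1.0, scale=0.5, size=(n, 4))   # "poorly composed"
X = np.vstack([good, bad])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression trained with plain gradient descent.
w, b, lr = np.zeros(4), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

pred = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

On cleanly separable toy data the accuracy is near perfect; the hard part of the real project was getting a dataset whose labels were anywhere near this clean.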

Since last year's Global Hack Weekend hackathon, my team was already in the game: two computer vision engineers joined the process.

The engineers spent two weeks choosing an appropriate neural network.

I spent my time gathering the apple dataset. It was a challenging task: you have to handpick almost 2.5K apple photos, one part outstanding shots and the other part awful ones.

Even to gather just 2.5K great and awful apples, you need patience and a good eye to separate them from the noise.
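Once curated, such a dataset is typically shuffled and split into training and validation sets before any network sees it. A minimal sketch, where the file names are hypothetical placeholders for the real apple photos:

```python
import random

def split_dataset(files, val_fraction=0.2, seed=42):
    """Shuffle labeled files and split them into train/validation lists."""
    files = sorted(files)
    random.Random(seed).shuffle(files)  # deterministic shuffle for reproducibility
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]

# Hypothetical file lists standing in for the curated apple photos.
good = [f"good/apple_{i:04d}.jpg" for i in range(1250)]
bad = [f"bad/apple_{i:04d}.jpg" for i in range(1250)]

train_good, val_good = split_dataset(good)
train_bad, val_bad = split_dataset(bad)
print(len(train_good), len(val_good))  # → 1000 250
```

Splitting each class separately (a stratified split) keeps the good/bad balance identical in both sets.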

Another challenge was to prepare photos with good and bad lighting.

A second neural network handled distinguishing good lighting from bad.
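The article doesn't detail that second network, but the problem it solves can be illustrated with a crude, non-neural stand-in that flags under- or over-exposed images from simple pixel statistics (the thresholds below are illustrative assumptions, not the project's real model):

```python
import numpy as np

def lighting_score(image):
    """Crude exposure check on a grayscale image with values in 0..255.

    A trained network would learn this from data; this heuristic only
    checks mean brightness (not too dark/bright) and contrast.
    """
    mean = image.mean()
    contrast = image.std()
    return bool(60 <= mean <= 190 and contrast >= 30)

rng = np.random.default_rng(1)
dark = rng.integers(0, 40, size=(64, 64))         # underexposed sample
balanced = rng.integers(30, 220, size=(64, 64))   # usable lighting sample
print(lighting_score(dark), lighting_score(balanced))  # → False True
```

A learned model wins over such hand-tuned thresholds precisely because "good lighting" in food photography depends on far more than global brightness.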

To show off the idea, we went to London for the HackXLR8 hackathon.

Pitching the idea

I decided it was really important to show the prototype to random people and ask about the look and feel. Gathering objections helped me improve the pitch on the go.

Finally, one hour before the hackathon, we managed to get a working prototype to show.

Lessons learned:

  1. Always keep learning new skills.
  2. Thoroughly prepare interview questions, with endless “why”s.
  3. Prioritize ideas and pick the one that can be prototyped fastest.
  4. Collaborate with the right team.
  5. Always push yourself to help the engineers.
  6. Always stay in charge of even the smallest process.

UX + computer vision can make a difference in many domains, just as it did in food photography.

If you’re looking for a UX designer, I would be glad to join your team.

Don’t miss other practical knowledge.

Senior UX designer. My UX Design is all about: Common sense in every detail, Proper domain knowledge, Scientific approaches. Portfolio compozio.com
