Using OpenCV to transform screen designs into low-fi block designs

tingting chang · Published in pushtostart · 7 min read · Mar 19, 2019

This article introduces our toolkit, mockdown, which automatically transforms high-fidelity images into low-fidelity prototypes. Try it out.

The first question everyone will ask is: why do we need this toolkit? Before I answer, I would like you to guess how long it would take you to convert the following image from left to right. Also, guess how long it would take a designer to modify the left image into the right one.

Google Analytics Home

If you are good at Photoshop, Sketch, or similar design software, it will take you at least 3 minutes to create correctly sized rectangles one by one and cover the UI elements with filled rectangles. However, it takes our toolkit only 0.637 seconds to do that tedious labor. Imagine you are a designer, excited to design something cool and popular for your company, but your boss assigns you a task that requires both high-fidelity and low-fidelity prototypes for the upcoming product. Designing 70 high-fidelity images is exciting, but manually converting them to low-fidelity images in your design software is not. This boring conversion task alone will take at least 210 minutes (3 × 70) of your life. What if you have hundreds of high-fidelity images that need to be converted to low-fidelity? Why not find a smarter, faster way and minimize the work? We are in the 21st century, a world full of the magic of programming and AI. That was our motivation for creating mockdown: there was a need, so we came up with a solution.

In case some of you have no idea what low-fidelity images are for, I will quickly cover some basics. A low-fidelity image is a fast and easy way to communicate the high-level concepts of a product's functionality. It contains only the shapes of elements, the basic visual hierarchy, and so on, which makes it easier for your team members (including non-designers) to understand what to expect from the upcoming product. Some big companies' websites also use low-fidelity images as a simpler demonstration of their concepts and structure. High-fidelity images, in contrast, look almost the same as the actual product and use most of the real UI elements and content of the final product. Here are some popular online low-fidelity prototype examples:

slack on https://pixelco.webflow.io/
home page of https://www.rollworks.com/
home page of https://reply.io/

Go to https://epic.ai/mockdown, upload a high-fidelity image, and you will get your low-fidelity image back in less than a second. You may say, "Wow! Awesome! What just happened?" In the computer vision world, OpenCV, a programming library aimed at image processing and video capture, plays a crucial role. Mockdown is built purely with Python 3 and OpenCV 3. Let's dive into how OpenCV 3 does its tricks on high-fidelity images; I will use the Google Analytics home page as an example to show how the process works.

Since color is not an important feature in our case, and an image with three channels takes three to four times longer to process than a grayscale one, our first step is to open the image in grayscale mode. In OpenCV, you can simply do that with cv2.imread('example_image.jpg', 0) (at the same time, make a copy of this grayscale object as imblocks, which is where our bounding boxes will be drawn), and you get an image as follows:

The second step is to blur this grayscale image to remove noise and help the computer identify shapes, using the OpenCV function cv2.GaussianBlur. If that sounds confusing, here is a very good article about why this step is necessary: https://blog.drecks-provider.de/why-you-should-blur-an-image-before-processing-it-using-opencv-and-python/. We get the following image at this step:

The steps above are just warm-up in preparation for the image processing to come. The third step is a bit more exciting: we detect the edges of the blurred image using the OpenCV function cv2.Canny. We can see that OpenCV does a pretty good job of detecting edges:

What are we going to do with those nicely detected edges? Step four finds contours on the edge map using an OpenCV function, giving us a list of contours.

Step five draws those contours back onto the grayscale image using OpenCV's cv2.drawContours function. You can see that steps two, three, and four find the contours of the original image, and step five then draws them on the grayscale copy, giving us the following image:

As you can see from the result of step five, we have many contours around the objects in the image, but our goal is to cover that detailed information with bars. Apparently, we need to do some work on our edges and their surroundings in step six. For each contour, we have to consider some corner cases: maybe it is too small and too close to its neighbor to deserve a separate bar, in which case we should increase the width to merge the nearby rectangles; maybe it is too big and covers the whole screen, in which case we should remove it. To get information about each contour, we can simply use the OpenCV function cv2.boundingRect, which returns its height, width, and position in the image. Finally, we can draw rectangular boxes on top of those contours using the OpenCV function cv2.rectangle. cv2.rectangle has a parameter called thickness: set it to 1 and you get a rectangle outline; set it to -1 and you get a rectangle filled with the color you picked (here we use black). From here, we get two separate images as follows:

Are you happy with the result of step six? Those bounding boxes vary in height and width, and they do not look very neat. In step seven, we find a way to group them together. OpenCV has some handy functions for this. First, we use cv2.getStructuringElement to build a rectangular structuring element; then we use cv2.morphologyEx with the cv2.MORPH_CLOSE operator to combine rectangles, closing small holes inside the foreground objects and removing small black points on them. After these preparations, we can finally find contours on the newly grouped objects. As before, we get a list of contours, but we have to filter out boxes that lie within other boxes.

Step seven gives us grouped bounding boxes, but they are black. Our last step, step eight, fills those bounding boxes with their foreground color. The procedure is quite simple: we map the bounding boxes from step six back onto the original image and compute the mean color of their pixel values. Then we calculate the four corners of each rectangle, use cv2.ellipse2Poly to compute the vertices of a polyline that approximates the specified elliptic arc, and use cv2.fillConvexPoly to draw a filled convex polygon. Last but not least, we replace the area within the four corners with the foreground color we computed, and we get the final result as follows:

I hope you do not find this a bit overwhelming; even if you do, no worries. Mockdown is simply a useful, handy tool. Some takeaways for readers: computers are really bad at certain things when there is a lot of noise, so we cannot rely on them 100% until we create an ideal setting in which they can perform at their best; the human brain is delicate and sophisticated, so instead of wasting it on repetitive, tedious labor, we should borrow the power of the computer and save our minds for what computers cannot do.

Thank you so much for reading this article. Feel free to upload your design image and get your low-fidelity image at https://epic.ai/mockdown. Any feedback will be appreciated.
