Here I go. I’m planning to write down the fun parts of my development lifecycle.
For this episode I want to talk about a recent event. I suddenly got a call from one of the managers about a project estimation, and he sounded troubled. Long story short, the developers weren’t accepting the estimated time the client had approved, so I was asked to come to the rescue. Yeah… I’ve saved the day multiple times.
So what was the problem? Out of all the requirements, there was one that kind of scared the developers. It was a gray area for everyone. The requirement: detect a static wall and change its paint color.
Simple as it may sound, executing the feature isn’t that easy. OR IS IT?
Well, let’s go through our thought process. So what do we do? As an augmented reality app developer I suggested: hey, let’s just detect the vertical surface of the wall and paint it. I had already done it, and it’s the kind of thing you find in tutorials everywhere.
It wasn’t that simple, though. For an app to have AR features, the phone needs to be fairly capable, and capable phones aren’t cheap. The client wanted to support low-end phones.
So we were back at square one. How do we do it now? First we had to understand the problem. Let’s break it down:
- First, we need to detect the edges of the wall
- Ensure the edges don’t cover the chair, picture frame, etc.
- Tap to paint the wall only
So how do we solve the first problem? Hmm, image processing is the first thing that comes to mind, and the first thing that comes after that is OpenCV. So we asked uncle Google: “OpenCV paint wall”. And uncle gave us some hopeful pictures.
Looking at those, we realized: okay, this is our savior. We finally mustered enough courage to start developing the application, even though we had almost zero experience with this library.
So we did what we were supposed to do: consulted someone with an image processing background. I asked him to guide me on how I should do it, and he said this to me:
No. OpenCV BAD!
And I was like, what do I do now? I asked: okay, why bad, and what do I do instead?
He replied that it’s not reliable enough. Try EmguCV, or TensorFlow, or TensorFlow Lite.
And then I realized he was asking me to train some AI to detect the wall. But an AI can’t just detect a wall; a wall is just a flat vertical surface. So he’d train it on common household stuff, remove those objects from the detection, and hopefully what was left would be the wall to paint.
And for a quick example, he showed me this.
This looked promising and I thought, WOW, my problem is solved. It just needs some tweaking. But I spoke too soon. I sent him my wall to test, and here’s the result.
Yeah… it wasn’t detecting the wall at all, because the trained dataset didn’t know the stuff that was near my wall. Clearly this method wasn’t going to work: I can’t train on every item in existence just to find the one wall that’s not in the dataset. It also sounds like a very stupid idea.
Then I went back and started testing out OpenCV; he did share some resources for that. I read some stuff and started to fiddle with it. After 3–4 days of hard work learning a new framework, I was able to achieve edge detection. THE EUREKA MOMENT! I didn’t run out naked, but I did dance a little, since I’ve been working from home and nobody can see.
Well anyway, let’s take a look at how it looked.
I was somehow able to detect the edges and was sort of satisfied. That kind of solved our 1st and 2nd problems. Now I needed to paint the wall, but first I needed to bring back the colors of the image. After an hour or so I figured out how to color the edges.
So I guessed I could color the wall somehow too. I did another break dance at this point.
But I was nowhere close to bringing back the color.
I started again the next day, and finally figured out how to do it. This is how it looks.
Guess what I did then? You guessed right! Did a dance with my cat.
Well anyway. Now I was 80% done. All that was left to do was fill the borders. How do I do that?
Something like MS Paint’s fill color.
Or this, from Photoshop.
I remembered that there’s something called the flood fill algorithm. I’d heard about it in university; never used it for real, though. [Please go to the link and try to understand how it works if you don’t, because it will be important for the joke later.]
Well anyway, I googled whether there’s flood fill for OpenCV. YES! There is, and it’s a simple function.
But I couldn’t make it work. Because, yeah, of course I can’t learn an entire image processing library in 2 days.
I gave it quite a lot of time, but I just couldn’t do it. Did I mention how lazy I am?
Well anyway. I was trying to figure out something easier to fill the boxes. I just had to fill the boxes, right? Then I remembered I’m using Unity. Why not use its rendering capabilities, which I understand a thousand times better than OpenCV? I could write my own flood fill algorithm in hours if needed. But good things happen to lazy developers: I found a flood fill implementation on a community wiki.
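The Wikipedia-style fill really is tiny. Here is a minimal exact-match version in Python, just to show the idea; the project actually used a C# `Texture2D` port of the same algorithm (`GetPixel`/`SetPixel` instead of list indexing).

```python
from collections import deque

def flood_fill(grid, x, y, fill):
    """Classic flood fill: replace the 4-connected region that shares
    the exact color of grid[y][x] with `fill`, via breadth-first search."""
    target = grid[y][x]
    if target == fill:
        return
    h, w = len(grid), len(grid[0])
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < w and 0 <= cy < h and grid[cy][cx] == target:
            grid[cy][cx] = fill
            # Spread to the four neighbours.
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])

# Tiny demo grid: a ring of 0s (the "wall") surrounded by 1s (the "border").
demo = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
flood_fill(demo, 1, 1, 2)
```

The BFS queue avoids the deep recursion the naive version suffers from on large textures.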
So I used that and tested how it works on my EDGE-DETECTED IMAGE. Oh boy, it works!
Now, one last problem: I need to remove the detected edges….
I had almost no idea how to do that. I tried so many things:
- Tried something like anti-aliasing. The edges were a red line one or two pixels wide, so all I had to do was check the top, bottom, left, and right neighbours for any other pixel color and change the color. That should have solved it.
- Tried to save the pixels and redraw the whole thing onto a new image. Yeah, memory waste.
- Tried to remove the lines from within OpenCV, which, by the way, I still don’t understand much.
- And this and that.
I just couldn’t fix it. Meanwhile, the code I had copied had 2 functions:
```csharp
FloodFillBorder(this Texture2D aTex, int aX, int aY, Color aFillColor, Color aBorderColor)
FloodFillArea(this Texture2D aTex, int aX, int aY, Color aFillColor)
```
I was using the border one and feeding it the color of the border, and it worked as I wanted.
So I thought, let’s give the other one a shot. I used it, and there was almost no change. So I opened up the code [the reference was given before].
And I started reading what it did. It’s not like I can only copy and paste; I can read and understand code too, which I usually don’t, because I’m too lazy to check things that are already working. Well anyway, I realized the algorithm was fine, but the image I was feeding it wasn’t. If you look at the Wikipedia page, the implementation logic clearly checks pixels for the same color. That may work for Paint, but that’s not how it goes in real-life photos: nearly 100% of the time the next pixel will be a different shade. A shade, not a different color. So I thought: hmm, this code needs some improvement.
It needed some kind of tolerance to accept nearby shades too. So I did what I do best and told uncle Google: “Unity flood fill with tolerance”. And uncle Google did what he does best and gave me a link to some good guy’s Gist.
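I won’t reproduce the Gist here, but the idea, an exact-match flood fill plus a tolerance check, can be sketched like this in Python. Grayscale values are used for brevity, and the function name and tolerance are my own, not the Gist’s.

```python
from collections import deque

def flood_fill_tolerance(grid, x, y, fill, tolerance):
    """Flood fill that spreads to neighbours whose value is within
    `tolerance` of the SEED pixel (comparing to the seed, not the
    current pixel, so the fill can't slowly drift across a gradient)."""
    seed = grid[y][x]
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    queue = deque([(x, y)])
    seen[y][x] = True
    while queue:
        cx, cy = queue.popleft()
        grid[cy][cx] = fill
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if (0 <= nx < w and 0 <= ny < h and not seen[ny][nx]
                    and abs(grid[ny][nx] - seed) <= tolerance):
                seen[ny][nx] = True
                queue.append((nx, ny))

# A "wall" of shades near 200, a dark 1px "edge" of 40, then something else.
demo = [
    [200, 203, 40, 120],
    [198, 201, 40, 122],
]
flood_fill_tolerance(demo, 0, 0, 255, 10)
```

Note how the dark edge column blocks the fill all by itself: 40 is way outside the tolerance around the seed shade of 200.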
Why waste time coding things someone else has already done? I just copied it over and ran my program. And this is what happened.
Yeah. lmao. Here’s a list of things that happened:
- We didn’t need any SDK to do this.
- Didn’t need AI or anything like it.
- Copied code from a wiki.
- The app is significantly faster because no 3rd-party SDK is needed.
- The app size is smaller.
Let me explain why this works. But first, go back to the algorithm and re-read it.
Well, what the function does is keep filling pixels with the given color as long as each pixel is the same shade of color within the tolerance range. AND what is an edge? Anything that has a different color. Our algorithm only accepts different shades, not different colors. So, by default, the algorithm handled edge detection, which we had spent a very, very long time on.
We were thinking of using AR, Image Processing, AI…
I can’t believe the solution was this easy. Whoever is reading this trying to implement their own version: bruh/sis, just do a flood fill with whatever renderer you’re using. The code should be more or less the same, at least in the functional part.
Here’s the one thing I did learn from my (mis)adventure.
The biggest problems have the simplest solutions.