Monthly Challenge

Charlie Gerard
13 min read · Aug 2, 2017


  • July 2017: Learn Java.

This month’s challenge is to learn Java. Of course, “learning Java” is a bit too broad, and to be able to determine whether I “succeeded” I need a specific goal, so I’m gonna say I’d like to build a basic Android app in Java (with tests?).

My knowledge of Java is close to nothing and I’ve never built an Android app before either, so I’m not sure how achievable this goal is in a month, but we’ll see!

I usually code in HTML, CSS, JavaScript (React.js, Angular.js, Node.js) and Ruby so this is going to be different and I’m quite excited!

The following notes are supposed to be just a brain-dump / journal and I will write another post at the end of the month to share what I’ve learnt.

Day 1

Before diving straight into building the Android app, I decided to go back to basics and find some video courses to learn a bit more about data structures, etc…

I started with the Java course on Codecademy which I finished in a few hours. As expected, it only covered the very basics of the programming language including:

  • Variable types (int, char, boolean, etc…).
  • The keyword void.
  • Data structures (ArrayList and HashMap).
  • Constructors.
  • Class inheritance.

Of course there is more to learn but I’m happy with that for day 1! :)

Day 2

For day 2, I decided to start working on the Exercism.io Java track.

If you’ve never used Exercism.io, it’s an open-source platform where you can practise more than 30 programming languages through exercises. I’d heard about it before but never used it, so it’s a good way to see how much I can learn from it.

Each exercise has tests, so your goal is to pass these tests with your implementation. They start pretty simple, with functions returning strings with and without parameters.
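To give an idea of the level, the first exercises look something like this (a made-up example in the spirit of the track, not an actual exercise): a tiny class whose methods return strings, exercised by the provided JUnit tests.

```java
// Hypothetical exercise: return a greeting, with and without a parameter.
// The provided tests call these methods and compare the returned strings.
public class Greeter {

    public String hello() {
        return "Hello, World!";
    }

    public String hello(String name) {
        return "Hello, " + name + "!";
    }
}
```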

The Java track has 72 exercises so I’m not sure if I should go through all of them or just see how many I can get through in a few days.

I also need to start thinking about what the Android experiment is actually going to do so I’ll know where to focus my efforts.

Day 3

Today hasn’t been a very productive day… I completed one Exercism problem and I’m thinking about moving on to setting up the Android experiment.

I’ve been thinking about what I’d like the app to do and I’ve decided to build a Morse code translator using the device’s flashlight.

I started following a tutorial and learnt the basics of Android Studio and how to set up a small Android app with multiple activities, but the functionality of my prototype has to go in a different direction, so I’m probably going to stop following the tutorial for now and try to go my own way to start accessing the device’s flashlight.

Day 4

I didn’t do much today; I modified the first version of the app built from the tutorial into something closer to the functionality of the final app.

I have one main activity with an on/off button and I’m trying to get the flashlight to turn on when I press the button once.

However, I’m running into a few issues. Every time I launch the app on my phone to test it, it crashes when it tries to access the flashlight…

I haven’t been able to fix that yet.

Day 5

I don’t know how much time I’m gonna be able to dedicate to this today, but it won’t be very long. My main focus is to fix the flashlight issue and get it to turn on without the app crashing.

I think the issue comes from how I request permission to access the flashlight.

I made some changes to the code and managed to fix the main issue; the app isn’t crashing anymore, but the flashlight isn’t turning on/off… I have no idea what’s going on but I’ll have to look at it another day.
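Roughly, the idea is something like this: a simplified sketch using the camera2 CameraManager on API 23+, not my exact code, and with error handling trimmed down.

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraManager;
import android.util.Log;

// Simplified sketch: toggle the torch from inside an Activity (API 23+).
private void toggleFlashlight(boolean enabled) {
    CameraManager cameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        // Assumption for the sketch: the first camera id is the back camera with the flash unit.
        String cameraId = cameraManager.getCameraIdList()[0];
        cameraManager.setTorchMode(cameraId, enabled);
    } catch (CameraAccessException e) {
        Log.e("Flashlight", "Could not access the torch", e);
    }
}
```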

Day 6

Had absolutely no time to work on that today :/

Day 7

Tried to make some small changes to the app to get the flashlight to work, but again, nothing… :/

I found a few repos on GitHub and cloned them to see if other people had found a solution, but it seemed like theirs didn’t work either…

Day 8

Kept looking for solutions online and I finally found a repo that works, so I’m having a look at the code to try and understand what’s going on.

The app I found does more than what I need, so I’m only going to use the parts of the code that are relevant to what I want to achieve. I’m not interested in just copy/pasting the working code, so I’m really going through all the files to see how they work together and how I can use that to fix my code.

Also, the code I’ve been writing so far is getting a bit messy because I’ve been trying a lot of different solutions so I’ve decided to delete everything and start from scratch.

Day 9

Nothing.

Day 10

Having trouble sleeping so I’m spending some time trying to fix the issue I had 2 days ago.

I managed to get the flashlight working, but it was triggered by default when opening the app and I couldn’t get it to trigger only on click.

After a bit of debugging, I realised that my onClick event wasn’t firing at all; the issue seemed to be in my build.gradle files, where some configs were missing.

It’s now all working! At least for the first step…!

I now have:

  • A simple “activity” with a button.
  • On click: the flashlight turns on and the button shows the “on” icon.
  • On second click: the flashlight turns off and the button changes to the “off” icon.

For the next step, I’m going to build the functionality to get a user input, map each letter to the corresponding Morse code and simply log the result for now.
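The mapping itself can be a simple HashMap from each character to its Morse representation. A simplified sketch of what I have in mind (only a few entries shown, and the names are my own):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the letter -> Morse mapping.
public class MorseTranslator {

    private static final Map<Character, String> MORSE = new HashMap<>();
    static {
        MORSE.put('s', "...");
        MORSE.put('o', "---");
        MORSE.put('a', ".-");
        MORSE.put('b', "-...");
        // ... the rest of the alphabet and the digits go here
    }

    // Translates a single lowercase word, e.g. "sos" -> "... --- ..."
    public static String translate(String word) {
        StringBuilder result = new StringBuilder();
        for (char letter : word.toCharArray()) {
            String code = MORSE.get(letter);
            if (code != null) {
                result.append(code).append(' ');
            }
        }
        return result.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(translate("sos")); // logs "... --- ..."
    }
}
```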

Now that this is working, we need to understand how Morse code actually works. For example, the letter “A” is represented as “. -”.

  • A “.” or dot represents one unit of time.
  • A “-” or dash represents 3 units of time.
  • A space between elements inside a character is one unit of time.
  • A space between characters inside a word is 3 units of time.
  • A space between words is 7 units of time.

The speed can be decided in terms of Words Per Minute (WPM); with the standard timing, one unit lasts 1200 / WPM milliseconds.

To get the flashlight to turn on and off for the correct amount of time, I used a Thread. Using a thread allows me to pause the execution of the code for a specific amount of time, while the flashlight is on or off depending on which character it is translating.

I’m not sure this is the best solution but it seems to be working fine for now.
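The flashing loop looks something like this sketch, assuming the toggleFlashlight() helper from earlier and the 1200 / WPM millisecond unit; the real code isn’t exactly this, but the idea is the same.

```java
private static final int WPM = 15;
private static final long UNIT_MS = 1200 / WPM; // duration of one Morse unit

// Flashes a Morse string such as ".- -..." on a background thread,
// so Thread.sleep() doesn't block the UI thread.
private void flashMorse(final String morse) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                for (char symbol : morse.toCharArray()) {
                    if (symbol == ' ') {
                        // Gap between letters: 3 units off in total
                        // (1 unit already elapsed after the previous element).
                        Thread.sleep(2 * UNIT_MS);
                        continue;
                    }
                    toggleFlashlight(true);
                    Thread.sleep(symbol == '.' ? UNIT_MS : 3 * UNIT_MS); // dot: 1 unit, dash: 3 units
                    toggleFlashlight(false);
                    Thread.sleep(UNIT_MS); // gap between elements: 1 unit off
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }).start();
}
```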

At the moment I’ve only worked on translating a single word; I still need to make some changes to handle sentences. After that, I want to try to implement a Morse code → text translator with computer vision, but that may be too ambitious.

Day 11

This morning, I added some code to handle the pauses between letters and words and everything seems to be working just fine! :D

However, I just realised I forgot to handle capital letters, so at the moment nothing happens unless all the letters are lowercase… That should be a simple change though.

That’s it, done! Now, something else I realised is that I should add some kind of indicator for when the translation is finished. A simple sentence can take maybe a minute to translate, which can seem quite long if you have no indication of the progress of the translation.

For now, I think I’ll just “recycle” my on/off button from the beginning of this project. I’m not really interested in turning the flashlight on and off with a button anymore but I can still use it to show the “on” icon while the message is being translated and switch to the “off” icon when the translation is done.

Day 12

I managed to switch the on/off icon to show the state of a translation.

Now that this is working, I’d like to move on and start working on the Morse code → text part of the app using computer vision, but before that it would be good to have a proper first screen with 2 buttons to switch between modes (text → Morse / Morse → text). I need to make some changes to my current setup.

Day 13

I added a new activity to separate the home screen from the text → Morse part of the app. I added a button to the home screen to access this part of the app, but I still need to add a Toolbar with a back button to return to the home screen afterwards.

At the moment this new activity is empty, so I need to move the text input field and button into it.

After that, I need to add another button on the home screen to link to a new activity for the Morse code → text functionality.
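Wiring each home screen button to its activity is just a matter of starting an Intent on click; something along these lines (the activity and id names here are made up for the sketch):

```java
// In the home activity's onCreate(), after setContentView():
Button textToMorseButton = (Button) findViewById(R.id.text_to_morse_button);
textToMorseButton.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        // Hypothetical activity name for the text -> Morse screen.
        startActivity(new Intent(HomeActivity.this, TextToMorseActivity.class));
    }
});
```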

Some redesign would be good as well because it really doesn’t look appealing at the moment…

Day 14

Nothing.

Day 15

I added the toolbar on the new activity to be able to go back to the home screen, but I still need to move the UI elements to the new activity so the home screen only has 2 buttons to choose which feature to use.

I’ve now moved all the code responsible for translating text to Morse code into its own activity. :) I can now add another button for the 2nd feature and start working on it.

It didn’t take me too long to add a new button and link it to a new activity for the 2nd feature, so now that that’s done, I can start doing some research on adding computer vision to this app. I’m thinking my first step should be to try and display the camera feed in the app.

Camera feed:

Following a tutorial, it didn’t take me too long to get the camera feed displaying in the activity. However, I need to make some changes because I don’t want the feed to take up the entire view. I think the view should be split half/half between the camera feed and the Morse-to-text translation.

I changed the height of the FrameLayout element containing the video, but the ratio is not right: the video looks stretched.

I’ve been trying a few solutions but nothing seems to work, so I’m thinking about doing some research on the computer vision side for now and coming back to this issue later.

OpenCV for Android:

Getting OpenCV to work in the project is a bit trickier. I’ve run into issues I’d never encountered before:

  • I think I missed a few steps when installing the OpenCV SDK in the project, so it didn’t seem to find the library.
  • My app wouldn’t run anymore, failing with `INSTALL_FAILED_NO_MATCHING_ABIS`, because I had copied the files for the wrong architecture when adding the OpenCV SDK files to my app.

After fixing this I was able to run the app again, but this time it’s crashing when trying to access the camera :/ …

Day 16

After spending quite a lot of time trying to figure out why the camera wasn’t working anymore, I realised that the app’s camera permission was turned off. After turning it back on on my phone, everything worked just fine.

I should try and figure out a way to ask for permission when launching the app so it won’t happen again.
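Since Android 6.0, the camera permission has to be requested at runtime as well as declared in the manifest; the gist of what I need to add is something like this (simplified, using the support library):

```java
private static final int CAMERA_PERMISSION_REQUEST = 1;

// In onCreate(), before starting the camera preview.
if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    // Shows the system dialog; the result comes back in onRequestPermissionsResult().
    ActivityCompat.requestPermissions(
            this,
            new String[]{Manifest.permission.CAMERA},
            CAMERA_PERMISSION_REQUEST);
}
```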

After installing OpenCV in the app, I created another activity to try and get the camera feed working with OpenCV, without changing the previous activity I created.

The “test” activity is working fine, so I could remove the other one that’s not using OpenCV, but I want to try and run an OpenCV sample on it first.

I looked for some code samples and found one that detects colour blobs. I copied and pasted the code and it worked just fine! :D So now I know my setup should be good to start working on light/flash recognition.

Day 17

I had an issue with the camera preview displaying in landscape instead of portrait, and I finally managed to find a solution! :D
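One common fix I came across rotates each frame 90° by transposing and flipping it before it gets drawn; roughly this sketch (it may not be exactly what I ended up with):

```java
// Inside onCameraFrame(): rotate the frame 90° clockwise so the
// portrait preview isn't displayed sideways.
Mat rgba = inputFrame.rgba();
Mat rotated = new Mat();
Core.transpose(rgba, rotated);
Core.flip(rotated, rotated, 1); // flip around the y-axis
return rotated;
```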

As I’ve been focusing on that, I haven’t spent much time looking into light detection in OpenCV, so that’s what I’m gonna spend some time on now.

Spent a few hours trying to figure out how I’m going to do that, and it seems like converting the feed from RGB to grayscale and then applying a threshold could help me separate the light from the rest of the feed. It took me quite a few hours to find a solution that worked, but this one did the job: I now get the feed in black and white.
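The processing happens on each frame in the OpenCV camera callback; roughly something like this (the threshold value here is a guess for the sketch, not a tuned value):

```java
// Inside onCameraFrame() of the OpenCV camera view listener.
Mat gray = new Mat();
Imgproc.cvtColor(inputFrame.rgba(), gray, Imgproc.COLOR_RGBA2GRAY);
// Keep only the very bright pixels: above the threshold -> white, below -> black.
Imgproc.threshold(gray, gray, 220, 255, Imgproc.THRESH_BINARY);
return gray;
```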

Day 18

This morning I managed to add some blur to the image so it should be easier to detect bright spots. I probably need to add some threshold now.
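The blur is one extra call on the grayscale frame before the threshold step; something like this (the kernel size is picked arbitrarily for the sketch):

```java
// Smooth the grayscale frame so isolated bright pixels and noise
// don't get picked up as light by the threshold.
Imgproc.GaussianBlur(gray, gray, new Size(15, 15), 0);
Imgproc.threshold(gray, gray, 220, 255, Imgproc.THRESH_BINARY);
```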

Day 19

Didn’t do much today, just played around with different threshold values to try and get the lights out of the background noise.

I’m gonna try and use the findContours method to identify these blobs.

Day 20

I’m not super sure I’m going in the right direction now, but I still need to try and identify the blobs visible in the camera feed and then find a way to track an on/off state for them, depending on whether they’re visible or not.

Day 21

I didn’t really get anywhere yesterday, but today was a bit more productive. I managed to get some contour detection working for the blobs of light in the camera preview feed. This solution was very helpful.

Now I’m thinking I should calculate the size of each detected blob, considering the biggest one is probably the light we’re interested in tracking.
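Sketched out, the plan looks like this (using the thresholded Mat from the earlier steps; the variable names are mine, not the app’s):

```java
// Find the contours of the bright blobs in the thresholded frame.
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(thresholded, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

// Keep the contour with the largest area: most likely the flashlight.
double maxArea = 0;
MatOfPoint largest = null;
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largest = contour;
    }
}

if (largest != null) {
    Rect box = Imgproc.boundingRect(largest);
    // ... compare `box` / `maxArea` against a size threshold or the center area
}
```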

Day 22

I found a way to get the largest blob but, depending on the environment, the light detection is hard because the background is too noisy, so I decided to change the strategy a little: I now draw a rectangle at the center of the camera preview as well as the bounding rectangle of the largest blob detected.

The user will be asked to point the phone at the light so that the largest blob is inside the rectangle displayed at the center of the camera preview.

This way, it may be easier for me to track some kind of on/off state to be able to translate that to text… Hopefully :/

After doing some research, I’m thinking I may have to change the way this feature works again :/. This tutorial seems to handle the logic a little better, so I’m gonna try and reproduce that and see how it goes.

Day 23

I’m trying to draw a circle at the coordinates of the *touch* event but, for some reason, when I log the coordinates they are correct, yet when I try to draw a circle at those coordinates it isn’t drawn at the correct location.

I wonder if it’s because I had to rotate the feed displayed from the camera preview (it was displayed horizontally rather than vertically), so maybe the coordinates are mapped to the non-rotated feed…?

I managed to make the circle appear where the user touches the screen but now I think I’m going down a rabbit hole… I’m not sure what my next step should be…

Day 24

Got a bit lost yesterday. Didn’t really know what steps to take to get to what I want.

Today I tried a few different things again and I may have found an “ok” solution to my problem. I started the day by putting aside what I did yesterday: instead of trying to get an average brightness value from the user input (the circle), I tried to select a single pixel by drawing 2 lines (horizontal and vertical) at the coordinates of the user’s touch on the camera preview. I wanted to use the intersection of these 2 lines to get a pixel brightness value, but I don’t think it’s a very good solution: the camera can move, so a single pixel is too specific, and I actually still don’t know how to get the brightness value.

Instead, I kind of went back to something I was thinking about before: I can have a rectangle in the middle of the screen, ask the user to place the light inside this area, check the area of the blob of light and, depending on its size, detect whether the light is on or off.

So… I now have a counter that increments every time the light is detected inside the center area. What I should add now is a timer that measures how long the light is detected for.

Day 25

Not much.

Day 26

I managed to create 2 timers to measure how long the light stays on and off, to be able to detect characters and spaces.

I started by testing my logic with some console messages but now I need to figure out how to code the translation into text.

The camera preview feed does not know the length of the message to translate, but we can use the length of each pause to print out the letters and words.
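The rough idea is to classify each measured duration against the Morse unit length; a sketch of that classification (the unit and thresholds are my assumptions for the sketch, not the app’s exact values):

```java
// Hypothetical helper: classify measured on/off durations against the Morse unit.
public class MorseTiming {

    private static final long UNIT_MS = 80; // assumed unit duration in milliseconds

    // How long the light stayed ON: ~1 unit is a dot, ~3 units is a dash.
    public static String classifyOn(long onMs) {
        return onMs < 2 * UNIT_MS ? "." : "-";
    }

    // How long the light stayed OFF: ~1 unit separates elements,
    // ~3 units ends a letter, ~7 units ends a word.
    public static String classifyOff(long offMs) {
        if (offMs >= 5 * UNIT_MS) return "WORD_GAP";
        if (offMs >= 2 * UNIT_MS) return "LETTER_GAP";
        return "ELEMENT_GAP";
    }
}
```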

Day 27

Nothing.

Day 28

Tested my logic and it doesn’t always work, but I was able to play with an online Morse translator and log some characters using my app.

Now that I can get some characters, I should try to map them back to real letters, and then I should be pretty much done with my prototype!

I’m a bit late though, because it’s actually August 1st and I should be starting a new challenge, but as I’m quite close to finishing, I’ll spend some time on it today.

IT’S KIND OF WORKING!! :D

I finally got to a point where I’m kind of happy with the prototype. I fixed the logic a little, made some changes to the UI of the home screen and added a text view on the Morse-to-text screen so the translation is visible under the camera preview :D.

Of course, it’s not working 100% of the time, but as a first attempt at making an Android app with computer vision, I’m pretty happy!

Quick recording of the Morse -> Text feature

