Integrating my Neptune Water Meter with HomeAssistant

Keilin Bickar
Sep 4, 2019 · 7 min read


Neptune E-Coder Water Meter

A couple years ago I installed a Sense energy monitor that clamped onto my power mains to track electricity usage. It does its job, and I was even able to write a new HomeAssistant component to integrate it with the rest of my smart devices and show a nice graph on the dashboard. I wanted to do the same for the other main utility, water, so I started looking at options. Not being a plumber, I was hoping to interface with the Neptune E-Coder reader that was already hooked up.

The usual way to read the meter is through a wireless unit wired into it that the town can read by driving by at street level, but the protocol isn't open, and from what I could find it uses a two-way exchange rather than something that can be passively read. The other way is to read the meter itself: if the solar cells are illuminated, a little LCD display shows the total water usage for a few seconds, followed by the current rate of usage.

Water meter illuminated by a flashlight

LCD segment displays are pretty simple to read (or so I thought), so I built a device that sits on top of the meter to read it. It's composed of a small camera (VC0706 "Open Smart"), a microcontroller (Wemos D1 Mini Lite), and some LEDs.

To line everything up, I 3D printed a couple of parts. The base fits fairly snugly on top of the water meter and has two rows of 5V LED strip lights in it. The larger strip illuminates the solar cells and the smaller strip lights the display more consistently. Both strips are wired together and run up to the main section, where they are switched on and off by a MOSFET controlled by the Wemos.

LEDs in base

The top section has a bracket and hole for the camera that holds it at the appropriate distance to photograph the LCD display. The rest of the electronic components are shoved in wherever there's space. Wiring turned out to be a bit of a pain, so I ended up adding a little plate that the camera plugs into so it can be removed more easily. Everything is powered from a USB micro port that plugs into a USB wall adapter.

Cut-away version of top section used for development

The software side, as expected, took a bit of work. The camera has a resolution of 640x480 but has an option to produce lower-resolution images, so I have it sending 160x120 JPGs. For some reason the camera worked with a Wemos D1 Mini v1 and Wemos D1 Mini Lite, but not a Wemos D1 Mini v2. The camera is wired directly into the TX/RX pins of the ESP8285, which means it needs to be unplugged when flashing new code via the USB port.
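For the curious, the VC0706 speaks a simple request/response serial protocol. Here's a rough Python/pyserial sketch of the capture handshake, using the publicly documented command bytes; the real device code is Arduino C++, so treat this as an illustration rather than an excerpt (the port name is a placeholder):

```python
import serial

def vc0706_cmd(ser, command, args=b""):
    # Every request is: 0x56 (sync), serial number, command, arg length, args
    ser.write(bytes([0x56, 0x00, command, len(args)]) + args)
    return ser.read(32)

ser = serial.Serial("/dev/ttyUSB0", 38400, timeout=2)   # placeholder port
vc0706_cmd(ser, 0x36, b"\x00")          # FBUF_CTRL: freeze the current frame
resp = vc0706_cmd(ser, 0x34, b"\x00")   # GET_FBUF_LEN: reply carries the JPEG size
# READ_FBUF (0x32) then streams the frozen JPEG out over serial in chunks,
# and FBUF_CTRL with 0x03 resumes the live frame for the next capture.
```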

Now, the ESP8285 that the Wemos D1 Mini Lite uses isn't typically known for image processing, but its 1MB of built-in flash and roughly 80KB of RAM are enough to do the job; I needed far less than that in the end. The process starts with the ESP8285 receiving a message to get a reading. It powers on the LEDs, waits 10 seconds, then tells the camera to take a picture. The camera sends back a 160x120 pixel JPEG image. Due to the way the camera is mounted, the image actually comes out upside-down.

Original image from camera (flipped)

Using the JPEGDecoder library, the image buffer is decoded into something readable; however, JPEGs are decoded in blocks. The top and bottom of the image were always uninteresting, so they were skipped. The remaining blocks were read into a buffer the width of the image and the height of one block so the image could be processed row by row.

Picture from meter split into blocks
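In Python terms, the block-to-row shuffling looks roughly like this (the real code uses JPEGDecoder's C++ API; the 16x8 block size and the shape of the block iterator are illustrative assumptions):

```python
# Reassemble JPEG MCU blocks into row-order bands so the image can be
# scanned a row at a time. Assumes a 160px-wide image and 16x8 blocks
# arriving left-to-right, top-to-bottom, as (mcu_x, mcu_y, pixels).
WIDTH, MCU_W, MCU_H = 160, 16, 8

def rows_from_blocks(blocks):
    band = [[0] * WIDTH for _ in range(MCU_H)]
    current = 0
    for mcu_x, mcu_y, pixels in blocks:
        if mcu_y != current:        # finished a full band of blocks
            yield from band         # hand the band off row by row
            band = [[0] * WIDTH for _ in range(MCU_H)]
            current = mcu_y
        for dy in range(MCU_H):     # copy this block into the band
            band[dy][mcu_x * MCU_W:(mcu_x + 1) * MCU_W] = pixels[dy]
    yield from band                 # flush the last band
```

Bands that fall in the uninteresting top or bottom of the frame just get thrown away instead of yielded.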

As the image is essentially grey-scale, only the green channel is saved, reducing the storage to one byte per pixel. To ensure the numbers are in the expected locations in case the camera gets jiggled slightly, the program searches for the bottom-right corner of the display. It's very high contrast, so it's easy to pick out. Once that origin point is found, the digits are extracted into their own 8x16px buffers.

8x16px boxes for each digit and origin
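The search itself can be sketched in a few lines of Python (the dark threshold and digit offsets below are placeholders, not the tuned values):

```python
# Find the display's high-contrast bottom-right corner, then slice each
# digit out at a fixed offset from it. img is a list of rows of 0-255
# grey values; DARK and the offsets are placeholder values.
DARK = 80
DIGIT_W, DIGIT_H, N_DIGITS = 8, 16, 6

def find_origin(img):
    # Scan from the bottom-right for the first dark pixel whose
    # neighbors above and to the left are also dark.
    for y in range(len(img) - 1, 0, -1):
        for x in range(len(img[0]) - 1, 0, -1):
            if img[y][x] < DARK and img[y][x - 1] < DARK and img[y - 1][x] < DARK:
                return x, y
    return None

def extract_digits(img):
    origin = find_origin(img)
    if origin is None:
        return []
    ox, oy = origin
    digits = []
    for i in range(N_DIGITS):
        x0 = ox - (N_DIGITS - i) * DIGIT_W   # digits sit left of the origin
        y0 = oy - DIGIT_H
        digits.append([row[x0:x0 + DIGIT_W] for row in img[y0:y0 + DIGIT_H]])
    return digits
```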

Now, in theory it should be possible to read a seven-segment display by just reading one pixel for each segment and comparing it to one pixel of the lighter middle area. In practice, the numbers aren't always lined up perfectly, the lighting isn't ideal, and the edges can be a bit blurred.

Theoretical points to check

In practice, I had to first boost the contrast, then check several pixels. Boosting the contrast turned out to be fairly simple. Since each row of a number contains at least one segment that's dark, I could find the brightest and darkest pixels of a row, then set anything over the halfway point to white and anything below it to black. Doing it row by row also helped with the uneven lighting, as individual rows tended to have consistent lighting even when a column did not.

Digit with increased contrast
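The per-row stretch is only a few lines; in Python it amounts to:

```python
# Row-by-row contrast boost. Every row of a digit contains at least one
# dark segment pixel, so the row's own min/max give a reliable local
# threshold even when lighting varies down the image.
def boost_contrast(digit):
    out = []
    for row in digit:
        lo, hi = min(row), max(row)
        mid = (lo + hi) // 2
        out.append([255 if p > mid else 0 for p in row])
    return out
```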

I went through two methods of extracting the digit from that. The first was to check a block of pixels in each of the seven segments: if the block was mostly black, the segment was considered filled. Each segment was assigned a bit in a byte, and the resulting byte value was looked up to get the digit.

Blocks checked for segments
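In sketch form (the segment regions here are eyeballed placeholders; the tuned blocks from the real code differ):

```python
# Method one: sample a small block inside each of the seven segments,
# pack the dark/light results into a byte, and look the byte up.
# Regions are (x0, y0, x1, y1) within the 8x16 digit: placeholder
# guesses, not the blocks I actually settled on.
SEGMENTS = {             # standard a-g seven-segment layout
    "a": (2, 0, 6, 2),   "b": (6, 2, 8, 7),   "c": (6, 9, 8, 14),
    "d": (2, 14, 6, 16), "e": (0, 9, 2, 14),  "f": (0, 2, 2, 7),
    "g": (2, 7, 6, 9),
}
PATTERNS = {0x3F: 0, 0x06: 1, 0x5B: 2, 0x4F: 3, 0x66: 4,
            0x6D: 5, 0x7D: 6, 0x07: 7, 0x7F: 8, 0x6F: 9}

def read_digit(digit):   # digit: 16 rows of 8 pixels, each 0 or 255
    bits = 0
    for i, (x0, y0, x1, y1) in enumerate(SEGMENTS.values()):
        block = [digit[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        if block.count(0) > len(block) // 2:   # mostly dark -> segment on
            bits |= 1 << i
    return PATTERNS.get(bits)   # None if the pattern wasn't recognized
```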

While this method worked most of the time, there were a lot of exceptions where the numbers didn't quite line up right and it couldn't get a good reading. I actually made a small Python program that would run the algorithm on a list of images to see how accurate my changes were as I resized the blocks, but selecting the best blocks was a bit frustrating, so I figured: why not have a computer do the work?
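The harness was nothing fancy; roughly this, where the decoder is passed in and the filenames carry the hand-checked reading (both assumptions for illustration):

```python
# Score a decoder over a folder of captures whose filenames carry the
# expected reading (e.g. 0123456.raw). The naming scheme and the
# read_meter argument are stand-ins.
from pathlib import Path

def score(folder, read_meter):
    hits = total = 0
    for path in sorted(Path(folder).glob("*.raw")):
        total += 1
        if read_meter(path.read_bytes()) == path.stem:
            hits += 1
    print(f"{hits}/{total} correct")
```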

For method two, I took my original method and had it read hundreds of images. A basic sanity check flagged suspect results (the water meter almost always goes up, but not by too much), and I manually corrected the ones flagged as wrong. From there I used the sklearn library to train a DecisionTreeClassifier, which takes a list of the 8x16 pixels, each either on or off, and produces a digit.
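The training step is only a few lines with sklearn (a sketch; loading the labeled captures is elided, and the depth cap is my guess at where the 7-comparison worst case comes from):

```python
# Fit a decision tree mapping flattened 8x16 binary digit images to
# digit labels.
from sklearn.tree import DecisionTreeClassifier

def train(samples, labels):
    """samples: list of 128-element 0/1 lists (flattened 8x16 digits);
    labels: the hand-verified digit for each sample."""
    clf = DecisionTreeClassifier(max_depth=7)  # depth cap is an assumption
    return clf.fit(samples, labels)
```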

Other models could definitely have been more accurate, but a decision tree converts very easily into if-statements that run quickly on my ESP8285. The worst case is only 7 comparisons, and the whole tree fit into 33 binary if-else statements, or as it's known in the industry: "AI".

The pinnacle of machine learning
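Turning the fitted tree into those if-else statements is a short recursive walk over sklearn's tree internals; something like:

```python
# Walk a fitted sklearn tree and print nested C-style if/else blocks
# for the Arduino sketch. Inputs are 0/1 pixels, so every split
# ("pixel <= 0.5") is just a test of whether one pixel is off.
def to_c(clf, node=0, indent="  "):
    tree = clf.tree_
    if tree.children_left[node] == -1:              # leaf: emit its digit
        digit = clf.classes_[tree.value[node].argmax()]
        print(f"{indent}return {digit};")
        return
    print(f"{indent}if (!pixel[{tree.feature[node]}]) {{")
    to_c(clf, tree.children_left[node], indent + "  ")
    print(f"{indent}}} else {{")
    to_c(clf, tree.children_right[node], indent + "  ")
    print(f"{indent}}}")
```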

Once the digits are identified, the results are converted to a number and sent off to HomeAssistant. As with all of my custom devices, this one uses the MQTT protocol to communicate. HomeAssistant receives the data, checks that the water usage is higher than the previous reading (though I identified a couple of times it actually went DOWN by 0.001ft³), then stores the value to show on the front end and draw pretty graphs.

Water Usage in HomeAssistant
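The acceptance check boils down to a couple of comparisons; as plain Python (a sketch of the logic with a placeholder upper bound; in reality it lives in my HomeAssistant configuration):

```python
# Accept a new reading only if it isn't below the last one (allowing
# the tiny 0.001 ft^3 backslides I observed) and hasn't jumped by an
# implausible amount.
TOLERANCE = 0.001   # ft^3: observed harmless backslide
MAX_STEP = 5.0      # ft^3 per reading: placeholder sanity bound

def accept(reading, last):
    if last is None:                 # first reading after startup
        return True
    return last - TOLERANCE <= reading <= last + MAX_STEP
```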

The final process is:

  1. Device boots, connects camera, and registers with MQTT server
  2. Once per minute HomeAssistant sends a request for a water reading
  3. Device powers on LEDs, waits 10 seconds
  4. Camera takes a photo, transfers it to Wemos D1
  5. Photo is processed, converted into a meter reading
  6. If processing is successful: reading is sent via MQTT back to HomeAssistant
  7. The device is restarted

Final product in action!

Full code available on GitHub
