It’s all in the “flow”.

MELD: My HackGT 2016 Project

Hamilton Greene
5 min read · Sep 29, 2016

This weekend, I attended HackGT, Georgia Tech’s official hackathon. I was joined by over 1,000 others (according to their site) to spend the weekend hacking, eating (#snackgt), and otherwise losing sleep.

HackGT

I didn’t have a project idea coming in (though I always have some in my icebox), so I took some time the first day to explore team/project openings to see if anything jumped out at me. I looked around (even attending a team mixer), but didn’t come across any teams/projects that screamed “BUILD ME!”.

So, I decided I’d use the weekend to test my mettle and put some dev time into one of my iceboxed projects. I also spent a lot of time talking to companies, eating the copious amounts of food, and attempting to win raffles.

Digital links in a physical world.

MELD

MELD is a concept I’ve been thinking about for quite a while now. The essence is something like this:

Every physical object has a large amount of data relevant to it. How can we best link that data?

This is a pretty big question, so I’ve scaled it down over time to something more digestible with respect to my skills and time/effort budget. Instead of linking all relevant data to a physical object, I just want to link some data to it.

My basic solution: A QR code without the QR code.

I know, I know. You’re probably saying “What’s wrong with the current QR code?” You’re right. It’s well-adopted and it works.

My issues with it are twofold:

  1. The process of creating new codes is annoying (you often have to use a printer), and
  2. the code itself is ugly (a white blob on an otherwise pristine piece of advertising).

If I could build something that creates a code from a simple picture of an object and reliably retrieves the linked data from another picture of said object, then I could solve both issues in one fell swoop.

Use Cases

Here are some of the use cases I’ve come up with through development.

Advertisements

Companies pay top dollar to have pretty advertisements created. Ugly QR codes detract from the aesthetic (most of the time). My system could boost the effectiveness of their ads.

Plus, if I added some sort of management system, I could allow advertisers to forward links on old ads to current campaigns, letting out-of-date materials retain some form of value.

Quick-links

Companies

At the hackathon, many companies had surveys they wanted you to complete. This involved typing a long URL into your phone. I hate typing long URLs into my phone. With my system, you could simply take a picture and get to the survey.

Candidates

I have a bunch of stickers on my water bottle. Wouldn’t it be cool if I could take a picture of my GitHub cat (with a tag declaring it should go to my link, not someone else’s) and have it send me to my GitHub profile? Wouldn’t it be cool if a recruiter could do that?

What about your resume/portfolio?

How it Works

This is a lot better than my hand-drawn monstrosity.

Create a MELD

  1. Find an object in the real world to serve as an anchor and some data you’d like it to point to.
  2. Take a picture of said object.
  3. That picture is then sent to a server that transforms and normalizes the image before spitting back a hashed value of the image, essentially summarizing it.
  4. The hashed value is coupled with your payload (the data you’re trying to link to the object) and sent up to a server to store.
  5. Voila, you now have a link.
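
To make the hashing and storing steps concrete, here’s a minimal sketch in Python using the ImageHash library (the same one the hashing server below is built on). The in-memory dict is just a stand-in for the real data store, and the function and file names are made up for illustration.

    # Minimal sketch of the hashing + storing steps. The dict below is a
    # stand-in for the real data store, and phash is one of several
    # perceptual hashes ImageHash offers.
    from PIL import Image
    import imagehash

    meld_store = {}  # hex hash string -> payload

    def create_meld(image_path, payload):
        """Hash a picture of the anchor object and couple it with a payload."""
        meld_hash = imagehash.phash(Image.open(image_path))  # "summarizes" the image
        meld_store[str(meld_hash)] = payload
        return str(meld_hash)

    # e.g. create_meld("sticker.jpg", "https://github.com/your-username")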

Retrieve your MELD

  1. Take a picture of the MELDed object.
  2. That picture is once again sent to the server to transform and normalize. Theoretically, the transformation and normalization stages could rely on a known MELD tag to discern orientation, size, and shape. It spits back the hashed value of the final image.
  3. That hash is sent back to a server to retrieve the payload stored behind an image within a certain threshold of difference. We don’t want to check for exact matches because it’s unlikely that any two pictures will be exactly the same. That being said, we can be reasonably sure that similar pictures will have similar hashes, so we can check ranges of hashes to find hits (see the sketch below).
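
Here’s a rough sketch of what that threshold lookup could look like, again in Python with ImageHash. Subtracting two hashes gives their Hamming distance; the cutoff of 10 bits is an arbitrary number picked for illustration, not a tuned value.

    # Rough sketch of the threshold lookup. `store` maps hex hash strings to
    # payloads (like meld_store in the earlier sketch); the max_distance
    # cutoff is arbitrary, not a tuned parameter.
    from PIL import Image
    import imagehash

    def retrieve_meld(image_path, store, max_distance=10):
        """Hash a new picture and return the payload of the closest stored hash."""
        query_hash = imagehash.phash(Image.open(image_path))
        best_payload, best_distance = None, max_distance + 1
        for stored_hex, payload in store.items():
            # Subtracting two ImageHash objects gives their Hamming distance.
            distance = query_hash - imagehash.hex_to_hash(stored_hex)
            if distance < best_distance:
                best_payload, best_distance = payload, distance
        return best_payload  # None if nothing fell within the threshold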

One issue with this approach is that it wastes a lot of space and increases the likelihood of false positives. While this is an issue, I’d argue it’s not game-breaking. If we can retrieve the correct payload 85% of the time, I’d wager it’s still faster and more convenient than typing something in yourself, especially if errors could be fixed with a simple re-scan.

What I Made

Due to time constraints, my limited personal dev-power, and the draw on my attention from other hackathon activities, I didn’t get around to building a frontend for my project. As you might imagine, this made it extremely difficult to demo. In the end, I showed off a crude system design sketch and my Postman requests to the backend to prove functionality.

  1. Hashing Server

I built the hashing server in Python because the best hashing library I could find was written in Python. Seriously, that’s the only reason.

  • Languages: Python
  • Frameworks/Libraries: Flask, ImageHash, Virtualenv
  • Platform: Digital Ocean, Ubuntu
  • GitHub: HASH-TIME
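
For a sense of scale, the whole endpoint boils down to something like the sketch below. The route name, upload field, and response shape are assumptions for illustration, not the actual HASH-TIME code.

    # Rough sketch of a Flask + ImageHash endpoint. The route name, upload
    # field, and response shape are illustrative assumptions, not the actual
    # HASH-TIME code.
    from flask import Flask, request, jsonify
    from PIL import Image
    import imagehash

    app = Flask(__name__)

    @app.route("/hash", methods=["POST"])
    def hash_image():
        # Expect the picture as a multipart upload under an "image" field.
        image = Image.open(request.files["image"].stream)
        return jsonify({"hash": str(imagehash.phash(image))})

    if __name__ == "__main__":
        app.run()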

  2. ImageHashDB

This is the server + data store that held everything.

  • Languages: JavaScript
  • Frameworks/Libraries: NodeJS, SailsJS
  • Platform: Served locally
  • GitHub: MELD-AETHER

Future Work

Between school, work, and my full-time job search, I’ve got a lot on my plate this semester. I’d love to get the system up and running to at least a usable, if not production-ready (read: alpha), level before the semester draws to a close, but progress is dependent on available time/energy.
