Data-driven human improvement — and how we’re using it to solve an age-old problem
We humans are intelligent; I didn't need to tell you that. But our complex lifestyles impose serious limitations on our day-to-day lives. We're constantly crunching numbers, reading texts, and writing code, and in the middle of all that busyness, we lose track of things. Cellphones, wallets, headphones, and necklaces vanish while we're writing emails, watching movies, or heading off to work, and we keep spending hard-earned money replacing items we know we still have somewhere.
For all our cleverness, there are tasks where computers naturally beat us. We forget things, but computers don't. We take our time recalling things, while a computer can tell you what you want in less than a second.
This problem is what drove my group of student developers at LA Hacks, the University of California, Los Angeles (UCLA) annual 36-hour hackathon held in March, to create a computer-vision-backed system that lets you search for any object through a web interface and find where and when it was last spotted. We called our project a "ctrl-f button for the real world". Our DevPost page is still up if you'd like to check it out!
We didn't stop there, because we recognized a real need for this kind of solution. Our research turned up existing persistent location-tracking tools, but we knew those weren't a full answer. We figured that eliminating the problem of forgetfulness required a complete turnaround in how we approached the problem.
We took a bold step in declaring our solution to this recurring mishap: integrate computer vision and machine learning into our homes. Cameras placed around a living space connect to a device that constantly processes their visual feeds and can tell us the last time and place an object we're looking for was spotted. We resolved to develop the idea from LA Hacks further and formed a company called Kenmyo, from ken (見) meaning "see" and myō (明) meaning "bright", to build a gadget that solves the problem of losing track of things. In short, we built Kenmyo as a virtual assistant that lets users search for an item.
How do we do this? We program the device to take in camera input, process each frame to find all objects within it, and link each object to the image it was found in, along with the time and location it was spotted. The user can then easily search for the object on the built-in touchscreen or from a remote device.
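To make that concrete, here is a minimal sketch of the kind of bookkeeping involved. The names here (Sighting, SightingIndex, detect_objects) are illustrative, not our actual code, and the detection step is stubbed out; a real device would run an object-detection model over each camera frame.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Optional

@dataclass
class Sighting:
    label: str           # e.g. "cellphone", "wallet"
    camera: str          # which room/camera spotted it
    timestamp: datetime  # when the frame was captured
    frame_path: str      # saved image to show the user

class SightingIndex:
    """Keeps only the most recent sighting of each object label."""
    def __init__(self) -> None:
        self._latest: Dict[str, Sighting] = {}

    def record(self, detections: List[Sighting]) -> None:
        for d in detections:
            current = self._latest.get(d.label)
            if current is None or d.timestamp > current.timestamp:
                self._latest[d.label] = d

    def search(self, query: str) -> Optional[Sighting]:
        return self._latest.get(query.lower())

def detect_objects(frame_path: str, camera: str) -> List[Sighting]:
    # Stand-in for a real object detector (e.g. an SSD or YOLO variant)
    # that would label every object visible in the frame.
    return [Sighting("cellphone", camera, datetime.now(), frame_path)]

index = SightingIndex()
index.record(detect_objects("frames/kitchen_0001.jpg", "kitchen"))
result = index.search("cellphone")
if result:
    print(f"Last seen in the {result.camera} at {result.timestamp:%H:%M}")
```

The key design choice is that the device only ever needs the latest sighting per object, so the index stays small no matter how much footage flows through it.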
Now, where this gets even more interesting is that the more time we spend with the data, the more we can learn to do with it. We can train the device to pick up on patterns in how a user interacts with their objects, and then use that information to spot irregularities. Say the user leaves their phone in the bedroom when they are supposed to take it out of the house: it's 10:00 am on a weekday, the user usually takes the phone to work at 9:00 am (so the device normally stops spotting it by then), yet the phone is still being seen at home. The device can notify the owner of the irregular behavior.
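A rough sketch of that check, using the 9:00 am weekday example above, might look like the following. The USUAL_DEPARTURE table and the function name are hypothetical; in practice the departure time would be learned from the sighting history rather than hard-coded.

```python
from datetime import datetime, time
from typing import Optional

# Hypothetical learned pattern: on weekdays the phone normally stops being
# spotted at home by about 9:00 am, because the user takes it to work.
USUAL_DEPARTURE = {"cellphone": time(9, 0)}

def check_for_irregularity(label: str, last_seen: datetime,
                           now: datetime) -> Optional[str]:
    """Flag an object still being spotted at home after it usually leaves."""
    expected = USUAL_DEPARTURE.get(label)
    if expected is None or now.weekday() >= 5:  # no pattern, or it's a weekend
        return None
    if now.time() > expected and last_seen.time() > expected:
        return (f"Heads up: your {label} was still in the house at "
                f"{last_seen:%H:%M}, but it usually leaves by {expected:%H:%M}.")
    return None

# Example with arbitrary dates: 10:00 am on a Monday, phone spotted at 9:45 am.
msg = check_for_irregularity("cellphone",
                             last_seen=datetime(2019, 4, 8, 9, 45),
                             now=datetime(2019, 4, 8, 10, 0))
print(msg)
```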
We believe we've only scratched the surface of what we can do with this data-driven, connected living space, but we're excited to keep bringing more software solutions as we learn what's possible.
We launched our product on Kickstarter two weeks ago and, at the time of writing, we're 53% of the way to our goal! Check out Kenmyo's campaign here!
