90 is the new 80

My girlfriend is getting a Master’s in Public Health and has a personal interest in geriatric care, so in preparing a blog post about how machine learning could potentially help elderly folks 10 years from now, I asked her which areas they could use the most help in. She gave me three general areas where she sees elderly people struggle the most late in their lives: their failing eyesight and how it affects their day-to-day lives, their failing hearing and how it affects their day-to-day lives, and their failing memory, which affects not just their day-to-day lives but their entire psychological well-being. I’ll suggest two ways machine learning could be extended to help with failing eyesight, one that could help with failing hearing, and two that could help with failing memory.

Eyesight. One thing my girlfriend mentioned is that elderly people generally love reading books more and more as they grow older, but lose the ability to do so. Helping with this would be easy if all of their favorite books existed as digital copies or audio recordings, because then you could run the digital copy through text-to-speech software or just play the recording. But for many of the people who will be old in 2026, their favorite books may not have digital copies readily available or audio recordings at all. Machine learning could step in with handwriting recognition algorithms (the books may not be handwritten, but different books use different fonts, so an algorithm like this would still be helpful if not outright necessary) to power a device that any elderly person could hand their favorite book to and listen to it read the book aloud; a rough sketch of that idea follows this paragraph. The second way machine learning could help folks who are losing their sight is a variation on the machine learning algorithm Facebook uses to describe photos to blind people on its website. A similar algorithm could identify the faces of loved ones as they walk through the door and tell the elderly person who they are, so they do not have to guess and get embarrassed if they are wrong.
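To make the book-reading idea a little more concrete, here is a minimal Python sketch of what that device’s software might do: photograph a page, recognize the text whatever the font, and voice it. The pytesseract OCR package, the pyttsx3 text-to-speech package, and the file name are my own illustrative choices, not anything from the sources below.

    # Minimal sketch: recognize the printed text on a photographed page,
    # then read it aloud. Library choices here are illustrative assumptions.
    from PIL import Image
    import pytesseract
    import pyttsx3

    def read_page_aloud(image_path):
        page = Image.open(image_path)
        # OCR copes with whatever font the book happens to use.
        text = pytesseract.image_to_string(page)

        # Voice the recognized text at a slower, easier-to-follow pace.
        engine = pyttsx3.init()
        engine.setProperty("rate", 140)
        engine.say(text)
        engine.runAndWait()

    read_page_aloud("favorite_book_page_12.png")

In a real device the camera, lighting, and page turning would be the hard parts; the recognition and voicing pieces largely exist today.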

Hearing. In the same way described at the end of the eyesight section, the loved ones of elderly people could be identified by their voices. Some sort of device could be built that feeds information to the user (the elderly person) about who is speaking; maybe it is an addition to a hearing aid that, along with amplifying the voice signal, quickly tells the wearer who is talking. A Stanford paper linked at the bottom of this post says that speech recognition with machine learning is slow-going at this point, but it cites high variability in voice types, and that is in the context of speech-to-text. Overlaying my own assumptions, and at the risk of extrapolating a little bit, I believe speech recognition algorithms could get to a point within 10 years where they can identify a person just by how their voice sounds. Siri recognizes me and only me when I say “Hey, Siri,” so that ability just needs to be extended from that one phrase to any phrase.
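Here is a toy Python sketch of that voice-identification idea, assuming the librosa audio library (my choice for illustration): it summarizes each enrolled loved one’s voice as averaged MFCC features and matches a new clip to the closest one. A production system would use learned speaker embeddings instead, but the shape of the idea is the same.

    # Toy speaker identification: summarize each voice as the average of
    # its MFCC frames, then match a new clip to the closest enrolled voice.
    # File names and the MFCC approach are illustrative assumptions.
    import numpy as np
    import librosa

    def voiceprint(wav_path):
        audio, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    def identify(enrolled, clip_path):
        clip = voiceprint(clip_path)
        # Cosine similarity between the clip and each loved one's voiceprint.
        def score(name):
            v = enrolled[name]
            return np.dot(clip, v) / (np.linalg.norm(clip) * np.linalg.norm(v))
        return max(enrolled, key=score)

    enrolled = {"your son": voiceprint("son.wav"),
                "your granddaughter": voiceprint("granddaughter.wav")}
    print(identify(enrolled, "greeting_at_the_door.wav"))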

Memory. Failing memory may be what affects those in geriatric care the most. A person’s memory is a core part of who they are, so where the previous implementations of machine learning may be “nice to have” solutions, a solution that keeps a person’s memory fresh would be essential for anyone in geriatric care. My first suggestion keeps in line with the others in that it helps with day-to-day activities: some sort of device that stays with a person for years, before or just as they begin to lose their memory, and develops a sense of their overall activity throughout a day. Then, as that person’s memory starts to fail, it could be activated to remind them what they should be doing throughout the day, e.g. wake up at this time, take these medications at these times, eat at these times, remember to eat enough of this type of food, and so on (the possibilities are endless). This implementation may be a bit “pie in the sky,” as I cannot find a similar implementation anywhere, but even without knowing much about machine learning, it seems like something that could be developed within a decade; a rough sketch of the idea closes out this section.

The last implementation is the best one and the one my girlfriend was most excited about. I also know it is possible because it would piggyback on the algorithms Facebook already uses to analyze your browsing habits. It could work in a similar way to the “reminder” solution above: it would watch a person for a time before they lose their memory and learn their habits as they browse social media (once again, a capability that any Facebook user knows exists). Then, as they begin to lose their memory, it could be “activated” to remind that person every day who the people in their life are (that is your son, that is your granddaughter) and what they did on certain dates (this day 20 years ago you sailed the Caribbean, and here are some pictures of that), and possibly even quiz them on certain people or events to actively keep their memory fresh. That would actually be really great :)
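Coming back to the first memory idea, here is a minimal Python sketch of how a device might turn a log of observed activity into a daily reminder schedule once it is “activated.” The log entries and activities are made up for illustration; a real system would learn far richer patterns than an average time of day.

    # Learn when each activity usually happens from a log collected while
    # the person's memory is still sharp, then emit a reminder schedule.
    from collections import defaultdict
    from datetime import time

    # (hour, minute, activity) observations gathered over many days.
    activity_log = [
        (7, 5, "wake up"), (7, 20, "wake up"),
        (8, 0, "take blood pressure medication"),
        (8, 10, "take blood pressure medication"),
        (12, 30, "eat lunch"), (12, 45, "eat lunch"),
    ]

    def learn_schedule(log):
        minutes = defaultdict(list)
        for hour, minute, activity in log:
            minutes[activity].append(hour * 60 + minute)
        # Average minute-of-day at which each activity was observed.
        return {activity: time(sum(m) // len(m) // 60, sum(m) // len(m) % 60)
                for activity, m in minutes.items()}

    # Once memory starts to fail, the learned routine becomes reminders.
    for activity, when in sorted(learn_schedule(activity_log).items(),
                                 key=lambda kv: kv[1]):
        print(f"{when:%H:%M} - reminder: {activity}")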

Sources

Handwriting recognition algorithm: https://people.eecs.berkeley.edu/~fateman/msw/GreedyPointMatchWriteup.pdf

Facebook photo recognition: https://www.theguardian.com/technology/2016/apr/05/facebook-ai-tag-pictures-blind-people-machine-learning

Speech recognition algorithm: http://cs229.stanford.edu/proj2013/zhang_Speech%20Recognition%20Using%20Deep%20Learning%20Algorithms.pdf