We cannot give users a great experience with our apps and interfaces if we do not first consider something deeply personal to them: Which hand is dominant? How do they like to use their phone?
Asking users early in their experience whether they are right- or left-handed, or have a preference either way, is something I believe to be vastly overlooked in the world of mobile interface design. Yes, we have hands that look relatively the same. And yes, we have fingers, five on each hand. However, they are attached to separate limbs of the body, and they have different dominant functions. We write with one hand (usually), and we use a left- or right-handed glove or golf club, yet we are forced to use universal controls on a mobile phone or tablet built for our hands. It’s time to make these interfaces more personal.
Maybe this is more noticeable to me because I’m a left-handed individual. I’ve been forever doomed to use tools built for a right-handed populace. True, we lefties only make up about 10% of the population, but even so, is that a small enough number to neglect? Not by the standards of most technology businesses.
In my opinion, we need to go even further than right- or left-handed interfaces. Certainly, a navigation system or interface could be far more customizable and personal. What if I could hold and drag my navigation to the position most comfortable for me? Why can’t I choose the particular spot that is easiest to reach with one finger while taking a photo on my phone?
To my delightful surprise, Facebook seems to be on this path to a truly custom “user-driven” interface. Whatever their motivation with the new “floating head-dots” for Facebook chat, to me it feels like a step in the right direction. Though the Facebook interface loses a bit of its charm as you pass four open chat windows, it seems more ground-breaking than it’s receiving credit for. People seem to be brushing it off as just a casual update to the Facebook app. I believe I heard someone say:
“Yeah, you can move dots around with faces on them and click on them to message people.”
It’s this technology, not Facebook Home, that I believe to be their real evolution in making the experience more personal for their users. I can’t say that I need a whole phone to use Facebook, but being able to use it exactly how I prefer (not in the old Myspace customization sense) looks like a promising future.
When I was in school, I always learned and read about the reasons why this user-personalization shouldn’t be an option. “It’s about making the design disappear, making the content shine and communicate,” my teachers would say. Yet somehow I feel that by forcing people to use a system that we deem best, with no option to make it even slightly more comfortable, we do just the opposite and create designs that get in the way of content, making the interface and navigation more noticeable and frustrating.
I understand the intense legwork involved in developing and accounting for this type of designed personalization. However, it’s worth remembering that our innovation is for the good of the user. Most of us in the creative/tech market really enjoy breaking down how these apps work, but it’s time we let the users have some more control. We’ve been focused on building environments for them to create content, but I think we can let them help create the interface on which they create that content. I’m not calling for a drag-and-drop, move-anything-anywhere type of interface, but rather controlled zones that offer a bit more freedom in the way we work and live with these very important devices.
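To make the “controlled zones” idea concrete, here is a minimal sketch of how a dragged control could snap to a small set of designer-approved resting spots rather than floating anywhere. The zone names, coordinates, and the `snapToZone` helper are all hypothetical illustrations, not any real platform API.

```typescript
interface Point { x: number; y: number; }

// A few designer-defined "controlled zones" where the nav may come to rest.
// Coordinates are illustrative, assuming a roughly 375x667-point screen.
const allowedZones: Point[] = [
  { x: 40, y: 600 },   // bottom-left: thumb-friendly for left-handed users
  { x: 335, y: 600 },  // bottom-right: thumb-friendly for right-handed users
  { x: 187, y: 620 },  // bottom-center
];

// When the user lets go of the control, move it to the nearest allowed zone.
function snapToZone(drop: Point, zones: Point[] = allowedZones): Point {
  let best = zones[0];
  let bestDist = Infinity;
  for (const zone of zones) {
    const dist = Math.hypot(drop.x - zone.x, drop.y - zone.y);
    if (dist < bestDist) {
      bestDist = dist;
      best = zone;
    }
  }
  return best;
}
```

A drop near the lower-left corner, say `snapToZone({ x: 50, y: 580 })`, would settle into the bottom-left zone. The point is that the designer still constrains the layout; the user just chooses among comfortable options.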
We can cache and save user settings about font sizes, articles they like, and boatloads of pinned pictures, so you’d think we would be able to save the way they like to use our apps and interfaces. In the end, apps are for users anyway, aren’t they?
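Persisting a layout preference is no harder than persisting a font size. As a sketch only, assuming a hypothetical `UIPrefs` shape and an in-memory store standing in for something like the browser’s localStorage or a mobile platform’s settings store:

```typescript
type Handedness = "left" | "right";

// Hypothetical preference shape: which hand, and where the user
// last dragged their navigation control.
interface UIPrefs {
  handedness: Handedness;
  navX: number;
  navY: number;
}

// Minimal in-memory key-value store; a real app would back this
// with localStorage, UserDefaults, SharedPreferences, etc.
const store = new Map<string, string>();

function savePrefs(prefs: UIPrefs): void {
  store.set("ui-prefs", JSON.stringify(prefs));
}

function loadPrefs(fallback: UIPrefs): UIPrefs {
  const raw = store.get("ui-prefs");
  return raw ? (JSON.parse(raw) as UIPrefs) : fallback;
}
```

On next launch, `loadPrefs` restores the user’s chosen hand and navigation spot, falling back to a sensible default for first-time users.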
This is just the beginning of my thoughts on the subject. I know companies like Samsung have created finger detection software that can sense your finger coming before it even touches the screen. To me, this seems like the perfect starting point to look toward software that might be able to recognize right from left, finger from thumb, and so on. Eventually, the interface could react intuitively to the user, not the user trying to figure out a supposedly intuitive interface design.