How do you get Users to Trust your Recommendations?
On Anticipatory Design, Metadata, & How Control and Transparency can Influence Trust
Key takeaways:
- Anticipatory design: make sure your users opt in; never force it on them.
- Metadata: you really do need it to personalize, but if you don't have it, consider a Nearest Neighbor technique.
- Control: allow your users to tweak their experience.
- Transparency: show the user what inputs your recommendation engine uses.
- Trust: it is gained over time, by continuously and consistently providing value.
Anticipatory Design needs to Ask for Permission
Buzzwords like anticipatory design have been circulating a lot, but research shows that users are still uncomfortable with having a service make choices on their behalf. The most important lesson is that users are OK with anticipatory design as long as they opt in. In every scenario where anticipatory design made a choice on their behalf that they had NOT accepted in advance, the result was a negative experience. Instead, have a few hidden bonus services ready that only appear to those users whom the data show might benefit from them.
It is also worth noting that truly great anticipatory design requires a lot of data to understand a user's needs and motivations. That is not to say you should avoid it completely, but be aware that most users are still wary of giving away that much information about themselves when they can't see the purpose of sharing it.
You need tons of Users or tons of Metadata
HBO underperforms when it comes to the users' experience of personalization. Netflix is pretty good (they say their personalization made them $1 billion in revenue), but according to users it is still far from perfect, and YouTube does it even better. So why is that? Well, Netflix has more metadata than HBO, because they track basically everything and even let user data shape their shows. They have been collecting data since they sent out DVDs via snail mail, and are by no means new to data gathering. This, plus their huge user base, gives them the edge over HBO, but it doesn't explain how YouTube beats them at their own game.
This is actually quite interesting, because YouTube's metadata on its content is terrible compared to Netflix's. Each uploader has their own way of describing and labeling content, if they even care enough to write anything. So instead of relying on this, YouTube uses a method called Nearest Neighbor: it finds the users whose viewing history is most in sync with yours and starts recommending the videos in those users' histories that you haven't watched yourself.
This method, combined with trending topics, the channels you follow, and an extreme number of active users, makes for a very effective personalized experience. Long story short: you either need to label your content exhaustively like Netflix, or have enough user engagement to implement a nearest-neighbor-based algorithm.
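The nearest-neighbor idea above can be sketched in a few lines. This is a minimal illustration, not YouTube's actual system: it assumes watch histories are simple sets of video IDs (hypothetical data) and uses Jaccard similarity to find the most similar users.

```python
# Minimal sketch of user-based nearest-neighbor recommendation.
# Assumption: each watch history is a set of video IDs (hypothetical data).

def jaccard(a: set, b: set) -> float:
    """Similarity of two histories: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def recommend(target: set, histories: dict, k: int = 3) -> list:
    """Recommend videos seen by the k most similar users but not by the target."""
    neighbors = sorted(
        histories.items(),
        key=lambda kv: jaccard(target, kv[1]),
        reverse=True,
    )[:k]
    seen = set(target)
    recs = []
    for _, history in neighbors:
        for video in history - seen:
            recs.append(video)
            seen.add(video)  # avoid recommending the same video twice
    return recs

# Hypothetical users and histories
users = {
    "alice": {"v1", "v3"},
    "bob":   {"v1", "v2", "v4"},
    "carol": {"v5", "v6"},
}
print(recommend({"v1", "v2"}, users, k=1))  # bob is closest, so: ['v4']
```

A real system would weight this with the other signals mentioned above (trending topics, subscribed channels) and use a similarity measure suited to implicit feedback, but the core "people like you also watched" mechanic is exactly this lookup.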
Have these Parameters in mind
To make a great experience, users need to trust that your algorithm actually knows them well enough to make informed recommendations. If that is not the case, users will simply ignore the recommendations for fear of wasting their time. Two parameters influence a user's trust in the algorithm: transparency and control.
Allowing your users to express their feelings about content makes them aware that the service knows what they prefer. Most services let you like or dislike, rate, or otherwise inform the service of what you want more or less of. Some users want as little of this as possible; they expect the service to "just work". Others enjoy expressing why they dislike a particular piece of content (for example, unsubscribe forms in emails allow users to express why that content wasn't for them).
Too much control can be overwhelming, though, so strike a balance. In theory, the number of available feedback options could even be personalized, based on how much and how often each user interacts with them. A Netflix user who often dislikes and sorts through content could get an expanded form, with the option to tick off why they dislike it.
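One way to read that idea in code: gate the richer feedback controls behind a simple activity measure. This is a hypothetical sketch; the option labels and the threshold of ten ratings per month are assumptions, not anything a real service documents.

```python
# Hypothetical sketch: scale the feedback UI to how active a user is.
# Passive users see a simple like/dislike; frequent raters get "why" options.

def feedback_options(ratings_last_month: int) -> list:
    """Pick which feedback controls to show (threshold is an assumption)."""
    basic = ["like", "dislike"]
    detailed = [
        "not interested in this topic",
        "already seen it",
        "too similar to my history",
    ]
    if ratings_last_month >= 10:  # assumed cutoff for an "active rater"
        return basic + detailed
    return basic

print(feedback_options(2))   # passive user: just like/dislike
print(feedback_options(25))  # active rater: expanded "why" options
```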
Allowing users to inspect what behavior triggered a certain recommendation or UI change can be a very powerful trust generator. It can especially restore some faith in the service when a user feels that a recommendation or change doesn't match their preferences, as long as an equal amount of control is provided. This way the user can approve or disapprove of each change, allowing the system to continuously improve.

Take a moment to think of the last time you were frustrated with a service. Did it provide you with a way to adjust your experience for future use? Would it have made you more comfortable continuing to use the service, knowing that the issue would probably be fixed next time?

Of course, transparency and especially control also demand that you can actually follow up on users' requests and feedback. Nothing is more annoying than telling a service that you don't want to see x, only to find that x is still there the next time you log in.
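The inspect-and-correct loop above implies a small data shape: each recommendation carries the signals that triggered it, plus a slot for the user's verdict. The following is a minimal sketch under assumed names (`Recommendation`, `give_feedback` are hypothetical, as are the reason strings), not any service's real API.

```python
# Hypothetical sketch: attach triggering signals to each recommendation so the
# UI can answer "why am I seeing this?" and record approve/disapprove feedback.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Recommendation:
    item_id: str
    # Signals that triggered this recommendation, shown only on demand
    reasons: list = field(default_factory=list)
    approved: Optional[bool] = None  # None = no feedback given yet

def give_feedback(rec: Recommendation, approved: bool) -> None:
    """Record the user's verdict so the engine can learn from it."""
    rec.approved = approved

rec = Recommendation(
    "v42",
    reasons=["you watched v1", "trending in your region"],  # assumed signals
)
give_feedback(rec, approved=False)
print(rec.reasons)   # surfaced only when the user asks why
print(rec.approved)  # False: the engine now knows this one missed
```

Keeping the reasons on the object but out of the default view matches the point below: the explanation should be findable, not forced on the user.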
On one hand, you don't want your personalization engine to be a black box, but at the same time, some users don't want to be reminded how much data the service has on them. I would recommend not putting that kind of detailed information up front; most of the time it isn't relevant for the user to look at, as long as the recommendations make sense. It is only when the service offers a recommendation that makes no sense that the user will feel the need to inspect how it reached that conclusion and correct whatever mistakes it made. Make sure that when users look for this information they find it, but that they are not forced to look at it in their daily use of the service.

The way you frame the data can have a huge impact too. Netflix used to have a category named "Because you liked x, we think you will like y". Some users didn't like being reminded that Netflix keeps track of everything they do, so Netflix changed it to "Other users who liked x also liked y". This actually made a difference to many users, who no longer felt they were personally being tracked, and didn't mind that other users were.
To make users trust you, you need to find a balance. If your recommendations are too precise and the user has no idea how you got your data, it can get creepy; but if the recommendations are vague and often wrong, they might as well not be there (though they actually have to be quite precise for the user to even register that personalization is going on).
This is what the aspect of trust is about. A user needs to trust that your recommendations are worth checking out, but at the same time feel comfortable that your service isn't spying on their private life and collecting data it shouldn't. Only then will your users continuously take advantage of your recommendations, which is essential if you want your algorithm to improve over time.