Invisible UI: The Tussle Between Personal Privacy and Technological Capability

Andrew Huang
Salesforce Designer
9 min read · Oct 20, 2015

The invisible UI is a vision that has floated alongside the progress of technology ever since we were first able to interact with a computer. At the foundation of our pursuit of the invisible UI is the understanding that technology has made our lives exponentially easier and more productive, but it has also become another layer in our lives. And sometimes this layer acts as a barrier, creating more friction rather than alleviating it. The invisible UI seeks to make technology disappear into the background while still affording us all the capabilities that we now rely on in a fully connected world. You’ve probably seen examples of the invisible UI in pop culture, from movies like “2001: A Space Odyssey,” featuring HAL, to the modern-day version depicted in “Her.” You might also have some firsthand experience if you’ve tried talking to Siri or Google Now.

AI is all about the intelligent usage of data

Apple released Siri back in October 2011, and it was an immediate flop. Siri was slow, limited, and not all that intelligent. Given Apple’s pursuit of perfection in each product release, the clunky Siri experience seemed odd, to say the least. But if we back up and look at how AI works, then Apple’s first iteration of Siri may make more sense. (And to be precise about AI in this discussion, we are talking about how data is mined, not the science-fiction concept of a machine thinking for itself.)

The AI that the invisible UI relies on derives its knowledge from machine learning, which requires a tremendous amount of data to train its algorithms and improve their accuracy. This is why Apple had to release Siri in the state it was in. When you compare Siri to Google Now, you should be thinking about the difference in the amount of personal data one company has about you versus the other, not about engineering prowess. Google Now launched with access to your full search history and your Gmail account, so by default it is better at surfacing the correct information and understanding what you are asking of it. Google knows you better because you’ve already fed it a ton of personal information about yourself.

What could possibly go wrong?

Now, if you are reading this from a consumer’s point of view, there should be some point where the alarm bells start ringing. True, you’ve already granted Google access to all your personal communication and you love how Siri knows who your husband/wife is and how to get you home, but there are very real privacy concerns that I don’t think the other blog posts about invisible UI have discussed.

Deirdre Mulligan, a professor at the UC Berkeley School of Information, teaches that a breach of privacy is like a secret that cannot be untold. Once the secret has been exposed in the internet age, it can never be unknown again. That lesson stuck with me, because the true threat of a privacy breach is that once your personal information has been set free on the internet, it is nearly impossible to put it back behind your personal lock and key. You might be blasé about credit card breaches because you can just cancel the stolen card and remove fraudulent charges. You might also not care about the nasty scourge of revenge porn because you would never allow naked pictures to be taken of yourself. But what happens when the daily activities tracked by your Apple Watch are used to adjust your health insurance premiums?

Illustration source: https://www.priv.gc.ca/information/illustrations/index_e.asp

The sensors embedded in the Watch are part of the invisible UI that can track your movements in order to surface contextually aware information. Your catalog of past purchases is another source of extremely context-rich information that can be used to build a personal profile of you. This is the type of data you should worry about losing in a privacy breach, but it is also the same data the invisible UI would use to surface information and choices to you in a smart manner. Like I said earlier, services like Siri and Google Now require lots of personal information to work as seamlessly as you would expect, and that convenience comes at the cost of personal privacy.

The business case makes more sense

Each of these examples involves consumer-facing apps, whose usage comes with the danger of losing personal data. Now, let’s examine how the invisible UI can apply to the enterprise, and the importance of business data and the intelligence derived from it.

Salesforce places a huge emphasis on its multi-tenant cloud and world-class security while also being named the most innovative company four years in a row. To keep that streak going, the UX team has played an integral role in researching and designing concepts that could change how our customers interact with our software across platforms, and the invisible UI is a huge trend that could change the enterprise world.

The interesting part of adapting the invisible UI to the enterprise world, especially the CRM segment, is how much data we already have about our customers and our customers’ customers. We wouldn’t be so successful if our track record of protecting this data weren’t spotless, and using this information to enable the invisible UI would increase the risk of data breaches. We also face the question of how comfortable our customers would be if we started mining their data. Yes, they could receive more actionable information, surfaced at more relevant times, but where do we draw the line in how we use this intelligence?

The current boundary we are moving toward aligns with new updates being released on the major mobile operating systems. In iOS 9 we will be able to expose data from the application to Spotlight Search. Salesforce holds all the contact information for its customers, so instead of opening the Salesforce1 app, navigating to contacts, and then searching, we will be able to surface those contacts directly through Spotlight Search. The same idea applies to Google Now on Tap and Cortana on Windows, where we are working to provide Salesforce1 data to users without forcing them to go through the UI of the application. These are logical steps in the progression of how apps will function on mobile. In many ways this makes mobile search a more useful tool by giving it access to more sources of data, bringing the mobile experience into alignment with search on the web. These new mobile OS features will greatly help the transition from information walled off behind mobile apps to an environment that is as open as the internet.
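To make this concrete, here is a minimal sketch of what exposing a record to iOS 9’s Core Spotlight index might look like in Swift. The `Contact` type, field values, and identifiers are illustrative assumptions, not the actual Salesforce1 implementation.

```swift
import CoreSpotlight
import MobileCoreServices

// Hypothetical record type for illustration only.
struct Contact {
    let id: String
    let name: String
    let account: String
}

func indexContact(_ contact: Contact) {
    // Describe the item so Spotlight can display and rank it.
    let attributes = CSSearchableItemAttributeSet(itemContentType: kUTTypeContact as String)
    attributes.title = contact.name
    attributes.contentDescription = "Contact at \(contact.account)"

    // The unique identifier lets the app deep-link back to the record
    // when the user taps the search result.
    let item = CSSearchableItem(uniqueIdentifier: contact.id,
                                domainIdentifier: "contacts",
                                attributeSet: attributes)

    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error = error {
            print("Spotlight indexing failed: \(error.localizedDescription)")
        }
    }
}
```

The key design point is that the app pushes a lightweight description of its data into the OS-level index, so the user reaches the record from system search without ever opening the app’s own UI.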

The next steps in Salesforce’s exploration of the invisible UI will involve moving beyond the features that a mobile OS enables within the app. This is where the line blurs around how much of our customers’ data we use. Using this data would make our customers’ lives easier and let them work more efficiently, but Salesforce will keep having conversations with its users to understand their needs and pain points as they come up in practice.

The benefit of being an enterprise-facing business, though, is that our customers are already very comfortable storing their data with us. So here are some possible invisible UI designs going forward (note how they don’t require a tremendously “smart” AI, but instead rely on data we already have access to):

  • A sales rep finishes a meeting with a client. We know the meeting time and location because it’s entered in the calendar, and we know the rep’s location from their phone’s GPS. We could combine this information and send the rep a text message as they are walking out of the building. The message would ask the rep if they have any notes they want to enter about the meeting, and they could respond directly from the messaging interface. If the sales rep gets into their car, Apple CarPlay could trigger a Siri request to take audio notes to be transcribed.
  • You are heading to a meeting, and based on your location and speed of travel we can tell that you will be about 15 minutes late. We draft an email/SMS to be sent directly to the client you are meeting, then notify you on your watch. The notification lets you know that we can quickly alert your client for you, and all you have to do is tap Yes or No. The same can be done if you’re running early, but instead of notifying your client, we email you a report with the latest updates on the company and the people you’re meeting. You can never be too prepared for a meeting. (A rough sketch of this lateness check follows this list.)
  • You left a message with a client two weeks ago. The system has noted the logged call and also sees that no new activity has been recorded for this potential lead. An automated email gets generated and sent to you with the client’s contact info and current details so that you can call the client directly from that message.
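As referenced in the second scenario, the “running late” check boils down to comparing an estimated arrival time against the meeting start and prompting the user past some threshold. The sketch below is illustrative only; the `Meeting` type, the ten-minute threshold, and the idea of deriving remaining travel time from GPS are assumptions, not Salesforce’s actual logic.

```swift
import Foundation

// Hypothetical meeting record for illustration.
struct Meeting {
    let startTime: Date
    let clientEmail: String
    let travelTimeRemaining: TimeInterval  // e.g. derived from GPS location and speed
}

/// Returns the number of minutes late, or nil if the user is close enough to on time.
func minutesLate(for meeting: Meeting, now: Date = Date()) -> Int? {
    let estimatedArrival = now.addingTimeInterval(meeting.travelTimeRemaining)
    let delay = estimatedArrival.timeIntervalSince(meeting.startTime)
    guard delay > 10 * 60 else { return nil }  // only prompt past a 10-minute threshold
    return Int(delay / 60)
}

// Usage: if the check fires, surface a Yes/No prompt on the watch
// and only send the pre-drafted message after the user confirms.
let meeting = Meeting(startTime: Date().addingTimeInterval(20 * 60),
                      clientEmail: "client@example.com",
                      travelTimeRemaining: 35 * 60)
if let delay = minutesLate(for: meeting) {
    print("Offer to tell \(meeting.clientEmail) you'll be about \(delay) minutes late.")
}
```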

Hopefully these ideas can inspire more enterprise solutions that take advantage of the invisible UI. To develop a truly intelligent interface, we have to design our back-end systems to use the data in a contextually sensitive and secure way. On the front end, we have to design so that we don’t get in the way of the user’s workflow. We have to maintain invisibility in the sense that we surface information to the right person, at the right time, and in the right place.

One of Dieter Rams’ key design principles states that good design is as little design as possible. He was applying it to the physical products of his time, but if you extend that thought to the current array of digital products, you could imagine the principle evolving into something discussed by Wired’s Scott Dadich: good design “doesn’t draw attention to itself; it merely allows users to accomplish their tasks with the maximal amount of efficiency and pleasure. At its best, it is invisible.”

More resources on invisible UI:

One very interesting aspect of the invisible UI is how a user can interact with an app without using a standard graphical user interface. This can be done through Siri, as discussed above, but a nearer-term solution has been the conversational UI. Imagine a more intuitive command-line interface, which companies like Slack have taken advantage of (/giphy). There are also services like Native and Lark, travel and health apps that could have been packaged as visual interfaces but have instead opted for a chat interaction. Every interaction with the app happens through a messaging interface, which bypasses the need for a graphics-heavy application. Other key benefits of a conversational UI are that it is a universally understood format and that it can be very efficient and extensible across platforms. Jonathan Libov’s Futures of Text gives a great overview of the conversational UI with more examples.
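As a toy sketch of the idea, a conversational UI ultimately maps free-form text commands to the same structured actions a graphical interface would expose. The command names and `Action` cases below are made up for illustration; they are not Slack’s, Native’s, or Lark’s actual formats.

```swift
// Illustrative command parsing: turn chat-style input into structured actions.
enum Action {
    case logCall(contact: String)
    case scheduleMeeting(contact: String)
    case unknown(String)
}

func parse(_ message: String) -> Action {
    let text = message.lowercased()
    if text.hasPrefix("/logcall ") {
        return .logCall(contact: String(message.dropFirst("/logcall ".count)))
    }
    if text.hasPrefix("/schedule ") {
        return .scheduleMeeting(contact: String(message.dropFirst("/schedule ".count)))
    }
    return .unknown(message)
}

// Usage
switch parse("/logcall Acme Corp") {
case .logCall(let contact): print("Logging a call with \(contact)")
case .scheduleMeeting(let contact): print("Scheduling a meeting with \(contact)")
case .unknown(let text): print("Didn't understand: \(text)")
}
```

The point is that the messaging surface stays the same everywhere the user already chats, while the parsing and action layer carries the app’s functionality across platforms.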

Follow us at @SalesforceUX.
Want to work with us? Contact uxcareers@salesforce.com.
Check out the Salesforce Lightning Design System.
