Weekend Bits: 5 Things Every Salesforce Customer Should Ask Salesforce; Google and Samsung Speak; Visualizing Hyperspace

Five Questions Every Salesforce Customer Should Ask Salesforce
 
Earlier this week we sent our subscribers a piece on the new Salesforce AI platform, Einstein (subscribe now and we’ll catch you up on the latest publications). Dreamforce was abuzz with Einstein, AI, and all of the magic this new “data scientist for everyone” will be able to perform. Thanks to the company’s huge customer base and the horizontal reach of Einstein, Salesforce will likely introduce AI functionality to more sales and marketing professionals over the next year than any other company, with the possible exception of Microsoft. That’s quite a big deal. And it will be a particular challenge for Salesforce to educate all of its sales reps on such a fundamentally different computing method.

On the customer side, Einstein presents a unique evaluation challenge: the vast majority of Salesforce customers will never have evaluated an AI product before, quantified its impact on their business, or planned for the unique change management it requires. The sales pitch from Salesforce is one of employee productivity and automating the hard parts of data science, but machine learning isn’t that simple, and a wise buyer will want to know what’s under the hood.

What Salesforce customers need to know today:


  1. What kind(s) of algorithm is Einstein choosing for this use case and why?
  2. Are Einstein’s algorithms pre-trained? If so, on what kind of data? If not, how much data is needed to make the training successful?
  3. Is Einstein’s model learning from my data alone? Does Einstein transfer learning from my model to another customer’s model? If so, does that mean my competitors could benefit from my data? If not, could that limit Einstein’s effectiveness? What other data should I be collecting?
  4. What are Einstein’s accuracy rates for this problem in general? What is Einstein’s accuracy rate using my data for my business? How will I know if the accuracy rate changes? How are models retrained? Streaming or through some batch-style process? How will I know if an accuracy change is just normal variation or if the world has, in fact, changed?
  5. Is the demo you’re showing me really this good or is Einstein overfitting my data?
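
Question 5 can be checked concretely: ask to see accuracy on data the model was never trained on. Here is a toy sketch of why a demo can look perfect while the model has only memorized (all data and the “model” below are invented for illustration; this is not how Einstein works internally):

```python
import random

# Toy illustration of question 5 (overfitting): a model that memorizes
# its training data looks perfect in a demo yet is no better than
# guessing on records it has never seen.

random.seed(42)

def make_leads(n):
    # Synthetic "lead" features that carry no real signal about the label.
    return [(random.randint(0, 999_999), random.randint(0, 999_999))
            for _ in range(n)]

train_X = make_leads(200)
train_y = [random.randint(0, 1) for _ in train_X]
holdout_X = make_leads(200)
holdout_y = [random.randint(0, 1) for _ in holdout_X]

# An extreme overfitter: a lookup table keyed on exact feature values.
memory = dict(zip(train_X, train_y))

def predict(x):
    return memory.get(x, 0)  # for anything never seen, just guess 0

def accuracy(X, y):
    return sum(predict(a) == b for a, b in zip(X, y)) / len(y)

train_acc = accuracy(train_X, train_y)        # near-perfect: pure memorization
holdout_acc = accuracy(holdout_X, holdout_y)  # roughly a coin flip
print(f"train accuracy: {train_acc:.2f}, holdout accuracy: {holdout_acc:.2f}")
```

The gap between the two numbers is the tell: a vendor demo that only ever reports accuracy on data the model has already seen tells you nothing about how it will perform on next quarter’s leads.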

Remain skeptical. For a start, in one of the keynote presentations, a Salesforce exec said that Einstein uses all of the varieties of machine learning algorithms and automatically chooses among them for any particular problem. Automating algorithm selection and feature engineering is certainly possible, and it can work well within a narrow, preset range of choices. But it’s unlikely that Salesforce has achieved this at a level reliable enough to be trusted blindly; a system that can assess a problem and choose the correct algorithm and features for it remains a major open goal in AI research.

Second, AI isn’t risk free: just as it can make predictions better, it can make them worse, which makes it very important to match reliability with the criticality of the problem. One demo we saw recommended alerts for predictive maintenance on a manufacturing robot. While it might be perfectly acceptable for Einstein to deliver a 60% reliable prediction for a sales lead, it’s doubtful that this is a useful level of accuracy for a critical manufacturing component. It could, in fact, do more harm than good if human operators accept this recommendation over a prior, higher standard because “Einstein says so.”
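
A back-of-the-envelope sketch makes the point. All of the figures below are invented for illustration (none come from Salesforce): if operators defer to a 60%-reliable recommendation in place of a prior standard that was right 90% of the time, the expected loss per decision quadruples.

```python
# Hypothetical numbers, purely to illustrate matching reliability
# to the criticality of the problem.

COST_OF_WRONG_CALL = 50_000  # assumed cost of one bad maintenance decision

def expected_cost_per_decision(reliability, cost=COST_OF_WRONG_CALL):
    # The process makes the wrong call (1 - reliability) of the time.
    return (1 - reliability) * cost

prior_standard = expected_cost_per_decision(0.90)  # the prior, higher standard
einstein_demo = expected_cost_per_decision(0.60)   # the 60%-reliable demo

print(f"prior standard: ${prior_standard:,.0f} expected loss per decision")
print(f"60% system:     ${einstein_demo:,.0f} expected loss per decision")
```

The same 60% reliability that is a clear win on a low-stakes sales lead is a clear loss on a critical component, which is why accuracy numbers only mean something next to the cost of being wrong.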
 
To learn more about AI algorithms and how machines learn, check out How Machines Learn, An Illustrated Guide on Amazon Kindle. It’s a succinct, easy-to-understand explanation of what’s under the AI hood, with supporting illustrations by a children’s book illustrator (so it must be simple!). The book will get you up to speed on how AI works and covers the primary issues to be aware of in AI projects. It was also recently rated a “hot new release” in neural networks, so it’s good for budding data nerds and their parents too. A half-hour intro video is here.
 
To dig deeper and set up your own AI evaluation and implementation, contact us directly. We offer a full-service AI education process on “how to win at AI” including problem definition, build/buy strategy, integration strategies, target setting, vendor evaluation, vendor selection, training and change management.

Google and Samsung Speak
 
This week, two big things happened in the world of voice interfaces.
 
First, Google released Google Home. Most of the press coverage compared Google Home to Amazon Alexa (and the as-yet nonexistent Apple iSomething) feature by feature. Our take is a bit different.
 
Alexa is an unapologetic interface to Amazon’s services; it might feel as if she works for you, but the truth is very different. We’ll be sending a longer piece on this topic to our subscribers this week that goes into depth on how these interfaces all come with a catch. Suffice it to say, for all the benefits of a one-stop voice interface, there are very real anti-competitive threats in home-gateway AI that you will want to be aware of.
 
Google Home faces a similar challenge. When you search Google in a visual interface, you’re used to getting a mixture of paid search results and non-paid results. You know that Google makes money off of advertising and have become accustomed to seeing paid results. And you’ve likely accepted their presence partially because 1) Google’s visual interface is so clear about what is paid and what is not paid, and 2) you can easily scroll down to see results that aren’t paid for — they’re there just for you. One way to look at it is that you know when Google is working for someone else (paid search) and when Google is working for you (non-paid results).
 
How will Google handle this in a voice interface? When I ask Google Home for a recommendation, am I getting the best recommendation for me, or one that someone paid for? Will Google be working for me or for someone else? And how will I know?
 
Amazon sells us stuff. We know that. We know we’re getting sold. But Google informs us; it acts as our gateway to the world’s information and knowledge. And it’s important that Google continue to be transparent about why it presents the information it does. Voice may make access to information easier, but it may also make it harder to understand how and why a result is presented.
 
Also this week, Samsung bought Viv, the company that promised to create the next great voice interface and reduce the centralized power of the super-platforms. Viv’s goal has been to make “intelligence as a platform,” but it is commonly referred to as “a better Siri,” since the team is built around “the guys who made Siri.” Earlier this year, Viv opened the curtains a crack to reveal its vision of a unified interface that could do almost anything you wanted it to do. The vision is big and compelling, but it faces some big hurdles, including tough technical challenges and getting enough developers to plug into the system. For instance, one use case Viv showed off was saying “pay Adam for dinner last night.” Accomplishing that task requires Viv to plug into our contacts to know who Adam is, our calendar to know where we went to dinner, our transaction records to know how much dinner cost, and our payment system to make the payment. Getting all of those systems to plug in is a tough challenge for a startup, even one with Viv’s pedigree.
 
Enter Samsung. Samsung needs to ramp up its software platform (especially since Google is now holding back some of its goodies for itself) and is one of the only major tech players without a voice interface (leaving Huawei as the biggest mute now). Viv gives Samsung a major boost in voice interfaces (once the technology launches, of course), and Samsung gives Viv a huge platform that will entice developers to plug in. Not to mention extreme data access.
 
As we’ll talk about more in our voice-focused piece this week (subscription required), we’re excited by the potential for voice, we’re cautious about voice interfaces talking well with each other, and we’re wary about the concentration of voice control in the super-platforms.
 
Visualizing Hyperspace
 
Earlier this week, we sent our subscribers a piece on visualizing hyperspace. Subscribe now and we’ll catch you up on the latest publications, including this one. Below is an intro to the 10-minute read.
 
Summary: Many of today’s data visualization projects have a key advantage: the data is structured by humans. Since humans create the constructs, the data is likely structured in a way that is easy for a human to understand. Machine learning creates a different challenge: machines can think in unlimited dimensions, the data gets structured in hyperspace, and the answer may cut across many dimensions. Since humans can’t think in many dimensions, how can we visualize hyperspace?
 
One answer is to construct 3D virtual worlds so that we can “walk” through the data. There’s plenty of hype about these ideas. We’ll show you two new concepts for visualizing data in a virtual world: one that creates a virtual representation of real-world equipment in the electricity industry and another that creates a virtual space to organize corporate data. As you’ll see, adding a third dimension doesn’t solve the problem. Sometimes being in a virtual world doesn’t really add any value (the industrial example), and sometimes you just can’t see enough information, since half of the data is always behind you (the corporate data example).
 
Humans may live in a 3D world, but we analyze data, read, and watch things in 2D. So the question is: what are interesting options for representing complex data in 2D? We’ll show you one of the most interesting, Brainspace, which has a surprisingly easy-to-use method of navigating masses of unstructured data.
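
One standard family of techniques behind such 2D views is dimensionality reduction. As a minimal, generic sketch (this is plain PCA on synthetic data, not Brainspace’s actual method), here is how 20-dimensional “hyperspace” points can be projected onto the two directions that carry the most variance:

```python
import numpy as np

# Sketch: project high-dimensional points down to the 2D plane that
# preserves the most variance (principal component analysis).

rng = np.random.default_rng(0)

# 500 synthetic points in 20 dimensions; most of the variance
# hides in just 2 of those dimensions.
X = rng.normal(scale=0.1, size=(500, 20))
X[:, 3] += rng.normal(scale=5.0, size=500)
X[:, 11] += rng.normal(scale=3.0, size=500)

# PCA by hand: center the data, take the eigenvectors of the
# covariance matrix, and keep the two largest-variance directions.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending order
top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
X2d = Xc @ top2  # shape (500, 2): ready for an ordinary 2D scatter plot

print(X2d.shape)
```

The catch, of course, is the same one the article raises: a projection that keeps the most variance can still throw away exactly the dimensions your answer cuts across, which is why navigation tools matter as much as the projection itself.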

To read more of my writing, please visit www.intelligentsia.ai.