Trusted AI, IBM Watson, Focusing on Value vs Hype, IA and AI | Daniel Hernandez | Stories in AI

Ganesh Padmanabhan
Published in StoriesinAI
27 min read · Aug 2, 2021

Ganesh: Welcome back to Stories in AI. Today we speak with Daniel Hernandez, the general manager of IBM's Data and AI division. I've known Daniel for several years, and he's one of the strongest technology leaders I know. He is very thoughtful, puts people first, and has this amazing ability to motivate a large group of people toward a common goal. Daniel leads one of the world's largest franchises in AI, IBM Watson. In this discussion, we talked about the lessons that IBM Watson gave to the rest of the market, the state of the market today, and the need to focus on data and information architecture to get AI right. We talked about trust, we talked about flying and AI, and a whole lot more. I really enjoyed this conversation with Daniel. I hope you do too.

Daniel, welcome to Stories in AI. How are you this Saturday morning?

Daniel: Ganesh, thank you. Very awesome. It’s my wife’s birthday. What better way to celebrate than to be with you.

Ganesh: Happy birthday to your wife. It's so awesome that she allowed you to do this little thing on your Saturday morning at 7am. Thank you so much for taking the time; I was so looking forward to it, because we had been planning this for a few weeks, and one thing or another kept coming up before we finally got to it.

So let's get started with you: how did you get into data and AI? Give us your story, your personal story.

Daniel: I would say my entire career has been focused on this domain or immediately adjacent domains, in and outside of IBM, and it was mostly accidental. I spent roughly 10 years in Startup Land before joining IBM. We were building geospatial analytics, field service apps, and really data-intensive applications. When I came to IBM, I joined what was then the predecessor to Data and AI, a division called Information Management. It was building out capabilities really from the database out: master data management, ETL, data governance, analytics, predictive analytics, which starts to get into things like statistical machine learning techniques. And I had all manner of roles inside of IBM in this space: product management, development, M&A. And now I'm the general manager of Data and AI, which is home to the capability that I originally joined, plus much more, including Watson.

Ganesh: That is awesome. We've known each other for about a decade now, and I've been following your journey from Information Management, to leading parts of AI and data, to now leading one of the largest franchises in data and AI with Watson. IBM Watson made this market happen. And we've come a long way.

So before we go into AI and the market, tell us, as the GM of the largest AI franchise in the world: what does your typical day look like?

Daniel: Varied. I'll tell you, and I would say this is probably true for all of my colleagues in IBM, and also for people outside of IBM that have a similar role. My job as the custodian of Data and AI is, on a day-to-day basis, executing on the promises we make, which are typically conveyed in vision and strategy and memorialized in plans. On any given day, that might look like talent management: trying to hire the best and the brightest, from outside the company and inside the company, into critical roles. Or talking with customers, either informing them on what we're up to and how we can help them, or maybe even advising them on what some of their critical blind spots are. Typically, that would be, "Hey, you're so excited about the power of conversational AI, but you're not ready, because you haven't really considered these things." Part of the job is making sure that the trains are running on time inside the business. At the end of the day, we're a product business in Data and AI. That means our products have to be exceptional, they have to work better together, and ultimately they have to deliver benefit to our customers. So on any given day, I'll be looking at our technology, reading our docs, giving critical feedback to one of our many teams. But I will tell you, the best days are when I'm outside of the firm talking to our customers and learning from them as much as trying to inform them on what we're up to and how we can help.

Ganesh: Are you back traveling again? Are you doing it all remote now?

Daniel: I have started traveling again. I had my first real live customer meeting: I met with over three dozen executives at one of the largest telcos here in Dallas. I don't live in Dallas, I live in Austin. But that was good. It was, ironically, socially distanced; it was outside, and so it worked. Then I have a slew of live customer meetings scheduled for late July. So it's definitely picking up, and personally, it's making a difference in energy levels. When you and I meet live, even though I was excited to see you virtually, I think the bonds are just much stronger; it's hard to replicate, for sure.

Ganesh: I can't wait to get out there. I've started doing some meetings, but still not too many. And I miss that. Somebody told me this: "When you go to a party, you can really find out what kind of person you are. When you come back home from a party, are you tired or are you wired up?" I'm wired up. I go to a party, meet a lot of people, come back home, and I can't sleep for the next two hours, because I get energy from people. So I miss that physical, in-person interaction. I'm glad the world is opening up again and we're getting back to some kind of normalcy.

So let's talk AI. I can't believe it's been exactly 10 years since IBM Watson played Jeopardy! on TV. And a lot of things have really transformed since then. I can't believe it's actually been 10 years; it almost feels like we've had five decades of progress since then. There are a lot of changes that have happened in AI, in the field and in the market. What are your thoughts? Walk me through your view of the market. How have we evolved in AI in the last decade?

Daniel: You're right, we celebrated 10 years of Watson, and we also celebrated 110 years as IBM; that was just last month. So it's really a storied history. As an IBMer, I'm very proud of it. And I certainly feel a heavy responsibility to be a steward of not just the Watson franchise and the work related to it, but to carry on the legacy of IBM and move it into the future. When we introduced Watson, it was largely a Q&A system, the beneficiary of a lot of work in academia and IBM Research; we stood on the shoulders of many giants. And in the ensuing 10 years, I will tell you, we've learned from every conceivable mistake we could have made in introducing something brand new, trying to create markets, and trying to scale this technology in the wild.

When we introduced it, we proved the power of the technology, certainly in greenfield: a Q&A system that can do speech to text, get answers to questions, determine with high accuracy the right answer to a particular question, and do that repeatedly and at speed. Trying to apply this stuff in the real world, where you don't have greenfield, is where you learn what works and what doesn't. And we certainly did. Just to label some of the lessons learned: the technology we had then, and continue to have now, has evolved significantly since, but it is only one part of the equation you have to get right in order to deliver outcomes that matter for your customers. You have people issues you have to contend with, you have process issues you have to contend with, and often cultural antibodies that are going to reject technology because it's relatively new. In the intervening years, trying to apply it across industries and multiple use cases, we basically figured out what works and what doesn't. And that leads us to where we are now, which is the focus on what Watson is doing today for our customers.

Ganesh: That is awesome; you're exactly right. It's more than just the technology, more than just a core platform, even for the market. Back in the day, the problems were very much, "Can I build a machine learning model?" But after Watson was introduced, from a language perspective (and we'll come to language in a bit), it was all about saying, "Hey, here are some frameworks, and here are bigger problems; solve them." Today, it's a given that most organizations see AI as a powerful differentiator and an accelerator for their transformation journeys, or just a way to be more competitive. And the problems being solved are very different, even from 10 years ago. What is the market today? Where is the AI market? What are organizations still struggling with? What is already working? Give us a view of that.

Daniel: All right, so let's break down the categories. When we say AI, what do we mean categorically, at least as far as the stuff that I am focusing on through our team's work? First, conversational AI, anchored on technology known as natural language processing and applied to customer care. I think we've got significant product-market fit in that space. NLP has been around for some time; we were actually using NLP techniques in the original Watson for the Q&A system. The conversational AI world grew up through basic experiments. You would see things like virtual assistants on websites, bots interacting with you through messaging; they were cute and fun. We're actually applying that stuff to customer care, full lifecycle, really anchored on customer service. COVID forced a lot of companies to contend with a very real problem: significant inbound across multiple channels, not just voice but text, in search of either new services you're eligible for as a citizen of a particular government, whether it's county, state, or maybe even federal here in the United States, or, on the for-profit side, new services needed by customers like the ones we serve today. You either had to hire thousands of people into the contact center to deal with that flow, or you needed an alternative technique that would work, and not just work over your voice channel, but across text. The work we were doing around Watson Assistant, which is our conversational AI applied to that customer care, customer service style of problem, delivered benefits that were even beyond what we were expecting. We knew this stuff was good. This is a product that was organically doubling in usage, not just revenue, and we started applying it in a targeted way.
The customer satisfaction of the people it was interacting with increased substantially, and the cost savings associated with increased containment rates over traditional IVR systems made the economics super favorable. Now we're enjoying the conversation with our customers and saying, "Okay, what's next? How do we not just solve the customer problems being driven through these interactions into your contact center? How do we deliver exceptional care in sales, in marketing, and in promotions? How do we anticipate customer needs before they even know them, address them, and create delight throughout, not just for customers but even for prospects?" So that's the conversational AI side. And there's a whole collection of related technology beyond conversational AI. In some cases, for instance, if you're interacting with me and I can't anticipate your need, I still want to offer you help. What kind of help? It could be, "Here's an FAQ that I want to point you to so you can self-serve your own needs." It might be, "Hey, I'm sort of confident, but not completely confident, that this piece of content on my website or on the internet might be useful to you." We're able to deliver that kind of help through something called Watson Discovery. Again, it's based on NLP technology, primarily driving content intelligence: processing and analyzing documents, building critical insights from them, and using those in support of things like search and long-tail help in this conversational AI use case; that's Watson Discovery. But in general, it's all around customer care. I would say that's number one.

When it comes to data science, building models and trying to put them into business-critical processes, we've built out a toolchain to help you do that, leveraging all the open source that data scientists love, principally Python and all the frameworks and toolkits built around that ecosystem. And we have the lifecycle management for those: things like model serving and model lifecycle management, helping us deal with very real core business problems, like what do you do to manage a model, or a collection of models, after the data scientist who built them has gone away? How do we ensure that those things continue to address the original problem they were designed for? That's our Watson Studio stuff. So I would say conversational AI through Watson Assistant and Discovery is the most interesting, and if you're a customer of one of our customers being served by it, you wouldn't even know the thing exists; it's just behind the scenes. Watson Studio, our toolchain for data science, is probably right behind it. What I will tell you is that the proprietary algorithms and proprietary tools that used to be the primary methods data scientists used for their jobs, whether it was SAS or, in my case, SPSS, are in large part being either complemented or, in some cases, replaced by what's readily available in open source, which is what most data scientists, especially ones coming up from school, are using and learning on.
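The lifecycle problem Daniel raises, managing a model after its builder has moved on, can be sketched with the open-source Python tooling he mentions. This is an illustrative sketch, not Watson Studio's API; the file names and metadata fields here are my own assumptions.

```python
# Train a model with open-source tooling, then persist it alongside enough
# metadata to answer "where did this come from?" after the builder is gone.
import json
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Record the context a future maintainer (or auditor) will need.
metadata = {
    "framework": "scikit-learn",
    "training_rows": len(X_train),
    "holdout_accuracy": round(model.score(X_test, y_test), 3),
}

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("model_metadata.json", "w") as f:
    json.dump(metadata, f)
```

In a real deployment the serialized model and its metadata would live in a model registry rather than loose files, but the principle is the same: the artifact never travels without its provenance.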

Ganesh: It's interesting. So broadly, you're seeing two sides. One is: can I give you capabilities that are AI-powered, like customer care, that you can easily integrate, plug in, and start realizing value from ASAP? On the other hand, there's a development toolkit: "Hey, you want to build something because you have the smarts, you have the people, you have the resources, and you have something proprietary you need to go do. Here is a very accessible and evolved toolset you can use to do that." And that's very indicative of what we're seeing in the market right now. There are a lot of folks who are just consuming AI-powered services, if I may use that word, as RESTful APIs, as a service, or as delivered by an SI partner. And there are folks who are trying to build something of their own with AI and make it a differentiator. How about value realization across the two? One makes sense: in the customer care case, you bring the power of IBM and IBM Research, all the history, all the learnings, and everything into it, so they're going to realize value really quickly. The flip side is it may not be as differentiated for them, versus building something on their own, which might be more differentiated depending on the business unit and what their business outcomes look like. But in general, what has value realization on AI projects looked like across the industry, in your opinion?

Daniel: I think about it in two ways. Let's talk about apps in general. I was describing our own conversational AI to our customers as looking like an app focused on customer care; they actually don't care what technology is underpinning it. They really don't care that NLP is there; they don't care that there are sophisticated techniques beyond NLP that help us understand your intent and resolve the customer interaction. Insofar as we deliver the outcome, which is higher customer satisfaction and higher containment to save money, they just don't care. And the same is true for other apps. We've got an asset management application called Maximo in this business; it is used by virtually every asset-intensive company out there. Think of power generation, bridges, roads, that kind of stuff. They started injecting models that were built through my tools, Watson Studio in particular, into their application experience to help field service technicians predict maintenance and optimize truck rolls. The customer of that application doesn't care that Watson is actually inside. The benefit of Maximo is what they care about: less downtime, more efficient truck rolls, and better capital efficiency as a result. The same thing is true of Watson Assistant for the chief customer care officer we support there. The same thing is true for the office of finance that we support through Planning Analytics, which has Watson built in to help you build and run forecasts, budgets, and plans. So inside the application game, it might be controversial, but the value of AI is irrelevant; what has value is what's possible within the application experience and the benefits delivered to the customers and users of those apps. So categorically, in apps, AI doesn't matter to the end customer as much as the benefits it brings to the actual application experience.

Now on the building side, it depends on how you are using these models. If I'm using these models in business-critical processes, like customer onboarding as a bank, then the benefits are: are you able to detect fraudulent activity upfront, better, faster, and more efficiently than before? Are you able to onboard people who would have been false positives on the fraud side, and therefore enjoy the revenue that those customers will now bring you? And so the value of what we enable through AI, specifically statistical machine learning models, really depends on the application of those models themselves.

The value of the tools comes down to are you enabling people to do their job better, faster, more efficiently? Are you equipping a broader user demographic to wield the power of AI than what you could if you didn’t make it more accessible? But the nature of the value inside of the tools game is different than what it is inside of the app game I would argue.

Ganesh: Got it; that makes total sense. You, more than most people I know, have a very deep background in data. You've talked about this, and we all know that AI is not just a machine learning problem; machine learning itself is a data problem. So what is the role of getting the data side of the equation right in doing successful AI?

Daniel: I did grow up in the data world first. And it was pretty obvious then that the impact you can make for your customer is largely correlated to your ability to find the data you need, to trust that data once you've found it, and to ensure that there are appropriate data protections around it, so that if you make it available to a broader mass of people, you're not increasing your risk and compliance surface area. And that was in support of things like predictive analytics, business analytics, and self-service analytics through any manner of tools. The broad application of artificial intelligence today, especially if you go back to our taxonomy, not the apps but the tool side, is still leveraging machine learning, and supervised learning. Supervised learning needs data. And so the problem I just described in things like data warehousing projects and analytics projects still holds. The difference is, most data scientists that we talk to in the wild, at customers, don't appreciate the dependency they have on data. They'll do things like subset data into their own little shadow system and create the models, if they're lucky enough to deploy them at all (the majority of these are still not actually being deployed in the wild; we should explore that). And if there's ever any question, whether from a regulator or an internal auditor, of "Hey, you've got to defend this model. Where did you build it? How was it built? Where did it come from? What was it built from?", there's no defensibility at all, because the link between those models and the data basically doesn't exist. So the data problem in AI is a very serious one, and it's not getting enough airplay; our customers really don't appreciate the dependency. And when I say our customers, I mean the industry in general, right?
And until we begin to have a greater appreciation of this, or have techniques that aren't so dependent on supervised learning, I think it's going to be a critical factor in adoption, for sure.
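The missing model-to-data link Daniel describes can be made concrete with a small amount of bookkeeping at training time: fingerprint the training data and record it with the model, so an auditor's "what was this built from?" has a verifiable answer. This is a minimal sketch; the record fields and source names are hypothetical, not any specific product's schema.

```python
# Record a verifiable link between a model and the data it was trained on.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_dataset(rows):
    """Return a stable SHA-256 digest over the training rows."""
    h = hashlib.sha256()
    for row in rows:
        # sort_keys makes the digest independent of dict key ordering
        h.update(json.dumps(row, sort_keys=True).encode())
    return h.hexdigest()

def build_lineage_record(model_name, rows, source):
    """Bundle the provenance facts an auditor would ask for."""
    return {
        "model": model_name,
        "data_source": source,
        "data_sha256": fingerprint_dataset(rows),
        "row_count": len(rows),
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }

training_rows = [
    {"income": 52000, "approved": 1},
    {"income": 18000, "approved": 0},
]
record = build_lineage_record("credit-risk-v1", training_rows, "warehouse.loans_2021")
```

The digest changes if any row changes, so a deployed model whose recorded hash no longer matches the source data is immediately detectable, which is exactly the defensibility gap Daniel says most teams have today.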

Ganesh: Yeah, and I don’t know whether it was you or Rob Thomas that said, “Before you have AI, you need IA or information architecture.” Elaborate on that a little bit more.

Daniel: He said, "There is no AI without IA," and he said that three years ago. It was a simple way to describe what we just said in a lot of words: there is no AI without an information architecture that's ready for AI. You need mechanisms to collect, organize, and analyze your data. And if you don't have them, any AI built on top of it, no matter the technique, is going to be dubious. So get the fundamentals right. Don't stop experimenting; don't necessarily stop your AI project. But ensure that you have proper and good-enough mechanisms to have your information architecture ready: the ability to collect, organize, and analyze your data.

Ganesh: I get that, Daniel, but the reality of data is also that we've been trying to evolve data management practices for five or six decades. These are things that are always evolving and changing. If you go down that path and wait for all your data to be ready and prepared, with your information architecture in place and comfortable, wouldn't it be too late to actually experiment with AI? What's the balance?

Daniel: To experiment? You said the magic word: to experiment, you don't need the entire data infrastructure across the entire enterprise and all your business units to be ready. In fact, you may not need any of it to be ready; you could do experimentation in a controlled environment, with subsets of data in CSV files, or raw data inside a data lake that has been shadow-copied out of source systems, and learn from that. In fact, a lot of what we do is through our Data Science Elite team, a collection of about 100 hardcore experts in data and AI that we make available to our customers. Their whole game is, "Give me your data, give me a problem, and I'm going to use my tools to go solve it." Typically the data is protected, but it's subsetted out. So for experimentation, you really don't need to wait at all; you can learn in a controlled environment with a non-optimized data architecture. But if you want to put this into business-critical processes, that could be a recipe for disaster.

Ganesh: You talked about hundreds of data scientists; talent has been a huge constraint. Talent and resources, highly skilled machine learning engineers and data scientists, have been a constraint in helping organizations really realize value with AI. Is it going to continue to be an issue? I know there are a lot of folks now entering the market as newly minted data scientists and so forth. Affordability is the other side of the equation, since the supply of good talent is so low. There is a statistic that says we have about half a million machine learning engineers in the world, compared to 20 million Java developers, just to give some perspective. How are we going to solve that for the industry? Is it just going to solve itself by getting more and more people trained? Or is it going to be solved through tools and practices?

Daniel: I'll tell you, it's much better today than it was even three years ago, and I would say it's through the collective efforts of industry, academia, and open source, which, as you know, requires contributions from companies like us and our competitors. Again, the situation is much better than it was even three years ago. Go to Coursera and check out some of the curriculum, much of which is powered by curriculum we've contributed to, along with free use of our own products for academic use. That's mostly true even of our competitors. So as an industry, we've made it easier to learn, even outside of traditional university education. You're right, it's still not enough, but it's trending in the right direction. I don't think anyone should rest on their laurels here and just assume that suddenly there's going to be an abundance of skills, and that you're going to have as many data scientists as you have analysts who can wield self-service analytics tools, but it is trending. Obviously, all of us have a responsibility to do more.

Here's what I would tell you is the next-level problem: these days, data scientists are being put not just into product teams to build stuff; they're being hired into disciplines like marketing and sales to apply this stuff. Data science always was multi-disciplinary, but especially now, to solve the outcomes our customers care about, it has to be. So if you're learning data science and Python (not just the Python language, but all the toolkits around it: scikit-learn, Plotly, Dash, and all the other stuff you could apply), are you able to take a marketing problem and apply it? Can you speak the same language, so that the marketers convey their needs to you with high fidelity, you interpret them with the same level of fidelity, and you're able to turn them into a technology-powered outcome? To me, this is the next level up for us to really move on to the next stage of adoption across business, for sure.

Ganesh: I agree. I think storytelling and communication, just communicating the value of what they do beyond the technical skills, really differentiates the truly good data scientists from everybody else. So, you're a pilot; you fly your own planes. A couple of weeks ago I had something going on and couldn't actually fly with you to Denver, and I'm still bummed about that. You've equated flying airplanes with AI in terms of trust, how to trust AI as a system and so forth. Can you elaborate on that and draw that parallel out clearly?

Daniel: Yeah, so the kinds of planes that you and I would fly are general aviation planes, right? They're not the commercial airliners that the real pilots fly. And whether it's the plane that I fly or ones like it, they've got highly sophisticated avionics that effectively allow you to push a button and have the plane fly itself to the destination you're trying to get to, with a lot of complicated intermediate steps, like how to actually depart an airspace, horizontally and vertically, and how to arrive at one. My brother, who's also a pilot, likes to tease me because he learned before all the sophisticated avionics existed, avionics powered by techniques we would consider things like machine learning, and he calls me a glorified button pusher. He said, "You're actually not flying anymore; the plane is flying itself." In a way he's right. We're deferring the stick-and-rudder techniques of flying to the plane, and the obligation of the pilot becomes managing the entire process: making sure the plane is behaving the way it should and that you instructed it properly. You are cross-checking at each moment in time: is it doing what it ought to? And if not, why not? So you're troubleshooting and you're risk managing. The nature of being a pilot has just leveled up a little bit. It's no longer necessarily hands on the yoke, like it was when he learned. And that delegation doesn't happen if you don't trust the airplane, even though you're monitoring the system. You have to trust that it does what it should at that point in time, right? Now, obviously, the stakes are different there than when we're talking about customer care applications or asset management applications of this stuff. But maybe not. Take the case of Maximo: we're trying to assess the appropriate moment to maintain an asset. What kind of asset? It could be a bridge, or it could be a road.
These are the kinds of things we rely on in our daily lives and really don't think about. But imagine if we got that wrong. What would seemingly be low-stakes decision making or predictions becomes high stakes depending on the application in which that stuff is done.

In short, aviation is probably an understudied domain where algorithms and computers are doing a lot of high-stakes things. And the responsibility of humans is obviously to augment those processes, but also to do risk management, monitoring the systems and making sure they work as intended.

Ganesh: So you used the magic words: trusting the system. How do organizations and leaders trust their AI systems? What needs to happen for that trust to be established?

Daniel: Let's distinguish the low-stakes stuff from the high-stakes. If you and I are enjoying time together and we pop open Netflix, and the recommender, informed by all my past history, which is pretty esoteric, gets the guess wrong, who cares, right? I might upset you because you're hanging out with me watching something you don't like; it matters, but it doesn't. We're not going to cry over that; we're not going to die over that. So with those systems, if you trust them but they don't deliver for you, the stakes aren't that big; it's a highly forgiving domain. On the other hand, there are higher-stakes decisions: determining who can be your customer, determining whether or not someone is creditworthy, determining whether or not you are going to automatically process a warranty claim. If you are deferring those decisions, if you are relying on systems built and powered by AI to drive those high-stakes decisions and processes, you need mechanisms to trust that the stuff works. So trust is kind of complicated.

Why would you trust something? I might trust something because it's understood. How do you understand how a process powered by AI works? You need some degree of explainability, you need some degree of transparency, and you need those applied to things like the models powering the primitives that power these systems. And depending on the technique, your mileage will vary in your ability to do that. You need to trust that those things are current and up to date: if they were originally trained on a data set that historically represented your business, but then the whole market changes, how are you going to ensure that the models trained on that now-bad data set have been retrained? And how do you know they have the same level of accuracy? There's that. If you're a regulator, or an entity responsible to a regulator, and you have to defend the trustworthiness and veracity of those models, how do you support that process? These are all critical questions you have to answer in order to imbue something with trust. Tools have a role to play here. I talked about Watson Studio and model serving and model lifecycle management. It is in that model lifecycle management set of considerations that we are focusing on this set of problems and trying to apply technology to help. But it's not just technology; it's also in the processes you have. What are your internal model validation routines? And how are you going to demonstrate defensibility to a regulator, whether that's done internally or externally? You have to consider all these things, for sure.
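One of the trust mechanisms Daniel mentions, knowing when a model trained on a now-outdated data set needs retraining, can be sketched as a simple production check: compare live accuracy over a recent window of outcomes against the accuracy recorded at training time. The function name, threshold, and data shape here are illustrative assumptions, not any product's API.

```python
# Flag a model for retraining when its live accuracy falls meaningfully
# below the accuracy it had at validation time.
def needs_retraining(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """recent_outcomes: list of (predicted, actual) pairs from production."""
    if not recent_outcomes:
        return False  # nothing observed yet; no evidence of drift
    correct = sum(1 for pred, actual in recent_outcomes if pred == actual)
    live_accuracy = correct / len(recent_outcomes)
    return (baseline_accuracy - live_accuracy) > tolerance

# Model scored 0.92 at validation time; lately it is right 8 times out of 10.
outcomes = [(1, 1)] * 8 + [(1, 0)] * 2
print(needs_retraining(0.92, outcomes))  # → True (0.92 - 0.80 > 0.05)
```

Real monitoring would also look at input distribution drift, not just labeled outcomes, since ground-truth labels often arrive late, but even this crude accuracy gate answers the question Daniel poses: how do you know the model still has the accuracy it was trusted for?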

Ganesh: Awesome, bring it home for me. What’s your advice to organizations either starting or scaling their AI journeys?

Daniel: I was on the circuit about two years ago, and it wasn’t really a novel idea. This was at the time when we were talking about robots taking over the world, and the Hollywood version of AI was supposedly right around the corner. I was like, “Look, our job, and my job, is to make this stuff boring.” And in fact, I think our teams are pretty successful at making AI boring. Things become boring when they become common. And they can only become common if they’re useful to whatever your targeted demographic is. In our case, it’s obviously business. We’re AI for business. And so what I worry about, and what our teams worry about, are all of these seemingly hard people and process problems. We’re focusing on those and trying to deal with them so that we can just put this stuff to work, not just in the obviously exciting ways, or the stuff that would make it into a press release, but the pedestrian ways: “Hey, let’s service our customers better. Let’s help our customers sell the right things to their customers in a way that addresses a need those customers, in some cases, didn’t even know they had.” So for me, the future is the advancement of technology to make it pervasively available so that the stuff just becomes easy. Yeah, it’s boring, because we’re not going to talk about it. I’m not going to issue a press release on every amazing project we’ve got, because we’d be doing that multiple times a day at this stage. So I hope, really, that as an industry we start evolving from “I’m going to inspire you on the power of AI” to “I’m going to educate you on what’s possible.” And let’s celebrate the less exciting applications of this stuff.

Ganesh: That’s fascinating, make AI boring. I like that. #makeAIboring.

Daniel: I tried it and it didn’t work with our sales team to be clear. So they said, “Hey, wait a minute, what are you talking about? Sir, no, I’m not going to do that because I have already been hazed once,” but I still believe in that basic ideal. That would be a great outcome.

Ganesh: So the advice to businesses and business leaders is: look for the tangible stuff where you can make a difference in your process, in your business, and focus on those rather than trying to do a Big Bang.

Daniel: Here’s what I would say to end users, whether they are the kind of companies that we serve or that our competition serves. When you think about AI, I would request you consider it in two ways. One, for a business problem you have, try to find an out-of-the-box application that solves that problem. And if it happens to have AI inside that delivers better benefits, all the better. In other words, don’t seek out AI, seek out solutions to the problem. Often, there are applications already built that help you do that. Don’t start with the lowest common denominator of the models and the tools. The most expedient way for you to get an answer to your problem is to buy a solution that’s ready made for it. In our case, it’s Planning Analytics for budgeting, forecasting, and planning, Watson Assistant for conversational AI and support of customer care, and Maximo for asset management. You can even try solutions outside of IBM and understand what the corollary would be for those. And if you’re a builder, whether you’re a builder of one of those applications just for a different category, or you’re a builder like a data scientist, a data engineer, or an SRE team where AI has the potential to solve your problem: validate the problem. And validate that the technology actually does what you think it’s going to do before you spend a whole ton of money trying to implement this stuff. It doesn’t serve you well as an end user if you’re solving a problem that is not anchored on hard facts, hard pain points that have dollars and cents, and maybe even risk and compliance obligations, associated with it.

Ganesh: Awesome. Fascinating, good advice. What is one problem that you want everybody, all innovators and entrepreneurs to focus on in the future with AI?

Daniel: That is a hard question.

Ganesh: Is it because there are too many problems to solve, or because there are too few?

Daniel: I don’t know if there’s a single rallying cry that I could issue for all practitioners, or all customers in this space. Certainly, as builders, we owe it to our customers to demystify the hype and to apply this stuff in a way that helps them make the world a better place, versus just being hype machines for plausible or even possible outcomes that aren’t anchored on real-world issues. As far as the domain, there are so many interesting emerging spots, like the role and potential of artificial intelligence in the quantum computing world, or the merging of expert systems and other techniques through neuro-symbolic methods. These are some of the things that IBM Research is helping us advance with MIT that can actually stump the chump.

Ganesh: I was also trying to get into your mind: what are the things still worrying you, or where do you think the world or the market is going? You already talked about a lot of that and how the industry is evolving; this was just another way to get at something I missed in those questions. I have some rapid fire questions for you. I have so many questions, but I have to wrap it up. One is: give me a story. I used to ask this question slightly differently in all my shows. It was, “Give me an example or a story of how we will be interacting with AI in 100 years.” I don’t ask that anymore, because 100 years is too far out. Give me a story in 10 years. From Jeopardy and Watson 10 years ago to now, give me a story 10 years from now: where will we be with AI?

Daniel: I think it is going to be invisible and powering most interactions that customers have with the firms that serve them. You won’t even know that it’s powering anything, and even when you’re talking to a human, or interacting with humans inside the business, they’re going to be augmented by capability that turbocharges them, helping them make better, more informed decisions, even on creative tasks. But I expect it to be pervasive, therefore invisible, and mostly, as a result, just taken for granted, much like email is today.

Ganesh: Boring and invisible. I like that. AGI, Artificial General Intelligence, do you fear that? Is it going to be possible or realistic in our lifetime to expect that?

Daniel: There are many contemporaries out there, Elon Musk and more, who have taken on that particular topic. I’m not focusing on that at all. Our job as practitioners is serving our customers with the technology we have today, and, for the technology being born in IBM Research, helping to commercialize it. I don’t fear general intelligence. I don’t think we’re right around the corner from that at all. And so I don’t spend a lot of our team’s energy on that general question, to be honest.

Ganesh: Yeah, it’s funny. Personally, I don’t fear AGI; I fear narrow intelligence in the hands of bad actors. That’s a worse outcome than just robots taking over.

Daniel: Well, any technology wielded by bad actors is a problem, and artificial intelligence is just a different technological technique, but it’s technology nonetheless. I guess I have that fear pervasively across technology, not just across artificial intelligence. That’s my view at least.

Ganesh: Awesome. If somebody is watching this, and they’re trying to get into AI, what’s one resource that you would recommend to them?

Daniel: Go to ibm.com, give me your resume, and come join us. Let’s fight the good fight of building this technology responsibly and putting it into the hands of our customers so that they can make a difference in the world.

Ganesh: That’s awesome. And how can the viewers and listeners get in touch with you? Where can they find you on internet?

Daniel: danhernandezatx on Twitter, Daniel G. Hernandez is how you can find me on LinkedIn. And dghernan@us.ibm.com is my email, hit me up anytime.

Ganesh: That is awesome. Daniel, this was such a blast. I think we should do another episode later on because I still have a list of 200 questions that I haven’t asked you yet. But thank you so much for taking your Saturday. Happy birthday to your wife again. Thanks so much for getting on the show.

Daniel: Let’s go flying.


#AI and Healthcare. CEO @ Autonomize, @StoriesinAI . Scaled Data/AI biz to $B+ , 2x startups, ex-GM @DellTech . On life, startups & impact, sharing & learning