Testing and Evaluating Enterprise AI: The Power of Detailed Visuals for Transparent Machine Learning

Learn why most AI projects in industry never get past the prototype stage, and how visual, more collaborative software tools provide a solution.

Jason Behrmann, PhD
Zetane
35 min read · Jun 25, 2021


Image derived from the photo by Alex Kotliarskyi on Unsplash.

The majority of AI projects planned in enterprise remain just that: plans that never see the light of day. This talk by Guillaume Hervé, CEO of Zetane, provides an analysis of why businesses encounter several barriers that inhibit adopting valuable AI innovations. Presented at the BrightTALK Summit on Adopting AI for Enterprise Insights, Guillaume bases his analysis on a review of key reports and over 100 interviews conducted by Zetane with experts in data science and machine learning.

Complexity goes hand-in-hand with commercial applications of AI, and that makes the technology nebulous to experts and non-experts alike. This lack of transparency and understanding about the inner workings of complex deep-learning algorithms and their processing of data provides little reassurance when implementing such technology in critical industrial and business operations. Such reservations reach a whole new level of significance if the operations may impact the health and safety of others.

“In industry, seeing is believing, and seeing is understanding — especially with new technology.”

Guillaume Hervé

One piece of the puzzle to overcome these challenges is to provide new means to represent AI models and data using intuitive and tangible visuals that diverse professionals can comprehend with relative ease. This will open new avenues to explain and understand AI solutions. This next generation of human-understandable visuals for AI projects will also provide better means to oversee and collaborate on initiatives that bring AI prototypes to a full-fledged deployed solution. Learn about these strategies in this video from the event; below you will find a transcript of Guillaume’s insights.

If you notice that we overlooked a key topic that you would like for us to address in future talks, please tell us about your request in the comments below and we will do our best to research your recommendations in depth.

Transcript

Well, good morning; good afternoon, depending on where you’re joining me from. My name is Guillaume Hervé and I’m from Zetane Systems, the CEO and co-founder of a software company that has developed, basically, software solutions for machine learning and deep learning in enterprise. And today I’m going to be talking about a subject that’s very dear to me because we deal with it on a regular basis, which is the issue of transparency and the issue of trust in an enterprise to bring more projects forward. And you’ll see what I’m referring to as I show you some material today. I don’t do many speaking engagements; when I do, I always want to make sure it’s full of information that you can use. I always put myself in your shoes. And so today I will be giving you lots of material. And hopefully, you’ll see my train of thought there to help you. Whether you are a person in business looking to get more projects — AI projects — approved and into deployment; or whether you’re an AI-as-a-service or consulting firm that’s looking to have more success with your potential AI clients, that’s what this conversation is all about.

1:41
I start this presentation with three numbers: 85%, 50%, and 15 to 20%. These are really important numbers that we’re going to talk about today. And I’ll get back to them in a bit. But these are at the heart of the problems and the slow adoption and deployment of AI in enterprise. So, you know, if you’re here today, you might be representing what I call three different types of risks; or three different types of businesses, all of which have a different risk factor. So if you’re developing AI for games for toddlers; or if you’re developing AI for robots to help in customer service or calls; or if you’re developing more advanced AI, deep learning for autonomous vehicles, the consequences of failure are very different. You know, a vehicle that fails in autonomous driving could lead to injury or death. Whereas a game for toddlers that may not perform well may not be that consequential. There might be some people here who have lived through COVID with toddlers at home who might tell me, “Guillaume, I’ll take the problem with the car, and please make sure my kids’ apps are working fine.” But you see the point, right? Like, there is a difference in the risk that you’re facing. And this is what enterprise AI is really all about.

3:03
What I’m sharing with you today is based on well over 250 different client meetings and interviews that my team and I have had, dealing with people in enterprise trying to get AI projects done — whether they’re on the data science and machine learning side, or whether they’re on the business side. And I’ll share this with you today. It’s also based on lots of reports that I’ve read on the problems and challenges, and also the successes, of AI in industry. And also some of the projects that we have ourselves done with clients looking to deploy solutions in their business. So it’s very practical, very much from the field. This is not, you know, just vague thoughts or opinions; they’re from experience and/or from research. So, coming back to these numbers: 85%, 50% and 15 to 20% represent the failure rates, or lack-of-success rates. 85% of AI projects never go into production — a Gartner report that came out a year ago or so. McKinsey and another report: over 50% of projects never make it past the proof-of-concept stage. And if you look at Accenture, who’s been following AI for a long time, they’ve reported that only 15 to 20% of companies are able to bring projects to scale. So you might get a POC done — proof of concept — maybe an MVP or prototype. But you won’t get it past the approval inside your company. Now, why is that? I mean, these are alarming numbers. And so when you look at it, there are very good reasons, and my hope for my talk today is that you get one, two or three things out of it to prevent you from falling into this red zone and these issues, and to be more successful at getting projects through into deployment.

4:58
So yeah, we’re going to talk about these numbers, and more specifically why these numbers are scary, and hopefully give you ways to not fall into those statistics. We’re going to talk about why it’s so important in industrial and business contexts to be able to open that black box, something that we do very well at Zetane. In fact, our focus has been from day one to develop a platform that’s fully transparent — top to bottom, bottom to top — so that people can have answers to every question regarding these algorithms. We’re going to talk about funding and what prevents people from getting good funding for their projects in business. And we’re going to talk about AI in industry being a “team sport”.

5:44
So, a quick overview of the team; I’ll give you a few slides on who we are just so you get the context. We’ll talk about transparency and why it’s required in certain portions of industrial projects. We’ll talk about what we learned from industry and from our own projects, and things that you need to focus on. And in fact, this slide is a snapshot of three different projects that were done, either with our clients or by our clients. Where, as you can see, we’re showing AI in a very different way: this is no longer confusing graphs and Python libraries and scripts. We’re rendering a human-understandable — as we like to call it — solution so that people around the table can understand how to make these models better if you’re on the AI side. But if you’re on the business, the operations side: what these models will do for you, what the risks are. And when will they work and when won’t they work? And how do you manage that? I’ve also put in a few question periods. And we’ve got Jason on the chat side with us as well. So please, I will have three breaks so that we can answer questions as we go. Don’t be afraid; raise your hand, digitally speaking, and let us know. And I’d be glad to answer as we move forward. So let’s start with a little bit of background on who we are.

7:09
So we’re based out of Montreal. We’re one of those few companies whose technology is actually based on proprietary IP. In fact, we received our patent over a year ago now. And our technology that you’re going to see today — in the shots that you’re going to see — is all based on our Engine, the Zetane platform itself. We have a great team: multidisciplinary AI, machine learning and deep learning people; we’ve got folks with a 3D background; we’ve got strong software developers and engineers, and a team from many backgrounds as well. That allows us to have some great ideas as we move the product forward. What you’ll see as well is that the Zetane platform, at its core, is a software to evaluate, test and optimize AI solutions to work more effectively, but also to de-risk projects and increase user buy-in early. And we’ll talk about why this is a big problem in industry. You can bring in your models and your data with one or two lines in your favorite AI infrastructure; you launch your models into Zetane and you see them in a very different way. It’s an “object way” of showing you in 3D what is actually going on with your data, your model and the outputs that it’s generating.

8:31
And the way we came at this with the founders: at one point, we said to ourselves, why is AI so complicated? I mean, it’s complex; certainly, the models can be very complicated, the data can be massive. And the end use-cases can be very challenging. But you know, there have been other industries that have had the same challenge. And the one that came to mind for us was medical imaging.

8:54
Most of us here have had an X-ray, or an MRI, or some sort of digital scan — medical imaging — and if not, you’ve probably seen it on TV. And what’s amazing about medical imaging is that on the one hand, it provides information for extremely advanced research; you know, top-end scientists will use medical imaging to make incredible discoveries and to advance the state-of-the-art in medicine and in finding cures to some of the biggest problems facing us, right? But that same technology will also allow a doctor to speak with their patients and communicate exactly what is going on with them and why they recommend a certain treatment versus another. And so, an extremely advanced use of medical imaging, and yet something that is rendered human-understandable — as we like to call it — to allow people to understand what is going on. And it makes it something that is not scary. And so this is what we wanted to do with the team. We said, “we should be able to have the same impact in the world of AI as the MRI did to revolutionize medicine for scientists and research, and to make the very complex very human-understandable.” Not just for AI teams to be more effective in doing their projects faster, but for subject matter experts in business to contribute as well.

10:31
And so why did we decide to take that role? Because in business — and my whole background is in industry and, you know, in technology as well — seeing is believing and seeing is understanding, especially with new technology. If you want adoption and buy-in for your projects, your business teams need to understand why they can trust them and why they can be reliable. Thus, transparency.

10:57
So, very quickly, we sell software. On the left side, we have a free Viewer available; it’s been available for months, and it’s free forever. It allows you to see machine learning and deep learning models you download in seconds. And you can start loading models from a model “Zoo”; we have curated a bunch of models for you. And you can start seeing how to view models, their architectures and why they do what they do.

11:26
But we also have services. And when I talked about the projects that we did, a lot of my talk today is based on the feedback and the experience we had working with clients on their AI projects and helping them get through proofs-of-concept, in some cases MVPs, and in some cases a scalable solution. And ultimately, the way we tackle this problem comes from one of the findings that we discovered during all these interviews: a lot of the data scientists and the ML engineers and the people doing the AI were frustrated because they were spending a lot of their time doing non-AI, non-data-science, non-machine-learning work. They were either developing homegrown tools because the tools didn’t exist, or trying to, you know, sew together two or three open-source libraries or pieces of software to get something so that they could move their project forward. And so what we said is, “well, what are all those places where we can help in terms of tools and visualizers and dashboards?” And explainability — or xAI (explainable AI). How do we provide a platform that helps you do that so that you can better test, evaluate and validate your models prior to deployment? And at the same time, allowing you to have far fewer iterations of data and models going back-and-forth in the cloud, wasting usually hours, sometimes days, sometimes more, to get a reply that says it’s not going to meet your performance or your operational goals.
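
As one concrete illustration of the kind of xAI tool mentioned here, below is a minimal sketch of vanilla gradient saliency in PyTorch. It is a generic example with stand-in names (an untrained ResNet-18 in place of a real trained model), not Zetane's implementation.

```python
# A minimal sketch of one common xAI technique: vanilla gradient saliency.
# The model and input are illustrative stand-ins, not Zetane's API.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for your trained model
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder input
logits = model(image)
top_class = int(logits.argmax(dim=1))

# The gradient of the top-class score with respect to the input highlights
# the pixels that most influence the prediction.
logits[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values  # one heatmap per image
print("Saliency map shape:", tuple(saliency.shape))  # (1, 224, 224)
```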

13:02
So for us, we talk about reducing the guesswork. If you can see inside your models and your data in a more effective way, we can save a few of those runs and save a lot of time without having to develop homegrown tools as well. So that’s really how we came about what we do today.

13:21
The Zetane workflow is simple. You launch it from your favorite AI architecture or AI platform. You load up your inputs, and in the background we do the work to translate your model — an ONNX conversion — into a visual environment that data scientists can open up and inspect, and then demonstrate to non-AI people to show them what’s going on.
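
To make the conversion step concrete, here is a minimal sketch of exporting a PyTorch model to ONNX, the interchange format the workflow relies on. The model and file name are illustrative assumptions; the transcript does not show Zetane's own launch call.

```python
# A hedged sketch of the ONNX-conversion step: exporting a trained PyTorch
# model to a .onnx file that a visual tool can then load and inspect.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for your trained model
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input shape
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",             # illustrative file name for the exported model
    input_names=["input"],
    output_names=["output"],
)
```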

13:44
Like I said, it integrates with all of the usual suspects and your favorite platforms. We’ve also begun work on integrating with the Unity and Unreal APIs for those that need those higher-end 3D-rendering engines, though our platform does some of that as well. It’s available on the usual operating systems, and we’ve also made it ready for Docker deployment as well.

14:13
So if you’re not from Montreal, or Quebec, or Canada, most of these names won’t mean a lot to you. But we’ve gotten some really good support along the way over the last couple of years to help us build up the company, with R&D and other support. So these are some of the folks that have given us a lot of help. For those that know the Open Neural Network Exchange, ONNX, at the bottom left: this is not a regulatory body, but a body of some of the biggest players in AI, machine learning and deep learning who have gotten together to try to bring some standards for interoperability around AI models, especially in machine learning. So we’re a member of that. And our Viewer — the free one that I mentioned earlier — is actually recommended by that group as well. So we encourage you to check them out if you don’t know them already.

15:02
We have some great clients. And one thing you’ll notice here is that they’re not in one specific industry. So think of our platform as a horizontal tool that allows you to work in different industries, as opposed to a vertical-specific tool that does one little thing very well for one application. We’re trying to equip AI teams and business teams in enterprise to work more effectively. And because of our IP, our tool is data agnostic. Because of that, you can ingest any data, which makes it easy for us — and for the users of the software — to work in any industry as well. So I’m just going to switch screens to show you a quick demo of what this can look like. Pardon the close-up.

16:00
What I’m going to share now is a project that we helped a client do. They had massive, massive amounts of data being collected in a brain-surgery simulator, and they wanted to develop predictive algorithms to evaluate the outcome of a patient based on how the procedure was going, so as to intervene earlier with surgeons. And so what you’re going to see is the Zetane platform, where you’re going to see the actual data — which were pretty complex sets of features — and you’re going to see the model in 3D, as opposed to in code and libraries. And all that is linked to a 3D simulation that shows you, and shows the non-AI people, how the platform is working, allowing them to begin to trust why that project is worthy of more funding.

17:00
So as you can see, you can load the data; you can see it running in real-time; you can build the 3D application and all the binding and things that go along with an AI project. And then there are the boxes that you see right there at the bottom right: that’s the actual model used, with each of those boxes representing a layer or a function. And each of those boxes you can open up to see how they’re contributing to the end solution. So that’s just to show you what an AI project can look like, to make it more understandable and more engaging for your business teams.
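
For a sense of what those boxes correspond to programmatically, here is a minimal sketch that lists the layers and functions inside an exported model using the open-source onnx package. The file name is an assumption, and this is generic tooling rather than Zetane's API.

```python
# Enumerate the nodes ("boxes") of an exported ONNX model: each node is a
# layer or function such as Conv, Relu or Gemm. File name is illustrative.
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    print(node.op_type, node.name, "->", list(node.output))
```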

17:41
Switching back — you probably see me now back on the slides. So yeah, I talked about the free Viewer; do with that what you wish. So let’s talk about transparency: what’s the issue around transparency in industry, and why is it getting so much attention? To put it in really, really simple terms, you’ve got an “input” that goes into this thing called the “black box” — so called because nobody can open it, especially when you’re using cloud-based or open-source solutions. They’re great, but you may not be able to open them up and inspect them. And then they give you an “output.” The problem is, in some types of industries — like I mentioned earlier — the more risk you have and the more consequential failure becomes, the less this concept of the black box can coexist with transparency, because “trust me” doesn’t work in those types of applications. And so people have been spending quite a bit of time, especially in the machine learning and deep learning space, figuring out how to open that black box.

18:55
And why is that important? Well, here’s my takeaway from the projects that we’ve done, and certainly the research that we’ve done: decision-makers are ultimately the ones in enterprise paying the budgets. So you may have an AI group, but that AI group is funded by some part of the business, some part of corporate funding. And these people are putting money in so that you can exist and do these AI projects. And typically what we’ve seen is that most companies, because of all the talk about AI, are creating these groups. And they’re able to exist, but their ability to scale is a real challenge. Why? Because they’re having a hard time contextualizing their projects. The real challenge is they’re having a hard time understanding the risks associated with their algorithms if they’re deployed; showing the business how that risk has been tested, evaluated and validated; and showing how risk-mitigation procedures have been put in place before launching an algorithm into the wild. ROI — return-on-investment — and scalability beyond the research project, that is, scalability after a POC, are the next real stumbling blocks.

20:06
I mention this because, if you’re on this call, what’s really important is that you can get a little funding — what I call “small money” — to do proofs-of-concept and have your lab and have a few people doing an AI project. But to get the big funding that enables your projects to move past the proof-of-concept into a scalable domain and then eventually deployment, you need to address these issues, and you need to address them head-on.

20:35
So I call this the “little funding, big funding” problem. And I mentioned it upfront, but this is what it’s all about. One of the issues that has been well documented for a few years now — these are just two quotes, I could have put up a lot more — is that the black-box problem, and the complexity that goes with it, is essentially this: if you cannot explain and show which factors are leading the algorithm to its final solution, decision and/or prediction, and how it’s coming to that decision, then you’re going to have a hard time convincing the operational people and business people that may want to use AI that the solution is safe for use in their operations. The same thing is said differently by Deloitte: if you produce results without being able to show an explanation, then you cannot understand when an algorithm will make an inappropriate decision. And that’s a problem.

21:32
And if you think of banking, if you think of human resources applications, if you think of autonomous vehicles, if you think of identifying people: I mean, there are a lot of issues with getting it wrong. And so AI is not just about testing to show that it’s going to get it right; it’s also about showing when it’s not going to work and how you’re going to be able to flag it. So that’s the black box at a high level.

21:53
And again — like I said earlier, and I’ll say it again — it depends on the complexity of your business and the consequences of getting it wrong. If the consequences are very small, explainability in that sense may not be an issue for you, because it may not be worth the effort and the time. Dr. Lecue, somebody that we’ve worked with over the last while, has made a couple of presentations on this; he’s one of the key opinion leaders in the world of explainable AI. And he put it really well.

22:24
If you look at the orange-red arrow on the y-axis, it’s the cost of a poor decision. As you go further up the y-axis, the consequences of somebody getting it wrong become higher and higher. And so for those types of businesses in the top-left and top-right quadrants, explainability becomes a real issue. These are just examples to contextualize what I’ve been referring to.

22:55
And the problem that we have in AI and in machine learning and deep learning is that the more accurate the model — what you’re seeing on the y-axis, top left — the less explainable it is. But usually, if you’re using a deep learning model, or another complex model, it’s because you’re dealing with complex things: you probably have a big problem where the consequences of failure are high and where you want a lot of explainability. And it’s hard to get it if you don’t have a platform that allows you to open up the model and explain why decisions can be trusted and how the decisions came to be from an algorithmic perspective. So if you’re a data science person trying to deal with explainability and accuracy, that’s where you live: right here. And it’s a real issue.

23:41
And that’s why a lot of people would [inaudible]. So yeah, that’s why opening the black box becomes important — and that’s when it becomes important. When we looked at it, we said, “you know, why does everything have to be so complicated?” And that’s how we came to develop the platform that we did. One of the big takeaways from our research and from dealing with clients — and when we deal with clients, we deal with both the AI teams and the business teams, so we see both — is that there’s a communication breakdown. There’s a wall there, and that wall leads to these failures, or is part of what leads to failure, through the lack of adoption and scalability of AI projects. And we looked at it and asked, “well, why? What’s causing this wall between these groups that are meant to work together?”

24:31
Well, wrong tools. Meaning, the tools that were built for research labs and academics — the world that led to the growth of AI — are no longer the right tools for AI in industry. And the second issue was that business people were being involved way too late in the AI project. AI teams were working weeks and weeks, if not months, on something they really couldn’t share with the business people, because there was no easy way of sharing it. And so they wait until they’re done, only to be told, “well, you might have 95% accuracy, but in these three areas you’re not meeting my needs, or these outliers are actually the most important part of the business. So you have to get it right.” If they had known that earlier, they would have saved weeks and weeks and months. So: wrong tools, and teamwork happening too late. Here, I thought I’d take a little break and just see if people have questions. I’ll just stop sharing for a second.

26:22
And we’ll go on to the next part: what we’ve learned from industry projects and from all these interviews that we’ve done. So you know the data. Now, what’s the answer? Well, assuming that you’ve got data and you’ve got access to data, what is leading to some of these low scalability rates that people are reporting? First and foremost, people in the AI groups have difficulty demonstrating value — and value can be the financial returns of the project; it can be risk mitigation for the business based on the AI solution; it can be capturing more clients, expanding the market or retaining more clients. So there are different ways to look at ROI. But they’re having a hard time showing how these great solutions are the right ones to use for the business and how they’re going to pay off.

27:39
Lack of management trust in models we’ve talked about before; the language of AI is just not human-understandable, despite all the efforts that people are making with graphs and tables. It’s just not enough.

27:53
No clear proof-of-concept or MVP to show potential. When I say “not clear”, I mean that demonstrating complicated graphs and complicated tables and AI metrics does not help a business person understand how to move forward. And because AI teams are taking so long to come out with POCs and MVPs — because of this guesswork and this back-and-forth in the iterative process that we’re addressing — it creates a sort of lack of belief, or lack of trust, that something is available. Then there’s the lack of ability to demonstrate that POCs can scale. A lot of teams that are getting to proofs-of-concept are doing so using some sort of technology: open source, other software — there are great platforms out there that help you get there. But then when they look at how to scale it, they say, “well, we’ll need another type of platform.” And that puts the whole proof-of-concept into question, because most proofs-of-concept should be: “we’re just going to do more of this at a bigger scale — we need more data.” And often it’s not only that we need more data, but that we need new tools and a new environment. So if you can start your POC in the same environment in which you’re going to do the scaling, it can really help.

29:06
And finally, key business stakeholders and SMEs are involved too late in the process. And there’s a lack of tools for QA [quality assurance] testing and evaluation of machine learning models. As we say in some of the software circles that I follow, software has a huge body of knowledge around test and evaluation and QA that you have to pass through before you can deploy software. But in AI, we haven’t caught up to that. And yet AI is simply an algorithm that develops itself by learning from data. So it should have the same requirements.
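
As a concrete illustration of what software-style QA for a model could look like, here is a minimal sketch of an automated accuracy gate. The file name, placeholder data and 0.90 floor are illustrative assumptions, not a prescribed standard.

```python
# A hedged sketch: treat a trained model like any other software artifact
# and gate deployment on an automated acceptance check.
import numpy as np
import onnxruntime as ort

def accuracy_gate(model_path, images, labels, floor=0.90):
    """Return (passed, accuracy) for a trained ONNX classifier."""
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    (logits,) = session.run(None, {input_name: images})
    accuracy = float((logits.argmax(axis=1) == labels).mean())
    return accuracy >= floor, accuracy

# In practice you would load a real held-out validation set here;
# random placeholders are used only to show the calling convention.
images = np.random.rand(32, 3, 224, 224).astype(np.float32)
labels = np.random.randint(0, 2, size=32)
passed, acc = accuracy_gate("model.onnx", images, labels)
print(f"accuracy={acc:.2f}, deploy={'yes' if passed else 'no'}")
```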

29:40
And you have a shortfall of AI resources generally speaking, so the demand is outpacing the supply here. But what did we learn from the specific interviews? Well, I broke it down into two groups. Like I said, we’ve spoken to probably well over 250 people now. And so I’ve made a slide for what data scientists and machine learning engineers are saying, versus what the AI supervisors and business leaders are saying.

30:07
We want less guesswork. We want to know why something’s not working. And we want to know it won’t work earlier, rather than waiting through a bunch of iterative processes and sending stuff to the cloud only to find out that we won’t meet our metrics and our targets.

30:22
The next one is: deal better with debugging. Debugging is still very complicated. Where do I look? Where should I focus? Certainly, in our platform we’ve given people the ability to be pointed in a direction so they don’t have to start from zero. “It takes too long to figure out the problems with my data” was number three. I’ve mentioned this before: we want fewer futile iterative training trials, which are costing us time and money. We want more out-of-the-box solutions so we don’t have to build homegrown tools ourselves — “I’d love to do more data science, thank you, please” was the next one. Better collaboration: they know that they’d like to be speaking with the business people earlier in the cycle, but they have no means to do that right now. And: no one knows what I’m doing. This is a huge frustration on the data science and machine learning engineering side; they’re saying, “I’m working hard to try to come up with solutions, but nobody on the business side actually knows what I’m working on.” And that’s really, really frustrating. Or: they don’t understand AI.

31:25
Now on the business side and on the supervisory side: interesting points of view. The ML team supervisors — the head of AI or the director of AI specifically — want better tools to see where their teams are stuck. We’ve noticed that in many AI teams, when the more junior people or the interns go to a more senior person because they’re stuck on a project, helping them figure out where to start — to debug, or fix, or clean the data, or do something with the model — is a very, very difficult process. So they need better tools to help their own teams. They also know that the SMEs are involved way too late, wasting valuable time. The iterations are taking too long — that’s a common thread among both groups. And they want to understand risks before you talk to them about deployment. This goes back to test, evaluation and validation.

32:23
Alright: clients don’t understand jargon. So you know, what’s a tensor? What’s YOLOv3? What’s ResNet? What’s a GAN? I mean, I could go on and on. Until I got involved in AI, these were terms that were complicated to me. And if you talk to me in those terms, as a business decision-maker, I’m going to have a hard time getting to trust what you’re trying to get me to approve for more money or more funding.

32:52
AI solutions don’t offer context. And for those people that are not in the proof-of-concept world, a lot of the businesses are saying, “Can I see your proof-of-concept before I fund more of that work in the company?” And the corollary to the frustration from the AI teams: the supervisors are saying, “What are the AI teams doing? I’m not really sure what they’re working on.” You can see where the communication wall exists very clearly.

33:21
There are also challenges we experienced firsthand, with specific projects that we worked on with companies. The POCs and MVPs that we did see were very hard to understand for the end-user — and that is ultimately the user of that AI. Explainability matters, as I mentioned in the intro, when risks are present, and you need to understand the risks right upfront so that you can begin to create your explainability story as you develop your solutions. And it’s about building trust. Number three: when will it work, and when won’t it?

33:58
Data scientists and ML engineers need better tools to iterate faster and to minimize the trial and error that is hurting their credibility. When it’s just seen as trial and error, it almost looks like you’re throwing stuff at the wall hoping that finally something will work. If you’re a business person, that doesn’t create trust and confidence. And even with the biggest companies — don’t fall into the trap of thinking a big company has a big AI team. The size of the company is not at all related to the size of the AI capacity it may have. A lot of the larger companies we work with don’t have a lot of AI experience, so you’ve got to adapt to that as well. And that’s what we concluded.

34:38
You saw this slide earlier. This is our summary of how we see the world of AI. AI in industry is truly a “team sport”. It’s not a sport of research labs, where folks tend to keep the research to themselves and are very, very protective of showing their work; they publish very carefully, for very good reasons. In industry, there’s got to be more teamwork, where people are working together on the AI solutions right from the start.

35:08
There’s a great paper by Accenture called AI: Built to Scale. And I’ve just circled the important stuff in red there. The best practice that came out of that study is: whether you’re doing AI only in your own company, or you’re an AI-as-a-service provider, or you’re a data science team, develop a proof-of-concept factory, with proofs-of-concept that the business can understand. Then be able to talk about how you will scale, so that a proof-of-concept is meaningful in terms of estimating its scalability — the risks, the time, the additional data — so, the cost and all the resources that will be required to achieve that scaling. And because you’re in industry, it’s got to scale to the point where, if the business grows, the models grow with it. So I thought I’d share that as well.

36:00
I talked about tools that are hard to understand. There are lots of platforms out there that are very useful and good at doing what they do. But when you’re trying to dig deep into understanding — are you meeting operational constraints from the business? why are you stuck somewhere? and how do you communicate your project to other people in your team? — they’re not necessarily that easy to use in that context. So what we’ve done, as you saw earlier, is create an interface where you can show models as objects with all of their structure and layers, so that people can start opening up each part of the model, see how each part contributes to the final decision, and therefore begin to understand when it will work in the real world. For our platform to be transparent and understandable, we felt that it had to have a very well-populated library of explainable-AI or “xAI” tools. So we curate all those and embed them in the platform as a drag-and-drop application. Most projects in industry involve multiple data types — no longer just a single type — so we make sure that people can ingest the data that matters to them. And we loaded it up with many, many curated models so that you don’t have to reinvent and start from scratch in cases where models do exist.

37:24
And whether you’re looking at our platform or others, always ask yourself: “Is this the most transparent way for me to initiate my machine learning project? Am I paying attention to how others will understand my project and my solution, if they’re not from the AI world? Am I able to dig into the models so that we can see where they work, how they think, how they come up with their final solution, how the data passes through the model to give the desired output, and why it can be trusted?”

37:56
We talked about testing and debugging already, so that’s a bit of a summary. I’ll show you another project, which will give you a whole other context. This is a project that a client did with our help, where they had satellite images and wanted to be able to classify rooftops for disaster relief and the evacuation of people on roofs in areas that have problems. If you imagine, from a satellite image you can tell if a roof is sloped and what kind of roof it is. And if you have a helicopter trying to deliver water or food or support or aid, not knowing that can be very problematic. So the idea is you take satellite images, you look at the roof, and based on that you’re able to tell some helicopter somewhere: you can land on house roof “two”, but you cannot land on house roof “eight”. So I’ll just switch again to show you that video.

39:05
What you’re going to see here is the images. This is what a user of Zetane would see in the real world; it’s not just a video — it was taped off the Engine. You’re going to see the actual satellite images being brought in, high-res; you’re going to see the actual model used, being launched from your Python script in one or two lines very quickly. So you see the data, the images; you see the actual models. And then you’ll see that the user can open up — not just running the data in real-time and passing it through — but see, as the data passes through, what the layers and what each of the boxes, each of the functions, are doing. And you can take that all the way down to the tensor level. So if you’re a data scientist, it really helps you to pinpoint, you know, how to optimize and how to make sure you’re meeting your numbers. But if you’re sitting in front of a business person that is involved in satellite imagery — a user of this particular solution — they can ask questions, and you’re able to say, “Well, let me explain to you how that image is being labeled with A versus B, and what the most important features are that are defining that.” The expert can tell you: “Well, that shouldn’t be the most important feature, because there are other features that should have more influence.” And so now you’re able to find that out early on, make your modifications as required, and then move on with the project in a more effective manner. I thought I’d show you this one because this is an image-based one. Back to the slides — give me a sec.
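
The model is said to launch from a Python script in one or two lines; since Zetane's own call isn't shown in the talk, here is a stand-in sketch that runs a satellite tile through an exported classifier with the open-source onnxruntime package. The file name and preprocessing are assumptions.

```python
# Run one preprocessed satellite tile through an exported rooftop classifier.
# The file name and input shape are illustrative, not from the talk.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("rooftop_classifier.onnx")
input_name = session.get_inputs()[0].name

tile = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder tile

(logits,) = session.run(None, {input_name: tile})
print("Predicted roof class:", int(logits.argmax()))  # e.g. flat vs. sloped
```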

40:47
And who else? All right.

40:50
So that was the last part of it. Any questions here? I can stop and see if there are any questions or feedback on the chat. I’m not seeing anything come in.

41:38
I’ll keep going; I’m just mindful of time — this is what I’m looking at — and there’s plenty of time for the end as well. Okay, so the last part is what you need to focus on in a project. This is going to get a little more technical, but I thought it would be useful for people that are involved in different types of projects.

42:00
So remember — going back to “Don’t forget, a project is all about the team.” You might be a data scientist, or an engineer, or a manager in the business, a manager of IT, or a manager of operations. Or it might be a customer that needs things validated; or a regulator that wants to make sure that these projects go through a rigorous process before authorizing them, because of the risk to the population, right? So always keep that in mind as you’re working on your projects.

42:31
So here’s our main takeaway — there are many, but of course, we know about data: there’s volume of data, type of data, and quality; we’ll talk about that. There are the people that are involved, and there’s knowing how to understand and communicate business needs. On the data, the big takeaway is that, yes, we can handle all types of data, and typically projects use different types of data. Certainly there are some that just use tabular or time-series data or whatever, but many we see are using two or three types, more and more. But what you see in the bottom right — where POCs and MVPs play a key role — is that often in a project you don’t have enough data to necessarily say you can scale it right now. But if you’re doing your proof-of-concept well, and you have enough small data of good quality — so you don’t need “big data”, a lot of data — you can actually start a project and get to a POC and an MVP very well. Certainly, that’s been our experience.

43:34
And because you can work effectively in a context like the Zetane platform — within your own workflow as well — you’re able to show with confidence that your POC works, and where the data needs are going to be in terms of investing in data acquisition for a bigger, more scalable, operational solution. And it creates confidence not just in yourself, but in your management team and in the business people that are ultimately funding these AI projects.

44:05
People you’re going to want to think about as well: one of the mistakes we see is teams working off on their own, in a corner, in a bubble, and not involving other members of the business that will ultimately be the decision-makers, or will have a big influence on whether your AI projects get funded. Here we’re talking about other tech people in the business — maybe your engineers, or your software devs, or your IT teams — who will have a say as to whether your project looks robust enough to be deployed or has a chance of scaling. Database administrators are often overlooked, and they can be of huge value; why are data scientists, who are more expensive and harder to get, spending so much time doing what database administrators should be doing? And finally, your business decision leaders — directors and managers — need to be involved and have a regular view of how your project is progressing. Get them involved early and get their buy-in.

45:07
The needs have to be actionable. What we’ve seen in a lot of AI projects is that people come up with some great things, but the ultimate result doesn’t give operators or users of the AI something to take action on that will directly show how the business will benefit — whatever that means for each business. It needs testing and validation: if you’re in a semi- to high-risk environment, people will want to know that you’ve done testing and validation and testing and evaluation — or “QA”, if you want. And you need stakeholder buy-in, like I said — I’m repeating myself here, but it’s so important, because we’ve seen a lot of projects fail when that wasn’t the case. Why? Because there was no easy way to demonstrate the solution; we need to open the black box. As you’ve seen before, if you don’t, you will work ineffectively. And the longer you take to come up with a project POC, or to get from POC to MVP, or to full scale, the more you create doubt in the business team: they start to wonder whether you’re cooking the books or cooking the solutions, just trying stuff with no real clear path. So the more you can open your AI black box early, the more you can see what’s going on inside the models.

46:18
You want, at some point, to be able to see what the tensors are doing, because ultimately seeing the tensors tells you where priority is being placed — how the algorithms are thinking about the data and how the algorithms are learning from the data set. And if it’s a deployed solution and you’re testing for data drift, you want to be able to understand why that’s happening.
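
As an illustration of the tensor-level introspection described here, below is a minimal sketch that captures each layer's output tensors with PyTorch forward hooks. The model is an illustrative stand-in, and this is generic PyTorch rather than Zetane's mechanism.

```python
# Capture intermediate tensors with forward hooks so you can inspect what
# each layer is doing. Model and layer names are illustrative stand-ins.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for your trained model
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on each top-level child so its output tensor is saved.
for name, module in model.named_children():
    module.register_forward_hook(save_activation(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # placeholder input batch

for name, tensor in activations.items():
    print(f"{name}: shape={tuple(tensor.shape)}, mean={tensor.mean():.4f}")
```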

46:42
So with Zetane, you can really do the introspection you need. That’s just an example of how deep a dive you can get, if that’s your world. Our platform, for sure — but others as well — will give you this. Instead of you developing these yourself, we create a lot of metrics and performance indicators for you, to help you understand. This was an imagery example of how opening up a satellite image across the different layers shows you how the algorithm is concluding and making its prediction.

47:20
Here are some checklists that we use as we go through projects: when you’re designing your neural network, when you’re validating a trained network, and while you’re training the network. I think checklists are the underused solution of the world of AI; a good AI team should develop these checklists and then use them across their workflow as a way of doing business, as good process management. And so here are the ones that we use, which we’re happy to share.

47:51
Observations on your models: these are some of the things for which people should, again, have a checklist — don’t forget to look at this; don’t forget to look at that. Why? Because when you’re in front of your funding manager or your supervisor or your business leader, you’re able to answer the question: why should I trust it? Well, because we’ve gone through all of these, and here’s what each of them tells us about how the machine learning functions and algorithms are working. And if you’re looking to validate a trained network, you need to validate your data, but you also need to validate your model down to the tensor level — to what we call the “unit level”. Because these will tell you when things will work, but they will also tell you when things won’t work. They will tell you where the model is focusing its attention inside the data, and you’ll be able, with your subject matter experts, to determine whether that makes sense or not.

48:48
So visualizing is key. We believe that — and we certainly see it in our projects; not all the time, but for projects where there’s a bit of complexity, and certainly where trust, explainability and transparency are required — you get a better idea of what’s going on and are able to explain it to yourself first, and then to others.

49:10
Normalizing is important. For those on the science side of things, here’s how we look at the world of normalization. In Zetane, when we represent a model and convert it to an object, as opposed to code or libraries, you actually see the structure of the model. You see it going left to right: your data comes in at the left and exits at the right with the actual output recommendation. Understanding the way the model is structured — often, you know, data scientists will draw it out on a piece of paper when they get started — and being able to explain it also creates a lot of trust.
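
The talk doesn't spell out the normalization step itself; in the common machine-learning sense it means scaling inputs to a comparable range. Below is a minimal sketch of per-channel input normalization on illustrative data.

```python
# Per-channel normalization: shift and scale image data to roughly zero
# mean and unit variance so no channel dominates training. Data is fake.
import numpy as np

batch = (np.random.rand(8, 3, 224, 224) * 255.0).astype(np.float32)
mean = batch.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean
std = batch.std(axis=(0, 2, 3), keepdims=True)    # per-channel std
normalized = (batch - mean) / (std + 1e-7)
print(normalized.mean(), normalized.std())        # ~0.0 and ~1.0
```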

49:48
And then there’s zooming into the model. Because it’s a 3D environment, we’re able to zoom in all the way, as we saw — right down to each node, down to each weight, each tensor. Voilà: zoom, and more zoom. So, you know, dig deep, and then come in and come out.

50:03
And finally, try to create early on a view of the world as seen by your operator, or the people that are going to be using your solution. In Zetane, you can create live dashboards that not only show the dashboard and the metrics you’re giving your operators — on which they’re basing their decisions — but also let you see the models, open them up, and see the performance of the models. So if something goes wrong or is questionable, somebody can quickly go right into it with the data there and explain why that decision is being made. The model coexists with the live data, which coexists with the user output.

50:47
And so in conclusion: be mindful that if you don’t do things differently, you will fall into the 85, 50, 15-20 category, and you’ll have a hard time getting your models or your projects past the goal line — certainly past the proof-of-concept stage. Number two, think hard about the industry that you service, why the black-box issue matters, and how much you need to open that black box to get more projects deployed internally if you’re working in an industrial company. And if you’re AI-as-a-service: who are you servicing, and why do black-box concerns matter to them? Number three, don’t get fooled. Lots and lots of companies are getting little budgets to set up AI departments and do small projects. But the game is won or lost on your ability to address the 85, 50, 15-20 issue so that you get the bigger funding that allows you to scale your project into the cloud. And finally, as I said several times, don’t forget that AI in industry — from all the work we’ve done — is a team sport. You need to make sure that you engage your stakeholders early and frequently. On that note, I thank you. For those that participated, you’ve got my email there; you’ve got Jason’s email there; you’ve got our website. In fact, it’s funny, because we’re releasing a new website early next week, so you’ll see a much more focused value proposition there. But you can download the Zetane Viewer — actually, all the software — directly from zetane.com.

Additional recent presentations from Zetane


Jason Behrmann, PhD
Zetane

Marketing, communications and ethics specialist in AI & technology. SexTech commentator and radio personality on Passion CJAD800. Serious green thumb and chef.