Common Mistakes Clients and Stakeholders Tend to Make in Artificial Intelligence Projects

Lakshmi Prakash
Design and Development
6 min read · Aug 11, 2022

The benefits of conversational AI are many (and no, conversational AI is not the same as chatbots). Many believe that conversational AI will be the future UI: that most communication between organizations and people will happen through it. With every passing day, we can see more brands from industries such as banking, insurance, and healthcare shifting toward conversational AI or intelligent virtual assistants. The change is real, the change is trending, and the change is happening so fast that people across industries may not have the time to really understand and digest it all.

Not Providing Data: Data science and machine learning are powerful. New, highly efficient models and neural networks are being researched, introduced, and deployed to deliver the best results. But for companies to make the best use of these, or for that matter, to use even the most common and basic algorithms and models, they must provide data. It's sad that so many people, even in the world of technology, do not take data seriously; they often admit that they have no records or structured data about their products, services, customers, expenses, and such. This is not a problem unique to conversational AI: companies should start collecting and recording data well before they decide to shift to artificial intelligence and data analytics overnight! There, I said it!
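"Start recording data" can begin very small. Here is a minimal sketch of logging customer interactions as structured records so there is something for machine learning to learn from later; the field names, file name, and example values are illustrative assumptions, not a prescribed schema.

```python
# Minimal habit: append every customer interaction as a structured record.
# Field names and the CSV file are illustrative, not a prescribed schema.
import csv
from datetime import datetime, timezone

FIELDS = ["timestamp", "customer_id", "channel", "query", "resolution"]

def log_interaction(path, customer_id, channel, query, resolution):
    """Append one interaction; write the header only for a new file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # file is empty, so this is the first record
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "customer_id": customer_id,
            "channel": channel,
            "query": query,
            "resolution": resolution,
        })

log_interaction("interactions.csv", "C-1042", "chat",
                "Where is my refund?", "escalated to billing")
```

Even a humble CSV like this, kept consistently for a year, is worth far more to a future AI project than a pile of unstructured emails.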

Having Unrealistic Expectations! This is probably the silliest of all mistakes in the world of AI. Once again, who's to blame? The rush of the shift and the change in trends add sudden, unexpected pressure, at least indirectly if not directly, on businesses to use AI applications. You could say this is something like peer pressure. Stakeholders might not be aware of what AI can do and what is way too ambitious, so they tend to set unrealistic expectations. For example, can AI read minds? Absolutely not! Can conversational AI gather information from within your company when you don't provide us with structured data or a database we can pull from? No. AI can't sneak into your systems and ask your employees for information. It's okay to dream and to inspire, but we must also be realistic in setting goals, no?

“Whenever you see a successful business, someone once made a courageous decision.” — Peter F. Drucker

Being Unclear About Their Own Ideas: We can train AI to act the way you expect, but we cannot get into your minds and divine your expectations. We can offer suggestions, and we can tell you what can be done once you make your end goals clear, but being unsure of those goals after months of working on a project leads to wasted time, effort, and money. Take your time; there's no need to hurry. First, make up your mind about what you want AI to do. "I want a virtual mental health coach." It sounds cool, alright, but do you realize how subjective that is, and how much more specific we need to be?

Expanding The Scope Frequently and Losing A Sense of Direction: Just because your rival companies are upgrading, that doesn't mean you should compete with all of them and try to outsmart them. As tempting as that can be, you can't keep adding use cases from several domains and expect one virtual agent to handle them all. For example, Flipkart is an e-commerce platform, BookMyShow is a ticket-booking application, and LinkedIn is a professional networking platform. Now, imagine clubbing all three into one application: how foolish would that be!

Planning and Making Decisions

AI is powerful, agreed, and AI can do many things, but all of this requires planning at a higher level, acquiring people whose skills match these expectations, and signing up for the tools we would need. Continuously changing the scope could also mean that the model needs to change. You would need to train it on much more data, and that might not always work. The original algorithm might no longer be applicable, and finding one algorithm that covers all these areas and meets all your demands can be hard; it can't be guaranteed to reach a high degree of accuracy either.

Not Giving Us The Whole Idea/Giving Incomplete Ideas: If, halfway through the project, you ask us why the AI is not meeting certain expectations of yours when you never stated those expectations clearly, that again means a lot of extra work. If you want a certain part of the message to be hyperlinked, please tell us so. If you want us to fetch the latest or most recently updated value using APIs, please tell us so. Please avoid making assumptions. AI wouldn't know when you expect what, you see?
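To show why these details matter, here is a hedged sketch of the two requests above: hyperlinking part of a bot message and filling in a value fetched from an API. The `fetch_latest_price` stub, the item IDs, the URL, and the Markdown-style link format are all illustrative assumptions; a real bot would call an actual endpoint and use whatever link markup its channel supports.

```python
# Sketch: a bot reply that hyperlinks one phrase and embeds an API-fetched
# value. The stub, IDs, URL, and link format are illustrative assumptions.

def fetch_latest_price(item_id):
    """Stand-in for a real API call (e.g. an HTTP GET returning JSON)."""
    return {"item": item_id, "price": 499.0, "currency": "INR"}

def build_message(item_id, details_url):
    data = fetch_latest_price(item_id)
    # Markdown-style hyperlink on exactly the words the client wants linked.
    return (f"The latest price of {data['item']} is "
            f"{data['currency']} {data['price']:.2f}. "
            f"See [full details]({details_url}).")

print(build_message("SKU-1001", "https://example.com/items/SKU-1001"))
```

Notice that both behaviors had to be designed in explicitly: nothing about the project "just knows" that a phrase should be a link or that a value should come from an API rather than from static training data.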

“The most important things to say are those which often I did not think necessary for me to say — because they were too obvious.”

— André Gide

Insisting that We Use Models You Suggest: Leave it to the machine learning engineers and data scientists to figure out the best algorithms and models. Just because you find a model or neural network impressive, or you were convinced of its power and capabilities by speakers at a conference, that does not mean it is the best choice for your problems as well.

Making Decisions and Sticking to Them

Giving us very little time: This need not be a problem specific to artificial intelligence projects, you might think. Yes, that's true; it is a common problem, unfortunately, but I see it happening in the field of AI, and only after weeks do I come to learn that clients are unaware of the time it takes us to design and develop. Some clients have asked me whether an AI tool or application wouldn't just start working right away once we upload flowcharts and connect databases. No. AI is still not that intelligent! 😄

Not Practicing AI Ethics: "Garbage in, garbage out!" is a common saying in the world of data science. It means that your product is only as good as the data you use to train it. If you use fabricated data or data that is clearly biased and discriminatory, if you want to over-generalize from your sample data, or if you intend to target, harass, or shame communities, then you'd be breaking AI ethics, and your product or service would be founded on unacceptable and unethical information. Is that what you'd want? This can lead to serious problems, and it can even be illegal, beware.
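One small, concrete habit behind "garbage in, garbage out" is inspecting the label balance of your training data before training on it, since a badly skewed dataset quietly teaches the model to ignore minority cases. The labels, counts, and warning threshold below are illustrative assumptions.

```python
# Sketch: check how balanced the training labels are before training.
# The labels, counts, and 10% warning threshold are illustrative assumptions.
from collections import Counter

def label_balance(labels, warn_below=0.10):
    """Return each label's share of the data, warning on rare labels."""
    total = len(labels)
    report = {}
    for label, count in Counter(labels).items():
        share = count / total
        report[label] = share
        if share < warn_below:
            print(f"warning: '{label}' is only {share:.0%} of the data")
    return report

labels = ["refund"] * 90 + ["complaint"] * 8 + ["praise"] * 2
print(label_balance(labels))
```

A check like this won't catch every kind of bias, of course, but it takes minutes and catches the most common one: a model that looks accurate overall while failing the groups or cases it rarely saw.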

No offense, but just because an idea seems brilliant or doable, that doesn't mean it's the best option or that we should go with it! If I had a nickel for every idea I've thought up and then had to let go because it was too ambitious or too far-fetched, I could buy myself Oreos for a year! 😄

“I’ve always felt it was not up to anyone else to make me give my best.”
― Hakeem Olajuwon

Relax, think it through, and share your ideas with us before you decide; we can brainstorm, consider different strategies and possibilities, and then decide together what we want. If you have questions we don't have answers for, give us some time; we will certainly do the research on our side and get back to you. You are our clients, so we want the best for you. Your feedback matters to us, and we would only be happy and proud when you're satisfied with our work at the end of the day.


Lakshmi Prakash

A conversation designer and writer interested in technology, mental health, gender equality, behavioral sciences, and more.