AI or Not — The Users Control It

Anand Tamboli® · Published in tomorrow++ · 11 min read · May 7, 2021

No matter how intelligent your AI system is, if its users or the other systems it interacts with are not good enough, it will eventually fail to deliver. Failure may not be the only outcome; in some cases, it can also create business risk.

AI systems are not standalone; they often interact with several other systems and with humans too. Each interaction point therefore carries a chance of failure or degraded performance.

Your AI solution is only as good as its users.

There are a variety of users

We can classify computer users by role or by expertise level. Role-based classifications include administrator, standard user, and guest, while skill-based groupings use categories such as dummy, general user, power user, and wizard, geek, or hacker.

All these categories assume users who are at least good enough to operate the computer and whatever software is installed on it. However, users whose expertise is borderline or below soon become bad technology users, so much so that they can bring a relatively good computer system, AI included, to a halt.

Additionally, I have seen the following user categories, each of which is dangerous enough to cause problems:

Creative folks

Creative users are generally skilled enough to use a tool, but they often push it beyond its specified use. Doing so may render the tool useless or even break it.

I remember an interesting issue from my tenure with LG Electronics. One of the products LG manufactured was the washing machine, a typical home appliance that normal users would use for washing clothes!

However, when several field-failure reports came in from service centers, especially from the north-west of India, we were stunned by the creativity of washing machine users.

Restaurant owners in Punjab and nearby regions of India were using washing machines to churn lassi on a large scale.

Churning lassi requires considerable human strength due to its thick texture, especially when you make it in large commercial quantities.

That’s when restaurant owners got creative and used top-loader washing machines for churning lassi. This unintended, unspecified usage of the appliance caused operational issues and resulted in an influx of service calls. Such creativity looks interesting at face value but certainly causes problems for technology tools.

Another example of such creativity is the use of Microsoft Excel in organizations. How many companies have you seen where Excel is used not only for tabulation and record-keeping but also for small-scale automation through macros?

How many times have you seen people using PowerPoint to write reports instead of to create presentations?

All these are creative uses of tools and may be okay once in a while. Mostly, however, such users are abusing the system and its tools, which can cause unintended damage and losses to the organization. These users also expose companies to more substantial risks.

The naughty kind

These users are not productive, but they do not mean any direct harm; they are merely toying with the system. At times this may be completely harmless, but it can also cause unknown issues, especially with AI-like systems.

If your AI system has a feedback loop through which it gathers data for continuous training and adjustment, this becomes a real issue: erroneous or random data can disturb the established process and models.
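One common precaution is to screen incoming feedback before it ever reaches the retraining pipeline. Below is a minimal sketch of that idea in Python; the record structure, allowed labels, and score bounds are illustrative assumptions, not part of any specific product.

```python
# Minimal sketch: quarantine implausible feedback before it reaches retraining.
# FeedbackRecord, ALLOWED_LABELS, and the score bounds are illustrative only.
from dataclasses import dataclass

ALLOWED_LABELS = {"approve", "reject"}   # whatever labels your model actually uses
SCORE_MIN, SCORE_MAX = 0.0, 1.0          # valid range for user-supplied scores

@dataclass
class FeedbackRecord:
    user_id: str
    label: str
    score: float

def is_plausible(record: FeedbackRecord) -> bool:
    """Reject records that are malformed or out of range."""
    if record.label not in ALLOWED_LABELS:
        return False
    if not (SCORE_MIN <= record.score <= SCORE_MAX):
        return False
    return True

def partition_feedback(records):
    """Split incoming feedback into a training batch and a quarantine pile."""
    accepted = [r for r in records if is_plausible(r)]
    quarantined = [r for r in records if not is_plausible(r)]
    return accepted, quarantined
```

Random-but-valid inputs from users who are merely toying with the system are harder to catch with rules like these; statistical checks, such as per-user submission rates, can complement this kind of screening.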

The deliberate (bad)

Users who deliberately act badly and try to sabotage the system could be, for example, disgruntled employees.

Sometimes these users think that the AI system is no better than they are and that they must teach it a lesson. They become deliberate in their attempts to make the system fail at every chance they get.

Deliberate bad actors mostly work to a plan, which makes these users difficult to spot early on.

Luddites

A classic example of bad users would be Luddites: people who are opposed, on principle, to new technology or new ways of working.

The Luddites were a secret, oath-based organization of English textile workers in the 19th century, a radical faction that destroyed textile machinery as a form of protest. The group was protesting against the use of machinery in a “fraudulent and deceitful manner” to get around standard labor practices. Luddites feared that the time spent learning their craft would go to waste, as machines would replace their role in the industry.

We often use this term to indicate people who oppose industrialization, automation, computerization, or new technologies in general.

These users are mostly employees who feel threatened and affected by the implementation of new AI systems. If your change management function is doing its job well, these types are easy to spot.

Bad user versus incompetent user

Incompetence can mean different things to different people; generally, though, it indicates the inability to do a specified job to a satisfactory standard.

If a user can use the system without any (human) errors and in the way they are required to use it, you can call them a competent user. Incompetent users often fail to use the system flawlessly because of their own ability (not the system’s problems), and they often need considerable help from others to use it.

On the other hand, bad users may be excellent at using the system, but their intent is not a good one.

All incompetent users are inherently bad users of the system; however, bad users may or may not be incompetent. The reason we need to understand this distinction is that one is curable while the other isn’t. You can make incompetent users competent by training them, but no amount of training will reform intentionally bad users.

Importance of change management

Most issues related to users and interactions with other systems result from poor or absent change management over the full term of the project.

While AI has the power to transform organizations radically, substantial adoption numbers are difficult to achieve without an effective change management strategy in place. Cover all the bases before you begin the implementation, and continue the effort for as long as necessary.

When you have a complete understanding of how an AI solution will help end-users at all levels in the company, it becomes easy to convey the benefits.

Merely quoting the feature list of your new AI solution will not help. You will need to explain what the solution will do, what it will change, and how it will help everyone do their job more effectively.

A safer and less risky approach is to pick tech-savvy users for the first round of deployments. They can not only provide useful feedback about the AI system you’re deploying but also highlight potential roadblocks to a full rollout. Tech-savvy users can help you determine whether the AI solution works as expected for their purposes.

These users then become your advocates within the organization and help coach their peers when needed. Such early users help create buy-in at scale within teams and potentially reduce the number of bad users down the track.

Educating users for better adoption

Establish a proper training plan in which training uses real-life scenarios and hands-on sessions, and where user feedback is welcomed and acted upon before moving forward. Not doing so means leaving much of the right talent unhappy and behind.

If you want to ensure a smooth transition and user adoption, start user education early in the process, and tailor it to each stakeholder group. Provide baseline knowledge about AI technology as a whole, then deeper insights into the specific application you’re deploying. This helps set expectations; every involved member must understand the benefits of the AI solution.

Through educational initiatives, you can quickly dispel misconceptions about AI. For some stakeholders and users, especially those unfamiliar with how AI can help, futuristic technologies can be intimidating. This intimidation begets a defensive response and brings out the lousy user in them in various forms.

With proper education, AI's benefits can become apparent to your team members and thus foster positive uptake.

Center the education on the fact that the AI solution will enhance employees’ daily work and make routine tasks easier to handle, and make sure you highlight that aspect. When communicating with your employees, focus on the purpose of the change and emphasize the positive outcomes it can bring.

Even executive leaders must understand what is happening and know the capabilities and limitations of the AI system you’re deploying. By investing due time in appropriate education, executives will ask the right questions at the right time and stage. Being more involved is necessary for them.

It is hard to recover from a lack of end-user adoption if you haven’t invested enough in user education, so make sure you budget adequately for it. Create multiple formats readily available on various devices, including offline, in-person sessions. When you roll out the training, measure the uptake and the types of resources employees use most; this tells you which medium is most effective, so you can leverage it further.

Going all out on education and training materials can minimize failure when employees start using the systems. It helps ensure that all the promised efficiency and productivity of the AI solution are realized.

When you deploy new systems, there is typically a spike in productivity loss, generally the result of slow adoption and a long learning period. You can minimize this loss with a proper approach: to ensure successful AI deployment, pair education and planning with training.

Moreover, as a rule of thumb, education and training should not end after the solution is deployed; they must become a periodic activity so that you can sustain the positive gains. This will also improve user capability over time and help reduce the bad-user phenomenon.

Checking the performance and gaps

It is reasonable to expect a human user to demonstrate the same performance repeatedly for any given set of scenarios and to reproduce the same outcome each time those scenarios occur. Moreover, you would expect other connected systems to exhibit similarly consistent behavior before you can trust the whole system.

It is essential to check performance for consistency and find gaps as early as possible in the deployment phase. AI systems usually produce probabilistic outcomes, so some variation at the solution level is already accepted. When you couple this inherent variation with the variation of several humans and other systems, it can quickly become unmanageable. Although each variation might be acceptable at an independent level, altogether they can be problematic and result in poor overall performance.
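A quick back-of-the-envelope calculation shows why individually acceptable variations compound. The numbers below are made up purely for illustration; the only assumption is that the error sources are roughly independent, in which case their variances (not their standard deviations) add up.

```python
# Illustrative only: made-up standard deviations for three independent
# error sources around one AI-driven workflow.
user_sd = 0.05      # human operator variation
model_sd = 0.04     # inherent AI model variation
system_sd = 0.03    # connected upstream/downstream system variation

# For independent sources, variances (not standard deviations) add.
total_variance = user_sd**2 + model_sd**2 + system_sd**2
total_sd = total_variance ** 0.5
print(f"combined standard deviation: {total_sd:.3f}")  # ~0.071, larger than any single source
```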

That’s the reason why performance must be checked for these gaps once you deploy the AI solution. In use, the various systems interacting with your AI solution may go haywire, and if you did not plan for systemic changes before deployment, this can soon become a roadblock.

Performing Gauge R&R (Gauge Repeatability and Reproducibility) tests can reveal several actionable findings. Gauge R&R is a statistical test for identifying variance among multiple operators; you can use it to test how various users interact with the same system, and also to check how multiple systems interact with your AI solution.

The outcome of a Gauge R&R study indicates the causes of variation in performance. These findings can help you formulate training plans to fix user performance, as well as system change requirements to make everything work together seamlessly.
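To make the idea concrete, here is a deliberately simplified Python sketch of the two quantities at the heart of a crossed Gauge R&R study. A real study would use an ANOVA-based variance-component analysis; the operators, test cases, and numbers below are invented for illustration.

```python
# Simplified sketch of the idea behind a crossed Gauge R&R study.
# Real studies use ANOVA-based variance components; this only illustrates
# repeatability (within-operator) vs. reproducibility (between-operator).
from statistics import mean, pvariance

# trials[operator][item] -> repeated measurements of the same item
trials = {
    "alice": {"case1": [4.1, 4.0, 4.2], "case2": [6.9, 7.1, 7.0]},
    "bob":   {"case1": [4.6, 4.5, 4.7], "case2": [7.4, 7.5, 7.6]},
}

# Repeatability: how much one operator varies when repeating the same task.
within_cell_vars = [pvariance(reps) for per_item in trials.values()
                    for reps in per_item.values()]
repeatability = mean(within_cell_vars)

# Reproducibility: how much operators differ from each other on average.
operator_means = [mean(m for reps in per_item.values() for m in reps)
                  for per_item in trials.values()]
reproducibility = pvariance(operator_means)

print(f"repeatability (within-operator variance):   {repeatability:.4f}")
print(f"reproducibility (between-operator variance): {reproducibility:.4f}")
```

Roughly speaking, high repeatability variance points to individuals who are inconsistent with themselves (often a training problem), while high reproducibility variance points to operators who disagree with each other (often a process or calibration problem).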

Continuously monitoring the user and system interactions and periodically conducting systematic checks (and tests) can help you manage incorrect usage of your AI solution.

Handling user testing and feedback

No matter how much content you put into training material, it is not always possible to cover all the questions users may have. It is essential to establish easy-to-use and quickly accessible communication channels between users and the responding team.

Clarifying who the contact person is, how long it will take to get a response, and how to escalate if needed helps gain user confidence and gives them clarity about the AI deployment. Doing this encourages users to come to you whenever they encounter issues.

Giving users confidence that their feedback is valuable and always taking it on board can go a long way. Moreover, once you receive feedback, do not just consume it; act on it.

Sincerely learning from every piece of feedback and fine-tuning your AI application helps improve the user experience and gives users confidence in the deployed AI system. Doing this also significantly reduces the number of bad and incompetent users, and quickly reduces your overall risk exposure.

Augmenting your HR teams

Until now, HR teams have been responsible for managing the performance of the (human) workforce, so all the policies they have developed address human workforce education, augmentation, and performance management.

This is now changing as machines become smarter and AI goes mainstream. So, how do you plan to handle this new type of workforce, one that is either fully automatic (AI only) or augmented by smart machines (humans + AI)?

HR members will have to manage performance gaps, issues related to system malfunctions, and the retraining requirements of both humans and machines. Any impact on human performance caused by a poor-quality AI system will have to be handled differently from typical human-only performance improvement.

Generally speaking, AI systems are smart, but they seriously lack a key human characteristic: common sense! As digital twins of your human employees are deployed, accounting for this gap may become an essential requirement.

Humans in charge of powerful technologies will have to be trained, coached, and managed effectively, just as a government manages its armed forces differently from civilians.

It would be a good idea to establish a new HAIR (Human and AI Resources) team, or to augment the existing HR team, to accommodate these new challenges. Developing appropriate policies and procedures must be core to its initial tasks.

Start looking beyond the technology

No matter how smart the technology or AI is, it cannot apply common sense and human perspective.

Therefore, merely nailing the technical element of AI is not enough; you need to balance it with the human aspect. Understanding the surrounding environment in which you are using the AI is crucial.

Increasingly, technology teams need to demonstrate cognitive intelligence if they are to be successful. As much as the development and deployment of an AI solution are critical, the user aspect is important too. Without proper use (and users), AI success will surely hang by a thread.

A good AI solution in the hands of bad users can be disastrous, while an average AI solution in good users’ hands can be a great success. Users have the full power to make or break it; your goal should be to enable your users and extract maximum positive value from the solution.

The people who understand AI users don’t understand AI design. The people who understand AI design don’t understand AI users.

Note: This article is part 7 of a 12-article series on AI. The series was first published by EFY magazine last year and is now also available on my website at https://www.anandtamboli.com/insights.

Anand Tamboli® · tomorrow++ · Inspiring and enabling people for a sustainable and better future • Award-winning Author • Global Speaker • Futurist ⋆ https://www.anandtamboli.com