8 Ways AI Could Harm Your Brand

Ankit Solanki
Published in Cut the SaaS
6 min read · Aug 10, 2023

Identifying the brand risks that come with using AI helps companies navigate the digital landscape with more confidence. Here are some of the biggest risks to understand.

Technology can be a double-edged sword for companies and brands: an unbridled force with the potential to change things for better or worse.

This ‘worse’ has come up in different ways. Amazon was left red-faced when its Rekognition software was used in facial-recognition tests and incorrectly matched NFL champ Duron Harmon with a criminal’s mugshot.

Artificial intelligence is prone to its own problems: chatbots have told people at risk of suicide to go through with the act. Tech companies have moved fast to tamp down on these interactions, but such episodes are alarming for any third-party company built on top of a chatbot.

Additionally, complex discussions around ethical, legal, financial, political, and regulatory ramifications continue to dog artificial intelligence — conversations you should be paying attention to.

Here are a few things to be wary about with AI:

1. Repetitive and duplicate content

At first glance, the content generated by ChatGPT and other generative AI is nothing short of extraordinary. It’s rich, varied and comes quite close to content written by a human. A second, third and fourth glance (through a few more prompts) can reveal the same content, regurgitated in a different way. This is a risk digital marketers must weigh, as dry and repetitive content can start a brand’s slow march to obscurity.

It becomes a larger problem when you’re using AI to create social media content, because you don’t want the same posts as your competitor. That lack of creativity and ingenuity can cost you customer engagement and loyalty.
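One practical safeguard is a pre-publish check that flags AI drafts which look suspiciously close to copy you have already used. Here is a minimal sketch using only Python’s standard library; the function names, threshold and sample copy are illustrative assumptions, not part of any particular tool.

```python
# Hypothetical pre-publish check: flag AI-generated drafts that overlap too
# heavily with posts you have already published. Standard library only.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Return a rough 0-1 similarity score between two pieces of copy."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_near_duplicates(draft: str, published: list[str], threshold: float = 0.8) -> list[str]:
    """Return any published posts the new draft resembles too closely."""
    return [post for post in published if similarity(draft, post) >= threshold]


# Example usage with placeholder copy.
existing_posts = [
    "Unlock growth with our all-in-one analytics platform.",
    "Meet the analytics platform built for modern marketing teams.",
]
draft = "Unlock growth with our all in one analytics platform!"

if flag_near_duplicates(draft, existing_posts):
    print("Draft looks like something you've already published. Rewrite before posting.")
```

A more serious setup might compare embeddings rather than raw strings, but even a rough check like this catches the most obvious repeats before they go live.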

2. Regulations on the horizon

Governments across the globe are weighing whether the use of AI needs to be regulated. The EU has already reached a preliminary deal on the text of an ambitious Artificial Intelligence Act, first drafted two years ago. Tech leaders, meanwhile, have released a joint statement that explicitly says: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”. What does this mean for you?

If you use AI in your business, especially for content generation, will you have to pull those tools once countries start passing new rules? Could future laws create liabilities for you? If you have drafted a roadmap for the next few years with AI in mind, you may need to pause those plans, alter them, or remove elements that run into upcoming regulations.

3. Unintentional bias

Even a whiff of bias can dent a brand’s value and possibly crash its stock price. Unfortunately, bias is a pressing concern with AI algorithms and their results. Skewed outputs can arise from the biases of programmers or from problematic training data. For instance, customers reported that the Apple Card’s credit-assessment algorithm, run by issuing partner Goldman Sachs, showed a significant gender bias, offering higher credit limits to men than to women with the same credit qualifications.

The New York Times reported on one couple, David Heinemeier Hansson and Jamie Heinemeier Hansson, who received widely differing credit limits: the husband’s was 20 times higher than his wife’s, despite her stronger credit history. Bias, whether around gender, race, ethnicity, sexuality or class, harms customers and reflects very poorly on your brand.

4. More misinformation, more checks

A screenshot of Bard presenting an incorrect fact: that JWST took the very first pictures of a planet outside our own solar system (source)

Did you know that AI can hallucinate information? It will confidently cite statistics that don’t exist and link to websites that don’t exist either. AI models can also misinterpret data or serve up outdated data, depending on what they were trained on and when. This matters enormously to marketers, whether you’re communicating with clients, your target audience or the general public. According to Gartner, 70% of enterprise CMOs are expected to prioritize accountability, authenticity and ethical marketing over the next two years. Will that be possible with AI?

Soon after ChatGPT made its debut, researchers tested what an AI chatbot would write when fed questions loaded with conspiracy theories and fake news. The results were alarming enough that researchers warned ChatGPT is “going to be the most powerful tool for spreading misinformation that has ever been on the internet.” It’s easy to see why an estimated 80% of enterprise marketers may establish a dedicated content authenticity check by 2027 to fight misinformation and fake news. In the meantime, all brands can do is verify, verify, verify.
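As a small illustration of what “verify” can look like in practice, here is a rough sketch that checks whether the URLs an AI draft cites actually respond before you publish. It uses only Python’s standard library; the regex, timeout and sample draft are illustrative assumptions rather than a production-grade fact-checker.

```python
# Hypothetical "verify before you publish" helper: collect the URLs an AI
# draft cites and flag any that do not respond.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"']+")


def find_dead_links(ai_draft: str, timeout: float = 5.0) -> list[str]:
    """Return URLs in the draft that fail to respond successfully."""
    dead = []
    for url in URL_PATTERN.findall(ai_draft):
        try:
            request = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status >= 400:
                    dead.append(url)
        except Exception:
            # Unreachable host, HTTP error, timeout: treat all as unverified.
            dead.append(url)
    return dead


draft = "According to https://example.com/made-up-report-2023, engagement rose 45%."
for url in find_dead_links(draft):
    print(f"Could not verify source: {url}")
```

Broken links are only one symptom of hallucination, of course; statistics, quotes and claims still need a human eye before anything carries your brand name.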

5. Deepfakes

Speaking of verification, elections are on the horizon, and AI could play a pivotal role in swaying the results one way or the other. AI has reached a level of sophistication where it can clone human voices and create hyper-realistic audio, video and images. Mixed with the right social media algorithms, this produces a cocktail of false information that spreads far and wide.

This is disastrous in many spheres, including for your business. Deepfakes of Mark Zuckerberg boasting about how he stole the data of billions of people, or of politicians insulting other world leaders, can cause disruption on many fronts. Experts have repeatedly warned how easily AI can generate synthetic media that misleads consumers. Although deepfakes can power affordable, hyper-personalized, omni-channel campaigns (or even bring icons back to life on screen), the cons currently outweigh the pros.

6. Lack of employee trust

You may be thinking of bringing AI into your business; your employees may feel differently. KPMG, in partnership with the University of Queensland, Australia, found that 61% of respondents were wary of or unwilling to trust AI. Implementation and smooth day-to-day use are much harder without that basic trust, and you don’t want the murmurs ending up on Glassdoor.

Additionally, AI often produces unexplainable results, making it hard to determine how and why a system reached its decision. Opaque, continuously learning systems like this can erode trust in the company and its practices.

7. Privacy and data protection

A significant risk in AI-driven digital marketing revolves around privacy and data protection. AI can gather and analyze enormous amounts of data, but some of that information may be private, and there are real concerns about how it could be used or compromised.

Companies that use AI tools are invariably part of this harvesting and usage. What happens if your brand is relying on data that was gathered without the express permission of your customers?
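One modest safeguard, sketched below, is to scrub obvious personal data from customer text before it ever reaches a third-party AI tool. The regexes and placeholder strings are illustrative assumptions; a real deployment would need far more thorough redaction and a proper legal review.

```python
# Hypothetical pre-processing step: redact emails and phone-like numbers
# from customer text before sending it to an external AI service.
# The patterns are deliberately simple and illustrative, not exhaustive.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_PATTERN.sub("[EMAIL REDACTED]", text)
    text = PHONE_PATTERN.sub("[PHONE REDACTED]", text)
    return text


feedback = "Reach me at jane.doe@example.com or +1 (555) 014-2398 about my refund."
print(redact_pii(feedback))
# -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED] about my refund.
```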

8. Misunderstanding or misusing AI

For many marketers, AI has the power to generate great insights and results. However, those results are only as good as the data it’s fed. Remember Tay, the chatbot Microsoft introduced on Twitter? Tay was completely undermined by the racist, antisemitic and misogynistic material trolls fed it. Within 16 hours of release, the chatbot’s language became so offensive that Microsoft suspended the account. The episode highlights the very real need to train and supervise AI properly before it causes harm.

Do you have the right systems in place for your employees to use AI tools? The right training? The right approval process? A lack of adequate understanding about AI tools can lead to offensive material being disseminated through your brand channels, invariably putting you in the crosshairs of litigation.

Risks are inherent to every new technology, and AI is no exception. Companies need to weigh these risks against what AI can do for their brand value, treading a careful line between the two.

What do you think? Should companies look before they leap into AI, or does the incredible potential of this technology outweigh the risks? Leave a comment below to let us know where you land on this debate!
