How can AI help social media?
by Erik Panu, Chief Business Officer, Intelus
In the past five years, social media has become an area of serious concern for everyone from brand managers to national security professionals. Between unsubstantiated claims about vaccines, election results, trusted brands, and world events, the line between what is real and what is fake has rarely been so contested in recent memory.
According to a September 2021 study by NYU quoted in the Washington Post, misinformation got six times more clicks than factual information during the 2020 US election. Even more distressing, according to the New York Times, disinformation-as-a-service (DaaS) is a “quietly booming” industry, with an estimated $235m in revenue flowing through disinformation networks each year, many of them employing thousands of contractors around the world.
Along with employing more fact-checkers, a common solution is using AI models that work across social media networks to track activity. Big companies, SMEs and national security organizations all use some variation of this solution. Unfortunately, the results are often poor, as this past summer showed: conspiracy theories slowed Covid vaccination rates, and meme stocks, bolstered by social media, created wild fluctuations in the stock market.
At the heart of the problem are volume and sentiment analysis. Like the heat maps marketers and UX designers use to identify the most popular elements of a website, AI-driven volume and sentiment analysis doesn't go beyond the surface. Yes, it tells you what's hot and whether the sentiment is positive or negative. But it can't tell you whether the activity is human- or bot-generated, or some combination of the two. Nor can it surface the variants of a trend or the "whys" driving it.
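To make the limitation concrete, here is a minimal sketch of the kind of keyword-based volume-and-sentiment tally described above. The posts and keyword lists are hypothetical, and real systems use trained models rather than word lists, but the structural shortcoming is the same: the output is a count and a polarity score, with nothing to distinguish human posts from bot posts or to explain why the trend exists.

```python
# Toy volume-and-sentiment tally over a stream of posts.
# The corpus and keyword lists are hypothetical illustrations.
POSITIVE = {"great", "good", "safe"}
NEGATIVE = {"terrible", "fraud", "coverup"}

posts = [
    "vaccine rollout is great news",
    "vaccine side effects are terrible",
    "terrible vaccine coverup exposed",
    "election results look great",
    "terrible fraud in the election",
]

def volume_and_sentiment(posts, topic):
    """Count posts mentioning a topic and tally naive keyword sentiment."""
    volume, score = 0, 0
    for post in posts:
        words = post.lower().split()
        if topic in words:
            volume += 1
            score += sum(w in POSITIVE for w in words)
            score -= sum(w in NEGATIVE for w in words)
    return volume, score

# Tells you the topic is hot and net-negative, and nothing else:
print(volume_and_sentiment(posts, "vaccine"))  # (3, -2)
```

Note that nothing in this pipeline, however sophisticated the sentiment model behind it, carries any signal about who generated the posts or which narrative variants are driving the numbers.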
For AI to help social media, new models are required: models that take stakeholders inside trends. The good news is that social media is rich with the unstructured data needed to build them. The caveat is that organizations cannot rely on incumbent data-labeling methods: first, because those methods reproduce the same shortfalls and prejudices; and second, because the process is not agile and cannot be easily tested. Typically, AI models take anywhere from 3 to 36 months to deploy, depending on scope, whereas we aim for results in a matter of hours. Imagine, for instance, if pharmacies and other essential healthcare providers had been able to spin up, within hours, a suite of new services for monitoring and extracting information about COVID, store hours, procedures, and shots. Those services would have been invaluable.
What to do? The options are always the same: 1) Do nothing (& hope nothing bad happens), or 2) Take charge of your data and solve your own problems, even if you are not a developer or data scientist. We call the latter option "Machine Teaching," and the best news of all is that you don't need massive stores of data and unlimited GPUs to start building and testing models.
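The article does not describe how Machine Teaching works internally, but the general weak-supervision idea it evokes can be sketched in a few lines: a domain expert writes a handful of labeling rules instead of hand-labeling a massive dataset, and the rules vote on each example. Everything here, the rule set, the labels, and the example posts, is a hypothetical illustration, not Intelus's actual method.

```python
# Toy sketch of expert-written labeling rules with majority voting.
# Rules, labels, and posts are hypothetical; real weak-supervision
# systems then train a model on the labels these rules produce.
RULES = [
    lambda post: "misinfo" if "coverup" in post else None,
    lambda post: "misinfo" if "hoax" in post else None,
    lambda post: "factual" if "according to" in post else None,
]

def teach_label(post):
    """Apply each rule; return the majority vote among rules that fire."""
    votes = [rule(post.lower()) for rule in RULES]
    votes = [v for v in votes if v is not None]
    if not votes:
        return "unknown"
    return max(set(votes), key=votes.count)

print(teach_label("Vaccine hoax coverup exposed!"))      # misinfo
print(teach_label("According to the CDC, doses vary."))  # factual
```

The appeal of this pattern is that the "teacher" edits rules, not model weights, so a new rule can be written, tested against a sample of posts, and revised in minutes rather than months.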
For a benchmark test on how it could work, we suggest taking our Duet platform for a spin.