Changing the Narrative: When AI is Used for Good

H. Mason
Published in b8125-fall2023
4 min read · Dec 7, 2023

It is often said that there is potential for both good and bad in the growth of artificial intelligence, but more often than not, the bad potential rises to the surface of headlines and conversations. Yet the technology is so vast and so advanced that it is capable of doing far more good than many recognize or admit. The most productive way to approach growing AI use and integration is to handle it with care, with balance, accountability, and responsibility. In many regards, AI is a double-edged sword: the same advances that make everyday processes more efficient can also threaten security and human ethics if safeguards are not enacted and followed.

While the perception of AI as a daunting force may be a reality for some, there are many articles and discussions about positive uses of AI that most people would welcome. There are also instances where AI is being used to help solve a problem that AI itself created.

A recent article in Tech Brew introduced an AI advancement that helps prevent robocalls, especially voice-spoofing robocalls that seek to take advantage of victims for financial gain. Robocalls are, in many respects, a problem created by AI, and government agencies have been seeking out AI researchers to fix it. The Federal Communications Commission (FCC) issued an inquiry in November requesting feedback on how AI can improve the government’s efforts to stop “illegal and unwanted robocalls,” Tech Brew reported. The article adds that the telecommunications industry as a whole is looking for ways AI can break apart spam cycles, using methods like pattern recognition and models that learn to “think like a bad guy” in order to prevent scam calls. Some telecom companies have IT departments that flag spam voice messages and submit data about risky calls for the algorithms to learn from, the article notes. Discriminative AI models can then recognize patterns in that data and thwart malicious attempts.
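To make the pattern-recognition idea concrete, here is a minimal sketch of what a discriminative spam-call classifier could look like. The features, training data, threshold, and choice of library (scikit-learn) are illustrative assumptions for this post, not a description of any carrier’s actual system.

```python
# Illustrative sketch only: a toy discriminative classifier that flags
# likely scam calls from simple, made-up call-metadata features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per call:
# [calls_from_number_last_hour, call_duration_seconds, caller_id_mismatch (0/1)]
X_train = np.array([
    [120, 15, 1],   # burst of short calls with spoofed caller ID -> spam
    [95, 22, 1],
    [1, 340, 0],    # single long call with consistent caller ID -> legitimate
    [2, 180, 0],
])
y_train = np.array([1, 1, 0, 0])  # 1 = spam/scam, 0 = legitimate

# A discriminative model learns the patterns that separate the two classes.
model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new incoming call and flag it if the spam probability is high.
new_call = np.array([[80, 12, 1]])
spam_probability = model.predict_proba(new_call)[0, 1]
if spam_probability > 0.8:
    print(f"Flag call for review (spam probability {spam_probability:.2f})")
```

In a real deployment the features would come from the risky-call reports the article describes, and the model would be retrained as scammers change tactics; the point here is only to show how flagged data can feed a model that learns to separate spam from legitimate calls.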

Fake kidnapping calls and fake calls from alleged family members have led to families being extorted. The National Institutes of Health (NIH) released an official public warning about the “virtual kidnapping ransom scam,” in which a person receives a phone call claiming that a family member has been taken captive and will not be freed until a ransom is paid. Sometimes a female voice can be heard screaming on the call as the caller demands that money be wired or otherwise transferred. The scams are made to seem more believable when the caller has collected and shares personal information about the alleged victim. These ordeals are extortion attempts that prey on the vulnerable and cause emotional distress, financial strain, and safety concerns. In some instances the elderly have fallen victim, losing thousands of dollars.

These tactics combine live humans and AI and are not yet fully automated or predictive. That presents a window of opportunity to get this right. As with every new technology, there is an opportunity to use it for evil or for good. The question is which side major tech companies will stand on, knowing the risks and responsibility they carry. Using AI to recognize and thwart these scams would be a major step forward in using technology ethically and responsibly.

Stopping fake ransom calls is just one of several potential benefits of AI advancements. Healthcare is another example, where AI models are being tested to help treat disabilities and predict future health outcomes for patients. DNA mapping is another promising pathway, used not only to predict future ailments and outcomes but to inform family members across generations about inherited health risks. This effort also extends into nutrition, as algorithms can use DNA data to identify foods that a particular family line is likely to benefit from, supporting a longer, healthier life.

Elon Musk’s Neuralink, for instance, intercepts signals from the brain, including in blind persons, and translates them into physical actions performed by a machine, such as typing. For people who are disabled or paralyzed, Neuralink also aims to relay neurological signals from the brain to the spine, bypassing nerve damage and activating healthy nerves to allow movement again. While the technology is not available to the general public, testing is in progress in some areas. This is just one additional example of AI being used for good.

AI is making processes more efficient for humans, extending the knowledge we have, and increasing our connectivity across the world. There is no inherent good in AI at this point, as it does not possess consciousness, emotion, or spirit. But there is a responsibility to handle it with care and balance and, ultimately, to use it for the good of humanity. Humans have the capability to step in and ensure that guardrails are established so that these massive data-processing models do not supersede good nature, health, and safety.
