Redefining the Internet: AI’s Role in Crafting Better Spaces

Dag
Published in tomipioneers
8 min read · Apr 3, 2024

In my past writings here on Medium, I've been pretty skeptical about AI, especially about how it is being used on the internet and out in public spaces. I've shared concerns about how AI can process vast amounts of personal information and track people online. Not just that, but in many public places, AI-powered cameras can watch our every move, making it feel like privacy is a thing of the past.

Despite all this, I want to switch gears today and look at the other, more optimistic side of the coin. What if AI could be used to help us all get along better and make the internet a friendlier and safer place?

A Discussion on Democracy and AI

Democracy represents our collective aspiration for a society where the balance tips towards greater good, freedom, and justice, rather than oppression and inequality. This vision should guide how we develop and relate to AI. AI is a tool crafted by humans, one with the potential to enhance our world and the internet with these democratic values in ways we probably can't even imagine yet. But we have to be careful: precisely because it is made by humans, it reflects our ideas, dreams, and aspirations, but also our imperfections.

Throughout history, the human desire for progress has led us to invent tools that we can use to enhance our quality of life, embodying our pursuit of happiness, efficiency, and fulfillment. From the invention of the knife, useful for crafting and culinary purposes, but also capable of causing harm, to the development of the wheel, which transformed transportation but also introduced the risk of accidents, our innovations have always had dual potentials.

This dual nature extends to AI and the data-driven world we live in today. While we’ve seen AI contribute to the spread of fake news, voice scamming, and the reinforcement of dangerous algorithms, it’s important to recognize AI’s potential as a force for good. AI has the capacity to tackle these very challenges, offering solutions or at least aiding us in navigating these complex issues.

The essence of the matter is how AI is used. Like any tool, the downsides of AI lie not in the technology itself, but in how humans use it. As we continue to develop and integrate AI into our lives, our focus should be on harnessing it in ways that reinforce our democratic ideals, ensuring it contributes to a society where freedom, fairness, and the collective good prevail.

A Guardian of Truth

In the mission to create digital societies that value truth and democracy, the role of AI in managing misinformation and fake news is increasingly important. The challenge lies not just in discerning fact from fiction, but in leveraging AI’s capabilities to safeguard the integrity of information — a cornerstone of democratic health.

The pursuit of tools to combat misinformation isn't new; it's a modern extension of humanity's age-old quest for truth. Today, AI and Machine Learning (ML) technologies are at the forefront of this battle, offering innovative approaches to discerning and disseminating fact-based content. Take the work done by the FakerFact project as an example. It has shown that AI can not only assess the truthfulness of information but also gauge the motivations behind its sharing. Its algorithms analyze the way language is used in a text, helping to differentiate between content that's meant to inform the public and content that aims to manipulate or deceive. Essentially, ML algorithms can now identify not just the factual accuracy of information, but also the intentions driving its dissemination.
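To make that idea concrete, here's a rough sketch of what intent-oriented text classification can look like in Python with scikit-learn. This is not FakerFact's actual code, and the tiny dataset and labels are invented purely for illustration:

```python
# A minimal sketch of intent-oriented text classification, in the spirit of
# projects like FakerFact. The dataset and labels below are invented for
# illustration; real systems train on large, carefully curated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The city council approved the budget by a 7-2 vote on Tuesday.",
    "Researchers reported the results in a peer-reviewed journal.",
    "SHOCKING: they don't want you to know this one hidden truth!!!",
    "Share before it's deleted - the mainstream media is hiding everything!",
]
labels = ["informative", "informative", "manipulative", "manipulative"]

# TF-IDF captures word and phrase usage; the classifier learns which
# language patterns correlate with each intent label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You won't BELIEVE what happens next - spread the word!"]))
```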

Building on this foundation, AI-driven initiatives are expanding their focus to further safeguard the democratic process. One key area of development is content verification and source authentication. AI algorithms are being trained to scrutinize digital media, detecting alterations or fabrications and tracing content back to its origins. This helps ensure that the public receives reliable information with a transparent record of where it came from.
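The machine-learning side of this is hard to compress into a few lines, but the provenance-checking step itself can be sketched. Below is a simplified, hypothetical illustration where a registry maps content fingerprints to known publishers; the registry and its entries are made up, and real provenance systems rely on richer standards such as cryptographically signed metadata alongside ML-based manipulation detection:

```python
# A simplified sketch of source authentication via content fingerprints.
# The registry and its entries are hypothetical, for illustration only.
import hashlib

KNOWN_ORIGINS = {
    # sha256(content) -> publisher of record (illustrative entry)
    "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9": "example-news.org",
}

def trace_origin(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    return KNOWN_ORIGINS.get(digest, "unknown: no registered origin, flag for review")

print(trace_origin(b"hello world"))  # matches the registered fingerprint above
```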

Furthermore, AI is being utilized to analyze network behavior, identifying bots and inauthentic accounts that often amplify fake news. By understanding the patterns of how misinformation spreads, AI can help dismantle the networks that propagate it, making the digital ecosystem less susceptible to the viral spread of these falsehoods.
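To give a flavor of the signals involved, here's a toy scoring function over account-level features often cited in bot-detection research (posting volume, account age, reshare behavior, follow patterns). All the weights and thresholds are invented for illustration; production systems learn them from labeled data:

```python
# A toy bot-likelihood score built from account-level signals commonly cited
# in bot-detection research. All weights and thresholds here are invented;
# real systems learn them from labeled examples.
def bot_score(posts_per_day: float, account_age_days: int,
              reshare_ratio: float, followers: int, following: int) -> float:
    score = 0.0
    if posts_per_day > 50:                   # superhuman posting volume
        score += 0.35
    if account_age_days < 30:                # freshly created account
        score += 0.20
    if reshare_ratio > 0.9:                  # almost exclusively amplifies others
        score += 0.25
    if following > 10 * max(followers, 1):   # follow-spam pattern
        score += 0.20
    return min(score, 1.0)

print(bot_score(posts_per_day=120, account_age_days=12,
                reshare_ratio=0.97, followers=15, following=4000))  # -> 1.0
```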

Another promising advancement is the customization of fact-checking services. Leveraging AI’s natural language processing capabilities, these services can instantly respond to user inquiries about the credibility of information, providing evidence-based answers. This not only democratizes access to fact-checking, but also empowers individuals to critically evaluate the content they encounter online.

The development of AI-driven systems for fake news detection represents a significant step towards mitigating the adverse effects of misinformation. These systems not only identify false information but also provide explanations, fostering a deeper understanding of how and why certain content may be misleading.

AI’s Potential for Privacy & Security

I think we can all agree that we're living in a data-driven world, a theme I've frequently explored here on Medium. This reality brings issues of privacy and data security to the forefront. Interestingly, Artificial Intelligence — the very technology I've scrutinized for its role in these concerns — also emerges as a key player in enhancing and protecting our data privacy in ways we've yet to fully comprehend.

AI’s potential to protect our digital footprint lies in its ability to analyze and process vast amounts of data quickly and accurately. This capability is being harnessed to shield customer privacy and ensure the ethical handling of personal information across various digital platforms.

One significant area where AI is making strides is safeguarding customer data amid the growing legislative landscape around data privacy, such as the GDPR and the CCPA. AI algorithms can swiftly identify and classify sensitive data across sprawling digital ecosystems, making it easier for organizations to adhere to these regulations with far less manual oversight. This not only aids compliance but also strengthens the trust between consumers and brands.
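As a small illustration of what automated data classification can mean in practice, here's a minimal Python sketch that scans records for common categories of personal data. The patterns are deliberately simplified compared with what real compliance tools detect:

```python
# A minimal sketch of sensitive-data classification for privacy compliance.
# The patterns are simplified; real tools combine many more detectors
# (often including ML-based entity recognition) with lower error rates.
import re

PII_PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text: str) -> list[str]:
    """Return the kinds of sensitive data detected in a record."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

record = "Contact jane.doe@example.com, SSN 123-45-6789, re: invoice 42."
print(classify_record(record))  # ['email', 'us_ssn']
```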

Moreover, AI's role in detecting and responding to data breaches cannot be overstated. Combined with security analytics and encryption, AI technologies have been shown to significantly reduce the cost and impact of data breaches. By monitoring network behavior in real time, AI systems can flag anomalies that may indicate a breach, allowing for quick containment and minimizing potential damage.
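Here's a minimal sketch of what that anomaly flagging might look like. The telemetry features (bytes transferred, request rate, distinct destinations) and all the numbers are synthetic; real deployments use far richer signals:

```python
# A minimal sketch of anomaly flagging over network telemetry using an
# isolation forest. The feature set and synthetic numbers are invented
# for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: modest transfer sizes, request rates, destination counts.
normal = rng.normal(loc=[500, 20, 5], scale=[100, 5, 2], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst resembling data exfiltration: huge transfer to many destinations.
suspicious = np.array([[50_000, 300, 90]])
print(model.predict(suspicious))  # [-1] -> anomaly, escalate for containment
```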

AI also plays a pivotal role in managing and protecting sensitive data through advanced techniques such as federated learning and data classification. These methods allow for the development of robust models on disparate data sources without compromising individual privacy. For example, federated learning enables institutions to collaborate on algorithms for detecting financial fraud or improving customer service while keeping personal data secure and private.
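Federated learning is easier to grasp with a bare-bones example. The sketch below shows the core idea, federated averaging: each institution trains on its own data, and only model weights ever travel to the aggregator. It's a toy linear model, not any particular institution's setup, and real systems layer secure aggregation and differential privacy on top:

```python
# A bare-bones sketch of federated averaging: each client trains locally and
# shares only model weights, never raw records. Data and model are synthetic.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Train a linear model locally via gradient descent; data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(updates):
    """The server aggregates weights only; no raw data is exchanged."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three institutions, each with its own private dataset
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):  # each round: broadcast, train locally, average
    global_w = federated_average([local_update(global_w, X, y) for X, y in clients])
print(global_w)  # approaches [2.0, -1.0] without pooling any client's data
```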

In essence, while AI presents a complex challenge with potential for misuse, its capability to enhance privacy and secure data is unmatched. By automating the detection of threats and anomalies, standardizing privacy practices, and ensuring compliance with privacy regulations, AI is at the forefront of creating a safer digital world for all of us.

AI and Community: Content Moderation and the tomiNET

Another interesting use case for AI would be to deploy it for content moderation. Platforms like Meta and YouTube have seen content moderators speak out about their distressing work conditions, faced with sifting through deeply traumatic content, including child exploitation, violent extremism, and other forms of graphic material. This scenario not only poses severe psychological risks to human moderators but also underscores an urgent need for a more sustainable solution. AI could revolutionize this space by significantly reducing human exposure to such extreme content, taking on the burden of initial content filtering. This approach would not only protect moderators’ mental health but also enhance the efficiency and effectiveness of content moderation processes.

This is a topic that the DAO team explores in more depth in the tomiDAO specifications document (which is currently open for public review through this link), where they discuss how AI could be implemented to help with content moderation on the tomiNET.

The DAO specification document’s initial approach to content moderation in the tomiNET involves a unique blend of community engagement and AI technology. The idea would be to recruit community members for content moderation tasks, leveraging the collective wisdom and vigilance of the tomiNET users. This community-driven model would aim to create a self-regulating ecosystem where users actively participate in maintaining the integrity and safety of the tomiNET.

The process for recruiting community members for content moderation is thoughtfully designed to ensure broad participation and fairness. The document suggests a rotating system, where community members are periodically called upon to review content flagged by the AI or by other users. This would not only distribute the workload but also prevent burnout and overexposure to potentially harmful content. To keep the system democratic and inclusive, measures would be in place to ensure that a diverse cross-section of the tomiNET community takes part in moderation, reflecting a wide range of perspectives and sensitivities.
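As a toy illustration of such a rotation (the names, panel size, and mechanics here are mine, not the spec's), a simple round-robin queue already gives the flavor of how the workload could be spread:

```python
# A toy version of a rotating reviewer assignment. Names, panel size, and
# mechanics are illustrative; the spec's actual mechanism may differ.
from collections import deque

def assign_panels(members, flagged_items, panel_size=3):
    """Round-robin assignment: the queue rotates so no one is over-called."""
    queue = deque(members)
    panels = {}
    for item in flagged_items:
        panels[item] = [queue[i] for i in range(panel_size)]
        queue.rotate(-panel_size)  # move assigned reviewers to the back
    return panels

members = ["ana", "bo", "cy", "dee", "eli", "fox", "gus"]
for item, panel in assign_panels(members, ["post-1", "post-2", "post-3"]).items():
    print(item, panel)
# post-1 ['ana', 'bo', 'cy'] / post-2 ['dee', 'eli', 'fox'] / post-3 ['gus', 'ana', 'bo']
```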

In parallel, AI is presented as a potential support system in this moderation framework. Recognizing the psychological toll that moderating harmful content can take on individuals, AI is proposed as a first filter. This means that AI systems could initially screen content, identifying and filtering out clear violations of community guidelines before human moderators ever see them. This dual approach — combining AI efficiency with human judgment — would aim to strike a balance between protecting moderators' mental health and ensuring nuanced, context-sensitive content moderation.
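In code, that proposed division of labor might look something like the sketch below; the thresholds and the classifier behind the score are placeholders, not tomi's actual system:

```python
# A minimal sketch of a two-stage moderation flow: an AI first filter handles
# clear-cut cases so humans only review the ambiguous middle band. The score
# thresholds are placeholders for illustration.
def moderate(item, ai_violation_score, remove_at=0.95, allow_below=0.05):
    if ai_violation_score >= remove_at:
        return "removed automatically"      # humans never see clear violations
    if ai_violation_score <= allow_below:
        return "published"                  # clearly benign content passes
    return "queued for community review"    # nuanced cases get human judgment

for score in (0.99, 0.50, 0.01):
    print(score, "->", moderate("some post", score))
```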

By incorporating AI as a foundational layer of the content moderation process, the tomiNET would aim to alleviate the burden on human moderators. This strategy acknowledges the limitations of current AI models, particularly their tendency towards over-cautious censorship, and positions AI as an aid rather than a replacement for human oversight. This thoughtful integration would serve to enhance the effectiveness and sustainability of the content moderation system, making the tomiNET safer and more welcoming for all users.

Again, this is what has been proposed in the DAO specifications document. I invite you to read it yourself, and I'm sure the DAO team would appreciate any comments you have on its content, since it is still a work in progress.

On a final note…

I hope this article has given you some hope in terms of how we could leverage AI for greater good. Not just within the tomiNET but in the overall online environment that we all interact with daily. Clearly, AI is set to radically transform the way we live our lives. While there are significant risks with AI systems designed for surveillance or control, there’s an equally significant, if not greater, opportunity to develop AI systems that can truly make our world a better place to live.

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok | YouTube
