Who will bell the cat?
Regulating Artificial Intelligence (AI): but how?
The regulation of artificial intelligence (AI) is a pressing subject in contemporary discourse. Given the extraordinary capabilities AI has demonstrated, unbridled development could turn it into a formidable force. Evident warning signs, such as the rise of deepfakes and instances of algorithmic bias, underscore the urgency of addressing this matter.
One conceivable approach is to make all AI development open-source — an idealistic notion, to be sure. Given the substantial investments of time, money, and intellectual resources required, companies understandably seek a return on investment (ROI). Yet there might still be room for compromise.
On the subject of open-sourcing AI technology, let me share a personal journey that captures the essence of collaborative innovation. During my doctoral research, I delved into Geometric Dimension Reduction — a challenge that has persisted in the Machine Design community for over six decades. Given the complexity of the problem, progress has been gradual; even the rule-based solutions I proposed fell short of 100% accuracy. Recognizing the limits of individual effort, I made a strategic shift and brought the problem into the domain of neural networks. I also decided to open-source my solution to the global community. By sharing not only the solution but also the training and testing dataset, I aimed to provide a foundation for others to build upon. This openness is an invitation to collaborate: fellow enthusiasts can contribute their own perspectives, identify shortcomings, and collectively advance the field.
A second perspective proposes a strategic separation between core AI technologies and their applications, coupled with regulatory policies. Foundational models such as GPT, PaLM, and Llama, treated as core technology, might remain unregulated except for provisions ensuring explainability. Conversely, the applications of these models, particularly in fields like medicine and law, could be subject to stringent regulation.
Leaving foundational models unregulated preserves space for innovation and the exploration of new possibilities. Simultaneously, a vigilant focus on applications can help mitigate issues of bias, ethics, and potential cognitive distortions, ultimately serving the global community.
Another consideration is the global perspective. A select few leading nations currently dominate the discourse and decision-making in AI, a situation that risks covert monopolies and digital colonization. What is needed instead is a flat and inclusive global landscape, with broad participation and the democratization of AI technologies for the greater public good. In the spirit of "One World, One Future," the emphasis should be on a collaborative, global approach.
In essence, the bottom line is this: preserve and nurture the open-source ethos, maintaining an environment conducive to innovation, while implementing regulations that govern the end usage of AI technologies.