
Symposium Summary: Generative AI in Progress: Application, Governance and Societal Impact

11 min read · Jun 9, 2025

On the afternoon of May 22, the AI & Society Symposium — Generative AI in Progress: Application, Governance and Societal Impact — jointly hosted by Tencent Research Institute and the SMU Centre for Digital Law, was successfully held at Singapore Management University.

Nearly one hundred experts from academia and industry in China and Singapore participated, engaging in discussions on technological trends, industry applications, governance, and ethics. The event aimed to explore paths for building an open, trustworthy, and sustainable AI ecosystem and society.

Guo Kaitian, Senior Vice President of Tencent Group, delivered opening remarks on behalf of the organizers. He stated that AI is not only a technological revolution but also a profound transformation of the relationship between humanity, society, and intelligence. In his view, we are standing at a pivotal point: the rapid advancement of large language model (LLM) technologies is driving AI from “cognition” to “action”, turning it into a true intelligent partner to humans and fundamentally reshaping social structures and value systems.

Tencent Group Senior Vice President Guo Kaitian delivered the opening remarks

Guo Kaitian emphasized that AI should honor the unique role of humans as the originators of meaning and value. He stated that “the true value of AI lies not in how impressive it looks, but in how reliable and useful it is in practice.” To this end, Tencent places great importance on open and transparent technological ecosystems, and advocates for a governance model that integrates openness, participation, and oversight, with the goal of building a foundation of trust in the AI era. He also remarked that the chapter of AI civilization has only just begun, and Tencent is committed to working together with all parties to co-create a future that gives equal weight to technology and humanistic values, and embraces openness and inclusiveness.

Accelerated Development of Generative AI Requires Synchronized Governance

The first half of the symposium focused on the theme of “Industry Application Trends of Generative AI”.

In the keynote speech titled “Issues and Opportunities in Generative AI,” Prof. Mohan Kankanhalli, Director of the NUS AI Institute, outlined three main technological pathways: Large Language Models (LLMs), Vision-Language Models (VLMs), and Diffusion Models. Diffusion models, in particular, have achieved exceptional results in multimodal generation tasks — such as images and audio — showcasing vast application potential. However, he also pointed out key technical challenges such as hallucinations, limited controllability, and factual inconsistency, alongside social risks including privacy breaches, copyright disputes, fairness concerns, and the spread of misinformation. He stressed the importance of strengthening data governance and responsibility mechanisms to promote sustainable development across creative industries, education, and employment.
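As background to the diffusion pathway Prof. Kankanhalli outlined, the snippet below is an illustrative sketch (not material from the talk) of the closed-form forward noising step that diffusion models are trained to invert; `T`, `betas`, and `noisy_sample` are placeholder names for a standard DDPM-style setup.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                              # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal retention per step

def noisy_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    sqrt(alphas_bar[t]) * x0 + sqrt(1 - alphas_bar[t]) * noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

# Stand-in for a clean data point (e.g., flattened image pixels).
x0 = rng.standard_normal(64)
x_t, eps = noisy_sample(x0, T // 2)
# Training teaches a network to predict `eps` from (x_t, t);
# generation then runs this chain in reverse, starting from pure noise.
```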

Professor Mohan Kankanhalli delivered a keynote speech

Dr. Liu Tianchi, Senior Researcher at Tencent, gave a talk titled “Synthetic Voices in the GenAI Era: Creation and Defense.” He introduced recent developments in speech synthesis (Text-to-Speech, or TTS), noting breakthroughs in expressiveness and controllability enabled by LLMs, with progress across timbre, emotion, intonation, style, and dialect. He also addressed the risks of deepfake-based identity fraud and misinformation, presenting defensive technologies such as audio watermarking and AI spoofing detection.
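To make the watermarking idea concrete, here is a minimal sketch of a correlation-based (spread-spectrum style) audio watermark. It illustrates the general technique only, not Tencent's actual method; `SECRET_KEY`, `strength`, and `threshold` are made-up parameters.

```python
import numpy as np

SECRET_KEY = 1234  # hypothetical shared secret; real schemes are far more sophisticated

def embed_watermark(audio, key=SECRET_KEY, strength=0.005):
    """Add a low-amplitude pseudo-random carrier derived from `key`."""
    carrier = np.random.default_rng(key).standard_normal(len(audio))
    return audio + strength * carrier

def detect_watermark(audio, key=SECRET_KEY, threshold=3.0):
    """Correlate against the keyed carrier; a large z-score suggests the mark is present."""
    carrier = np.random.default_rng(key).standard_normal(len(audio))
    z = np.dot(audio, carrier) / (audio.std() * np.sqrt(len(audio)))
    return z > threshold, z

# One second of stand-in audio at 16 kHz, then embed and verify.
audio = np.random.default_rng(7).standard_normal(16_000) * 0.1
found, z = detect_watermark(embed_watermark(audio))  # expect True: z well above threshold
missing, _ = detect_watermark(audio)                 # expect False on unmarked audio
```

Production systems must additionally survive compression, resampling, and re-recording, which is why watermarking is typically paired with the kind of AI spoofing detection Dr. Liu described.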

Dr. Liu Tianchi delivered a presentation

A panel discussion on industry application trends of generative AI was moderated by Yuan Xiaohui, Senior Expert and Director of the Innovation Research Center at Tencent Research Institute. Panelists included Prof. Zhu Feida, Aptos Move Chair Professor and Associate Dean of the School of Computing and Information Systems at SMU; Bman, Co-founder of ABCDE Venture Capital; and Lu Jianfeng, Chairman and Co-founder of WIZ.AI.

Panel discussion on AI industry application trends

Prof. Zhu emphasized composability and collaboration as defining features of generative AI. He argued that the rapid rise of open-source ecosystems has significantly lowered the barrier to adopting generative AI, enabling enterprises to build transparent, controllable systems on their own local data and thereby realize data sovereignty and AI sovereignty. Drawing on experience from personalized learning projects, he suggested that decomposing tasks and combining them with traditional algorithms can improve accuracy, and noted that in areas such as DeFi, autonomous on-chain AI agents have already shown practical value. He particularly emphasized the intersection of AI and Web3, identifying “collaborative intelligence + tokenized economy” as a key direction for industrialization and governance.

Bman discussed the emergence of the “super individual” in the AI age and praised the development potential unleashed by open-source models such as DeepSeek. He envisioned a coming “Agent Society,” where agents can not only execute tasks but also possess identity, property, and transaction capability. He cautioned that many startups merely wrap large models with little added value, and recommended that they establish network effects and build defensible business models. He also expressed expectations for a multi-agent economic model enabled by standard protocols and agent-to-agent collaboration.

Lu Jianfeng examined the practical viability and limitations of commercial AI agents, using WIZ.AI’s Voice Agent as an example. He argued that the core value of agents lies in completing tasks in a fully autonomous and closed-loop manner. However, key technical bottlenecks such as response speed and hallucination control remain. He proposed a “scenario-first” product development approach focused on real client needs and differentiating through data quality and customized knowledge. He added that AI agents will fundamentally reshape internal enterprise collaboration, leading to a new norm where humans and intelligent systems work side by side.

Moderator Yuan Xiaohui posed the question of whether AI is triggering a reconfiguration of business processes. She summarized that generative AI is evolving from a tool to a system component, prompting enterprises to embed agents into workflows and redefine the boundaries of human-AI collaboration. Even if agents are not yet perfectly accurate, human-AI co-creation already brings significant efficiency gains. She emphasized the importance of developing governance, safety, and public acceptability alongside technical progress.

Global Perspectives on AI Governance

The second half of the symposium focused on “Governance and Societal Impact of Generative AI.”

Prof. Simon Chesterman, David Marshall Professor of Law and Vice Provost (Educational Innovation) at the National University of Singapore, delivered a keynote speech titled “The Tragedy of AI Governance.” He argued that although AI ethics frameworks and regulatory debates abound, effective governance is still lacking, and identified three structural barriers: first, the growing dominance of private companies that prioritize deploying powerful models over governing them responsibly; second, governments' reluctance to impose strict regulation for fear of hindering innovation and competitiveness; and third, the absence of a crisis severe enough to force international coordination, of the kind that spurred the creation of bodies such as the UN and the IAEA. He warned that unless these imbalances are addressed, society may suffer a major AI-related crisis before effective regulation is in place, and underscored the need to preserve flexibility and reversibility in AI-related decision-making.

Professor Simon Chesterman delivered a keynote speech

Cheryl Seah, Director at Drew & Napier LLC and Industry Fellow at the SMU Centre for Digital Law, delivered a presentation titled “Generative AI: Perspectives from Practice.” She pointed out that AI governance always involves a tension between what an organization legally “can” do and what it “should” do, which is why many AI governance frameworks aim to guide organizations toward choices that build user trust in their AI solutions. Such frameworks and materials help organizations understand the risks of developing and using AI solutions, allocate those risks in contracts with third-party developers or users (as the case may be), and manage them through internal AI use policies. A new trend in 2025 has been the release of several sets of model AI contractual clauses; although most were issued by the public sector for public-sector users, they remain useful for private-sector organizations.

Ms. Cheryl Seah delivered a presentation

A panel discussion on the governance challenges of generative AI was moderated by Liu Han-Wei, Deputy Director of the SMU Centre for Digital Law. Participants included Alexander Joseph Woon Wei-Ming, Provost’s Chair and Lecturer at the School of Law, Singapore University of Social Sciences; Daniel Seng, Founder and Co-Director of the Centre for Technology, Robotics, AI and the Law (TRAIL) at the Faculty of Law, National University of Singapore; Josh Lee Kok Thong, Senior Research Affiliate at the SMU Centre for Digital Law; and Dr. Jeff Cao, Senior Researcher at Tencent Research Institute.

Panel discussion on the governance and societal impact of generative AI

Prof. Woon identified three major challenges in current AI governance.

(1) Ignorance vs indifference: governance frameworks may work for those who want to use AI responsibly but do not know how (the ignorant), yet they do nothing for those who do not care about right and wrong and will simply use AI for profit at any cost (the indifferent).

(2) Normative gaps vs enforcement gaps: in many cases, laws and frameworks to deal with bad actors already exist, but whether they can actually be enforced is a different story. Given the globalized nature of cyber-harms and the territorial nature of law enforcement, normative tools face major limits on their effectiveness.

(3) Supply-side vs demand-side responses to online harms: most current responses focus on service providers and bad actors. These are essentially “supply side” measures aimed at reducing the amount of harm produced, and, as noted above, their effectiveness is questionable, especially when providers and bad actors sit outside the jurisdiction. We should also explore “demand side” responses that harden the targets of online harms, extending beyond education to behavioral and design interventions that shape behavior, with the goal of developing a culture with built-in online safety instincts.

Prof. Seng emphasized that appropriate regulation in the AI field is necessary to guide its development rather than hinder innovation, as legal frameworks are essential tools for preventing the misuse and abuse of AI; conducting risk assessments of AI misuse is accordingly the right approach. In addressing challenges such as deepfake imagery and non-consensual impersonation, he argued that laws should focus on the nature of the behavior rather than the underlying technology, adopting a technologically neutral approach to ensure adaptability to future innovations and long-term effectiveness. He also called attention to the emerging issue of monopoly among large language models, advocating for greater linguistic and cultural diversity and representation, and noted that the development of Chinese language models can play an important role in rebalancing the global AI ecosystem and fostering opportunities for applications across diverse linguistic and cultural contexts.

Josh Lee pointed out that international AI governance faces many challenges, chief among them regulatory interoperability. As a working definition, this can be described as “a form of legal coordination where two or more regulatory regimes across different jurisdictions are able to interact harmoniously and exchange information, data or services, thus enabling smooth and efficient interactions between regulators and regulated entities operating under those frameworks”. One example is data protection, where one country recognizes another country's data protection law as providing an equivalent or similar level of protection. Regulatory interoperability is difficult because regulatory regimes for AI are still emerging, and even where they have emerged, they take distinctly different approaches. Nevertheless, all AI regulatory regimes seek to balance innovation with sufficient safeguards against potential risks. To that end, regulators can adopt a spirit of agile innovation, regularly updating their frameworks and issuing guidance as the technology develops, and maintain a bias toward fostering adoption, so that greater use and understanding of the technology begets greater trust in it. Engaging closely with industry, academia, civil society, and other international regulators is key to fostering ecosystem-wide understanding of AI.

Dr. Jeff Cao pointed out that the autonomous decision-making (or the delegation of decisions from humans to AI), emotional substitution, and human enhancement brought about by AI applications may introduce new challenges, and that these three foundational phenomena will be the source of many risks in the years ahead. It is therefore essential not only to address problems inherent in AI decision-making, such as hallucinations, biases, mistakes, misuse, abuse, and misalignment, but also to pay attention to the impact of emotional AI applications (such as AI companionship) on interpersonal and human-AI relationships, ensuring that AI does not weaken genuine human connections. Moreover, when leveraging generative AI to improve efficiency and productivity, it is crucial to avoid over-reliance on AI tools, which risks “short-circuiting” individual human cognitive abilities; used appropriately in education and work, AI should instead enhance human creativity, achieving truly human-centered and human-enhancing intelligence (AI for people and humanity in the true sense). Finally, the rapid evolution of AI technology demands governance mechanisms that are adaptive and agile, relying not merely on external regulation but, more importantly, on model-internal mechanisms such as ethics-by-design, value alignment, and safety guardrails, thereby creating AI assistants that are safe, honest, useful, harmless, and reliable.

Building a Human-Centered AI Future

At the end of the symposium, Jason Si, Dean of Tencent Research Institute, and Prof. Liu Han-Wei, Deputy Director of the SMU Centre for Digital Law, delivered concluding remarks on behalf of the organizers.

Mr. Jason Si, Dean of Tencent Research Institute, delivered the closing remarks

In his remarks, Jason Si shared several reflections on the current trajectory of AI development. He noted that AI is accelerating toward artificial general intelligence (AGI), driving a new wave of structural transformation in the economy and society. Enterprises are moving beyond measuring digitalization through electricity or cloud consumption and are instead adopting “token usage” as a new metric for intelligence intensity, signaling the arrival of an era of “Intelligence as a Service.” He also called on all sectors of society to jointly promote AI governance and social responsibility. On one hand, this requires anticipating and addressing the societal impact of AI technologies through cross-disciplinary and cross-sector collaboration. On the other, it calls for forward-looking imagination and the articulation of a positive AI vision, one that can build consensus, guide AI development toward the good, and keep it rooted in humanity, ethics, and responsibility. Only by doing so can AI truly become a force for building a better future rather than a source of risk.

Professor Liu Han-Wei, Deputy Director of the SMU Centre for Digital Law, delivered the closing remarks

Prof. Liu concluded the symposium by expressing his gratitude to all the speakers and participants for their active engagement. He remarked that the discussions on generative AI, its societal impact, and future directions for international governance had been highly productive, covering a wide range of topics including deepfakes, AI agents, and algorithmic bias. He noted that the SMU Centre for Digital Law has long focused on cutting-edge issues in the digital economy and digital society, and has co-organized numerous events and discussions with industry partners. This collaboration with Tencent Research Institute further deepens that partnership, and he looked forward to more opportunities for exchange and cooperation in the future.

Learn more about what the Singapore Management University (SMU) Centre for Digital Law (CDL) does here. If you enjoyed this article, follow us on LinkedIn, Facebook, X, or Instagram for more!

Written by SMU Centre for Digital Law

Strengthening Singapore's leadership in digital transformation while advancing a 'smart nation' vision that fosters human flourishing for the next century.
