Navigating the Future: Philips and the Evolving AI Regulation Landscape
Authors: Deha Kuran, Arlette van Wissen, Michael van Hartskamp, Ger Janssen, Philips Responsible AI Team
We are witnessing an exciting moment in time where artificial intelligence is becoming an increasingly mainstream technology and is being integrated into our day-to-day lives to address challenges in healthcare. As a company we have been leveraging artificial intelligence (AI), including machine learning (ML), in various applications for more than ten years. Hence the question: how do we regulate this superpower to prevent misuse and maximize its benefits?
In the Philips Data & AI Excellence team, we focus on enabling Philips’ businesses to harness data and AI capabilities effectively in our products, with patient safety and quality at the center. What this enablement entails varies with the nature of each business’s needs. These needs are dynamic and change with the regulatory landscape as well as with the rise of innovative technologies, like generative AI. Our team supports product teams with people upskilling, governance and process definitions. What these efforts have in common is making sure our AI-enabled technology is safe, complies with regulations, meets ethical standards, and is trustworthy.
In this blog post, we give an overview of the AI regulatory landscape from our line of sight, show its impact on healthcare technology development, and describe how Philips is navigating these challenges.
AI Regulations Landscape: A Philips Perspective
In response to the growing need to responsibly manage AI in healthcare technologies, several initiatives are bringing forward principles and best practices. Our CEO Roy Jakobs is co-chair of the National Academy of Medicine’s (NAM) Steering Committee, which is developing an AI Code of Conduct to ensure equitable, ethical, safe and responsible use of AI in health. They recently published their proposed AI Code of Conduct commitments:
1. Protect and advance human health and human connection as a primary aim.
2. Ensure equitable distribution of benefit and risk for all.
3. Engage people as partners with agency in every stage of the life cycle.
4. Renew the moral well-being and sense of shared purpose to the health care workforce.
5. Monitor and openly and comprehensibly share methods and evidence of AI’s performance and impact on health and safety.
6. Innovate, adopt, collaboratively learn, continuously improve, and advance the standard of clinical practice.
The intention of the NAM Steering Committee is that all decisions associated with, and actions taken to, develop and deploy AI in the health sector will be consistent with these commitments, so as to develop and foster trust.
Similar commitments are already clearly visible in proposals from regulatory bodies. For instance, the US Food and Drug Administration (FDA) is paying close attention to AI-enabled medical devices and has released guiding principles on good machine learning practices for medical device development.
In Europe, the Data Act and the AI Act are set to revolutionize how AI is leveraged in the European Union (EU). The AI Act categorizes AI systems based on risk; in healthcare, high-risk systems are those integral to patient care and outcomes. These high-risk systems will have to comply with strict requirements and conformity assessments, including extensive documentation, transparency in algorithms, and robust data governance to ensure traceability and accountability.
In China, the National Medical Products Administration (NMPA) has established guidelines that emphasize the reliability, safety, and effectiveness of AI medical devices. The NMPA’s standards mandate clinical evaluation reports, risk management throughout the product lifecycle, and post-market surveillance to safeguard public health.
These are just a few examples of established standards & regulations that serve as benchmarks for various nations, many of which are in the process of developing their own policies. This reflects a global consensus on the need for stringent regulatory oversight to manage AI’s profound implications for healthcare.
The challenges of developing trustworthy AI systems
As you can see, the global AI regulation landscape is complex and varied. Philips remains proactive in navigating this landscape, ensuring our healthcare solutions meet these high standards of compliance and safety. As a global healthcare company, we must invest in AI systems that not only excel in performance, but that are also created and deployed following responsible AI practices.
The unprecedented power of AI calls for additional regulation, especially with the introduction of generative AI (general-purpose AI). The absence of such regulation can lead to significant issues such as biased and discriminatory outcomes, fabricated conclusions, hallucinations (failures/errors), and privacy or security concerns that may lead to unfair and unethical use.
For example, a clinical study found that software widely used in the healthcare industry prioritized less sick white patients over more severely ill black patients for access to high-risk healthcare management programs. The bias arose from the algorithm’s reliance on healthcare costs, rather than actual illness, to determine patient ‘risk’, thereby inadvertently reflecting and perpetuating existing racial disparities in healthcare spending and access.
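To make this mechanism concrete, here is a minimal sketch with purely synthetic numbers (not data from the study, and not Philips code): when a cost-based proxy systematically understates the needs of one group, ranking patients by that proxy enrolls fewer of them than ranking by actual illness would.

```python
# Toy illustration with synthetic data: a cost proxy vs. actual illness
# when selecting patients for a care-management program.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic cohort: two groups with identical illness distributions.
group = rng.integers(0, 2, n)                     # 0 = group A, 1 = group B
illness = rng.normal(loc=5.0, scale=2.0, size=n)  # "true" health need

# Assumption for this sketch: group B accrues ~20% lower costs at the same
# illness level (e.g., due to unequal access), so cost understates their need.
cost = illness * np.where(group == 1, 0.8, 1.0) + rng.normal(0, 0.5, n)

# Enroll the top 10% by each criterion.
k = n // 10
by_cost = np.argsort(-cost)[:k]
by_illness = np.argsort(-illness)[:k]

print("share of group B enrolled, cost proxy:   ", (group[by_cost] == 1).mean())
print("share of group B enrolled, illness-based:", (group[by_illness] == 1).mean())
```

In this toy setup the cost-based ranking visibly under-selects group B relative to an illness-based ranking, even though both groups are equally sick by construction.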
Generative AI also has biases, which show up when you generate text or images. A recent paper showed that men are starkly over-represented in online images, and that gender bias is stronger in generated images than in generated text. For instance, if you generate a photo of a surgeon, you mostly get white males.
A more recent non-healthcare example from Google’s generative AI, Gemini, shows that trying to mitigate a known bias can prove harder than you think. Gemini tried to overcome a lack of diversity in images by prioritizing diversity in its output. As a consequence, however, when Gemini was prompted to generate images of historical figures such as ‘a Viking’ or ‘a pope’, it ended up creating diverse but historically inaccurate images. So detecting and mitigating bias is very important, but not easy.
This is not a challenge faced only by Google, which has publicly acknowledged the issue; it is a problem that anyone developing AI-enabled solutions can encounter. Sadly, the internet is a vast landscape containing a lot of human bias. Despite all good intentions, supervised and unsupervised models trained on this biased data can end up with their own (implicit) biases.
Philips’ Approach to AI Challenges
At Philips, we have a set of company-wide Data and AI principles that guide us in creating safe, compliant, ethical and trustworthy solutions. They emphasize our priorities when we design, develop, deploy and monitor AI. Our commitment to these principles underpins our technology development as it is our intention to ensure that all our healthcare solutions are safe and beneficial for all stakeholders.
To make these guiding principles more actionable and relevant, our Data & AI Innovation Excellence team collaborates with subject matter experts and works with Philips businesses to develop a Responsible AI framework, which aims to give guidance on the technical (can we?), compliance (are we allowed to?) and ethical (should we?) aspects of AI deployment. Below, we give a couple of examples of how we move from principles to practice.
To improve awareness of fairness considerations and the mitigation of biases, we created the BIRIA (Bias Risk Assessment) tool. It is being introduced into the product development lifecycle to guide data scientists and AI developers in checking where biases might occur, so that product development teams can take the right preventive or mitigating actions.
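BIRIA itself is an internal tool, so the sketch below is not its implementation; it only illustrates the kind of subgroup check such guidance points developers towards, here comparing false negative rates across groups. The variable names and toy arrays are purely illustrative.

```python
# Minimal sketch (not the BIRIA tool): compare error rates across subgroups
# to flag where a model may need closer review or mitigation.
import numpy as np

def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true positives the model missed."""
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 0)) if positives.any() else 0.0

def subgroup_fnr_gap(y_true, y_pred, sensitive):
    """Return the FNR per subgroup and the largest gap between subgroups."""
    rates = {
        g: false_negative_rate(y_true[sensitive == g], y_pred[sensitive == g])
        for g in np.unique(sensitive)
    }
    return rates, max(rates.values()) - min(rates.values())

# Illustrative usage with placeholder arrays; in practice these would come
# from a held-out validation set with a relevant sensitive attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
sensitive = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
rates, gap = subgroup_fnr_gap(y_true, y_pred, sensitive)
print(rates, "max gap:", round(gap, 2))
```

A large gap between subgroups does not prove unfairness on its own, but it is exactly the kind of signal that should trigger a closer look and, where needed, mitigation.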
Another example concerns accountability and transparency. Internally, we use data and algorithm catalogues that clearly show who is accountable for the data and the implemented models. In addition, we use tools to trace back the lineage of each asset and to see all its attributes, along with any transformations applied to these data elements.
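Our catalogues and lineage tools are internal, but the underlying idea can be sketched in a few lines: each asset entry records an accountable owner, the upstream assets it was derived from, and the transformations applied, so lineage can be walked back to the source. All names below are hypothetical.

```python
# Minimal sketch of a catalogue entry with ownership and lineage
# (hypothetical names; not Philips' internal tooling).
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    asset_id: str                 # unique identifier of the dataset or model
    owner: str                    # accountable person or team
    derived_from: list[str] = field(default_factory=list)     # upstream asset ids
    transformations: list[str] = field(default_factory=list)  # applied steps

def lineage(catalog: dict[str, CatalogEntry], asset_id: str) -> list[str]:
    """Walk upstream references to reconstruct the full lineage of an asset."""
    chain, todo = [], [asset_id]
    while todo:
        current = todo.pop()
        chain.append(current)
        todo.extend(catalog[current].derived_from)
    return chain

catalog = {
    "raw_vitals": CatalogEntry("raw_vitals", owner="data-engineering"),
    "vitals_cleaned": CatalogEntry(
        "vitals_cleaned", owner="data-engineering",
        derived_from=["raw_vitals"],
        transformations=["de-identification", "unit harmonization"]),
    "deterioration_model_v1": CatalogEntry(
        "deterioration_model_v1", owner="clinical-ai-team",
        derived_from=["vitals_cleaned"],
        transformations=["train/validation split", "model training"]),
}
print(lineage(catalog, "deterioration_model_v1"))
# ['deterioration_model_v1', 'vitals_cleaned', 'raw_vitals']
```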
One other concrete example concerns sustainability. We focus on identifying metrics and best practices for green AI, and on methods for monitoring the environmental impact of our AI-enabled medical solutions. As you might know, training an AI model consumes a lot of power, and so does using AI in the field. We aim to carefully assess our solutions and use these metrics to drive green software and AI development, thereby striving to reduce both our environmental impact and that of our partners.
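As one example of how such metrics can be collected (one possible open-source option, not necessarily the tooling we use in practice), the codecarbon package can wrap a training run and report an estimate of the CO2-equivalent emitted; the training function below is just a placeholder.

```python
# Example of measuring the footprint of a training run with the open-source
# codecarbon package (illustrative; the "training" here is a placeholder).
from codecarbon import EmissionsTracker

def train_model():
    # Placeholder for an actual training loop.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="green-ai-sketch")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Tracking numbers like these per training run is one practical way to turn "green AI" from an intention into a measurable engineering target.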
Our Responsible AI framework is designed and developed with the flexibility to address the evolving landscape of AI regulation, while putting patient safety and quality at the center of everything we introduce.
Curious about working in tech at Philips? Find out more here