Artificial Intelligence & Parkinson's

A Personal Note on the Difficult Topics of Health and Ethics

Alex Moltzau
ODSCJournal

--

When my mother called me into the room with my two younger brothers one autumn night, everything was quiet for some time. My father was leaning back in a chair, looking at all of us. I knew he had something to tell me, because my mother had told me to be there on this specific day. She had told me it was serious. I was staring into my father's eyes when he took a deep breath. He looked at me, then at each of my brothers in turn.

He took another look at my mother before he looked down. He slowly said: “I have Parkinson's. It is not an easy disease, and it will take some time for you to understand. There is no cure for Parkinson's, and it will never get better, only worse. It will be slow and it will be difficult. When I started shaking I knew something was wrong, and I went to the doctor. They confirmed my suspicions: I have it, an incurable disease that will never let go.”

We were all crying, my family and I, as we embraced my father.

Parkinson's disease is a long-term degenerative disorder of the central nervous system that mainly affects the motor system. Although there is no cure, there are medications that help control symptoms, potentially surgery at later stages, and lifestyle changes that can help lessen the degenerative effect. The disease may be easier to handle if accurate predictions can be made for those who are predisposed, or if those affected can be diagnosed earlier.

When I was frantically searching for answers, I realised there is rather a lot of data on Parkinson's (Michael J. Fox Foundation, 2019). I had heard talk about the importance of health data, but I had never so directly experienced thinking about the potential it can hold. I want to think that there is a programmer or another individual out there with the same need to help a family member, and that we can work together at some point, piecing together information or ensuring that data from around the world is available.

It is not easy to think about, but there are others who have experienced worse. I remember my wife's father dying of a complication, and running down those hospital corridors. He died too young, of health complications resulting largely from acute kidney injury. Acute kidney injury can often be treated successfully when the underlying cause is diagnosed early.

In July 2019, DeepMind Technologies announced that its technology could predict acute kidney injury up to two days before it happens. This resulted from a 2016 partnership with the U.K.'s National Health Service. The collaboration gave DeepMind access to 1.6 million patient records for a kidney-monitoring app called Streams, and its data-sharing practices were ultimately deemed illegal (Shead, 2017), prompting an apology from the company.

Self-Evaluation of Algorithms

DeepMind Technologies is a UK company founded in September 2010 by Demis Hassabis, Mustafa Suleyman, and Shane Legg, and currently owned by Alphabet. The company is based in London, with research centres in Canada, France, and the United States.

It was acquired by Google in 2014 for £400 million (Gibbs, 2014), or about $650 million at the time. As part of the deal, Google was to establish an artificial intelligence ethics board, yet this board for AI research remained a mystery for a while. In 2019, an attempt was made to establish an external board for responsible AI, but its composition was so widely protested that it was shut down a week after being set up (Piper, 2019). In a statement responding to the shutdown of this Advanced Technology External Advisory Council (ATEAC), a Google spokesperson said:

“It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”

DeepMind has a unit called DeepMind Ethics & Society (DeepMind, 2019), launched in October 2017 as a new research team to investigate AI ethics (Vincent, 2017), with a stated focus on different aspects of safety and society. DeepMind's research labs collaborate with Google and OpenAI, and one such collaboration seems to be on evaluation tools, most recently one called bsuite.

The stated goal of the bsuite library (a code repository) is to facilitate reproducible and accessible research on core issues in reinforcement learning (DeepMind Research, 2019). The code is written in Python and is apparently easy to use within existing projects. It includes examples built on code from OpenAI and Google, as well as new reference implementations. As such, it expresses a wish to develop tools for evaluating algorithm performance. However, we may question large technology companies developing the tools that evaluate their own performance.
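As a concrete illustration, below is a minimal sketch of what evaluating an agent on a single bsuite experiment looks like. This is a sketch under assumptions: it presumes the bsuite package is installed, follows the `load_and_record_to_csv` helper and the dm_env-style reset/step loop from the repository's documentation, and stands in a random policy where a real agent would go.

```python
# Minimal sketch: run a (random) agent on one bsuite experiment.
# Assumes `pip install bsuite`; 'catch/0' is one of the suite's environment IDs.
import numpy as np
import bsuite

# Load one experiment and record results to CSV for later analysis.
env = bsuite.load_and_record_to_csv('catch/0', results_dir='/tmp/bsuite')

for _ in range(env.bsuite_num_episodes):  # episode budget fixed by the suite
    timestep = env.reset()
    while not timestep.last():
        # A random policy stands in for a real agent here.
        action = np.random.randint(env.action_spec().num_values)
        timestep = env.step(action)
```

The recorded results can then be loaded into the analysis notebooks that ship with the repository, which summarise how an agent scores across the suite's core capabilities.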

Self-Policing Technology and Corporate Science

Is there an issue with tobacco companies sponsoring research on whether smoking gives you cancer? Has there been an issue with pharmaceutical companies sponsoring research institutes that work with regulators? If a mining company evaluates whether a river is clean, is that a problem? If an oil rig at sea evaluates whether there has been an oil spill, can there be cases where incidents go unreported? We know that in all these cases the answer is often yes. Although it may not be the case for every industry actor, it seems to be a general trend among larger companies.

Being ‘ethical’ or doing the right thing is hard enough in an applied-ethics sense, on a case-by-case basis; it is just as important to consider who builds the tools or funds the ethics. Facebook decided to fund the Technical University of Munich's AI ethics institute with $7.5 million earlier this year (Shead, 2019). Sheryl Sandberg announced this at DLD in Germany, one of the most prestigious tech conferences in Europe.

Microsoft is a close partner of Stanford University's Institute for Human-Centered Artificial Intelligence. Although this remains to be confirmed, it has been reported that the institute intends to attract $1 billion in funding (Gershgorn, 2019). OpenAI, one of the only sizeable nonprofit organisations investigating AI safety and artificial general intelligence, received a $1 billion investment from Microsoft (Brockman, 2019), having created a for-profit arm that may attract further investment.

Large technology companies understand they have to do better, and many have faced more regulation in recent years. After privacy violations and the mishandling of users' information, Facebook was fined $5 billion by the Federal Trade Commission in the United States (Kang, 2019). Google received its third billion-dollar antitrust penalty from the European Commission, which has fined the company more than $9 billion for anticompetitive practices since 2017 (Tiku, 2019).

We have to ask ourselves whether research into the ethics of applied AI can be sponsored by the very companies it questions without consequences. Further, we have to ask how we would otherwise approach this important aspect of developing technology, without relying on large companies to police their own applications of AI.

I want to believe these companies fund research that makes them better, and I would hope for no strings attached. However, Facebook, Google, and Microsoft all have possible issues regarding their conduct, and how would this dynamic work if unethical conduct were discovered? I believe this question needs to be answered if we are to avoid repeating, with companies selling technology, the mistakes made in other industries in the past.

Unsafe AI Safety

Safety in the field of artificial intelligence is a challenging topic to discuss. In my conversations with friends, and at the business conferences and events I have seen so far, there has been little talk of the specifics of AI safety. Perhaps I am in the wrong communities, but it seems to me that very few people consider its implications, and those I do meet usually approach it from a technical, engineering, or economic perspective. What other areas remain? For one: what is it that we consider important in AI safety?

We have to consider what we want to keep safe, or what we want to protect. We could talk of the field of AI in broad terms, or of questions of humanness, investments, financial analysis, or insurance. Consider securitisation, a financial practice: the pooling of various types of contractual debt, such as residential mortgages, commercial mortgages, auto loans, or credit card debt obligations, which can be lumped together into bonds.

As an idea, it was intended to keep investors safe from the failure of individuals to repay their loans, yet it can clearly be said that securitisation contributed to the financial crisis of 2008 (Hill, 2009). Rating agencies infamously rated such securities triple-A (AAA), the grade considered the absolute safest. So who decides how secure artificial intelligence is? Who decides which algorithms are safest? Private companies, states, and NGOs do at times invest in these solutions, yet there is little assurance of quality or safety.
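To make the analogy concrete, here is a toy simulation (all numbers and thresholds are invented for illustration; this is not a model of 2008) of why a 'senior' slice of pooled loans can look absolutely safe under an assumption of independent defaults, yet fail regularly once defaults are correlated, say, by a housing crash. The safety label depends entirely on the assumptions of whoever assigns it:

```python
# Toy sketch: a senior tranche of pooled loans looks safe only if
# defaults are independent. All parameters are invented for illustration.
import random

random.seed(0)
N_LOANS = 100

def senior_tranche_loses(correlated: bool) -> bool:
    """The senior tranche only takes losses if over 20% of loans default."""
    if correlated:
        # A shared shock (e.g. a housing crash, 8% of the time) hits all loans.
        p_default = 0.5 if random.random() < 0.08 else 0.01
    else:
        # Independent defaults at a flat 5% rate.
        p_default = 0.05
    defaults = sum(random.random() < p_default for _ in range(N_LOANS))
    return defaults > 0.2 * N_LOANS

for label, correlated in [('independent', False), ('correlated', True)]:
    losses = sum(senior_tranche_loses(correlated) for _ in range(10_000))
    print(f'{label:>11}: senior tranche loses in {losses / 100:.2f}% of runs')
```

Under independence the tranche essentially never loses, which is how an AAA label gets justified; under a shared shock it loses in roughly 8% of runs. The same lesson applies to algorithms: a safety rating is only as strong as the assumptions, and the assessor, behind it.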

Who or what entity keeps it safe? One consideration is the unfolding climate crisis. It seems to me this concern has to be built into more processes, mission statements, and company strategies, perhaps even to the extent of making it part of everyday operations to a much larger degree than ever before. I would argue the greatest risk in AI safety, besides the apocalyptic nuclear scenario, is companies building products while ignoring or partially forgetting the unfolding climate crisis.

The impression I held was that companies working within the field of AI did not put the climate crisis front and centre. Their concerns often seem projected into a future made brighter by the promise of applications or products being built. It has been argued that this is solutionism: the belief that all difficulties have benign solutions, often of a technocratic nature. I am not so sure this situation has changed much. I am of course not proposing we go forward without thinking about technology; however, it is only part of the solution, and a part I am highly interested in exploring.

Conclusion

Data is personal, both in the sense that it can be hard to give away and in that it can be emotionally or directly important. Data could save lives, and opening up data is frighteningly important and a hard task. It demands a great deal of responsibility from those managing the data and those evaluating the algorithms. Self-policing by large companies could have severe consequences. It is even worse if the climate crisis is ignored in the pursuit of saving lives in the short term, though it is very tempting to try when it is someone close, like my father.

References

Brockman, G. (2019, August 7). Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI. Retrieved from https://openai.com/blog/microsoft/

DeepMind. (2019, September). Safety & Ethics. Retrieved from https://deepmind.com/safety-and-ethics

DeepMind Research. (2019, August 16). Behaviour Suite for Reinforcement Learning (bsuite). Retrieved from https://deepmind.com/research/open-source/bsuite

Gershgorn, D. (2019, March 23). Stanford’s new AI institute is inadvertently showcasing one of tech’s biggest problems. Retrieved from https://qz.com/1578617/stanfords-new-diverse-ai-institute-is-overwhelmingly-white-and-male/

Gibbs, S. (2014, January 27). Google buys UK artificial intelligence startup Deepmind for £400m. Retrieved from https://www.theguardian.com/technology/2014/jan/27/google-acquires-uk-artificial-intelligence-startup-deepmind

Hill, C. A. (2009). Why did rating agencies do such a bad job rating subprime securities. U. Pitt. L. Rev., 71, 585.

Kang, C. (2019, July 12). F.T.C. Approves Facebook Fine of About $5 Billion. Retrieved from https://www.nytimes.com/2019/07/12/technology/facebook-ftc-fine.html

Michael J. Fox Foundation. (2019, September). Data Sets. Retrieved from https://www.michaeljfox.org/data-sets

Piper, K. (2019, April 4). Exclusive: Google cancels AI ethics board in response to outcry. Retrieved from https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board

Shead, S. (2017, July 3). Google DeepMind’s first deal with the NHS was illegal, UK data regulator rules. Retrieved from https://www.businessinsider.com/ico-deepmind-first-nhs-deal-illegal-2017-6?r=UK

Shead, S. (2019, January 20). Facebook Backs University AI Ethics Institute With $7.5 Million. Retrieved from https://www.forbes.com/sites/samshead/2019/01/20/facebook-backs-university-ai-ethics-institute-with-7-5-million/#4f6f0cc21508

Tiku, N. (2019, March 20). The EU Hits Google With a Third Billion-Dollar Fine. So What? Retrieved from https://www.wired.com/story/eu-hits-google-third-billion-dollar-fine-so-what/

Vincent, J. (2017, October 4). DeepMind launches new research team to investigate AI ethics. Retrieved from https://www.theverge.com/2017/10/4/16417978/deepmind-ai-ethics-society-research-group

This is day 113 of #500daysofAI. My current focus for days 101–200 is mostly on Python programming; however, with climate protests taking place around the world, I am focusing my writing on the climate crisis. If you enjoy this article, please give me a response, as I want to improve my writing and discover new research, companies, and projects.
