Taking advantage of AI whilst avoiding pitfalls: a note for policy
The development of full artificial intelligence could spell the end of the human race — Stephen Hawking
I would actually be very pessimistic about the world if something like AI wasn’t coming down the road — Demis Hassabis
Public standards watchdog to probe use of artificial intelligence
Artificial Intelligence never seems out of the headlines these days. So, in recognition of its growing significance, the UCL Policy Forum held its second annual event on the topic of AI. Supported by UCL Public Policy and UCL Changemakers, the event featured a panel discussion centred on the question: How should policymakers react to the rapid advancements in AI? Attracting over two hundred attendees, including academics, students and professionals, the event culminated in an extensive and engaging Q&A.
We were proud to host all of the following speakers:
- Rob McCargow — Director of Artificial Intelligence at PwC
- Ivana Bartoletti — Head of Data Protection and Privacy at Gemserv and Co-Founder of Women Leading AI Network
- Clara Durodie — CEO of Cognitive Finance Group
- Prof. Murray Shanahan — Professor of Cognitive Robotics at Imperial College London and Senior Researcher at DeepMind
- Dr. Jack Stilgoe (Moderator) — Senior Lecturer in the Department of Science and Technology Studies at University College London and Fellow of the Alan Turing Institute.
What we learnt from the panel
The panel started by considering the current hype around AI and whether the excitement is justified. Prof. Shanahan reminded us that when AI was first introduced in the 1950s, it generated the same kind of excitement we are seeing today, which was then followed by a period of relative silence in the scientific community. Unlike in the 1950s, however, the current hype is less likely to die down, because the practical benefits of AI are increasingly integrated into the operations of many businesses today. While predicting that AI is likely to stay, he did suggest that the tendency to humanise it often generates misstatements and overstatements of what it can really do.
Rob McCargow also pointed to the hundreds of AI companies being set up with over 800 already registered in London alone. With this rapid expansion of the use of AI in industry, it is important to consider how individual corporations can ensure the quality of the AI-enhanced services they are offering. For this, Rob recommended that we foster a greater understanding of AI outside of the small, existing circle of experts. Moreover, as setting up and creating AI-powered services has become increasingly accessible, there is an important need to raise the formal standards for practising and managing AI.
Taking a different stance, Clara Durodie revealed that despite the ongoing debate around AI, it has yet to fully reach the financial sector. There are untapped possibilities for AI in finance because senior managers lack the knowledge to understand the potential of developments in AI and machine learning in their field. Like Rob, Clara recognised the need to broaden out beyond the ‘usual suspects’ and open up a space for learning and skills development if the financial sector is to benefit from AI.
While AI technologies are rapidly advancing and being built into the operations of our businesses, we should also be wary of the implications of these changes. For the most part, the panellists discussed the consequences of large-scale data collection. Put simply, AI uses algorithms, and a good algorithm needs a lot of data to work. The data used by AI is often data about people. This puts companies with access to this information in a position of power, as many of the details collected are personal and potentially sensitive. How this data is used therefore matters not only to businesses, but also to us as users.
Using the example of targeted advertisements, the panellists drew our attention to the positive and negative uses companies can make of personal data. Targeted advertisements on mobile phones can be beneficial, as they can smooth the user journey by putting forward goods and services tailored to the user’s preferences. However, access to a user’s personal data also offers the opportunity to misuse AI: algorithms can be developed that use this information to take advantage of vulnerable people. Ivana Bartoletti gave an example: where an individual’s data suggests they are at a point of mental instability, advertisers can offer them the chance of a spontaneous holiday. From the advertisers’ perspective, using AI to pick out individuals who are more likely to make a spontaneous purchase is profitable, but it would defy ethical standards. The data could be put to an alternative use: if AI can pick out vulnerable people, why not make them aware of the health services available to them? Indeed, the potential of AI to improve healthcare for all was frequently cited on the night.
The range of potential consequences of using AI prompted the question: How do we take advantage of the benefits of AI while avoiding its potential negative outcomes? The overarching consensus from the panel was that better regulation of the use of AI is needed. Whilst self-regulation is an important part of the picture, and users of AI-powered services must become more aware of the data they are handing over, responsibility should be shared by customers and companies alike. Individuals cannot carry the burden of preventing the potential negative outcomes of AI on their own; the sheer importance of the task demands more than just self-regulation. Some companies are already responding to this: IBM, Google, and Microsoft all have company principles for ethical AI.
Given the pressing demand for greater regulation of AI, it is crucial to focus on the careful and effective design of this regulation. Panellists suggested that regulations need to be introduced in two areas: the building of algorithms and the choice of data. Regulations should aim to achieve greater transparency and accountability for companies developing AI technologies. Regarding algorithms, Ivana demanded that companies be willing to explain what their algorithms do and how they work. She argued that, since algorithms use our personal data, we should be entitled to an explanation of how this data is processed and for what purpose. However, as Murray pointed out, this presents a unique challenge: there is no straightforward translation from algorithms to natural language, so explaining exactly how a specific algorithm arrives at its outputs is far from simple.
Overall, policymakers should be both excited and cautious about the future of AI. AI presents some unparalleled challenges. However, if policymakers can ensure that AI practices are managed responsibly, the technology offers a unique opportunity to facilitate meaningful developments in both private and public sector services for the benefit of society.
Ana Cuteanu and Annys Rogerson are Co-Heads of the Organising Committee for the UCL Policy Forum.