Nigel Crook, Director of the Institute for Ethical AI at Oxford Brookes University
Rise of the Moral Machine
Sonny: “My father tried to teach me human emotions. They are … difficult”
Spooner: “You mean your ‘designer’.”
Spooner: “So why did you murder him?”
Sonny: “I did not murder Dr Lanning”
Spooner: “Want to explain why you were hiding at the crime scene?”
Sonny: “I was frightened”
Spooner: “Robots don’t feel fear. They don’t feel anything. They don’t get hungry, they don’t sleep …”
Sonny: “I do. I have even had dreams”
Spooner: “Human beings have dreams. Even dogs…”
Co-authors: Dr Paul Jackson, Principal Consultant at the Ethical AI Institute, and Dr Yayoi Teramoto, AI Innovation Developer and KTP Associate for the Blenheim project.
Oxford Brookes is now nine months into its Innovate UK-funded Knowledge Transfer Project (KTP) with Blenheim Palace. One key goal of the project is to understand how visitors move around the site, and how different factors (e.g. weather, special exhibitions or other events) might affect what visitors will do on a particular day. Once we understand what people actually do on site, we can see how we might improve the experience of their visit —…
Rebecca Raper, PhD Candidate at Oxford Brookes University in ‘Autonomous Moral Artificial Intelligence’
Photo: Possessed Photography https://unsplash.com/@possessedphotography
The idea of a machine with the ability to make moral decisions might conjure images of dystopian fiction, or at the very least seem impossible to achieve. Morality, it seems, is a distinctly human quality — and, in any case, not something we would want to give to machines.
However, the effort to create machines with morals — known within Computer Science as Machine Ethics — is a very real pursuit. …
Photo by Alexander Sinn https://unsplash.com/@swimstaralex
It is estimated that one in three relationships now starts online. Dating websites and apps are no longer a niche activity, and online dating plays a central role in the social fabric of society. However, these sites are not safe environments for all. Between 2011 and 2016, the National Crime Agency recorded a dramatic 382% increase in crimes reported to the police related to online dating, and the figures continue to rise, with numbers doubling between 2015 and 2019. The Suzy Lamplugh Trust and the dating service Match.com found that a third of online daters have…
Rebecca Raper, Senior Consultant, Oxford Brookes University Institute for Ethical AI
On Tuesday 7 July, we virtually welcomed 10 professionals from across different industries — spanning human resources, innovation, and management strategy — for our first Risk Classification Framework Workshop.
Organisations are searching for the right approach to evaluating AI systems for the potential harms they could cause. At the Institute for Ethical AI (IEAI), we have been exploring the potential of using a risk-based governance approach to provide appropriate oversight for systems that use probabilistic reasoning.
An essential part of risk-based governance is understanding what causes higher risk…
by Arijit Mitra
This blog is concerned with the role of DevOps, DataOps and MLOps methodologies in delivering AI on an enterprise scale — important emerging areas of productionising a product containing AI. Without these fully under control, the risk of mishap during operation of the AI product is substantially increased.
I had been delivering enterprise-level systems long before AI became ubiquitous. I can distinctly remember that, at the point just before going into production, despite all the planning, I always used to get butterflies in my stomach. …
In 2016, the British Science Association (BSA) conducted a survey via YouGov regarding public sentiment towards artificial intelligence (AI). Of the more than 2,000 respondents, 36 percent considered the development of AI to pose a threat to the long-term survival of humanity.
It is fair to assert that public sentiment regarding AI is primarily shaped by the media. Unfortunately, AI-related media coverage is often less than well informed and has arguably contributed significantly to unwarranted fear of, and paranoia about, AI as an existential risk. This post examines the misrepresentation of AI in the media, providing both examples and context.
An Interview with Paul Massara
It was our pleasure to recently have a chat with Paul Massara about the new International Centre for AI, Energy & Climate initiative. This article is a transcript of that interview.
Paul Massara is an investor in and advisor to a number of energytech businesses, including Zeigo Energy, Isize Technologies, Habitat Energy and Electron. He was previously CEO of RWE NPower and a member of Centrica’s Executive Committee, and has served on the Committee on Fuel Poverty, which advises the UK Secretary of State for Energy. He has most recently…
We’re a bit more optimistic. Here’s why…
by Paul Jackson, PhD
“After years of hype, many people feel AI has failed to deliver.” So begins Tim Cross in The Economist’s Technology Quarterly of June 13th. Debates on the future of AI, it goes on, demand a ‘reality check’. AI, the newspaper says, is running up against limits and has failed to deliver on grandiose promises (fair enough, although grandiosity is hardly a fair yardstick for measuring success).
Citing Sundar Pichai, Google’s boss, The Economist is perhaps right in its wariness towards his claim that AI will spur more profound change…
by Selin Nugent, PhD
Artificial Intelligence and advanced data analytics systems are becoming increasingly sought-after tools in human resources functions, used to automate time-consuming, repetitive operational tasks and to expand strategic potential. However, as the engineering of these products becomes more complex, it is harder for employers to confidently assess whether the technology is well suited to their needs, functioning to their expectations, and safe and fair for users.
Employers don’t necessarily need to understand the technical minutiae and engineering of AI systems, but they must take on the equally important and difficult responsibility of self-reflection and…
Promoting the ethical development and deployment of AI technology.