Improving ethics in artificial intelligence and big data

Published in Digital Leaders · 5 min read · Mar 26, 2019

Written by Lucy Lin, Deputy President of the University of Sydney Business Alumni Network & Founder and Chief Marketing Officer for Forestlyn.com

It’s 2019, and artificial intelligence (AI) is everywhere and here to stay. AI is dominating our homes, our offices and even the great outdoors; there is no escaping it in an increasingly digital world. Our everyday lives are shaped by AI, from smart email categorisation to how car loans are decided, to online product recommendations and interactions with smart personal assistants such as Siri, Cortana, Google Assistant and Alexa.

Machine learning is a core component of AI: it lets a system identify patterns in streams of input data and improve at a task without being explicitly programmed for every case. Humans still shape that learning by choosing the algorithms, setting the rules and parameters and assembling the large datasets the machines learn from, which raises several questions: who is checking the humans’ work? Will their personal biases also appear within the AI? And can we trust the data within the datasets? In today’s world, the biggest challenge of AI and big data is an ethical one.

At a University of Sydney Business School event, a panel of AI experts was asked about the ethics of AI and big data. Dirk Hovorka, Associate Professor of Business Information Systems at the University of Sydney Business School, pointed out that data is rarely collected by just one company or organisation for a single purpose. Ethical data practice, he says, should revolve around the specific use cases of people: it is an opportunity for engineers to be more people-centric and to think more about the end user. Data can easily be manipulated, so having moral considerations in place is essential.

Penny Wong, co-founder of AI startup Radmis, believes that sense checks are constantly required when developing the technology. A self-confessed optimist, she sees the current state of AI as a “2-year-old baby that we will need to raise and teach values, human morals, ethics and laws that govern the AI in its country of origin.” For Dirk, the image of a child implies consciousness, which he disputes, as machines cannot feel emotions the way humans can.

I believe Dirk’s view may hold in the short term but could change in the long term, as robots become more human-like and are fast becoming surrogate carers for the elderly and the lonely. In Japan, robots for communication, health monitoring, cleaning, laundry, exercise and assisting bodily movement are dominating the nursing home and domestic care industry (a potential $3.8 billion market), making life easier for nursing home staff and providing comfort to residents. In a country with a large ageing population and a dwindling workforce, robots are filling a crucial gap in the skilled workforce. As a result, we need to ensure that robots using AI and big data are as ethical as possible if they are to be trusted with the most vulnerable members of society.

For Dr. Kyusik (Mav) Kim, Head of Advanced Analytics and AI at Westpac, it is crucial that strong governance mechanisms for data are in place. He says establishing an ethics committee and consulting with the public are important to ensure data is collected and used in the right way, and at the same time to gain the trust of the public and end users.

Below are some additional suggestions I recommend for minimising bias and increasing fairness in your datasets:

Embrace diversity: Overcoming bias starts with building a team of varied genders, age groups, skill levels, nationalities, mindsets, personalities, backgrounds and industries to avoid “groupthink”. In addition, using a diversity of datasets, triangulating one dataset against another and applying different modelling techniques will further minimise bias when examining data.
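To make this concrete, here is a minimal sketch (not from the original article) of the kind of cross-check a team might run: comparing a model’s decision rate across demographic groups in a small, made-up loan dataset. The column names, data and 0.8 threshold are all illustrative assumptions.

```python
# A hypothetical bias check: compare a model's approval rate across groups.
# Column names, data and the threshold are illustrative only.
import pandas as pd

# Hypothetical scoring results: `approved` is the model's decision,
# `group` is a sensitive attribute used only for auditing.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group; large gaps suggest the model (or its training data)
# treats the groups differently and warrants a closer look.
rates = results.groupby("group")["approved"].mean()
print(rates)

# Flag the disparity if the lowest rate falls far below the highest
# (the 0.8 cut-off loosely echoes the "four-fifths rule" used in hiring audits).
if rates.min() / rates.max() < 0.8:
    print("Potential disparity between groups - review the data and the model.")
```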

Say something and speak up: We need to encourage individuals to report organisations that have bad data practices or abuse our data rights. Power belongs to the people: if there are enough voices, we (the masses) will be able to bring about change that cannot be ignored. In a famous recent example, Facebook’s Cambridge Analytica data debacle led to vast public outrage and boycotts of the social media network, which finally forced the company to implement a privacy policy that is more transparent and places more value on individual data rights.

Implement remediation measures with your data: Have a clear plan of action to consciously monitor for and remove any data that undermines impartiality and fairness.
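One possible form such a remediation pass could take (a hypothetical sketch, not a prescription from the article) is an audit that flags features acting as proxies for a sensitive attribute so they can be reviewed or dropped before training. The column names and the threshold below are assumptions for illustration.

```python
# Hypothetical remediation pass: flag features that act as proxies for a
# sensitive attribute so they can be reviewed or removed before training.
import pandas as pd

data = pd.DataFrame({
    "postcode_income_rank": [1, 2, 2, 8, 9, 9],   # candidate proxy feature
    "years_employed":       [3, 5, 2, 4, 6, 1],
    "sensitive_attribute":  [0, 0, 0, 1, 1, 1],   # used only for the audit
})

PROXY_THRESHOLD = 0.8  # arbitrary cut-off for this sketch

for column in data.columns.drop("sensitive_attribute"):
    correlation = data[column].corr(data["sensitive_attribute"])
    if abs(correlation) > PROXY_THRESHOLD:
        print(f"'{column}' is highly correlated with the sensitive attribute "
              f"({correlation:.2f}); consider removing or reworking it.")
```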

Privacy by design: The European Union’s General Data Protection Regulation (GDPR) on data protection and privacy for all individuals is bringing about positive changes in how individuals’ data is protected and raising awareness of privacy. With GDPR, individuals now have the right to choose whether or not to give their data to companies; it is no longer a decision made solely by the companies themselves.
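As a small illustration of what privacy by design can look like in practice (a hypothetical sketch, not a GDPR compliance recipe), the snippet below excludes every record from analytics unless the user has explicitly opted in; the field names are invented for the example.

```python
# Hypothetical privacy-by-design sketch: analytics only ever sees records
# with explicit, revocable consent; exclusion is the default.
from dataclasses import dataclass
from typing import List

@dataclass
class UserRecord:
    user_id: str
    email: str
    consented_to_analytics: bool  # captured at collection time, can be withdrawn

def records_for_analytics(records: List[UserRecord]) -> List[UserRecord]:
    """Return only users who have opted in; everyone else is excluded by default."""
    return [r for r in records if r.consented_to_analytics]

users = [
    UserRecord("u1", "a@example.com", True),
    UserRecord("u2", "b@example.com", False),  # never reaches the analytics step
]
print([r.user_id for r in records_for_analytics(users)])  # -> ['u1']
```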

Use of blockchain technology: Blockchain’s decentralised model makes data more secure, as it is stored across multiple locations in different cities and countries. The immutability and transparency that blockchain provides can give users back their right to privacy. Giving users complete control over their personal information will ease privacy concerns and allow them to be the rightful owners of their data, deciding how (or whether) to monetise it.

As a relatively new topic within emerging technology, the ethics of AI and big data still raises many questions. It will require striking a happy medium between human and machine. While the laws and regulations guiding AI are still in their infancy, it is up to us to question whether the data we are using is correct and whether we trust its source; the reputation of that source becomes incredibly important in how we form this trust. As AI continues to improve and evolve, we will need to keep asking whether what we are experiencing is authentic or shaped by human bias. Eliminating bias within data and AI is what we hope to achieve, and shedding more light on the importance of ethics and ethical practices will hopefully ensure that AI is free from human bias, that its data is fair to the whole population and that it is never used unethically.

Originally published at digileaders.com on March 26, 2019.
