Ethical Considerations for Computer Vision

Joshua J Morley GAICD
Feb 4, 2022

Innovation is creativity applied with technology — Joshua J S Morley


With each passing day, more and more innovative uses for computer vision become apparent. But in the words of Jurassic Park’s Dr Ian Malcolm: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Computer vision is a powerful technology, with use cases ranging from improving the detection of lung cancer and identifying defects in solar panels to detecting disease in agriculture or aquaculture early enough to protect the rest of the yield from spoilage.

But there are many applications of computer vision technologies that can be deemed unethical, possibly putting the safety and privacy of people at risk. Many regions around the world are struggling to keep legislation up to date with novel technologies. To ensure you are considering ethical as well as technical requirements, there are some simple guiding principles you can apply to evaluate your current computer vision offering, or when designing new computer vision solutions.

In November 2021, Adelaide City Council passed a resolution to ban South Australia Police from embedding facial recognition technology into Adelaide’s planned CCTV network until appropriate laws are developed by the state government. The approach of prohibiting public sector computer vision technologies such as facial recognition until governing legislation can be introduced is not new. Earlier in 2021 the Australian Human Rights Commission (AHRC) also sought a ban on the technology until legislation could be developed.

Within the private sector, however, due to the distinct lack of legislative restrictions and guidelines, there are a number of applications of computer vision that have been critiqued as unethical. These range from ethically questionable platforms such as Clearview AI, an app that uses facial recognition to find all public pictures of a subject (and links to those images), to discriminatory platforms such as “Giggle”, a ‘female only’ social media app that uses facial recognition technology to discriminate against trans women and women of colour.

With all this in mind, while we wait for legislation to protect the public, there are a number of issues that can be identified with computer vision applications, as well as some considerations to help guide its ethical use, because there are ways you can utilise the incredible power of computer vision in an ethical manner. A starting point is simply to recognise that computer vision is an inherently complex and issue-bound space; armed with that understanding, you can design your solution appropriately. A 2017 article by Mikael Lauronen surveyed the computer vision literature and identified six common themes of potential ethical issues:

1. Espionage

2. Identity Theft

3. Malicious Attacks

4. Copyright Infringement

5. Discrimination

6. Misinformation

While themes 1–3 are deliberately malicious uses, I find it interesting that themes 4–6 can seemingly be incidental results of the technology being used without thorough thought during the design process. By considering ethical and technical requirements holistically, whether you are developing new applications or already utilise computer vision, you can evaluate your systems for unintended unethical behaviour by looking for evidence of the following issues.

Issues:

Training Data Bias
There is a popular expression in data analytics and artificial intelligence: “Garbage in, garbage out.” As with any kind of machine or deep learning, before you can effectively utilise the technology (in this case to recognise objects, detect faces, perform facial recognition, or any other computer vision operation) you need to train a model, and this requires training data. This preliminary training step is where many biases (especially historical bias) are introduced into new machine learning models.

Looking back at historical datasets, it is plain to see the potential for widespread historical biases, including sexism, racism and ableism. We can look to the aforementioned “Giggle” application, which imposes historical but outdated concepts of femininity and gender.

Outdated concepts are not the only historical issue when it comes to training data. Bias can be embedded simply through the size of the available datasets for different contexts. A prime example is the combination of the historical targeting and criminalisation of people of colour in Western society with models trained on disproportionately white datasets, which produce false positives or misidentify people of colour. To this day we see huge complications from training data bias when applying computer vision technology to a racially diverse set of people.

Another example of bias originating in training data is the “HireVue” system, an AI video interviewing system that analyses video interviews for speech patterns and tone, facial expressions and movements, amongst other datapoints, in order to determine who should receive a follow-up interview. The HireVue system was featured in a report by Jim Fruchterman and Joan Mellea that focused on “Expanding Employment Success for People with Disabilities”. The report’s authors note: “[HireVue’s] method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and surviving a stroke.”

Acquisition of training, testing and input data
Whether your method of training data acquisition is ethical is another important factor to consider. For instance, using a web scraper and pulling images from social media sites may not only be unethical but could very well be illegal depending on your location. Is the subject of the images or video footage aware you will be using their likeness, and have they consented?

The aforementioned company Clearview AI scrapes images from websites such as Facebook and Instagram without the subjects’ consent, an act that is currently legal in Australia due to outdated laws; legal battles are, however, taking place around the world between the company and different countries or subdivisions with different legislation. Legal or not, you would be hard pressed to find a single individual who consented to their images being scraped from their social media pages and repurposed for corporate profit.

It wouldn’t be an article about ethics and technology without Meta (Facebook at the time) being included as one of the villains. Meta scraped 1 billion Instagram images in order to train a model, but interestingly excluded EU images, a move many have speculated was a by-product of the General Data Protection Regulation (GDPR), the EU regulation focused on data protection and privacy.

Accuracy and Reliability
Whilst accuracy and reliability do tie into (ethically sourced) training data, ensuring the data is curated and modelled correctly is vital to the resulting computer vision solution. Poor data curation and quality can negatively impact system users, subjects or even bystanders. Poor accuracy can have security implications (a false positive for biometric security via facial recognition), lead to prejudicial treatment of innocent persons (a false positive for criminal facial recognition) and produce the previously mentioned false negatives in interview screening for people with disabilities.
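One practical safeguard is to evaluate accuracy separately for each demographic group rather than in aggregate. As a minimal sketch (the record fields and group labels here are hypothetical), the snippet below computes per-group false positive rates from a labelled evaluation set, so disparities like those described above become visible before deployment:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute a model's false positive rate per demographic group.

    `records` is an iterable of dicts with hypothetical keys:
    'group' (demographic label), 'actual' (true label, bool),
    'predicted' (model output, bool).
    """
    fp = defaultdict(int)  # false positives per group
    tn = defaultdict(int)  # true negatives per group
    for r in records:
        if not r["actual"]:  # only true negatives can become false positives
            if r["predicted"]:
                fp[r["group"]] += 1
            else:
                tn[r["group"]] += 1
    groups = set(fp) | set(tn)
    return {g: fp[g] / (fp[g] + tn[g]) for g in groups if fp[g] + tn[g]}

evaluation = [
    {"group": "group_a", "actual": False, "predicted": True},
    {"group": "group_a", "actual": False, "predicted": False},
    {"group": "group_b", "actual": False, "predicted": False},
]
print(false_positive_rate_by_group(evaluation))
# {'group_a': 0.5, 'group_b': 0.0}
```

A large gap between groups is a signal to go back to data curation before the system touches real people.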

There have been a number of famous examples of inaccurate computer vision models producing highly offensive labels. One such example is the infamous Flickr incident, where Flickr’s computer vision misidentified people of colour as ‘ape’ and tagged Native American dancers with ‘costume’. Google was also caught out by poor accuracy when Jacky Alcine found an automatically created Google Photos album featuring himself and his friends labelled ‘Gorillas’.

Australia’s First Nations people are the most incarcerated people in the world, yet at the time of writing I was unable to find any official statistics on wrongful convictions or exonerations for Australia. From the US, however, we have known since 2016 that African Americans make up 47% of wrongful convictions that later end in exoneration, despite being only 13% of the population (figures as of October 2016). Combining this disproportionate rate of wrongful conviction with the accuracy problems computer vision models have with people of colour, we must proactively curate our data and ensure the accuracy and reliability of computer vision solutions is increased.

So what can we do?

With these factors considered, what are some ways to ethically create computer vision?

Opt-In, not Opt-Out
Ensure that your training data, testing data and final input data are ethically sourced, and that data subjects have opted in to the solution, rather than pulling the data without consent. This equally applies to informing subjects that they are being viewed and analysed, and requesting their consent, when the technology is applied in currently legal environments such as public places, your home security systems, or publicly available social media images.
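As a minimal sketch of what opt-in by default can look like in an ingestion pipeline (the record structure and consent flag are hypothetical), images without an explicitly recorded opt-in are simply never admitted into the training set:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    path: str
    subject_id: str
    consent_given: bool  # explicit opt-in recorded for this subject

def admit_for_training(records):
    """Admit only images whose subjects have explicitly opted in.

    Anything without a positive consent flag is excluded by default
    (opt-in), rather than included until the subject objects (opt-out).
    """
    return [r for r in records if r.consent_given]

records = [
    ImageRecord("img_001.jpg", "subject_a", True),
    ImageRecord("img_002.jpg", "subject_b", False),  # no opt-in: excluded
]
print([r.path for r in admit_for_training(records)])  # ['img_001.jpg']
```

The design point is that consent is a property of each record, checked at ingestion, not a condition bolted on after the dataset has already been assembled.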

Curate data to reduce and eliminate bias
You can begin to eliminate data bias and remove quality issues through data curation. Have data SMEs (data owners, custodians or domain experts) manually tag the data (or remove tags). Utilise cross-validation and blind verification tests to increase confidence that bias is being reduced and eliminated. You can further enrich datasets and reduce bias by producing reports on the distribution of subject demographics and ‘scaling up’ your inclusion of data to equalise that representation (for example, a targeted increase of training data of light-skinned people in law enforcement solutions, to counteract the bias resulting from the historical targeting of people of colour by law enforcement entities).
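As a minimal sketch of such a distribution report (the group labels and counts are hypothetical), the snippet below measures each group’s share of the dataset and derives an oversampling factor for bringing under-represented groups up to parity:

```python
def demographic_report(counts):
    """Report each group's share of the dataset, plus the oversampling
    factor that would bring it up to the size of the largest group.

    `counts` maps a (hypothetical) demographic label to its number of
    training examples.
    """
    total = sum(counts.values())
    largest = max(counts.values())
    return {
        group: {
            "share": n / total,                # current representation
            "oversample_factor": largest / n,  # multiplier to reach parity
        }
        for group, n in counts.items()
    }

counts = {"group_a": 8000, "group_b": 1500, "group_c": 500}
for group, stats in demographic_report(counts).items():
    print(group, f"share={stats['share']:.1%}",
          f"oversample x{stats['oversample_factor']:.1f}")
```

Whether you then oversample, collect more data, or re-weight during training is a design choice; the report is what makes the imbalance visible in the first place.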

Delay automated decision making
Whilst the data is being curated and validated, ensure a manual verification step is always included in your processes. This ensures automated biases and false positives/negatives are not pushed through to actions, and allows you to enrich the quality of your data as inconsistencies or data quality issues are discovered through operational use.
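As a minimal sketch of such a human-in-the-loop gate (the confidence threshold and review queue are hypothetical placeholders), no prediction is turned into an action without manual verification, and low-confidence results are additionally flagged for data-quality follow-up:

```python
REVIEW_THRESHOLD = 0.90  # hypothetical confidence cut-off

def route_prediction(label, confidence, review_queue):
    """Queue every model prediction for human review instead of acting.

    Low-confidence predictions are flagged for extra scrutiny so that
    reviewers can feed data-quality issues back into curation.
    """
    item = {
        "label": label,
        "confidence": confidence,
        "needs_extra_scrutiny": confidence < REVIEW_THRESHOLD,
    }
    review_queue.append(item)  # a human decides; the system never acts alone
    return item

queue = []
route_prediction("person_of_interest", 0.72, queue)
route_prediction("panel_defect", 0.97, queue)
for item in queue:
    print(item)
```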

In this article we have explored some ethical considerations and implications of computer vision, along with advice on evaluating your current computer vision offering and items to be cognizant of when designing new solutions. By applying these three guiding principles, you can ensure you are considering ethical as well as technical requirements. As technology advances and humanity innovates, it’s vital that we, the innovators, work to eliminate historical bias and prejudice, and don’t take advantage of the lagging legislative landscape or a naïve public.

Joshua J Morley GAICD
Global Head of Artificial Intelligence, Data & Analytics (ADA), Distinguished Lecturer ADA, IoT, Immersive Technologies & Web3.0. NFP Non Executive Director.