Two Very Different Examples of Computer Vision

Alex Chapman
6 min read · Jan 10, 2024


While scrolling through LinkedIn today, I encountered two posts about computer vision that provoked very different reactions.

Whenever I see a new application of AI, I tend to think immediately not just of its potential benefits but also of the ways it could be misused or turned to nefarious purposes, and I don't think I'm alone in this. These two posts provide a good example of both the pros and cons of this technology.

SightBit AI Lifeguard

SightBit is an Israeli start-up founded to prevent drownings using computer vision. It proposes placing cameras at strategic locations along beaches, where its AI monitors the video feed and alerts lifeguards to swimmers who may be in danger.

From a video posted by SightBit on LinkedIn

Since a person can only focus on one area at a time, it's possible a lifeguard could simply miss a drowning swimmer. SightBit proposes that its technology would broaden lifeguards' effective field of vision, alerting them to potential danger and allowing them to save more lives.

Obviously there is potential for issues here. Aside from the most apparent one, that some beachgoers may not be keen to appear on camera, it is important to consider what data any AI has been trained on and what existing biases might be baked in. The video shows a largely ethnically homogeneous group of people, and it makes me wonder how well the system would perform on someone who looks different. For example, it is well documented that facial recognition technology performs worse on darker skin tones, and worst of all at detecting and identifying women with dark skin.

Despite research from institutions like Harvard, Microsoft, and the US government, and projects such as Gender Shades, which aim to shed light on this issue, the problem persists. An AI that purports to protect people from drowning but fails to notice when someone with dark skin is drowning could be disastrous. There is also a risk, as with any emerging technology, that users become over-reliant on it. A lifeguard might become focused only on alerts, forgetting that the AI could miss more subtle warning signs: a mother looking agitated as she searches for her lost child on the beach might be ignored by the AI, but would likely put a trained human observer on alert.
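The remedy that Gender Shades points towards is conceptually simple: evaluate performance per subgroup rather than in aggregate. The minimal Python sketch below uses entirely hypothetical numbers (not SightBit's data or any real system's output) to show the idea: a detection rate that looks acceptable on average can hide a dangerous gap between groups.

# A minimal sketch of a disaggregated evaluation, in the spirit of Gender Shades:
# instead of reporting one overall accuracy, break detection performance down
# by demographic subgroup. All records below are hypothetical.
from collections import defaultdict

# Each record: (subgroup, actually_in_distress, model_detected)
eval_records = [
    ("lighter-skin", True, True),
    ("lighter-skin", True, True),
    ("lighter-skin", True, False),
    ("darker-skin", True, True),
    ("darker-skin", True, False),
    ("darker-skin", True, False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, actually_in_distress, detected in eval_records:
    if actually_in_distress:
        totals[group] += 1
        if detected:
            hits[group] += 1

# Per-group true-positive rate: a large gap between groups is a red flag,
# even when the overall average looks acceptable.
for group in totals:
    rate = hits[group] / totals[group]
    print(f"{group}: detection rate {rate:.0%} ({hits[group]}/{totals[group]})")

In practice, a check like this requires an evaluation set that actually includes a diverse range of people, which is precisely the point.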

While these risks are not trivial, overall I felt excited to see this technology. The potential to save lives that might otherwise be lost is extremely valuable, and it illustrates one of the many positive use cases for computer vision.

NeuroSpot Baristaeye

This post about a coffee shop that monitors its workers has been floating around various data science and AI learning groups on LinkedIn for a while. After a little digging, it appears to be a sped-up version of a YouTube video posted by NeuroSpot, a company that specialises in computer vision for retail settings. There's no evidence that a real coffee shop is actually using the technology; instead, the video seems to serve as a demonstration of NeuroSpot's capabilities.

From a YouTube video posted by NeuroSpot

I felt somewhat less excited by this application of computer vision. The apparent purpose is to limit the amount of time customers spend in the coffee shop, and to monitor employees against a strict, possibly arbitrary measure of productivity. I was reminded of my time in food service during university, where taking even a spare moment to catch my breath after a long day or a difficult customer would see my manager shoving a rag into my hands and reciting 'If you can lean, you can clean', directing me to wipe down an already sparkling table for the third time. Often, the pressure on employees is to appear busy when their time might be better spent taking a brief rest or having a moment to think and plan. I imagined what that job would have been like had I been monitored by a computer vision AI, and I can't say I imagined anything pleasant.

The use of AI to monitor employees has increased since lockdowns shifted many people to working from home, and has become enough of a concern that the UK's Culture, Media and Sport Committee released a paper on it. In that paper, the committee quotes researchers who point out that:

‘Instances where the micro-determination of time and movement tracking through connected devices, which had been introduced to improve productivity, such as in warehouses had also led to workers feeling alienated and experiencing increased stress and anxiety.’

In the US, 81% of workers surveyed said that 'AI monitoring makes them feel like they're being inappropriately watched', according to a report by CNBC. And while many are still weighing the ethical and legal arguments for and against AI employee monitoring, the use of new monitoring technologies doubled during the pandemic, according to a Reuters report citing the Wall Street Journal.

Personally, I think monitoring keystrokes and the number and frequency of messages is as pointless as it is harmless: I don't think either is an accurate measure of productivity for most desk jobs, nor do I worry about losing my job because of it. The application of such monitoring in low-paid retail and food service jobs, however, seems to me to carry a far higher risk of misuse. I return to the Culture, Media and Sport Committee paper, which notes that workplace culture has a huge impact on how this type of monitoring affects employees: 'the key difference is the nature of the employer/employee relationship and its inherent power imbalance'. As anyone who has ever worked as a barista will know, the power imbalance between a large coffee chain and its minimum-wage, part-time staff is extreme. This is yet another case where technology, instead of improving lives and productivity, only serves to further entrench existing bad practices.

Are we heading towards an AI utopia, free from accidental drownings and with our coffee always ready on time, or an AI dystopia, where only certain types of people are rescued and coffee shop employees have their pay docked by an uncaring computer vision system? In the end, I think it comes down not to the technology available but to how we use it. Are we considering all the possible pitfalls of this technology and the ways in which it could fail at, or even contravene, its intended purpose? Are we ensuring we don't build pre-existing biases into it by testing it on a diverse range of people? Are we using it to further alienate or disempower people who are already overworked and undervalued? These are the questions it is essential we ask of these developing technologies.

Thanks for reading to the end! I'm Alex, a data analyst, data storyteller, and student data scientist pursuing a master's in Data Science and Artificial Intelligence at a university in London. I'm particularly interested in the social and ethical impacts of AI, as well as the creative and sometimes not-so-friendly applications of the technology. Please leave a comment and follow me if you'd like to read more!
