Speculative Futures Meetup: Ideating Counter-Surveillance Wearables
Workshop report of ‘Ideating Counter-Surveillance Wearables’, led by Theo Ploeg and Mick Jongeling of Speculative Futures Amsterdam at the Digital Society School on 19 February 2020.
Speculative Futures is an international community of chapters focused on Speculative & Critical Design, Design Fiction, Futurism, Strategy & Foresight, and any vision or approach that uses design as a vehicle to speculate about potential or alternate futures. Speculative Futures is an effort steered by the Design Futures Initiative. Instead of focusing on problems in the present, Speculative Design can help anticipate the future, challenging prevailing structures and our relationship with our environment. This approach requires considerable foresight and expertise in a field that is still quite undefined and isolated. Speculative Futures Amsterdam holds a meetup every 3–4 months. In these meetups, Theo and I aim to give a stage to methods and prototypes that inspire thinking, raise awareness, provoke action, spark discussion, and, crucially, find the alternatives most needed in the world.
Introducing the theme: Counter-Surveillance Wearables.
This night was all about ideating wearables that hide the user from surveillance techniques in the context of smart cities. We have seen a rise in speculative work that questions the motives of smart cities and, even more, a spike in the need for this work as protests take place in cities around the world.
The friction created as law enforcement and citizen science adapt to more efficient and scalable versions of control brings an interesting dynamic to the freedom of citizens. There seem to be no public laws that forbid the use of facial recognition software in the public domain, and therefore municipalities and companies treat it as a playground for testing these algorithms. The algorithms deployed are biased due to their training data. Releasing them into the public sphere should add richer data to the models and improve their accuracy. In this way, people have become the guinea pigs in larger experiments. Even though this is being rebelled against, cities have always relied on studying their inhabitants to improve themselves. When time-sharing was created in the early days of network-based computing, it was meant for debugging. Developers could enter other operating systems to scroll through code, reading line after line. This process was appropriately named ‘Peeping’.
When the Internet entered the market as a consumer product, the investments made in the network were paid for by the companies providing services. To earn back these investments, they created what is now known as ‘Free Labour’: users would test the services for free, and companies would collect data to improve those services. While the notion of free labour seems to have been lost in network-based computing, we are becoming more aware of it in our physical world. With digital technologies embedded in physical tools and services, using any product seems to require a complicated set of terms and conditions. However, these terms and conditions apparently are not deemed necessary when it comes to living in the city.
Introducing the method: Value Columns.
We ideated with a hybrid adaptation of the “Mash-Up” method. I adapted the method to dig a little deeper into the values behind certain decisions. Participants were divided into groups of 5 to provide different perspectives and attitudes. The instructions for Value Columns go as follows:
- In the first column, list as many active counter-surveillance measures as you are currently taking to protect yourself or your online identity. For example, one could write down two-factor authentication and a fingerprint lock on a smartphone.
- In the next column, write down the Why behind these measures. What are you trying to protect, conceal or abstract? Team members are asked to probe deeper when they feel a Why is not formulated clearly. This encourages a closer look into the motivations behind certain measures. Two-factor authentication could reveal that we want to verify our consent with another device, and the fingerprint lock utilizes biometric data to verify our identity.
- The third column is about listing as many technologies as the group knows. This can be as simple as implants and goggles, but can also mean drones, clothing and smartphones.
- In the final column, participants take two answers from the Why column and one from the Technologies column. Together, these make up the design criteria.
- Ideate with these design criteria. To continue our example: verifying consent with multiple devices + biometric data + implants = chips in multiple limbs of your body. The software requests random limbs to verify that it is actually the user. It is like Twister, but with authentication.
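For readers who like a concrete handle, the final two steps can be sketched in a few lines of Python. This is purely a hypothetical illustration of the column mash-up; the example lists are made up for this sketch, not actual workshop output:

```python
import random

# Hypothetical column entries (illustrative, not actual workshop output)
whys = [
    "verify consent with another device",
    "use biometric data to prove identity",
    "hide location history",
]
technologies = ["implants", "goggles", "drones", "clothing", "smartphones"]

def design_criteria(whys, technologies, seed=None):
    """Combine two answers from the Why column with one technology,
    as described in the final Value Columns step."""
    rng = random.Random(seed)
    picked = rng.sample(whys, 2)     # two distinct Why answers
    tech = rng.choice(technologies)  # one technology
    return picked + [tech]

criteria = design_criteria(whys, technologies, seed=1)
print(" + ".join(criteria))
```

Each run yields a fresh combination of motivations and a technology, which the group then ideates around, just as in the workshop.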
Groups were asked to come up with as many ideas as possible within 30 minutes. They were encouraged to imagine a situation in which these technologies are fully accepted by society to wear, carry or possess.
Clustering the ideas into domains of ethics
The ideation phase delivered laughter, astonishment and plenty of clever possibilities for disrupting surveillance technologies in our daily lives. The teams were then asked to gather in front of three diagrams, each presenting a binary choice of domains in which an invention could be classified. The domains were:
- Humanizing — Dehumanizing: whether the technology allows us to behave more as free individuals or suppresses the expression of an individual or a group.
- Low-Tech — High-Tech: the feasibility of recreating the concept. Can it be made from “dumb” materials or does it need programming?
- Technical — Social: the implementation of the concept. Is it something only an individual with the right skillset can use, or something that can be used as a group and is easy to share?
These domains divided the ideas and gave room for some reflection on the zeitgeist of the participants and the challenge of creating ethical technology. The majority of the ideas were spread between Dehumanizing, Low-Tech, High-Tech and Technical.
The conclusion we have drawn from this is that it is a challenge to create technology that gives our humanity or self-expression back, and technology that empowers the group rather than the individual.
Applying the Future Cone
The groups were asked to cluster the ideas into the Future Cone. This diagram was appropriated to sort the ideas into 5 separate areas:
- Potential: impossible or unrealistic.
- Possible: could happen, but not desired.
- Plausible: desired, but not feasible.
- Probable: feasible, but not desired.
- Preferable: should happen and is desired.
Fictional World Building & Sharing
Groups were challenged to come up with a story describing how their concept could change daily life. We do not want to share the outcomes of their inventions because, ideally, they would help protect our data and prevent data exhaustion.
Curious to see more? Here are some of the counter-surveillance technologies that were ideated. As an extension of the workshop, they are given without context. Maybe they can inspire an image in your head of a real possibility in the future.
- Hairpin distorting your image
- IWatch jailbreak to send incorrect info
- Flitsmeister for Facial Recognition cameras
- Party glitter that triggers influencers’ Instagrams.
- Modular face prostheses to reconstruct your point cloud.
- QR code face filter that directs traffic away from the used platform to another.
- Pay your way into anonymity: cookie blocker premium
- A phone that doesn’t connect to the internet.
- A hammer.
- Retina changing chip.
- Bio-implants that alert you when someone with the same activist goal is in your proximity.
Would you like to participate in our meetups? Join our meetup group: https://www.meetup.com/Speculative-Futures-Amsterdam/
See you in the futures!