Ethical Architecture: Should IT Architects Address Ethical Aspects of Technology Implementations?
In the early 2000s, I worked at a company that created chatbots. The aim of the company was to provide the best possible answer to questions asked by visitors to the website on which the chatbot was hosted. It tried to achieve this by using natural language recognition, combined with an understanding of the context of the questions asked.
Bots were aimed at “information-intense” organisations: back in those days, unsurprisingly, insurance companies, banks and government agencies. But how did you market these bots properly? As replacements for call-centre employees (with the Pareto principle as a guide, providing a way to answer the easiest and most frequently asked questions automatically), or as a personification of a brand image, a more gimmicky approach?
Getting to the right answer was always a game of catch-up. The quality of the engine increased rapidly, but only because it was continuously fed with answers to questions it did not get right the first time. The question was, when would the engine reach a state of “good enough”?
Fast forward a decade and a half. Chatbots are omnipresent, though if you do not use Facebook, you might not run into them that much. But what purpose do they serve? I think they provide just another way of accessing the vast amount of data that is starting to form everywhere, including in our households. Other such ways include voice recognition in the personal assistants embedded in the devices we use every day, such as Google Assistant, Apple's Siri and Microsoft's Cortana, or devices specifically built for this purpose, such as Amazon Echo. Or even specialised AI, such as IBM Watson.
The underpinning theme seems to be that when things get too large to comprehend or make sense of, we increasingly rely on automation and machine learning. From a household perspective, the interface to the data we store will probably allow us to move from relatively simple questions (“Hey Siri, how many photographs did I take in Barcelona last year?”) to more complicated ones (“What is the name of that church I took so many pictures of?”) within the next few years. Maybe even sooner? This will increase our sense of control over the data that surrounds us, but I have yet to see actual, practical implementations. I would prefer a certain amount of control over the fridge (and my life) before it starts to restock itself.
The next revolution, so we are promised, is artificial intelligence (with machine learning, or even automated automation, among its approaches). We will no longer need to tell a specific device to turn the lights on; the device will predict when to do just that, based on studying, learning and understanding us. Predictability is the keyword here. An important question underlying this potential revolution is: how predictable do we want to be?
Rewind two decades: I obtain my degree in Arts and Sciences. That study provided me with a holistic view of society (and why it works the way it does) from various perspectives: the Arts, Philosophy, Sociology and History, to name but a few. One sub-field of that study is the Philosophy of Technology. Ever since the ancient Greeks, the study of technology from a philosophical perspective has taken place in an ever-changing, continuously industrialising environment. A common thread through the ages has been that technology mimics the world that surrounds us and, to a certain extent, tries to improve it.
In the tradition of the Philosophy of Technology, the potential adverse effects of technology on society are attributed to the user of technology, as technology in itself is considered neutral: it can be put to either good or bad use. Recently (where “recent” should be considered in a philosophical way), this view has met with severe criticism. The ethics of technology have since become a prominent research topic.
Fast forward back to today. It feels like we forgot, somewhere along the way, that we need to keep in the back of our minds — without halting or hindering progress — what it is we are trying to achieve with technology implementations, our ever-increasing hoarding of data and our desire to make sense of everything (or have a machine make sense of it).
I think we should consider the increasing influence of ethical aspects in technology implementations. A good place to start would be within Architecture, in a sub-field that — for lack of a better term — could be called Ethical Architecture (not to be confused with the Tom Spector book from 2001). Maybe as an extension of Information Architecture, because it shares some similarities. It should, however, not be instrumental.
Why? It is not enough to describe how various collections of data will be combined into a single data lake, or what measures will be taken to protect the privacy of the individual. The underlying questions to be answered are why we want a data lake and why privacy needs protecting. We then need to formulate governing principles around these answers. This is the job of the architect.
I realise it could be considered a small thing when someone notes, in a paragraph of an architecture document, that a certain design decision could have an ethical consequence. But it is a start, and something we can do without much effort.
