MyData 2019 — What Inlanders learned at the conference
The global MyData 2019 conference was organised at the end of September. It is now time for us to evaluate what we learned and how those learnings can be applied in our working context, the Finnish Immigration Service (Migri).
What was MyData 2019?
For those who haven’t heard about the conference before: MyData is a three-day conference that takes place annually in Helsinki. Speakers and participants come from all over the world to discuss the latest developments in the MyData community. 2019 has been a remarkable year for the community, because it marks the first birthday of the global MyData association. The 2019 conference was a big experience, packed with programme from early-morning sessions to late-evening restaurant hangouts. The main part of the programme consisted of sessions (usually with multiple speakers) and workshops, which ran from 9am to 6pm with multiple sessions always running in parallel, so it was impossible to attend everything.
What is MyData?
MyData is a human-centred approach to personal data management that combines the industry’s need for data with digital human rights. […] The core idea is that we, you and I, should have an easy way to see where data about us goes, specify who can use it, and alter these decisions over time. (mydata.org)
So, in our words: MyData is a concept that gives people ownership of their own data. We learned during the conference that if the concept is taken seriously, it has effects on many levels, ranging from the design of services to technical solutions and the implementation of laws, and everything in between.
According to the MyData Declaration, the MyData principles are:
- Human-centric control of personal data
- Individual as the point of integration
- Individual empowerment
- Portability: access and re-use
- Transparency and accountability
As individuals, we stand by these principles and want to help our organisation take them more seriously in the future. However, we also understand the organisational restrictions that Migri has, and therefore the implementation of MyData principles has to proceed step by step.
Recurring topics during the conference sessions were ethics, artificial intelligence and GDPR. We will get back to those.
Why are Migri and Inland interested in MyData?
There are multiple answers to this question:
Most obviously, Migri holds a lot of customer (personal) data and therefore needs to comply with the GDPR. The GDPR can be seen as the first step towards making MyData a reality: users have the right to access their personal data as well as their case data, and Migri needs to provide it upon request. Since Migri also runs multiple websites, we track certain user data there too, which we want to treat according to MyData principles.
Secondly, at Inland we have had two project ideas that relate to MyData (both are referenced in this earlier blog post). Even if neither the strong digital identification for immigrants nor the human-centred asylum seeker journey has made it further than a vision presentation, the projects are still on our minds. For us, MyData 2019 was a way of learning new things that might push these projects forward again.
Thirdly, many ethical issues relate to personal data collection and also concern Migri. Questions like: What data is actually needed? How are we sharing that information with other public organisations? What role do customers have in choosing what information they do or do not want to share with us? How do we support our customers in really understanding the consent they have accepted when using our digital service platform EnterFinland? There is also a huge ethical question about automation and the effects of automated decision-making.
What did we learn?
To describe our learnings, let’s get back to the recurring themes we discovered during the conference: ethics, artificial intelligence and the GDPR. Learnings in these areas are not always easy to separate, because the themes interact with each other.
MyData & Ethics
The MyData opening plenary keynote by Linnet Taylor from Australia raised many interesting thoughts about today’s global digitalisation and its relation to ethics. She introduced us to the term phygital, which intertwines our physical bodies and environments with the digital. By the time we are born into the physical world, we already exist in digital form: the day we are born, we already have a so-called digital twin, and a lot of information is tied to that twin. A purely physical form of living is no longer possible in present-day Western societies. With data also come issues of power. Who has the right to handle it, and who has the right to own it? Is the dystopian scenario of digital slaves just a scenario, or is it already happening? The important question that arises is: what kind of future do we want to build for ourselves and our children’s children? Linnet’s argument at the end of the presentation is worth keeping in mind: all the work we do in the context of digital data processing has a political dimension. Every decision embodies choices that shape both our future and the values it is composed of.
Honouring Children’s Rights in the Data Economy
As her very first session, Suse attended a workshop on children’s rights to their data. There, Irene Leino (Unicef Finland) pointed out that in Finland 97% of 9–18-year-olds have smartphones, but that parents are officially responsible for their online interactions. The Finnish implementation of the GDPR gives 13–18-year-olds some responsibility for managing their own data. It is very clear that parents cannot control their teenage kids, and industry representatives stated during the session that they do not recognise kids as their customers (e.g., smartphone contracts are in the parents’ names). It became clear that children need to be educated better about the traces they leave in online interactions, but also that the industry needs to recognise its special responsibilities towards children in our society. This is an ethical responsibility we all share, particularly when creating digital services.
How is this relevant at Migri? We think that Migri has the same problem as the private sector: we have many underage customers, but we usually communicate our processes only to the parents of these children (the exception being unaccompanied minors coming to Finland as asylum seekers). It would be great to do a project on how to communicate residence permit and asylum processes to children. Maybe we will find a thesis worker for this one day.
Creating data visualisations ethically
The second very interesting topic was introduced by Katherine Hepworth from the University of Nevada. She has created a workflow to help people create data visualisations in an ethical way. The starting point of her talk was this:
Visualisations are never objective and therefore potentially harmful.
For us, this was revealing: like probably many others, we create data visualisations to convince people of certain actions or to make better sense of the data. Nevertheless, we had never really thought about the ethical implications of doing so. Katherine describes the ethical data visualisation workflow in three main phases:
1. Scoping the visualisation includes identifying the goal, defining the main argument to support that goal and reviewing existing materials.
2. Preparing the dataset consists of gathering and combining the data sources and describing the data used in the visualisation.
3. Only during the “visualising the data” step does the making of the actual visualisation start. Here one reviews the ethics, refines the arguments and designs the final data visualisation.
How is this relevant at Migri & Inland? We are not often involved in making data visualisations, but when we are, we tend to just go for it. The ethical data visualisation workflow highlights the importance of stopping to think before actually starting a visualisation. It also highlights that bias already starts in data collection, while we often assume the visualisation process only starts when we look at existing data and think about how to visualise it.
Designing for introversion
This session by user researcher Arathi Sethumadhavan from Microsoft was one of the most interesting sessions during MyData 2019. Arathi started by introducing Microsoft’s Ethical Principles for AI:
- Fairness
- Reliability & Safety
- Privacy & Security
- Inclusiveness
- Transparency
- Accountability
Regarding inclusion, she also introduced the company’s Inclusive Design principles. They start from recognising the different types of restrictions people have in using their senses, and how long those restrictions last: they can be permanent, temporary or situational.
In the later part of the presentation, Arathi talked about introversion and how to include introverts in meetings. Similarly to the inclusive design guidelines, the first step is to recognise the personality types present in a meeting and then design a solution for one of them, in this case introverts. Unfortunately, no results of the experiments conducted were presented, due to company policies.
How is this relevant at Migri & Inland? One of the projects in Inland’s Service Design Ambassador training at Migri was about making remote meetings better, partly by taking different personality types into account. We think that designing with introverts in mind will make meeting time more productive for everyone. Microsoft’s Inclusive Design principles have been eye-openers in the past, and we are trying to apply them more systematically in our work.
MyData & GDPR
The GDPR and all related issues were among the big topics at MyData this year. It was probably mentioned in every other talk, so we’ll summarise only the one most relevant to our practice at Migri:
Design to encourage questioning
This is our headline for a talk by Georgia Bullen from the US-based organisation Simply Secure. She summarised that consent needs to be
- Freely given
- Enthusiastic (Have you ever enthusiastically said “Yes” to any “Terms of Service”?)
All these principles are grounded in the human-centred design process. Apart from consent itself, we need to keep in mind the situation the user is in and focus on high-risk cases (e.g., is the user very stressed when giving consent?).
Georgia argued for threat modelling when introducing new technologies, services or products: does using them affect a person’s safety? If so, we should be extra careful when designing the consent they give.
One sentence we remember from Georgia was this:
Set good defaults and allow people to opt in rather than opting out.
How is this relevant at Migri? We have to admit that we handle consent in the old-school way at Migri. As Georgia also pointed out, consent is partly a fallacy in public services, since there is no real choice when it comes to them. Nevertheless, in future projects we want to handle consent by setting good defaults and designing to encourage questioning, rather than designing for the fastest way through to the service.
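As a toy illustration of “good defaults” (our own sketch, not from Georgia’s talk, and with hypothetical setting names), here is what opt-in consent settings could look like in code: every optional data use defaults to off, so users actively choose what to grant rather than having to opt out.

```python
# Hypothetical sketch: consent settings where every optional data use
# defaults to False, so users opt in rather than opt out.
from dataclasses import dataclass


@dataclass
class ConsentSettings:
    # Strictly necessary processing is not a choice under the GDPR,
    # so it is not modelled as a toggle here. These field names are
    # illustrative, not from any real service.
    analytics: bool = False          # off by default: user must opt in
    marketing: bool = False          # off by default: user must opt in
    third_party_sharing: bool = False

    def granted(self):
        """Return the names of the uses the user has actively opted into."""
        return [name for name, value in vars(self).items() if value]


# A new user starts with nothing granted...
fresh = ConsentSettings()
assert fresh.granted() == []

# ...and must explicitly opt in to each use.
chosen = ConsentSettings(analytics=True)
assert chosen.granted() == ["analytics"]
```

The design choice is simply that the defaults carry the ethics: a user who never touches the settings has shared nothing.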
MyData & AI
Building standards as a way to regulate AI efforts
We all respect the IEEE for its work in standard setting, and it is also working to regulate AI efforts. Apart from introducing Ethically Aligned Design as a conceptual framework (the IEEE P7000 standard series), Clara Neppel also introduced the way IEEE norms are made.
How is this relevant for Migri? We found the way of thinking at the IEEE very similar to the culture at Migri: if something is not regulated and there is no law or norm for it, it often does not exist in people’s minds. During the several talks by IEEE members at MyData, different types of critique were raised. It showed clearly that thinking in norms, laws and regulation is only one way to build the future.
MyData, Chatbot & Design
This session was one of the main reasons we attended MyData 2019. Its content relates very closely to Migri’s chatbot project, and we came to learn more. The timing was especially good, since next year we want to enable Kamu (Migri’s own chatbot) to authenticate users and give them more personalised advice.
The workshop started with introductions by the three facilitators:
Estelle Hary & Regis Chatellier from CNIL, the French data protection authority, introduced their goal of bringing the GDPR closer to designers, because
the interface is the first object of mediation between the law, rights & the users.
They introduced the key GDPR concepts for designers:
1. Information should be concise, transparent, intelligible and easily accessible, using clear and plain language.
2. Consent has to be freely given, specific, unambiguous and informed.
3. The GDPR grants the right to access one’s personal data; services must also make it possible to track such a request and give feedback on it.
Laura Varisco from Politecnico di Milano identified ways in which data influences us beyond privacy concerns: namely, influence on our awareness, our actions, our relationships and our societal agency. In her research she identified 12 themes that need to be considered in the design process, most importantly access to or denial of services as a result of data collection and sharing, as well as awareness of data tracking and sharing.
The second part of the session was held in a workshop setting. Participants were first encouraged to explore the impact of personal data during a conversation with a personal voice assistant (Siri or Cortana). Secondly, our task was to design privacy statement(s) for different phases of the first-time use of a personal voice assistant.
While we had high expectations and definitely learned something about personal data and personal assistants, we had expected this session to be more relevant to our work at Migri. Shifting the focus from chatbots to personal voice assistants moved the discussion to much deeper-level issues than those presented by chatbots like our Kamu.
MyData and the people
For us Bo Harald’s talk about MyData and the people was a good wrap up of the conference. He changed the long-known mantra among user-centred designers and developers into:
Do not serve customers.
Do not serve citizens.
Serve their life events.
Reading the first two lines, it is a quite provocative slogan. The third line, however, is very much in line with recent thinking in Finland’s government programme, and especially with the AuroraAI project that is now starting.
Attending MyData 2019 was a good experience. It deepened our understanding of the complexity of the issues that make personal data collection ethically, technically and visually challenging. We also got extra value from networking with people from all around the world. However, as beginners we sometimes felt quite lost when choosing sessions, as it was very hard to know beforehand what sessions were really about: some were at such an expert level that we simply could not follow; in many, the red thread (the explicit relationship with the MyData concept) was fuzzy; sessions had one title, but only the names of the contributors were published online (not the titles of their talks), so it was hard to know what the sessions were about; and lastly, you never knew beforehand whether you were going to a talk or a workshop session (except on day 1).
We want to suggest the most important action for the next edition of the conference: make the programme more understandable and introduce a colour code (or similar) to indicate the level of MyData expertise needed to participate in a session.
Even though we have some development suggestions, we definitely think MyData 2019 was worth our time. It was three intense days of networking and learning for us. We will take many of these learnings forward at Migri, but as we work in a public sector organisation: go forward steadily, but never forget your patience.
These are some links we learned about and will use as resources:
- Ethically Aligned Design standard by IEEE: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf
- GDPR for designers: https://design.cnil.fr/en
- Database of potential issues with data beyond privacy concerns: https://public.tableau.com/profile/laura.varisco#!/vizhome/CriticalThemes/Criticalthemes
- Microsoft principles for AI: https://www.microsoft.com/en-us/ai/our-approach-to-ai
- Microsoft inclusive design: https://www.microsoft.com/design/inclusive/
Authors: Suse Miessner & Pia Laulainen