Career Recommendations or Self-fulfilling Prophecy?

Originally posted: Monday, 21 August 2017 by David Barnard-Wills, Trilateral Research Ltd, UK, on develop-project.eu

One of the roles of the ethical, privacy and societal impact assessment work conducted by Trilateral Research in DEVELOP is to explore emergent issues in the software during its development. What makes this different from an academic exercise is that we then look for ways to mitigate unwanted negative impacts, and even to increase the benefits of positive ones.

This is where the art and practice of the exercise comes in. There is often no fixed set of solutions that can be dropped into place and simply implemented. Part of this comes from the relative immaturity of the field [1]: there just aren’t that many established design or engineering patterns, although there are promising efforts in that direction [2, 3, 4].

It’s also because ethical and social impacts are highly dependent on social context. Not all recommendations are created equal (there is a big difference between recommending someone’s next book and recommending their course of medical treatment, for example), and flows of information that are perfectly appropriate in one context can be experienced as highly invasive and downright creepy in another. [5] Engineering a mitigation measure requires understanding this social context and how technological changes may interact with it.

As an example, one issue raised by our consultation workshop and our internal review was:

“How do you prevent the recommendations from the system (about career goal, or suitable learning interventions) and its assessments (for example about a user’s management style) becoming self-fulfilling prophecies?”

We’ve seen this in education when the idea of learning styles was popular: a child could be identified as having a preferred learning style, and would then tend to encounter more material in that format, strengthening and reinforcing that style at the expense of others. [6] The ethical risk is that we are potentially limiting human agency and autonomy. This opens us to the sort of critiques directed at “nudge theory” [7], and places us in a similar space to the ethics of search. [8]

It’s a tricky question to respond to, because we’re well outside the realms of legal obligation, and the time horizons are too long to really test within the project. (Also, it’s difficult to ask people “how much did you feel your choices were predetermined?”)

You can’t fully guarantee against some steering effects, but within DEVELOP we’ve taken the following steps.

1) Tell me first: Putting the employee at the centre of the flows of information.

The information flow is mostly pointed at the user. So, for example, learning interventions and career goals are recommended to the user themselves, rather than to their manager on their behalf. This should reduce some of the determinism, as the user has their own goals more firmly in mind and is able to reflect on the recommendations if they seem incorrect or inconsistent with those goals.
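As a purely illustrative sketch of this user-centred routing (the class, field and identifier names here are assumptions for the example, not taken from the DEVELOP codebase), the rule can be expressed as: a recommendation is visible only to the employee it is about, unless that employee explicitly shares it with someone else, such as a line manager.

```python
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    """A career-goal or learning recommendation addressed to one employee."""
    subject_id: str                 # the employee the recommendation is about
    text: str                       # e.g. "Consider the 'Leading small teams' course"
    shared_with: set[str] = field(default_factory=set)  # viewers the subject opted to share with

    def share_with(self, viewer_id: str) -> None:
        """The subject explicitly opts in to sharing, e.g. with a line manager."""
        self.shared_with.add(viewer_id)

    def visible_to(self, viewer_id: str) -> bool:
        """Default flow: only the subject sees the recommendation."""
        return viewer_id == self.subject_id or viewer_id in self.shared_with


rec = Recommendation(subject_id="alice", text="Consider a project-management career goal")
assert rec.visible_to("alice")             # the employee sees it first
assert not rec.visible_to("bob_manager")   # the manager does not, until...
rec.share_with("bob_manager")
assert rec.visible_to("bob_manager")       # ...Alice chooses to share it
```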

2) Don’t lie to me: Qualified claims.

We place caveats about the accuracy of the system in the supporting documentation for users and managers, but also, importantly, in the user interface itself. It is tempting to upsell DEVELOP as a perfect algorithm, but it is potentially dangerous to do so. The aim is to counter the impression that DEVELOP is an all-seeing, all-knowing system, and to present it instead as an assistant to a human being who is making a decision for themselves.
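A minimal sketch of what a qualified claim in the interface might look like (the wording, the confidence bands and the function name are illustrative assumptions, not DEVELOP’s actual UI copy): the suggestion is surfaced as a hedged statement with an explicit caveat, rather than as a verdict.

```python
def render_recommendation(title: str, confidence: float) -> str:
    """Format a recommendation as a hedged suggestion rather than a verdict.

    `confidence` is whatever score the recommender produces, normalised to 0..1;
    it is reported as a rough band, not a precise figure, to avoid false precision.
    """
    if confidence >= 0.75:
        band = "a strong match"
    elif confidence >= 0.5:
        band = "a possible match"
    else:
        band = "a tentative suggestion"
    return (
        f"Based on the information you have provided, '{title}' looks like {band} "
        "for your goals. This is an automated suggestion and may be wrong; "
        "the decision remains yours."
    )


print(render_recommendation("Leading small teams", 0.62))
```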

3) Don’t nudge me: The options to turn off recommendations.

DEVELOP filters the list of available courses and learning interventions based upon various assessments and the user’s career goals. However, by providing an option to see the unfiltered list of courses, the user retains the ability to exercise autonomy. You obviously lose some of the functionality, but you can trawl through the whole catalogue if you want to, and see what you are not being recommended.
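As an illustration of that option (the function and parameter names are assumptions for the sketch, not DEVELOP’s API), the catalogue query can take an explicit flag that bypasses the filter and returns everything, while still marking which items would have been recommended, so the user can see what they were and were not being steered towards.

```python
def list_courses(catalogue: list[dict], recommended_ids: set[str],
                 show_all: bool = False) -> list[dict]:
    """Return the course list the user will see.

    By default only recommended courses are shown; with show_all=True the user
    gets the full, unfiltered catalogue, annotated rather than filtered.
    """
    if show_all:
        return [dict(c, recommended=(c["id"] in recommended_ids)) for c in catalogue]
    return [c for c in catalogue if c["id"] in recommended_ids]


catalogue = [
    {"id": "c1", "title": "Leading small teams"},
    {"id": "c2", "title": "Negotiation basics"},
    {"id": "c3", "title": "Advanced spreadsheets"},
]
print(list_courses(catalogue, {"c1"}))                 # filtered, recommended view
print(list_courses(catalogue, {"c1"}, show_all=True))  # full, unfiltered view
```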

That being said, we do strongly recommend an ethical and privacy impact assessment; by considering this risk and others like it from an early stage, and by being part of the design team, we’re able to address these risks within the core design rather than having to rely on messy fixes.


References

[1] https://blog.xot.nl/2017/08/02/the-challenges-of-privacy-engineering/

[2] Ian Oliver, Privacy Engineering, Sipoo, 2014.

[3] http://pripareproject.eu/research/

[4] https://catalogue.projectsbyif.com/

[5] Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life, Stanford Law Books, Stanford, 2010.

[6] Lee Jussim, “Self-fulfilling prophecies: A theoretical and integrative review”, Psychological Review, Vol. 93, No. 4, 1986, pp. 429–445. http://dx.doi.org/10.1037/0033-295X.93.4.429

[7] Gregory Mitchell, “Libertarian Paternalism is an Oxymoron”, Northwestern University Law Review, Vol. 99, No. 3, 2005. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=615562

[8] Alexander Halavais, Search Engine Society, Polity Press, Cambridge and Malden, 2009.