The dangerous business of pixelated love

Enrique Dans

2 min read · Feb 15, 2024


IMAGE: A heavily pixelated icon of a kissing face (Open Clipart Vectors, Pixabay)

It was a question of when, not if: soon after ChatGPT appeared in November 2022 with its then-impressive conversational abilities, tools for creating virtual partners began to emerge: algorithms that play the role of a partner, which you can supposedly even design yourself, and with which to exchange conversations and messages.

Sold as “software and content developed to improve your mood and well-being”, such applications are, as a report by the Mozilla Foundation shows, a major invasion of users’ privacy: they extract as much personal data of every kind as possible, embed huge and unjustified numbers of trackers, and operate under extremely unclear policies that allow virtually unlimited exploitation of that data.

Attempts by providers such as OpenAI to prevent the development of such applications and remove them from their platforms run up against two difficulties: identifying functionalities often disguised as “virtual company”, and fending off accusations of “digital moralism” that mask what is really yet another, and particularly powerful, arena for the capture and exploitation of user data.

Conversations with virtual partners will inevitably include all kinds of personal information, which should enjoy special protection but which the user volunteers anyway. What happens next is so obvious it hurts: as…


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)