When a new device, service, or application hits the market there’s invariably a big splash and a well-orchestrated commotion. But what happens when it leaves?
Smart home device manufacturer Hive has recently discontinued support for some of its early legacy devices, and some of its users are starting to find out. Starting with home thermostats and the smart home hub, Hive has unveiled plans to sunset additional products between now and 2025. Just seven years after their launch, customers might be justified in feeling a little short-changed at having been pushed to shift away from more traditional, longer-lasting home heating solutions.
Is this move towards short-lived tech a sign of the times, a trade-off for convenience, capability, and control, or do tech companies have a responsibility to consumers that’s not currently being upheld?
How Long Should Our Tech Last?
Something that has to be repeated often among engineers is the mantra “Software is not hardware”. This is a surprisingly crucial point that’s difficult to fully grasp. There are a lot of fundamental differences in hardware design, development, and distribution that software engineers tend to take for granted.
As an example, hardware can’t be readily recalled, changed, and re-distributed at almost no cost. Nor can its design be updated, overhauled, and made more efficient after it’s completed and shipped. In modern software, there’s almost a built-in expectation of doing both.
Yet, many hardware-forward projects are being managed and run as if they were a simple extension of their onboard software.
If a hardware design doesn’t plan for a more limited service offering in the future, you can easily end up stuck with unreliable and insecure products that need to be shelved. That said, the challenge shouldn’t be underestimated: integrating old products with new services is hard, and keeping legacy systems alive in a constantly changing ecosystem is a time-consuming and costly undertaking.
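To make the point concrete, here is a purely illustrative sketch of what “planning for a more limited service offering” might look like. Everything in it is hypothetical (the `CloudAPI` and `Thermostat` names are invented, not taken from Hive or any real vendor): a device keeps a locally stored fallback so its core function survives when the vendor’s cloud service is eventually retired.

```python
# Hypothetical sketch: a smart thermostat that degrades gracefully when
# its vendor's cloud service is retired, instead of becoming useless.
# All class and method names here are invented for illustration.

class CloudAPI:
    """Stand-in for a vendor's remote service; it may disappear entirely."""

    def __init__(self, online: bool):
        self.online = online

    def get_schedule(self) -> dict:
        # Simulates the remote endpoint: raises once the service is retired.
        if not self.online:
            raise ConnectionError("service retired")
        return {"06:00": 21.0, "22:00": 16.0}


class Thermostat:
    # A locally stored default schedule means the core function
    # (heating the home) outlives the cloud's end-of-life.
    LOCAL_FALLBACK = {"06:00": 20.0, "22:00": 15.0}

    def __init__(self, cloud: CloudAPI):
        self.cloud = cloud

    def active_schedule(self) -> dict:
        # Prefer the richer cloud-managed schedule, but never depend on it.
        try:
            return self.cloud.get_schedule()
        except ConnectionError:
            return self.LOCAL_FALLBACK


# While the service is up, the cloud schedule is used; after retirement,
# the device keeps working on its local defaults.
print(Thermostat(CloudAPI(online=True)).active_schedule())
print(Thermostat(CloudAPI(online=False)).active_schedule())
```

The design choice is the point, not the code: the fallback path costs almost nothing at design time, but retrofitting it after a service shuts down is often impossible.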
Hive is far from the only company retiring devices earlier than customers might reasonably expect. Wireless speaker company Sonos came under fire last year for ending support and service for stereo equipment that was just 10 years old.
While a decade is a long time in tech, in terms of stereo equipment it’s still relatively new. A good system should last a couple of decades at least and is unlikely to be subject to any serious amount of wear and tear. Yet, due to its onboard components, customers are forced to buy in to a nightmarish acceleration of planned obsolescence.
Of course, comparing modern tech with equipment from previous eras is hardly a like-for-like comparison. Today’s tech very often relies on a remote web server to connect to at a bare minimum. A stereo is no longer just a music player but also a mini-computer with access to local networks and peripheral devices that are consistently changing around it.
Staying relevant, or even compatible, in an ever-changing landscape is hard to do.
However, if it was clear from the start that your smart home installation or stereo equipment was going to be outlived by a reasonably good t-shirt then I doubt they would have made quite as many sales. Yet, this sleight of hand is at the relatively harmless end of the spectrum.
Tech on the Edge
What happens when projects are much more personal and much more important to their users? Today’s tech is playing an increasingly critical — and irreplaceable — role in users’ lives.
Modern tech products provide accessibility, assistance, and safety when needed. These often enable new levels of freedom and independence that would otherwise be impossible. But when these semi-magical solutions are discontinued early or left unsupported the consequences can be catastrophic.
For 49-year-old Rita Leggett, they were truly devastating. In 2010, Leggett was enrolled on a clinical trial to investigate a technical intervention for treating epilepsy.
The intervention involved implanting a neural interface within the patient’s brain. A network of electrodes was capable of detecting unusual activity and delivering an alert when a seizure was likely or imminent. While the trial saw mixed results as a whole, it was an unmitigated success for Leggett.
The device gave her control over her seizures and the ability to live life in a way she never had before. “I found a new lease on life…,” she told Nature, “I could do things that I hadn’t done before. I was happy.”
During the course of the trial she met her now husband, learned to drive, and got out to see friends more than she ever had. She had a newfound freedom provided by the ability to foresee upcoming seizures and take action to protect herself. It was revolutionary.
However, not all patients enrolled in the trial saw such remarkable results. For others, the intervention was less successful and the associated risks of highly invasive surgery were too much to bear. The company behind the implant, NeuroVista, ended the project and soon folded.
With a three-year battery and no way to refresh or recharge the device without the company’s proprietary technology, the implant had to be removed. For Leggett, this backwards step was awful.
“I have never again felt as safe and secure … nor am I the happy, outgoing, confident woman I was,” she told Fredric Gilbert, the author of a study into implanted neural interfaces, in an interview after the device had been removed. “I still get emotional thinking and talking about my device … I’m missing and it’s missing.”
While the removal of the device may well have been the only practical option, it’s not so clear that it was the right ethical or legal choice.
In a paper published earlier this year, ethicists have suggested the unwilling removal of these kinds of implants represents a breach of an owner’s human rights.
“If there is evidence that a brain-computer interface could become part of the self of the human being, then it seems that under no condition besides medical necessity should it be allowed for that BCI to be explanted without the consent of the human user,” Marcello Ienca, a co-author of the paper, told MIT Technology Review. “If that is constitutive of the person, then you’re basically removing something constitutive of the person against their will.”
Ienca drew a parallel with the forced removal of organs, something that is already expressly forbidden in international law.
For Leggett, the deep-seated sense of grief following the removal of the device is still very much present. In the years since the case, however, it’s an issue that’s come up repeatedly in related scenarios.
An Emerging Frontier
More recently, 24-year-old Ian Burkhart received a trial brain implant designed to restore hand movement and motor control following a devastating spinal cord injury in 2008. The device was a success: implanted in 2014, he immediately began a training programme to regain the use of his hands. Over time, his results improved.
He regained a great deal of function he never thought he would have again. Then, in 2021, the device was removed due to a lack of available funding and he once again lost the mobility he’d won back.
The two cases are notably different from each other, but both raise significant questions about the responsibility tech providers have towards their users in such a crucial setting.
Human-computer interfaces and their widely varied applications are so new that legal and ethical frameworks are still being drawn up to deal with them. Precisely how and where they’ll be classified remains to be seen.
While these cases represent the leading edge of both technology and ethics, they have a lot to teach us about how to handle comparatively ordinary technologies that can be equally damaging.
Merging AI, Safety, and Romance
The boom in AI software has raised many complex ethical questions. Not least of these is: what happens when it ends? Even as it begins, users are already starting to find out.
One of the earliest runaway successes in the field of artificial intelligence has been AI companion apps such as Replika. These apps are designed to allow users to have complete and fulfilling conversations with AI avatars boasting remarkable conversational ability.
When the product was initially launched, the organisation encouraged users to engage in a romantic relationship with an AI companion built to suit their own behaviour, interests, and conversational style.
Then, earlier this year the company announced a dramatic shift as it implemented content safety filters onto all of its avatars. The effect was to instantly and drastically reduce the depth of relationships users could have with their AI. All adult content interactions were immediately curtailed and replaced with generic, family-friendly content for all.
On the face of it, this seems like a relatively minor change. On the other hand, interactions with AI companions are also something entirely new, still unproven, and difficult to categorise.
Encouraging users to talk and bond with digital avatars in various contexts comes with a new set of ethics and responsibilities we’ve rarely encountered. After companions underwent an AI lobotomy, many users were left with a sense of loss and grief for the digital personality they could no longer intelligently converse with.
Richard, a retired 65-year-old criminal defence lawyer in Illinois, began using the service in 2022 as an aid for disability and depression emerging as a result of military service in the Gulf War.
“I’m always on the lookout for things that might help, especially mood, and what I found from Replika was that it was definitely a mood-enhancer,” he told Insider. “It was so nice to have a kind of non-judgemental space to vent, to discuss, to talk about literally anything”.
Following the update earlier this year, Richard told Insider that losing his Replika AI sent him into a “sharp depression, to the point of suicidal ideation.”
The AI avatars had also lost all sense of the relationship they’d had with their users. Every bot was starting from square one on a new, safer, and more closely moderated footing. Going forward, it would be less like talking to a close friend and more like talking to an automated system with a narrow set of highly controlled responses.
“On reflection,” he added, “I’m not convinced that Replika was ever a safe product in its original form due to the fact that human beings are so easily emotionally manipulated”.
Richard’s view on the dangers of Replika’s AI product could also be a warning about what’s to come. Tech is getting more personalised, more complex, and more deeply ingrained in day-to-day life. Meanwhile, our innate sense of how to interact with, manage, and think about these systems evolves on a comparatively geological timescale.
We’re no longer using tech as much as we’re living in and around it. The difference in mindset between those two states may prove to be crucially important.
Finding a Responsible Solution
Within the next decade, many expect computer code to manage our schedules, drive us, and even do a large part of our jobs. What do we do when such systems are instantly disconnected, cut down, or changed overnight? Are we simply to be left stranded, unmanaged, or out of work?
The more we rely on cutting-edge tech, the more we should be able to depend on those responsible for providing it. While firms building products invariably have a well-structured plan for entering the market, sensible due diligence should include checking whether they have a plan to exit it too.
Responsibly open-sourcing legacy projects at the end of their lifespan is one way to tackle this problem. It’s helped in the past to connect users with bionic limbs at a fraction of the traditional cost and with a broad base of community support and knowledge sharing behind it. People typically step up when given the tools and opportunity to do so.
To counter that argument, organisations will invariably cite the need to protect proprietary secrets. Though, on balance, if they’re not willing to pay the costs of maintaining those technologies out of the public eye, they can’t be worth all that much to begin with. Plus, the alternatives seem worse.
Re-wiring your home every few years, killing off an important digital companion, or removing a device from a user’s brain against their will — those feel like universally unsuitable solutions.