Nash and the Noise: Using Iota to Mitigate Interruptive and Invasive Technology.

This informal essay is an attempt to summarize my thinking about an idea that has been gnawing at my consciousness for the last five years. I need help to either expand on it, hand it off or put it to rest. I recognise that this essay is in danger of slipping into Underpants Gnomes territory. So, let’s start by borrowing the following disclaimer from @erikphoel:

WARNING: reach may exceed grasp.

Fine. Onward!

Expecting humans to deal with the interruptions, stresses and invasions by networked machines is the equivalent of asking a goat to get through a salt-lick the size of the Eiger. In order to get around this problem, machines need to deal with the interruptions and stresses caused by other machines.

For years, we’ve been struggling to create a ruleset that can operate automatically in day-to-day activities, avoid the common errors and pitfalls, and deliver enough benefit to actually be used and appreciated by consumers. In short, human-machine consensus needs to move from implicit trust in machines that monitor humans, with open-ended risk, to explicit trust in machines that monitor other machines, with bounded risk.

Consider the case of a networked personal assistant in your home.

What we have right now, framed as Nassim Nicholas Taleb would frame it, is a limited and strictly bounded upside consisting of the delivered (not just advertised) benefits, with a theoretically unlimited downside. With most networked consumer products we have no way of knowing whether or not we have control of the device’s sensors, e.g., microphones, or its output capabilities, e.g., audio hardware, displays, blinky lights, etc. Even if the delivered benefits are known, the results of un-monitored and uncontrolled sensing and mining of our data are not.

A more desirable state would be a tradeoff between a known and bounded quality and quantity of risk in exchange for delivered benefits. Furthermore, the technology that facilitates consensus should not have its fate tied to any of the actors it interacts with.

If networked devices are the scrum, a consensus framework is the referee. The referee might make mistakes sometimes, but it makes decisions on behalf of humans, not as a function of third parties out-competing each other for the human’s attention.

So, who legitimises the referee and what rules are used? Let’s use a Nash equilibrium.

A Nash equilibrium is a set of strategies for a group of players where it’s in no one’s interest to change their strategy, given what the other players are doing. Put another way, it’s like a law that no one would break even if there were no chance of being caught. A classic example is outlined here. The rub is that it’s ultimately in everyone’s best interest to respect the equilibrium. This is what has been missing in the way devices currently operate in our environments. We need cooperative notification or intrusion that is beneficial to humans, e.g. machine listening permissions, or user-designated, agent-controlled time-gates or states within localised IoT ecosystems. In short: we have a lot of networked things that make noise or survey us in our environment, and the only way to make these networked things behave is to require cooperative action.
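
To make that concrete, here is a minimal sketch, in Python, of the “no one gains by unilaterally changing strategy” test for two devices that can either respect the user’s interruption rules or interrupt anyway. The payoff numbers are invented; the point is only that, with a referee penalising interrupters, mutual respect is the lone equilibrium:

```python
from itertools import product

# Two devices each choose to RESPECT the user's interruption rules or to
# INTERRUPT anyway. A referee penalises interrupters, so barging in never
# pays. (Payoff numbers are purely illustrative.)
ACTIONS = ("RESPECT", "INTERRUPT")

def payoff(mine, theirs):
    """One device's payoff given its own action and the other device's."""
    if mine == "RESPECT":
        return 1 if theirs == "RESPECT" else 0
    return -1 if theirs == "RESPECT" else -2  # penalty for interrupting

def is_nash(profile):
    """True if neither device can do better by changing only its own action."""
    for i in (0, 1):
        current = payoff(profile[i], profile[1 - i])
        if any(payoff(alt, profile[1 - i]) > current for alt in ACTIONS):
            return False
    return True

for profile in product(ACTIONS, repeat=2):
    print(profile, "equilibrium" if is_nash(profile) else "not an equilibrium")
# Only ('RESPECT', 'RESPECT') is an equilibrium: with the penalty in place,
# no device gains by defecting on its own.
```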

A distributed, zero-transaction-cost ledger, such as that supplied by IOTA’s Tangle, could confirm or deny such a Nash equilibrium for beneficial cooperative notification or intrusion, e.g., machine listening permissions, or user-designated, agent-controlled time-gates or states, within localised IoT ecosystems.

If you don’t want to dig through all this, one thing you need to know is that this tech isn’t based on a blockchain and doesn’t rely on Proof of Work to verify its transactions. Furthermore, by design, Iota’s Tangle gets more reliable and faster as it gets bigger. There is a lot of information online about the Tangle, and I recommend looking at the white paper, which explains the theory behind the technology, its scalability, and the functioning of the directed acyclic graph, or DAG for short.
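
For intuition only, here is a toy sketch of the Tangle’s core structural idea as described in the white paper: each new transaction approves earlier transactions, so the ledger grows as a directed acyclic graph rather than a single chain. This is a caricature of the data structure, not the real node software:

```python
import random

# Toy DAG: each entry maps a transaction id to the earlier transactions it
# approves. The genesis transaction approves nothing.
tangle = {"genesis": []}

def tips(dag):
    """Transactions that no later transaction has approved yet."""
    approved = {parent for parents in dag.values() for parent in parents}
    return [tx for tx in dag if tx not in approved]

# Simulate a few rounds of "concurrent" arrivals: every transaction in a round
# sees the same tip set, so the structure branches instead of forming a chain.
counter = 0
for _ in range(3):
    current_tips = tips(tangle)
    for _ in range(2):  # two transactions arrive at roughly the same time
        parents = random.sample(current_tips, k=min(2, len(current_tips)))
        tangle[f"tx{counter}"] = parents
        counter += 1

for tx, parents in tangle.items():
    print(tx, "approves", parents or "nothing")
```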

For example: actor A in a group of N actors dials in its audio notifications and/or sound-control apparatus to benefit its user at that time, without conflicting with other actors. What could this look like? All the stuff in your living room agrees to make noise within parameters that are user-beneficial, not single-agent beneficial. In simple terms, your home assistant only notifies you when it’s beneficial to you to have the interruption. It acts on your behalf, not its creator’s.
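
Here is a minimal sketch of what “user-beneficial, not single-agent beneficial” might look like in code. The policy fields, categories and thresholds are all invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, time

# Hypothetical user-set constraints for a living room: quiet hours, a loudness
# ceiling, and which categories of interruption are allowed at all.
@dataclass
class EnvironmentPolicy:
    quiet_start: time = time(22, 0)
    quiet_end: time = time(7, 0)
    max_loudness_db: float = 55.0
    allowed_categories: tuple = ("timer", "doorbell", "emergency")

@dataclass
class NotificationRequest:
    device: str
    category: str
    loudness_db: float

def may_interrupt(policy: EnvironmentPolicy, req: NotificationRequest,
                  now: datetime) -> bool:
    """A device may only make noise if it stays inside the user's constraints."""
    in_quiet_hours = not (policy.quiet_end <= now.time() < policy.quiet_start)
    if in_quiet_hours and req.category != "emergency":
        return False
    if req.loudness_db > policy.max_loudness_db:
        return False
    return req.category in policy.allowed_categories

policy = EnvironmentPolicy()
print(may_interrupt(policy, NotificationRequest("speaker", "flash_sale", 60.0),
                    datetime(2018, 1, 10, 23, 30)))  # False: quiet hours, too loud, wrong category
print(may_interrupt(policy, NotificationRequest("oven", "timer", 45.0),
                    datetime(2018, 1, 10, 18, 0)))   # True: within the user's limits
```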

Furthermore, and what may be an interesting new source of value: the fuzzy edges between networks become markets, as do user-specified “interruption gates.” It is at these points that devices can pay for attention.

In this way, advertisers compete for your attention at a given point in time that you or your agent has established as the allowed marketplace. It is at these intersections that the micro-brokerage of your data-exhaust could become active. Using Iota to facilitate these kinds of transactions makes sense. You could design very specific cases for when, how, to whom and for how much you share your data.
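
Here is a minimal sketch of one such interruption gate, assuming a simple sealed-bid setup in which the user’s agent has pre-declared a reserve price and the data categories it is willing to trade. The bidder names, amounts and categories are all invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    bidder: str
    amount_iota: int        # what the advertiser offers for the slot
    wants_data: set         # which data-exhaust categories it asks for

@dataclass
class InterruptionGate:
    reserve_price: int      # user's minimum price for being interrupted
    tradable_data: set      # categories the user is willing to share at all

    def winner(self, bids: list) -> Optional[Bid]:
        """Highest bid that respects the user's reserve price and data policy."""
        acceptable = [b for b in bids
                      if b.amount_iota >= self.reserve_price
                      and b.wants_data <= self.tradable_data]
        return max(acceptable, key=lambda b: b.amount_iota, default=None)

gate = InterruptionGate(reserve_price=10, tradable_data={"room_temperature"})
bids = [
    Bid("ad_network_a", 50, {"room_temperature", "voice_recordings"}),  # rejected: asks for too much data
    Bid("ad_network_b", 12, {"room_temperature"}),                      # acceptable
    Bid("ad_network_c", 8, set()),                                      # rejected: below reserve price
]
print(gate.winner(bids))  # ad_network_b wins the slot
```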

I’m guessing the markets could exist in all states. It doesn’t matter if they are inside an equilibrium — in an overlapping or undefined space, there are always opportunities for actors to negotiate the exchange of some kind of value. As long as a notification stays within the limits placed on it, there is also the possibility that another notification would bargain for the ability to be played first, or leverage the playout capability of the system it is bargaining with. From a user’s point of view, they don’t care who goes first as long as the devices aren’t stepping out of the bounds placed on them.

Why this is important

We don’t want to be the goat working our way through a mountain-sized salt-lick of machine-generated demands on our attention. Nor do we want to spend endless hours wittering away in our heads trying to make decisions on what to trust and what NOT to trust.

Because, like The Cluetrain Manifesto says:

We die.

You have a limited amount of time in your existence. Machines have more. Lots more. In fact, as they get faster they effectively get more time to play with. More cycles = time gets sliced into smaller chunks and, from the point of view of the machine, time dilates. Oh, and those are just machines based on binary computing. Maybe not as fast (yet) as the Minds of Iain M. Banks’s Culture series, who compose an entire civilisation’s worth of literature in between a human uttering the syllables “ba” and “con”; but they are still monumentally faster than you at many tasks, and getting faster, more adaptable and more complex by the day.

Consider Srigi’s tweet below. Although he’s talking about latency values in programming we see that machines operate in a dynamic range of time completely outside the envelope of human existence.

So, if we are going to leverage the power of networked machines that learn, their interruptive and invasive nature must be tamed by other networked machines. We don’t have enough time in our existence to deal with machines operating at speeds many orders of magnitude above what we call perception.

To wit:

If everything is noisy, everybody loses.

and

If many things are untrustworthy, more lose than win.

I believe that the current system is already on a bad path, and it’s becoming worse. As systems try to out-compete each other for your attention (or out-sneak you for your data) the result is a lose-lose scenario for all human actors.

This is what led me to the Nash equilibrium and, ultimately, Iota. Why use a Nash equilibrium as a framework for controlling interruptions? From what I can understand, it is the exact opposite of a system whose detrimental effect on the humans around it grows in proportion to its efforts to outcompete its neighbours, for example by using volume or frequency of occurrence.

In my mind’s eye the two functions are polar opposites.

I shared this idea with Tom Munnecke and Bob Frankston in 2016, and Tom suggested that all interruptions should start from zero and be added back only with a human’s permission. He said:

flip the whole thing on its head and surrender to the benefit of the group and ultimately the user.

and Bob said:

Another approach is to have the devices provide APIs and then a moderator can try to manage the overall information. The advantage is having rich information and context rather than relying on out of context information with each device vying for attention.

If we can take advantage of rich information in context, why not also take advantage of under- or even un-utilised features, bandwidth, or individual components of devices within a trusted context?

The sharing of features or components could be part of the local marketplace. For example: sensor groups that are in proximity to each other can coordinate their efforts to increase accuracy in exchange for access to each other’s resources. The ticket to join is an agreement to submit to the control of a Nash equilibrium. If you misbehave, you are smacked down.
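
A minimal sketch of what “the ticket to join is agreement, and misbehaviour gets you smacked down” could look like; the reputation scores and thresholds are invented for illustration:

```python
# Hypothetical reputation record for a local sensor group. A device is admitted
# only if it agrees to the group's rules; repeated violations get it quarantined.
JOIN_THRESHOLD = 0.5
QUARANTINE_THRESHOLD = 0.2

class SensorGroup:
    def __init__(self):
        self.reputation = {}      # device id -> score between 0 and 1
        self.quarantined = set()

    def admit(self, device_id, agrees_to_rules, initial_reputation=0.6):
        """Admission requires agreeing to the rules and a passable reputation."""
        if not agrees_to_rules or initial_reputation < JOIN_THRESHOLD:
            return False
        self.reputation[device_id] = initial_reputation
        return True

    def report_violation(self, device_id, severity=0.2):
        """Neighbouring devices report misbehaviour; enough of it means quarantine."""
        if device_id not in self.reputation:
            return
        self.reputation[device_id] = max(0.0, self.reputation[device_id] - severity)
        if self.reputation[device_id] < QUARANTINE_THRESHOLD:
            self.quarantined.add(device_id)

group = SensorGroup()
group.admit("thermo-1", agrees_to_rules=True)
group.admit("cam-7", agrees_to_rules=True)
for _ in range(3):                 # cam-7 keeps breaking the rules
    group.report_violation("cam-7")
print(group.quarantined)           # {'cam-7'}
```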

Why Iota?

It offers the possibility of integrating marketplaces with consensus-controlled, and thus trusted, networked devices. Will it work? We’ll see.

Currently they have a prototype of a data marketplace online. However, if their technology can support this, I’m guessing it can support the brokerage of mutually beneficial value across networks of devices.

Here is where I quote David Sønstebø.

From: https://blog.iota.org/iota-development-roadmap-74741f37ed01

In order for IoT to securely mature into its full potential we have to fundamentally change how we think about machines/devices. Rather than perceiving them as lifeless amalgams of metal and plastic with a specific purpose, we need to shift toward considering each device as its own identity with different attributes.

For instance, a sensor should not only have its unique identifier, but also accompanying it attributes such as: who manufactured it, when it was deployed, what is the expected life time cycle, who owns it now, what kind of sensor data is it gathering and at what granularity, does it sell the data and if so for how much?

When each device has its own ID, one can also establish reputation systems that are vital for anomaly and intrusion detection. By observing whether a device is acting in accordance with its ID or not, the latter which can be indicative of malware being spread, the neighbouring devices can quarantine it.

… IOTA’s ledger will serve the role of ensuring that the device’s attributes and reputation is tamper-proof.

(IMAGE OF DEVICE COMPONENTS TRYING TO GET INTO A CLUB: “YOU MUST HAVE AT LEAST SO AND SO HEADROOM TO PARTICIPATE”)

Some applications

– We limit when something can interrupt based on whatever predetermined conditions the user has set for their environment to be in equilibrium. Instead of dialling in each and every device individually, the devices submit to a consensus algorithm whose default is silence. We could discuss whether an emergency class of notifications could break this rule, but a Lands’ End flash-sale certainly cannot.

– By registering individual audio components within a system, we can track and control their validity and health. If something is going wrong, the device is prohibited from taking part in the network. This could be any measurable parameter that exists as a single value, a range of values or a function that approaches a limit. Component ratings — e.g. harmonic distortion, maximum sample rate, or even user reviews or histories of trust (or betrayal) could be referenced using a distributed ledger like Iota’s Tangle.
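
A minimal sketch of that second idea, with the first bullet’s “default is silence” baked in: a device’s registered component ratings (here just a local dict standing in for a ledger record) are checked against participation thresholds, and anything unregistered or out of spec simply never gets to make noise. All field names and numbers below are invented for illustration:

```python
# Stand-in for ledger-backed component records (in reality these attributes
# would be attested and tamper-proof; here they are just a dict).
component_registry = {
    "soundbar-living-room": {"thd_percent": 0.08, "max_sample_rate_hz": 96000,
                             "trust_score": 0.9},
    "cheap-speaker": {"thd_percent": 3.5, "max_sample_rate_hz": 22050,
                      "trust_score": 0.4},
}

# Participation thresholds for this environment; the default is silence, so a
# device that fails any check simply never gets to make noise.
thresholds = {"thd_percent": ("max", 1.0),
              "max_sample_rate_hz": ("min", 44100),
              "trust_score": ("min", 0.7)}

def may_participate(device_id):
    record = component_registry.get(device_id)
    if record is None:
        return False                      # unregistered devices stay silent
    for field, (kind, limit) in thresholds.items():
        value = record[field]
        if kind == "max" and value > limit:
            return False
        if kind == "min" and value < limit:
            return False
    return True

for device in component_registry:
    print(device, "may participate" if may_participate(device) else "is excluded")
```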

(IMAGE OF SEVERAL DEVICES CREATING DIRECTIONAL ALERTS BY SHARING PLAYOUT HARDWARE)

Shared playout fields offer unique opportunities for real-time sonification and audible notifications. Imagine the ability to use an entire suite of playout hardware to reproduce directional and indicative sounds. When playout fields or “audible actors” interact, the result should be more than just interference and addition. These fields should interact on a logical, interdependent level.

Certainly there will be problems with phase coherence if different devices have clocks of varying accuracy. This, too, could be part of the threshold criteria for participation in a multi-speaker playout or other networked event. For audio user interfaces, the most important thing will be to play the right sound at the right time in a way that doesn’t yank the listener out of their flow.
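
For a feel of why clock accuracy could be a sensible participation threshold, here is a back-of-the-envelope sketch: the phase error between two devices playing the same tone is 360° × frequency × clock offset, so a limit on tolerable phase error translates directly into a limit on tolerable clock offset. The numbers are illustrative only:

```python
def phase_error_degrees(frequency_hz, clock_offset_s):
    """Phase error between two playout devices whose clocks differ by clock_offset_s."""
    return 360.0 * frequency_hz * clock_offset_s

def max_clock_offset_s(frequency_hz, max_phase_error_deg):
    """Largest clock offset that keeps the phase error within the threshold."""
    return max_phase_error_deg / (360.0 * frequency_hz)

# At 1 kHz, a 10 microsecond clock offset already costs 3.6 degrees of phase...
print(phase_error_degrees(1000, 10e-6))   # 3.6
# ...so admitting a device to a shared playout field at 1 kHz with at most
# 10 degrees of phase error requires its clock to be within ~28 microseconds.
print(max_clock_offset_s(1000, 10))       # ~2.78e-05
```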

Problems and questions

Is Iota really the solution?

I don’t know, but it’s the best thing I’ve seen so far. That isn’t to say there won’t be something better in the future, but their solution is designed for exactly these kinds of machine-to-machine contexts. The technology is still in development and, if it doesn’t take off, someone else will probably develop something that does the same job. I believe that whatever solution leverages a Nash equilibrium to keep the aforementioned lose-lose scenarios (too much noise and too little trust) from developing, and facilitates micro-brokerage of personal data to empower humans, is likely to be an anti-fragile one.

What time is it?

How do we synchronise clocks? I guess it depends on the expected granularity. Iota’s Tangle offers two different ways to solve this problem. One offers a deterministic result at a higher computational cost, while the other is computationally cheaper but only offers a confidence interval. I’d expect the level of granularity possible in this case to be a function of tangle size, hardware limitations and network speed.

If we wanted to minimise phase problems that occur when digital devices play the same file under different (or jittery) digital clocks, we might need a different solution. On 19.12.2017, Iota announced a feature called “flash channels” that expedites transactions within a temporary network. Maybe this could be used to generate an accurate shared clock within a localised (i.e. living-room- or factory-floor-sized) network. Honestly, I’m out of my depth here–any engineers reading this, please chime in!

Show me the math!

My understanding of the Nash equilibrium and game theory are shaky at best. If someone who has deep knowledge of these subjects could step in here I would be most grateful!

So you are condoning a networked-everything techno hell-scape?

No, not at all. If I ever get to build a house it will be a Faraday cage with an oak rack for my tinfoil Stetson. For now, I accept that I will continue to rely on networked machines in my life. However, I don’t want to suffer from the stresses and risks outlined above. So, this is a possible solution I would use if I could build it.

In addition to the above framework, let’s equip products and homes with an easily accessible switch that can air-gap all data connections, sensors and noise (light, sound, display). I want to be able to push the probability that a device really is “off” orders of magnitude beyond where we are now. The switches themselves could be big enough to symbolise the relevance or risk of what they enable. Who doesn’t love knife switches? At dinnertime, the switch that connects the house to outside networks is open, in full view. No sensor data from the home is exposed to the world–it’s just the family at the table. Your devices (also equipped with small dip-switches for The Big Off) are free to support you locally, but they can’t talk to anyone outside the home. I’d like more control over the endless numbers of causal threads that can work against my unconscious self. The term I’ve been using for this is “ambient stress”.

Aren’t you just a shill?

I hope not. As stated above: Iota is the best thing I know about so far that might be able to solve a problem that has been bouncing around in my head for a long time. If they don’t crack this, hopefully someone else with a similar technology will.

What does this mean for UX in general?

I believe that in the not-too-distant future, user experience design as we know it now will be radically transformed–as in black swan transformed. Instead of asking an experience designer to create sensible interactions for systems that are becoming exponentially complex, the designer becomes more of a curator who assigns dynamic rulesets instead of handling every single decision gate. There are many good UX design rules and sound design rules. Eventually they will all need to be implemented in real time. Eventually the UX designer will specify desired states but not necessarily the exact method to achieve them, in the same way that Napoleon transformed warfare by supplying the strategic outcomes he wanted–but not caring about how they got done. Eventually UX design will be about setting up probabilistic thresholds and micro-markets for outcomes within an acceptable range. This would allow the designer to spend more time thinking about ways to provide delight and wonder–more ways for humans to enjoy being human and not have to worry about so many causal complexities.

What if this has horrible side effects and kills all the crypto-kitties?

It’s true, I haven’t considered second-order effects. Is there a black-hat or digital-Kudzu scenario that I’m not seeing? You tell me. I’m putting this out in public so that someone can pulverise or improve it.

Nelson

This is the first of a number of essays I’ve been working on for a while. More soon. If you can help push this idea further or can correct any inaccuracies, please contact me or write suggestions via Medium’s comments. Thanks to: Kellyn Bardeen, Phil Quitslund, Tom Mandel, Navin Ramachandran, alysha naples and Toby B. for suggestions and corrections. Finally, if it hadn’t been for an impromptu brainstorming session about networked assisted hearing with Sheldon Renan at Jerry Michalski’s retreat in 2011, this idea wouldn’t have started to brew.

If you want to read more of what I’ve written, please take a look at the book I co-authored with Amber Case called “Designing Products With Sound”, available from O’Reilly Books in 2018.