It’s a concept!
The “trusted middleware” is not a specific location, neither physical nor logical. It is a metaphorical “place” where data and algorithms are handled so that the parties involved in the data processing can work together to create value without disclosing data or, in the case of algorithms, intellectual property. Practically anything can act as a trusted middleware, provided it can supply storage, computational and networking capabilities to a sandboxed environment running a specific protocol agreed between the parties.
How does it work?
Let me use a simple example to describe the purpose of a trusted middleware in the context of an “elabo-relation”, i.e. a relation taking place in the digital space whose purpose is to create value through data processing.
EXAMPLE: John knows that the company XYZ offers an exclusive SERVICE: predicting the risk of developing cardiovascular issues in the next decade by analysing the last two years of blood analysis results of any person between 35 and 45 years old. The price of the service is $199.00.
Before putting the “trusted middleware” to work, let’s have a look at a couple of possible options describing what could happen:

OPTION 1 - Traditional and typical deployment, w/o trusted middleware.
John subscribes to the SERVICE and pays $199.00 to XYZ. Then he uploads his last two years of blood analysis results to XYZ’s servers, where XYZ’s proprietary algorithms process them and return the prediction. This kind of relation is possible because John trusts XYZ with his sensitive health data, relying on nothing more than legal enforcement of his rights, if necessary.

…and the corresponding alternative, which is far less frequent today:
OPTION 2 - Traditional but atypical deployment for the example, w/o trusted middleware.
John subscribes to the SERVICE and pays $199.00 to XYZ. Then XYZ sends its proprietary algorithms to John in the form of a software package. John executes the algorithms on his computer, disconnected from the Internet. He gets his response, then uninstalls the software to prevent any call home by XYZ’s software (John wants to protect his privacy to the maximum extent possible). The contract agreed between John and XYZ might specify that XYZ’s software must not be reverse-engineered, so that XYZ does not lose its intellectual property and competitive know-how. Yet even without reverse-engineering (which might be possible, in many forms), XYZ could be concerned about John using the software to offer the same service, or even sub-distributing it to third parties*. This kind of relation is possible because XYZ trusts John to the point of not requiring any protection other than legal enforcement of its rights, if necessary.
(* - This case can be further complicated and refined into many sub-cases, but it is very hard to find a valid workaround for "asymmetric trust", especially considering the amount of time, commitment and resources available to a malicious party.)
Now the problem is clear: in both the options described above, one party places complete trust in the other. But what happens when John wants to protect his own interests against possible malicious behaviour by XYZ (or vice versa)? What happens when they do not trust each other? Do they have to give up the opportunity to exchange value?
This problem is -at least theoretically, and with many simplifications- not so different from the one digital currencies had to solve to avoid double spending: when parties cannot trust each other, either because they don't want to or simply because they can't, the relation cannot be based on trust in the counterpart.
Instead, trust needs to be built on common ground and should be independent of the parties themselves. One means of trust can be a protocol, executed in a neutral environment. The neutrality of the environment could be guaranteed by the cooperation of multiple parties in a network under the rules defined in the protocol, and this is more or less what happens with Bitcoin.
The additional complexity Bitcoin had to address is the size of the network of trust, because a global digital currency of course needs to work at scale, with potentially millions of parties. In the simplest case of two parties, like John and XYZ, complexity can be handled in other ways. Until homomorphic computing becomes a sustainable option, here is an OPTION 3 for the example above, implementing a trusted middleware:
OPTION 3 - Alternative deployment with trusted middleware.
John has already installed on his mobile phone an open-source software -let's call it ABC- that allows him to host applications developed by third parties and execute them locally on his personal data. Then:
1) John pays $199.00 to XYZ and downloads the ABC-compatible version of XYZ's SERVICE, which is encrypted with ABC's public key (only ABC will be able to decrypt and use the code);
2) John uploads his blood diagnostic data from his SOLID personal data pod to ABC (this automatically encrypts the data with ABC's public key, thus making ABC the only entity able to see John's data);
3) John launches the execution of XYZ's code inside ABC;
4) ABC executes the code and collects the result, encrypting it with John's public key (John will be the only one able to see this result).
Finally, John has received an answer from XYZ's SERVICE without sharing his blood diagnostic data with XYZ. At the same time, XYZ has sold a service without handling its execution, while protecting its intellectual property and competitive value. The ABC software acted as "trusted middleware", using storage and computational capabilities of John's mobile phone without giving anyone access to the encrypted material, not even John.
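The four steps above can be sketched as a toy program. A real deployment would use asymmetric encryption (e.g. RSA or ECIES) and an attested sandbox; in this sketch a simple XOR cipher stands in for "encrypt with X's public key" so the data flow stays runnable with the standard library alone, and all names (`abc_key`, `john_key`, the payload strings) are illustrative assumptions, not part of any real protocol:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Stand-in for public-key encrypt/decrypt (XOR is its own inverse)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Keys: only the middleware (ABC) can open material addressed to it,
# and only John can open material addressed to him.
abc_key = b"abc-middleware-secret"
john_key = b"john-personal-secret"

# 1) XYZ ships its algorithm encrypted for ABC; John never sees the code.
xyz_algorithm = b"RISK = f(last two years of blood panels)"
code_for_abc = xor_crypt(xyz_algorithm, abc_key)

# 2) John uploads his data, also encrypted for ABC; XYZ never sees it.
john_data = b"cholesterol=190;hba1c=5.4;..."
data_for_abc = xor_crypt(john_data, abc_key)

# 3-4) Inside the sandbox, ABC decrypts both, runs the code on the data,
#      then encrypts the result so that only John can read it.
def abc_execute(enc_code: bytes, enc_data: bytes) -> bytes:
    code = xor_crypt(enc_code, abc_key)   # visible only inside ABC
    data = xor_crypt(enc_data, abc_key)   # visible only inside ABC
    result = b"risk=low"                  # placeholder for the real prediction
    return xor_crypt(result, john_key)    # readable only by John

encrypted_result = abc_execute(code_for_abc, data_for_abc)
print(xor_crypt(encrypted_result, john_key).decode())  # John decrypts: risk=low
```

The design point the sketch makes is that every artifact crossing a party boundary is encrypted for exactly one recipient, so neither John nor XYZ ever holds the other's cleartext: only the neutral middleware sees both, and only inside its own execution environment.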
☞ The example and OPTION 3 show that it is possible to operate a digital service between untrusted parties without compromising privacy or assets.
The trusted middleware, in this case an open-source software on which both parties rely, running on the consumer’s mobile device, has made it possible to handle the process. But the trusted middleware could even run in the cloud or on the service provider’s servers.
As you can see, implementing a trusted middleware is essentially a design challenge, not a technological one.