Parametric Privacy

The real world is driven by interactions of varying natures between different parties. These can be broadly classified into three atomic categories: purely transactional, value-creating, and value-affirming, and real-world interactions typically bundle several of them together. What these involve is a flow of relevant information between parties, pertaining to the nature of the interaction, once a procedure or protocol of interaction has been set up; this can mean granting access to or ceding control of certain assets (tangible or intangible), as well as contractual obligations binding the specific use of those assets. On top of this, the overheads we typically associate with such interactions include the ability to use a common framework or protocol for communication and verification. This is where multiple nuances arise, mainly around verifying rights of access, identity, and procedural integrity. That, essentially, is the crux of this post: the nature of trust and the means to verify it.

Now that we have delineated these interactional boundaries, what is the core problem?

There is an outstanding need to create points of authenticity around information and decision flows in transactional settings. The more pressing issue is how to do this without exposing intimate details (be it of one’s own feelings, or an enterprise’s data) to other parties; today, most of this is handled through value-affirming transactions, along with outright copying of information and data flows.

There are two drawbacks to this:

The first is privacy (or the lack thereof): when parties interact with each other, they leak information that may be crucial to their operational activities and internal procedures, as well as to their access to other resources. This is relevant given the moats firms have built around their data, and how everyone is scrambling to build their own walled ecosystems, with the larger tech companies seeking to become the connective tissue of this framework.

The second is the lack of verifiability of the computational procedures run on different facets of this data across the value chain: essentially, the ability to verify that a given flow of access or information led to procedures being run correctly in one or more parts of that chain. This is extremely relevant, since we would like to attest to the provenance not just of data, but of data-powered operational procedures, and to confirm that the entities working across this end-to-end process have been doing things in the expected fashion.


Maybe this isn’t very clear, so let’s work through an example. Imagine we’re an insurer who wants to ascertain the root cause of an autonomous car crash. The world I’m envisioning is one where the insurer can define the parameters relevant to their model: the operational details of how the car’s software was interacting with its sensory inputs, the deductions made by the onboard computers of the parties involved in the crash, and potentially the similarity of outputs from the same procedures running on other cars in the same neighbourhood at the same time, or at some historical point in time. A sketch of what such a query might look like follows.
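To make the shape of this concrete, here is a minimal sketch of a verifier-defined query. Everything in it is hypothetical: the class name, field names, and predicate labels are invented purely to illustrate the idea that the insurer specifies which facets it wants attested, rather than requesting the raw data itself.

```python
from dataclasses import dataclass, field


# Hypothetical shape of a verifier-defined query; no real standard is implied.
@dataclass
class CrashVerificationQuery:
    incident_id: str
    window_start: str  # ISO-8601 start of the incident window
    window_end: str    # ISO-8601 end of the incident window
    # Facets the insurer wants proofs about, never raw copies of:
    facets: list[str] = field(default_factory=lambda: [
        "sensor_input_log",        # how the software saw its sensory inputs
        "planner_decision_trace",  # deductions made by the onboard computer
        "peer_vehicle_outputs",    # the same stack's outputs on nearby cars
    ])
    # Named predicates the prover must demonstrate hold over each facet.
    predicates: dict[str, str] = field(default_factory=dict)


query = CrashVerificationQuery(
    incident_id="crash-0042",
    window_start="2024-05-01T14:03:00Z",
    window_end="2024-05-01T14:04:00Z",
    predicates={"planner_decision_trace": "braking_engaged_within_200ms"},
)
print(query.facets)
```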

Another (far-out) scenario playing out in my head is the ability for employers to verify the veracity of deployments performed by engineers in other tasks or jobs (at other employers), by running a test that their software modules are actually deployed and functioning as claimed, without disclosing any further information about the rest of the product codebase.

We have been relying on value-affirming interactions and tying in incentive-based mechanisms to align participation and collaboration among different parties. But I’m quite wary of game-theoretic financial incentives as a means of achieving optimal results; as evidence, we have seen multiple litigations and lawsuits around the core themes of data infringement and falsification or misrepresentation. This problem is commonly felt in the enterprise world around differing data formats, processing standards, and specifications. While it will be interesting to watch this unfold in the real world (in terms of media apps), I’d wager at little risk that it hampers operational flexibility and decision-making, as does relying on financial incentives as the medium for expressing trust. While this can work in many settings, is there really no way around it?

There is a reason I have been delving deeper into the underlying drivers behind interactions and the processes that attest to each party’s identity, information, and intents, and into how to verify all of this in a succinct and contextual fashion. We want to see the causes and effects of information flowing into specific stakeholders before we decide how to act on the inputs we receive. I want to take this opportunity to briefly touch upon classes of cryptographic techniques focused on verifying control of data and state information in a succinct fashion, chiefly techniques around multi-party computation. Further along are techniques that give up no information about the underlying asset, nor about the nature of the computation run on it, while generating extremely succinct proofs that let other parties come along and attest to the state of the computation. So far we have barely scratched the surface of encoding the full state of a computation, given how hard it is to generate a proof of its correct representation and the bounds on verifying such proofs. A toy illustration of the primitive these systems build on follows.
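Real MPC or zk-SNARK machinery won’t fit in a blog snippet, so here is a minimal sketch of the commitment primitive those systems build on, using only Python’s standard library. In an actual deployment, the opening step below would be replaced by a succinct proof over the committed data, so the private state is never revealed at all; the point here is only that a party can bind itself to its state up front and be held to it later.

```python
import hashlib
import os


def commit(data: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, salt). The random salt keeps the commitment hiding."""
    salt = os.urandom(32)
    commitment = hashlib.sha256(salt + data).digest()
    return commitment, salt


def verify_opening(commitment: bytes, salt: bytes, data: bytes) -> bool:
    """Check that `data` is exactly what was committed to earlier."""
    return hashlib.sha256(salt + data).digest() == commitment


# The prover commits to its private state up front...
state = b"onboard planner log, epoch 17"
c, salt = commit(state)

# ...and can later convince a verifier the state was fixed at commit time.
assert verify_opening(c, salt, state)
assert not verify_opening(c, salt, b"a doctored log")
```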

But more than this, I foresee such cryptographic techniques, and the proofs they generate, becoming parametric: the verifier can attest to the correctness of the computation run on the data, with the additional assurance that the same data was used to drive the outputs for the other parties in the chain, thereby creating a digital twin of an entire process or information flow. Put more elaborately, I would love the capability to attest that a certain family of programs or entities interacting with this data functioned as expected and generated outputs that make me confident to keep utilizing and extending this chain of information flow; a toy version of that consistency check is sketched below. Tying this in with a blockchain’s immutable record of state would make the veracity of the informational process both efficient to check and reassuring.
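Here is a toy version of that “parametric” consistency check: each party in the chain publishes a commitment to the input data it claims to have used, and the verifier checks that every party bound itself to the same commitment. The names and structures are invented for illustration; a real system would additionally attach a succinct proof to each entry showing its output was computed correctly from the committed input, but the shape of the check stays the same.

```python
import hashlib
import os
from dataclasses import dataclass

# One shared input, committed once; every party in the chain binds to it.
salt = os.urandom(32)
shared_input = b"crash telemetry, 2024-05-01T14:03Z"
shared_commitment = hashlib.sha256(salt + shared_input).digest()


@dataclass
class ChainEntry:
    party: str
    input_commitment: bytes  # commitment to the data this party claims it used
    output: bytes            # that party's claimed result


def consistent_chain(entries: list[ChainEntry]) -> bool:
    """True iff every party committed to the same underlying input data."""
    if not entries:
        return False
    reference = entries[0].input_commitment
    return all(e.input_commitment == reference for e in entries)


chain = [
    ChainEntry("sensor-vendor", shared_commitment, b"calibrated readings"),
    ChainEntry("planner", shared_commitment, b"brake decision"),
    ChainEntry("fleet-auditor", shared_commitment, b"cross-fleet comparison"),
]
print(consistent_chain(chain))  # True: one input, three attested uses of it
```

Publishing the shared commitment to a blockchain is what would give the chain its immutable reference point.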

This is what I’d term parametric privacy: the ability to gain access to the state of certain stakeholders and obtain information about their behavior, as per my requirements, without anyone losing control of their functional procedures or their state.


I’m really psyched about the world of value that could be unlocked by enabling what I now term contextual verification: essentially, a new paradigm for communicating trust and value in real-world and digital interactions. I’m quite looking forward to seeing more work on this, and if you are a startup or researcher working on pushing these boundaries, do reach out!