Minimal leakage covid tracing app part 2

Tallak Tveide
5 min read · Apr 20, 2020


This story has largely been superseded by a third revision of the protocol, described in part three of this series. If you only read one of these stories, I suggest reading that one instead.

In the first part, I described a design for a system to track covid-19 with minimal data leakage.

In this story, I will present another idea with even less data leakage, but with the added requirement that we have a central authority that manages storage of one public encryption key per person.

Note: I’m just an amateur doodling away, and you may find better ways of designing this system. I find it hard to get right, but a very interesting problem.

First, let’s start with the basic app, where the users Bill and Bob meet (a rendezvous) and exchange random tokens known to both. If Bill falls ill, he makes public the token he got from Bob. Bob then knows that he needs to be tested for the virus.
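The basic exchange can be sketched as follows (all names and helpers here are made up for illustration):

```python
import secrets

# Toy sketch of the basic rendezvous: each party hands the other a fresh
# random token, and an infected user publishes the tokens he received.
def rendezvous():
    bills_token = secrets.token_hex(16)   # Bill hands this to Bob
    bobs_token = secrets.token_hex(16)    # Bob hands this to Bill
    return bills_token, bobs_token

bills_token, bobs_token = rendezvous()
bill_received = [bobs_token]    # tokens Bill collected from contacts
bob_issued = [bobs_token]       # tokens Bob handed out

# Bill tests positive and publishes every token he received.
public_board = set(bill_received)

# Bob scans the board for tokens he issued; a match means he should get tested.
bob_warned = any(t in public_board for t in bob_issued)
print(bob_warned)  # True
```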

The problem with this scheme is that Bob may note down the names of the people who received each unique token. When Bill issues the warning, Bob knows for a fact that Bill is infected with the virus. Thus, some information has leaked.

My first blog post on this issue changed this scheme by having Bill reuse his token N times. That way, Bob only knows that there is a 1/N chance that Bill is infected, which is better than certainty. The method still has loopholes: if Bob collects N unique tokens from Bill over repeated rendezvous, he may still figure out that Bill is ill. In addition, when users reuse their tokens, it becomes possible to track a person’s movements by meeting Bill at several locations while he is sharing the same token.

I will present a new scheme where we have the following guarantees:

  • Bob can’t know who infected him, except that it happened on a certain date
  • Anyone can track a person for one day, but no more, unless they can collude with the public key authority
  • No one can know that Bill is the one infected with the virus

The scheme requires a public key authority, which verifies the owner of every public key in the public record. The record of who owns which key may be discarded, as long as the authority can guarantee that every person holds only one key. The key authority also enforces a rule: every day, each user may (and should) change his or her public key, and the change must be signed with the previous private key. Only the key authority may know which keys belong to which person. Allowing a user to supply a new key signed with the previous one removes the need to re-verify the user’s identity every period.
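The rotation rule might look something like this. Toy Schnorr signatures over a small subgroup of Z_p* stand in for a real signature scheme, and the names and parameters are illustrative only:

```python
import hashlib, secrets

# Sketch of the key authority's rotation rule: a new daily public key is
# accepted only if the update is signed with the previous private key.
P, Q, G = 2039, 1019, 4    # p = 2q + 1; G generates the order-q subgroup

def _h(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def sign(sk, msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    return r, (k + _h(r, msg) * sk) % Q

def verify(pk, msg, sig):
    r, s = sig
    return pow(G, s, P) == (r * pow(pk, _h(r, msg), P)) % P

class KeyAuthority:
    def __init__(self):
        self.current = {}                  # user id -> current public key

    def register(self, uid, pk):
        self.current[uid] = pk             # one-time identity check assumed

    def rotate(self, uid, new_pk, sig):
        # The daily rule: the new key must be signed with the previous key.
        if not verify(self.current[uid], str(new_pk), sig):
            raise ValueError("rotation not signed with previous key")
        self.current[uid] = new_pk

authority = KeyAuthority()
sk_day1, pk_day1 = keygen()
authority.register("bill", pk_day1)

sk_day2, pk_day2 = keygen()
authority.rotate("bill", pk_day2, sign(sk_day1, str(pk_day2)))
print(authority.current["bill"] == pk_day2)  # True
```

Note that the authority only ever sees public keys and signatures, so it can link a user’s keys across days without learning anything else.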

Should the public key authority leak a user’s list of public keys, someone could use those keys to track the user beyond a 24-hour period, but nothing more.

Next we will look at Bitcoin’s BIP-32 Hierarchical Deterministic Wallets. We will not use the whole structure, only the fact that the following key structure is feasible to implement.

The arrows represent the transformations that are possible. E.g., the public key P0 may be calculated if you know the private key S0, and P1 may be calculated if you know either P0 or S1. These transformations are one-way.
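These arrows can be demonstrated with a toy version of the derivation, using a small subgroup of Z_p* in place of the secp256k1 curve (the parameters and key values below are made up for illustration):

```python
import hashlib

# Toy BIP-32-style derivation: the child private key S1 follows from S0,
# and the child public key P1 follows from either S1 or P0 alone.
P, Q, G = 2039, 1019, 4    # p = 2q + 1; G generates the order-q subgroup

def tweak(parent_pub, index):
    data = f"{parent_pub}|{index}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

S0 = 123                   # parent private key (hypothetical value)
P0 = pow(G, S0, P)         # parent public key: S0 -> P0

t = tweak(P0, 1)           # public tweak, computable from P0 alone
S1 = (S0 + t) % Q                      # S0 -> S1 (requires the private key)
P1_from_S1 = pow(G, S1, P)             # S1 -> P1
P1_from_P0 = (P0 * pow(G, t, P)) % P   # P0 -> P1 (no private key needed)

print(P1_from_S1 == P1_from_P0)  # True: both arrows reach the same P1
```

Going the other way (from P1 back to S1, or from P0 back to S0) would require solving a discrete logarithm, which is what makes the arrows one-way.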

The key P0 in the figure is stored in the public key record, with a new key for each day. One reason to change the key is to make tracking a user harder; the other is that when a user is warned about an infection, he or she will know on which date the exposure might have happened.

We will imagine a rendezvous where Bill meets Bob. Bill gives Bob his private key S1 and public key P0, and Bob gives Bill his own S1 and P0 in return. Bob uses S1 to find P1, uses P0 to find P1 again, and looks up P0 in the public key record. Bob thus knows that Bill is a real person in possession of S0, and not a tracking bot producing random unique tokens.

Later, Bill is confirmed to be infected with the virus. He obtains a verification of this, TX0, signed by a doctor whose public key is available in a second public record.

Bill then signs TX0 with his private key S1, known only to Bill and the people he met on that day. This produces TX1.

Next, Bill encrypts TX1 with Bob’s public key P0, so that only the owner of the corresponding S0 (which is Bob) may read the transaction. This produces TX2.

Finally, Bill makes the transaction TX2 available on a third public record.

Bob may look at all such transactions. If he succeeds in decrypting any of them, he will know:

  • Someone he met is infected with the virus (by the doctor’s signature)
  • Since P0 is only used on a specific date, he will know which date the infection took place
  • Only he is in possession of the S0 key needed to decrypt the message, so he is the only one receiving this information
  • The message must have come from someone he met on that day, as it is signed with the S1 key
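The whole TX0 → TX1 → TX2 round trip can be sketched end to end. Again this uses toy Schnorr signatures and ElGamal-style hybrid encryption over a small subgroup of Z_p*; all key names are hypothetical, and a real system would use proper curves and authenticated encryption:

```python
import hashlib, json, secrets

# Toy end-to-end sketch of the warning flow from the text above.
P, Q, G = 2039, 1019, 4    # p = 2q + 1; G generates the order-q subgroup

def _h(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)

def sign(sk, msg):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    return [r, (k + _h(r, msg) * sk) % Q]

def verify(pk, msg, sig):
    r, s = sig
    return pow(G, s, P) == (r * pow(pk, _h(r, msg), P)) % P

def encrypt(pk, plaintext):
    # ElGamal-style: only the holder of the matching private key can
    # recompute the shared value and strip off the pad (plaintext <= 32 bytes).
    k = secrets.randbelow(Q - 1) + 1
    c1 = pow(G, k, P)
    pad = hashlib.sha256(str(pow(pk, k, P)).encode()).digest()
    return [c1, bytes(a ^ b for a, b in zip(plaintext, pad))]

def decrypt(sk, ciphertext):
    c1, body = ciphertext
    pad = hashlib.sha256(str(pow(c1, sk, P)).encode()).digest()
    return bytes(a ^ b for a, b in zip(body, pad))

doc_sk, doc_pk = keygen()        # the doctor, from the second public record
bob_S0, bob_P0 = keygen()        # Bob's daily key pair in the key record
bill_S1, bill_P1 = keygen()      # the child key Bill handed out that day

# TX0: the doctor attests to Bill's infection.
tx0 = sign(doc_sk, "infected")
# TX1: Bill countersigns TX0 with S1, tying the warning to that day's contacts.
tx1 = sign(bill_S1, f"{tx0[0]},{tx0[1]}")
# TX2: Bill encrypts both to Bob's public key P0 and publishes the result.
tx2 = encrypt(bob_P0, json.dumps([tx0, tx1]).encode())

# Bob tries his S0 on every published transaction; a successful decrypt
# plus two valid signatures is the warning.
tx0_rx, tx1_rx = json.loads(decrypt(bob_S0, tx2))
warned = (verify(doc_pk, "infected", tx0_rx)
          and verify(bill_P1, f"{tx0_rx[0]},{tx0_rx[1]}", tx1_rx))
print(warned)  # True
```

Anyone else attempting the decryption gets garbage bytes, which is exactly the property the bullet points above rely on.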

Thus, I believe this might be the least leaky scheme I have seen so far.

Further improvements could be:

  • A scheme allowing Bill to issue unique tokens to the people he meets. The tokens must be verifiable as belonging to some user in the public key record, but without revealing which one. This makes it impossible to track Bill by his public id. One way to implement this could be a ring signature.
  • Implementing a scheme where the public key authority could not track a user’s public key from day to day. This removes the possibility of someone colluding with the authority to enable tracking of citizens.
