Trust in trust systems

Verify.as · Published Nov 5, 2017 · 5 min read

In this post we talk about trust in reputation systems and trust systems. At Verify, we're building a reputation protocol on the Ethereum blockchain, and we're publishing these posts to share what we learn with the wider crypto community.

We have talked in our previous posts about reputation and reputation systems. Now let us talk about trust.

Trust is a relationship that exists between two participants: a truster and a trustee.

If A trusts B, then A is a truster, and B is a trustee.

Trust usually develops slowly over time and is application-specific. For example, trust in p2p file sharing is based on file quality and download speed, while on Amazon, buyers trust that the seller will deliver the right product. This makes trust a subjective matter (the level of trust means different things in different contexts), so it is difficult to capture with a single computational model or a single definition.

Trust is typically represented on a scale (discrete or continuous) and takes the form of either functional trust or recommender trust. Assume Alice is a driver. Functional trust describes how good a driver Alice is. Recommender trust describes how much you trust Alice's opinion of Bob, another driver, for instance.

Personal experience usually carries greater weight than recommendations. Someone who has engaged with another party and had a pleasant experience will most likely trust that experience and be willing to engage with that party again. However, we don't always have personal experience with the parties we are engaging with.

An important caveat here is that a trust threshold should always exist, and it depends on the task: a sensitive task requires a greater degree of trust than a trivial decision. The accuracy of any trust assessment, however, depends on the data you have to build upon.

A trust representation must meet several criteria:

A- Trust should have a scale:

Trust scales can be divided into:

  • Binary values, e.g. like/dislike (also referred to as binomial values)
  • Multinomial discrete values: when binary choices are not enough, more options can be offered, such as "somewhat trust", "do not trust", "trust with caution", etc.
  • Continuous values: e.g. a point system.
  • Interval: uses a range of values to represent trust.

B- Trust should have dimensions:

  • Trust/distrust: some systems use a single value, where higher means more trust and lower means less; other systems keep separate values for trust and for distrust.
  • Timestamp: trust should be updated as time passes.
  • Context: a context surrounding where the trust applies should be considered. A trusted writer does not make a trusted mechanic.
  • Confidence/certainty: a measure of how confident the truster is in the trust assessment.
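The four dimensions above can be bundled into a single record. A minimal sketch in Python (the field names and value ranges are our own illustrative choices, not part of any standard):

```python
from dataclasses import dataclass, field
import time

@dataclass
class TrustAssessment:
    """One truster's assessment of one trustee (illustrative sketch)."""
    trustee: str
    value: float        # trust/distrust on a continuous scale, e.g. -1.0 .. 1.0
    context: str        # where the trust applies, e.g. "driving" or "writing"
    confidence: float   # 0.0 .. 1.0: certainty in this assessment
    timestamp: float = field(default_factory=time.time)  # when it was last updated

# A trusted writer does not make a trusted mechanic, so context is explicit:
alice_as_driver = TrustAssessment(trustee="alice", value=0.8,
                                  context="driving", confidence=0.6)
```

Keeping context and timestamp alongside the raw value makes it easy to expire stale assessments and to refuse to reuse trust across unrelated domains.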

C- Trust sources:

There are four sources of trust: attitude, experience, behavior and similarity.

  • Explicit attitude: the truster's opinion of the trustee, which can be positive or negative; the truster explicitly states this opinion.
  • Evidence/feedback/experience: upon interaction between truster and trustee, an evaluation is made. For example, after successful transactions between A and B, A can make a trust decision about B.
  • Behavior: how an entity behaves reflects whether it can be trusted. Trust signals can come from social behavior such as sharing posts with friends and commenting on other users' posts; a person who comments on a post and interacts can build trust with the poster.
  • Similarity: compares entities with each other. A trusted entity is likely to share certain attributes with an unknown entity; comparing the two and finding similarities (e.g. both belong to a given group and actively participate in it) can let trust in one entity pass to the other. Participants who share similar interests may trust each other more readily.
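The similarity comparison above can be made concrete with any set-overlap measure; the Jaccard index is one common choice (our choice here, not one prescribed by the post), applied to made-up group memberships:

```python
def jaccard_similarity(attrs_a: set, attrs_b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B|, ranging from 0.0 to 1.0."""
    if not attrs_a and not attrs_b:
        return 0.0
    return len(attrs_a & attrs_b) / len(attrs_a | attrs_b)

# Groups each participant actively takes part in (illustrative data):
known_trusted = {"rust-forum", "photography", "3d-printing"}
newcomer      = {"rust-forum", "photography", "gardening"}

# 2 shared groups out of 4 total -> 0.5; some trust may carry over.
similarity = jaccard_similarity(known_trusted, newcomer)
```

A system might then transfer a fraction of the trusted entity's score to the newcomer, proportional to this similarity.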

Quantifying Trust

A trust value can be calculated in various ways:

1- Evidence or experience based trust

  • Probability that the trustee will behave as expected, e.g. computed from binary evidence (counts of positive and negative outcomes).
  • Mean of the ratings.
  • Mode (of discrete values).
  • Difference between positive and negative evidence counts.
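Each of the four evidence-based calculations above fits in a few lines. A sketch (the Laplace-smoothed probability estimate is one common convention, not the only option):

```python
from statistics import mode

def probability_trust(positive: int, negative: int) -> float:
    """Expected probability of good behaviour, from binary evidence.
    The +1/+2 (Laplace) smoothing gives 0.5 when there is no evidence."""
    return (positive + 1) / (positive + negative + 2)

def mean_trust(ratings):
    """Mean of continuous or discrete ratings."""
    return sum(ratings) / len(ratings)

def mode_trust(ratings):
    """Most common value among discrete ratings."""
    return mode(ratings)

def difference_trust(positive: int, negative: int) -> int:
    """Difference between positive and negative evidence counts."""
    return positive - negative
```

For example, 8 positive and 2 negative outcomes give `probability_trust(8, 2) == 0.75` and `difference_trust(8, 2) == 6`.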

2- Application specific behavior based trust

  • Conversational behaviors: for example, the conversations that happen between parties and whether users forward messages; conversation duration and frequency can serve as additional measures.

3- Similarity based trust

  • Recommendations based on similarity between users.

4- Reputation

  • Reputation is one type of trust input; it is taken into account in the decision of whether or not to trust.

5- Fuzzy logic based trust (we will explore this concept in future posts)

6- Comprehensive trust

  • Considers other participants' opinions along with other factors when measuring trust.

A trust value alone is not enough to manage trust relationships; confidence should be added to the mix.

Confidence measures how certain the truster is of the trust value. This helps distinguish unknown participants from untrusted participants.

Confidence also reflects the amount of evidence used in deriving the trust value: the more evidence, the higher the confidence.
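One simple way to tie confidence to evidence volume, as described above, is a saturating ratio. This is a sketch: the functional form and the `saturation` constant are our own assumptions, not a formula from the post:

```python
def confidence(num_observations: int, saturation: int = 10) -> float:
    """Confidence grows with evidence and approaches 1.0.
    `saturation` (an illustrative tuning knob) sets how fast it saturates."""
    return num_observations / (num_observations + saturation)

# This separates "unknown" from "untrusted":
unknown   = (0.5, confidence(0))    # neutral value, zero confidence
untrusted = (0.1, confidence(20))   # low value, high confidence
```

An unknown participant and an untrusted one may both score low, but only the untrusted one scores low *with high confidence*.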

Trust also extends to referred trust: if A trusts B, and B trusts C, then A is expected to trust C.

There exist two very important operators in the trust inference schemes:

  • Transitivity/concatenation operator: if A trusts B, who in turn trusts C, then A will also trust C. In these cases, a recommendation is combined with one's own experiences and adjusted accordingly.
  • Aggregation operator: finds trust paths and aggregates them to connect unknown parties. It is similar to transitivity, but instead of considering only direct links (e.g. A to B), it finds all paths to the target (including those through D, for example). This is similar to what LinkedIn does.
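Both operators can be sketched over a small trust graph. Multiplying values along a path is a common concatenation rule, and taking the maximum over paths is one possible aggregation rule; both choices (and the data) are illustrative, not prescribed by the post:

```python
# Trust graph: truster -> {trustee: trust value in 0.0 .. 1.0} (made-up data)
graph = {
    "A": {"B": 0.9, "D": 0.6},
    "B": {"C": 0.8},
    "D": {"C": 0.5},
}

def path_trust(graph, path):
    """Concatenation operator: multiply trust values along one path."""
    total = 1.0
    for src, dst in zip(path, path[1:]):
        total *= graph[src][dst]
    return total

def inferred_trust(graph, source, target):
    """Aggregation operator: enumerate all simple paths, take the best."""
    def paths(node, seen):
        if node == target:
            yield [node]
            return
        for nxt in graph.get(node, {}):
            if nxt not in seen:
                for rest in paths(nxt, seen | {nxt}):
                    yield [node] + rest
    return max((path_trust(graph, p) for p in paths(source, {source})),
               default=0.0)

# A has no direct link to C, but two indirect paths:
#   A -> B -> C = 0.9 * 0.8 = 0.72   and   A -> D -> C = 0.6 * 0.5 = 0.30
trust_a_c = inferred_trust(graph, "A", "C")
```

Production systems typically also discount longer paths and blend recommendations with the truster's own experience, as the transitivity bullet notes.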

In the case of an eCommerce system, we have a seller and a buyer. Basing trust on ratings alone makes the system prone to various attacks, such as a ballot-stuffing attack in which a seller sells low-priced products to gain trust, only to then issue a fraudulent higher-priced transaction. We therefore need to consider other factors and attributes that influence this trust. In our transaction example, to measure the trust of the seller we could consider:

  • Quality of product: the buyer rates how well the product matches its description.
  • Delivery time: the buyer rates how long the product took to arrive.
  • Delivery service: the buyer rates the shipping company used by the seller; the seller may have chosen a poor shipping service that damaged the item, leaving the buyer unsatisfied.
  • Communication with buyer: the buyer rates the communication and whether the seller was available to reply to inquiries.

Now the buyer can also be rated based on:

  • Communication with seller: the seller rates how well the buyer communicated, reflecting respect as well as timely responses to any clarifications or inquiries.
  • How quickly the buyer marked an item as delivered; this can be measured automatically by the system (through integrations with third-party shipping agents).
  • Quantity and speed of payment; this is also rated automatically by the system to combat fraudulent behavior.
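One way to combine the per-factor ratings above into a single seller score is a weighted average; the same pattern works for the buyer-side factors. The weights and the 1–5 rating scale are illustrative assumptions:

```python
# Relative importance of each seller-side factor (illustrative weights
# summing to 1.0); each rating is on a 1..5 scale.
SELLER_WEIGHTS = {
    "product_quality": 0.4,
    "delivery_time": 0.2,
    "delivery_service": 0.2,
    "communication": 0.2,
}

def seller_score(ratings: dict) -> float:
    """Weighted average of the buyer's per-factor ratings."""
    return sum(SELLER_WEIGHTS[factor] * value
               for factor, value in ratings.items())

# 0.4*5 + 0.2*4 + 0.2*3 + 0.2*5 = 4.4
score = seller_score({"product_quality": 5, "delivery_time": 4,
                      "delivery_service": 3, "communication": 5})
```

Splitting the rating into factors also makes ballot stuffing harder: an attacker must fake plausible values across every dimension, not just one overall number.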

All ratings should be categorized as new or old experiences. Newer ratings usually matter more than older ones because they reflect current user behavior (and user behavior may change over time).
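The recency weighting described above is often implemented as exponential decay. A sketch, where the 30-day half-life and the sample data are our own assumptions:

```python
def recency_weighted_trust(ratings, now, half_life_days=30.0):
    """Weighted average where each rating's weight halves every
    `half_life_days`. `ratings` is a list of (value, day) pairs."""
    num = den = 0.0
    for value, day in ratings:
        weight = 0.5 ** ((now - day) / half_life_days)
        num += weight * value
        den += weight
    return num / den if den else 0.0

# Two old good experiences (value 5) and two recent bad ones (value 1):
ratings = [(5, 0), (5, 10), (1, 118), (1, 120)]
score = recency_weighted_trust(ratings, now=120)  # dominated by the recent 1s
```

With these numbers the score lands well below the plain mean of 3, because the recent bad experiences carry most of the weight.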
