Heuristics and Biases in Forecasting (as they relate to the Polyswarm project): Part 2

ChrisW
21 min read · Jan 1, 2020


This is obviously not financial advice. This is a theory-based academic exercise looking at biases and how they may affect the thinking of an analyst when evaluating a startup project. Don’t use this information as part of any financial decision.

I am writing a three-part series where I use lessons learnt from some fantastic books to evaluate the probability of success of a crypto project called Polyswarm Network.

‘Thinking, Fast and Slow’ summarises decades of research by Amos Tversky and Daniel Kahneman, who developed a theory of how the brain works based on behavioral data collected from numerous experiments. Tversky and Kahneman suggest a two-system model of the brain for responding to stimuli, where system 1 comes first and quickly interprets the stimuli, provides meaning and makes a judgement. If necessary, this information is then passed to system 2, which interprets and adjusts. The book shows how the decision making of system 1, while good in many circumstances, will systematically lead us astray in certain situations. Here I will list the biases and heuristics and show how they may be present when forecasting the success of a crypto project. I shall then attempt to take this further by showing how they may influence the forecasting of the Polyswarm crypto project given known information about the project.

Image from www.focus-education.co.uk

Biases and heuristics

Confirmation bias: Prior beliefs influence the data the analyst seeks and her interpretation of the data. Here the analyst seeks information which confirms the belief and ignores information which does not. Further the analyst ‘sees’ confirming data as being more credible than disconfirming data.

To understand the threat landscape I have read articles mainly by AV companies, consulting firms and forecasting pundits. All of these are heavily biased toward predicting a rise in malware and the importance of greater cyber security protection. The articles are drafted mainly to scare readers into using the services being sold, or to entertain with future prospects. The articles are mostly alarmist and exaggerate threats, complications and costs (e.g. the IBM X-Force report put 2017 ransomware costs at approximately $8bn, yet the overall cyber security insurance market for that year was only about $2bn).

To counter this I would need to search for evidence which shows declines in cyber security threats, or at least identify an absence of disconfirming evidence. I also need to search for reports and evidence which may not have such a strong agenda behind them.

An insurance industry report looking at cyber threat insurance provided the most balanced perspective: it indicated that enterprise is starting to take cyber security more seriously, but that the amount of premiums collected has been steady over the last three years. This could mean the market has reached a plateau, is in decline, or will resume an upward trend after a breather, but it is not currently climbing.

Recently, government policy writing on cyber security has increased, further indicating growing awareness of the issue.

The IBM X-Force report for 2017 notes a decline in threats to top-targeted industries. It also reports a decline in incidents and attacks for 2017 compared to 2016.

The main takeaway is that some forms of attack and attack vectors will be on the rise while others will decline. The threat landscape changes and evolves, and security measures need to evolve along with it; there will always be new and scary threats to frighten the public with, but this is simply the nature of the market. Therefore I would not say PN is a desperately needed tool to save the world from coming doom, but rather that PN is an additional tool which may help the protection side evolve as the threat side evolves. It looks like the market size is neither increasing nor decreasing.

Possibility effect: This is a term I made up to capture a specific bias I found to be very strong when analyzing new startup projects. It is related to the heuristics used when scenario planning and to the availability heuristic, but it is so influential that I believe it deserves its own title. This effect is observed when it is easy or possible to imagine highly positive outcomes for the project. The positive imagined outcomes increase the attractiveness of the project but do not necessarily make it a good investment.

Remember when this was the future for the year 2000

For example, a project aiming to create cloud computing infrastructure would have a high possibility effect bias compared to a project looking to provide software services for assisting in the transfer of plumbing infrastructure from private to government entities. The former project is exciting and we can easily imagine how successful it would be if it obtained just a small segment of the massive market. Further, we can imagine how the cloud industry is expanding and how this new innovative project may lead the way in disrupting the tech and gaining market share. We can think of the success of AWS and imagine a future where the new project is involved in fog computing and has managed to outprice AWS thanks to its new innovations. What we don’t think of is that the large market is highly competitive and has large players with bigger budgets, innovating at a higher rate, with established relationships and a steady revenue stream. The latter, more boring, less in-your-face project has a smaller market, and even if it does well it’s not something that will be noticed by the everyday person. It will never be on magazine covers and never be too exciting. However, it may be able to dominate its small market, where there are few to no competitors and where it provides a better solution to partners it can easily establish relationships with.

The trick here would be to discard any ideas of what the project could possibly do in the future and only consider what it can do in the very near future given the technology it currently has, the current market size and existing relationships. Yes, this means missing out on the really big hitters which do realize their possibility, but it also means avoiding all the false positives which our imaginations made us buy.

For PN I will only consider a limited use of the network as an auxiliary value add to various stakeholders for the testing and benchmarking of AV products and the attestation of fringe artifacts. The limited use will only extend to interested parties already identified (where some will decide not to use the network and others not yet identified may decide to use it). PN is also very much back-end infrastructure providing a service to enterprise. This makes it less prone to the possibility effect as its service is less prominent or visible in everyday life. It is also difficult to understand and imagine the role of PN as it is intended to be used, and further and alternative uses of the network are even harder to imagine. For these reasons PN may not induce such a strong possibility effect bias for the analyst.

Anchoring: It is difficult to know where to start in valuing a project, so the analyst often selects an anchor from which to adjust. Special care should be taken to ensure this anchor is not random; even rationally selected anchors can turn out to be random when scrutinized. E.g. ICO marketcaps of projects were significantly higher during a bull market compared to a bear market, irrespective of the fundamental value of the project. The ICO price of a token is therefore more correlated to the emotional state of the market at the time of the ICO than to the value of the project at ICO. However, the ICO price often acts as an anchor from which to judge current price. Given that ICO price can vary wildly depending on the mood of the market, this anchor should be discarded.

I have been comparing the current NCT price to the ICO price and feeling that purchasing at the current price is a good deal. A better anchor would be the average amount of venture capital granted to a company in the cyber security space.

At ICO launch a team was already established and had been working on a prototype product. We can therefore conclude that the pre-seed funding stage had passed.

If we look at the ICO as the first stage of granting ‘ownership’ to outside investors, then the ICO is the seed funding round. I am not considering higher funding levels (such as series A) as the project is in the pre-revenue stage and the product has not yet launched.

Crunchbase provided stats on 572 startups in the cyber security space with an average founding year of 2014. A total of $670m was distributed over a total of 964 funding rounds, giving an average of $700k per funding round or $1.17m per company. Generally between 10–20% of the company is sold for seed funding.

If we take PN to be a company and assume the average amount raised per company bought 20% of its equity at the seed stage, this gives a conservative valuation of about $5.85m. This may provide a base rate from which we can anchor our valuations of PN.
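For transparency, here is the arithmetic behind that base rate as a minimal Python sketch. The 20% equity figure is my own assumption taken from the top of the 10–20% range quoted above; everything else simply restates the Crunchbase numbers.

```python
# Base-rate valuation sketch using the Crunchbase figures quoted above.
total_funding = 670e6       # $670m across the sampled cyber security startups
num_rounds = 964
num_companies = 572
equity_sold_at_seed = 0.20  # assumed: top of the 10-20% range

avg_per_round = total_funding / num_rounds        # roughly the ~$700k per round quoted above
avg_per_company = total_funding / num_companies   # ~ $1.17m per company

# If the average amount raised bought ~20% of the company,
# the implied post-money valuation is roughly the $5.85m quoted above:
implied_valuation = avg_per_company / equity_sold_at_seed

print(f"Average per round:      ${avg_per_round / 1e6:.2f}m")
print(f"Average per company:    ${avg_per_company / 1e6:.2f}m")
print(f"Implied seed valuation: ${implied_valuation / 1e6:.2f}m")
```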

Below are some examples of series A funding, and of seed funding where the companies already have a launched product. This may provide an estimate from which to value PN once the product is up and running. The articles do not detail the company valuations or the equity sold, so a conservative estimate would be that 50% of equity was sold. This places the value of the series A funded companies at around $20m. The companies listed are generally in a growth phase and are revenue positive, making them significantly lower risk.

  • Israel-Based IoT Cybersecurity Company ShieldIOT Raises $3.6 Million
  • Cyware Labs Raises $3 Million in Seed Funding.
  • Prevailion raised its $2M seed round from DataTribe in seed funding
  • Prevailion Secures $10M Series A Investment Led By AllegisCyber
  • Altitude Networks Raises $9 Million in Series A Funding
  • Confluera Raises $9 Million in Series A.
  • Trinity Cyber Secures $23 Million in Funding
  • DefenseStorm Secures $15M in Series A Funding

I have found the above analysis very sobering. It forced me to look at many startups in the cyber security space, all of which claim they are going to disrupt the space. Obviously not all of these startups are going to become titans of the AV world, so why PN? The analysis also indicates that the market-bottom market cap for PN is most likely a fair valuation, and that once the product is launched and generating revenue, a valuation of around $20m may be a better base-rate anchor.

Appeal to authority or the expert: Has the analyst obtained forecasting information from an authority, expert or some entity the analyst respects or admires? Does the analyst like the team members? Do the team members seem to fit the description of persons who would launch a successful startup? All of this may skew the analyst’s ability to be critical of the information received, or may cause a bait-and-switch (swapping the difficult question for an easier one).

Don’t be fooled by charisma or stereotype fit

Yes, the founding members do seem like technically capable persons able to deliver on a technical product. The quarterly AMAs do instill confidence by providing a friendly face to the project. There has been no significant celebrity associated with the project. The involvement of McAfee’s ex-CFO may be a false source of confidence.

The positive framing of the project should be tested against the team’s ability to deliver on milestones.

Swarm Technologies has been overoptimistic about its ability to deliver a product by a given deadline. Several goals were not achieved as planned just two months into the project, and mainnet launch is about 12 months behind the projected Q4 2018 date. They have also not delivered on the expansion of artifact types and end-point versions.

Swarm Technologies has expanded the product scope to include the following features not mentioned in the roadmap: artifact metadata, a communities feature for confidentiality and scaling (side chain implementations), YARA rules and historical threat hunting.

From this it may be reasonable to say that the team does deliver on product development, but that they are generally overly optimistic and suffer from the planning fallacy (as most teams do). They also did not reach their funding goal, which may be a source of product development delay.

Forecasting more of the same / status quo bias: It is easier to take the current trend and thinking and predict a continuation. This makes it easy to predict under stable conditions but very difficult to predict large changes (and it’s the large changes we care about). This is related to anchoring, where the anchor is the present state or present rate of something (e.g. housing prices are increasing at 10% per year, so the predicted price of my house next year is simply ‘current price * 1.1’). It is also related to the availability heuristic, where current sentiment and information are more easily recalled and therefore their probability of occurring in the future is overestimated.

A way to get out of this rut is to scenario plan, where the analyst uses his imagination to come up with all kinds of interesting alternative scenarios for possible change. The problem with this is that it activates the availability heuristic and causes the analyst to overestimate the possibility of some crazy change. If scenario planning is used, the options it comes up with should be treated as very low probability events, used more to get out of the status quo rut than to add to forecasts.

General forecasts on the cyber security market and threat trends predict substantial increases over the next few years. However, the stats indicate that the trend has somewhat plateaued. Here I shall not make any attempt to guess the future market size and shall exclude any attempt by others to do so.

Anyone who does this is lying to you

Bait and switch: When trying to answer a difficult question, the analyst may swap the difficult question for an easier question and then answer the easier one. The answer to the easy question becomes the answer to the difficult question (e.g. ‘Should I invest in this project?’ becomes ‘Does the CEO sound confident?’, and the answer to the latter is used as the answer to the former).

Here I believe I asked myself the following easy questions and substituted them for the real question: is this a project that will provide a high probability of a good return on investment?

  • Is this project significantly cheaper than its ICO price?
  • Have other people invested in this project at a much higher price?
  • Is this project a scam? If no, then the low price may make it a good deal.
  • Are lots of people working on the project?
  • Is the marketcap significantly lower than other projects on the market?
  • Has a respectable fund invested in the project?

All these questions had a yes answer, but none of those yes answers addresses the difficult question posed above. Simply recognizing this forces deeper analysis and reduces false confidence.

Availability heuristic: Here the analyst attempts to estimate the frequency of an event, or the future probability of its occurrence, by how easily it can be recalled from memory. This results in a highly skewed understanding of event frequencies. E.g. how often do people get 100x baggers in the crypto space? Examples easily come to mind because people who make a lot of money tend to shout loudly (people who lose a lot don’t say much), so the analyst overestimates the lottery winners and underestimates the large pool of losers.

The availability heuristic causes the analyst to over-weigh recent events when making a judgement on a project, as recent events come to mind more easily; the influence of these events is also magnified in the mind of the analyst. To prevent exaggerating the importance of recent or more memorable events the project has engaged in, each event is recorded along with its proposed effect on the probability of success of the project (the score). The score can be compared to other scores given to similar events, and the score will not automatically diminish over time (as occurs mentally), allowing past events to maintain their influence and more recent events to be judged in comparison to past events.

I collect all available objective data on Polyswarm and assign a score to the significance of each piece of data (e.g. project partnered with company X, score 1%). The score indicates the additional probability that the project will achieve some goal, such as becoming a top-10 cryptocurrency.
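As a minimal sketch of what such an event log might look like in practice (the events, dates and scores below are placeholders for illustration, not real data):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProjectEvent:
    """One recorded event and its assigned score.

    score_pct is the estimated additional probability (in percentage
    points) that the project reaches the chosen goal, e.g. becoming
    a top-10 cryptocurrency.
    """
    when: date
    description: str
    score_pct: float

# Hypothetical entries, for illustration only.
event_log = [
    ProjectEvent(date(2019, 3, 1), "Partnered with company X", 1.0),
    ProjectEvent(date(2019, 7, 1), "Testnet reaches 25 engines", 0.5),
    ProjectEvent(date(2019, 11, 1), "Mainnet delayed a further quarter", -1.5),
]

# Because every event keeps its original score, older events are not
# mentally 'discounted' and recent events can be judged against them.
total_adjustment = sum(e.score_pct for e in event_log)
print(f"Net adjustment to the base-rate probability: {total_adjustment:+.1f} pp")
```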

Be wary of placing too much weight on the easy-to-obtain information. Our brain likes this lazy, easy info and wants to base most of the decision on it. Also, easy-to-process and highly available information may already be more ‘priced in’, and it would be wise to ‘discount’ it somewhat.

A note: the large collection of data by the analyst will skew his perception of its relevance and importance. Here I am collecting huge amounts of data about cyber crime, which skews my ability to put cyber crime into context. I’ll start to overestimate its relevance and therefore the importance of the project and its likelihood of succeeding.

Framing effect: When choosing between different options, an analyst will be influenced by whether the options are framed positively or negatively (framing should not matter). This ties into loss aversion, where the analyst will try to avoid losses and keep gains (i.e. be conservative with gains and take risks to avoid a loss).

Framing also relates to how the information is being presented and its context. This will influence the analyst to be more bullish or bearish.

Frames which may exist when evaluating a crypto project include:

  • The fact that it is a crypto project: its prospects will be judged in the current atmosphere of the market (bullish or bearish). The project should be valued on its own merit, not on the general market.
  • Blockchain technology carries a very promising atmosphere, which may provide a bullish frame through which the analyst views all blockchain projects.
  • The project is framed as a new and innovative technology project, which makes it seem more exciting and full of potential. Instead it should be viewed in terms of its ability to provide a quantity of service to clients at a certain cost and margin. Or, from a more innovation-focused standpoint: to what extent may the project be able to reduce the cost of some good or service.

Representativeness: How well does a scenario or situation represent what is expected, a kind of stereotype? Stereotypes often work, but when superimposed on a situation they can skew the reality.

Avoiding representativeness: A scenario with lots of colorful assumptions and requirements (e.g. government X will do Y leading to Z which will cause ABC) makes for a good story which may seem to make sense because it is representative of what we expect. We should therefore list the assumptions that need to be met for the forecast to be true, and be cautious if the list of assumptions and requirements becomes long and has many dependencies.

What is required for PN to successfully provide a value-add service to clients, and for this to translate into an increase in token value?

1. Network economics need to incentivise the more accurate attestation of fringe artifacts.

- The use of bounties, staking, bidding and reward/punishment systems is expected to incentivise accuracy. It remains to be seen whether this will work.

2. Technical specifications need to allow for a system which provides more value than the costs of obtaining the value (decentralization and operating costs cannot be too high).

- Use of side chains (governance mechanisms unknown).

- Use of state channels?

3. A sufficient number of microengines need to operate on the network to provide value to paying clients.

- There are currently about 25 useful engines operating on the testnet. This would need to be increased to at least 60 to compete with VirusTotal.

4. PN needs to gain traction with regular paying clients.

- My understanding is that there are currently no paying clients.

5. The network economics and market psychology need to allow for a price increase as demand for network services increases.

- There is an incentive for microengines to hold their earned tokens for future attestations and for obtaining arbiter status.

- Arbiters are incentivised to hold tokens for reputation purposes.

- Tokens are held in escrow once an attestation is made until the arbiter’s verdict is provided. Thus higher volumes of attestations mean more locked-up tokens.

- Microengines run by companies may not be interested in the revenue aspect and may leave tokens locked up.

- Smaller microengine operators may be interested in speculating on the project by holding tokens (like cryptocurrency miners).

- Speculator interest in holding tokens.

- Alternatively, engines may always dump excess tokens immediately.

- Speculators will be too scared to hold tokens due to dumping by engines.

There are several requirements for project success, which is worrying, as the failure of any one of them could be detrimental (see the sketch below). However, the requirements don’t seem to produce a typical story, so I don’t believe there is any significant stereotype bias here. Further, the requirements are relatively stand-alone and not dependent on other requirements.
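To make the ‘failure of any one is detrimental’ point concrete, here is a small sketch. If the requirements really are roughly independent, the probability that all of them are met is the product of the individual probabilities. The numbers below are illustrative assumptions only, not estimates I am committing to.

```python
# Illustrative only: assumed probabilities that each requirement is met.
requirement_probabilities = {
    "incentives produce accurate attestations":  0.7,
    "value delivered exceeds operating cost":    0.7,
    "enough microengines join the network":      0.6,
    "regular paying clients are acquired":       0.5,
    "token economics capture the network value": 0.5,
}

# If the requirements are independent, the joint probability is the product.
p_all_met = 1.0
for p in requirement_probabilities.values():
    p_all_met *= p

# Even with individually decent odds, the joint probability is small:
# 0.7 * 0.7 * 0.6 * 0.5 * 0.5 ~= 0.07
print(f"Probability that all requirements are met: {p_all_met:.2f}")
```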

What you see is all there is: When making a judgment, we cannot make use of any information which is not known to us. We will always be limited by what we see and will completely exclude what we do not see. It is difficult to know how to attack this problem, but a solution may be found in working within groups whose members are able to bring forth varied options, ideas and information. Another may be to always ignore conventional wisdom or gut feel, but this may have as much downside as upside. For example, you may assume that all dogs behind walls bark, but that’s only because you are not hearing the dogs that don’t bark. The percentage of dogs that bark could therefore be anywhere between 1% and 100%, but according to your experience it will be 100%.

To avoid this problem, list the known unknowns (things you know about which may affect the project but on which you don’t have any information). Unfortunately we cannot list the unknown unknowns (things we don’t know about which can affect the project).

Known unknowns:

1. An unknown competitor provides a better product and has more client traction.

2. There is a code error which can be exploited to damage accounts or activity of network stakeholders.

3. There is a code error which causes a catastrophic system failure.

4. A series of mistakes by the system gives it a bad reputation that it cannot recover from.

5. A malicious member of staff drains project funding and inserts a back door to steal stakeholder NCT.

6. A malicious arbiter plants herself among the good guys.

7. Decentralization issues in the future make a centralized model more appealing.

8. A switch to a stablecoin-denominated network.

9. A significant client intends to use the network.

10. A whale decides to buy a ton of NCT.

11. Network economics allow for capturing of significant network value.

Don’t forget the importance of luck: A good project may not work because it was unlucky. Randomness and luck are crucial components of all success stories, although it may not seem that way in hindsight. It is better to look for projects which have already been lucky and do not require much further luck to be successful.

Overconfidence: The analyst will be overconfident about her own ability to beat the market despite the overwhelming evidence to the contrary. If the analyst believes she is better than the average analyst, she needs to provide evidence of above-average behaviour which may have a causal relationship with above-average results (such as spending more time than the average analyst researching a project, or having a better than average understanding of the basic technology). It would be more accurate to assume you will achieve the average or base rate and adjust from there based on evidence.

From overconfidence to confidence intervals: The analyst believes he is more accurate than he actually is. It can be useful to add confidence intervals to a judgement, i.e. to say that you are 90% confident that the probability of an outcome lies within a certain range. It turns out that humans make this range too narrow.
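One practical way to test whether your ‘90% confident’ ranges really behave like 90% intervals is to keep a record of them and check the hit rate afterwards. A minimal sketch, with made-up records purely for illustration:

```python
# Each record: (interval low, interval high, actual outcome).
# The numbers are invented purely to illustrate the calibration check.
interval_records = [
    (0.10, 0.30, 0.35),
    (0.40, 0.60, 0.55),
    (0.05, 0.15, 0.25),
    (0.20, 0.50, 0.45),
]

hits = sum(1 for low, high, actual in interval_records if low <= actual <= high)
hit_rate = hits / len(interval_records)

# A well-calibrated forecaster should see roughly 90% of outcomes fall
# inside their stated 90% intervals; most people score far lower.
print(f"Hit rate: {hit_rate:.0%} (target: 90%)")
```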

To assist with overconfidence: be wary of increased confidence in your judgement simply because you have lots of information. Evidence shows that people become more confident, but not more accurate, when more information is provided, probably due to confirmation bias.

Wishful thinking and avoiding bad thoughts: The analyst may engage in wishful thinking where he overestimates the probability of a positive outcome. Also the analyst may underestimate the probability of a bad event occurring.

I am not overly positive about the project and think there is a good chance of failure, although the expected value still appears positive.

To avoid wishful thinking, think of reasons why you could be completely wrong about the success of the project:

  • The team have never taken on a project of this nature and scale before.
  • Enterprises are just not ready to bring crypto into the business even if some benefits are realized.
  • Enterprises do not want their attestation results publicly available.
  • PN just cannot scale to meet demand, making the system too slow and expensive to be usable.
  • NCT price volatility is unacceptable.
  • Friction in acquiring and selling NCT.
  • Concerns about privacy and the need for working agreements between parties make a centralized network the more sensible option.

Vividness and storytelling: An easily understood story explaining why something may occur, especially if that something is made vivid, causes the analyst to overestimate its likelihood of occurring. When we are able to ‘see’ causal relationships and patterns, we are more likely to believe them. Projects which provide a product which is more vivid, captures the imagination or is easily explained will seem more real and more likely to succeed. E.g. projects targeting end consumers, such as a food security logistics project, will seem more likely to succeed than a back-end tool that helps businesses.

The project does not lend itself to being easily understood, nor is its product easily imagined.

Optimistic bias and illusion of control: The analyst tends to be overly optimistic about the future of a project, believing a turn-around is just around the corner. This behavior is probably related to loss aversion and the sunk cost fallacy. It is important for the analyst to understand that he has no control over the success or failure of the project (rolling a die softly does not make it land on lower numbers).

I try to see the project as an experiment which will or will not work.

Sunk cost: If the analyst has already invested in the project and the price has subsequently gone down, there may be an aversion to admitting he was wrong. The analyst will then be extremely biased in his belief that the project will do well as the alternative is accepting a loss and dealing with regret.

Some interesting things to note:

Loss aversion: Sure gains are preferred over probabilistic gains. If the expected value of the probabilistic gain is higher than the sure gain, the analyst should go for the probabilistic gain, as this will be better in the long run.

A probabilistic loss is preferred over a sure loss. If the expected value of the probabilistic loss is higher, this will erode the account over time. This preference prevents the analyst from selling a bad project that has lost money.
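A quick illustration of the expected-value point, with arbitrary amounts chosen purely for the example:

```python
# Gain side: a sure $50 versus an 80% chance of $70.
sure_gain = 50
ev_probabilistic_gain = 0.8 * 70    # = 56, so the gamble is better in the long run

# Loss side: a sure -$50 versus an 80% chance of -$70.
sure_loss = -50
ev_probabilistic_loss = 0.8 * -70   # = -56, so the sure loss is the better choice

print(ev_probabilistic_gain > sure_gain)   # True: take the probabilistic gain
print(ev_probabilistic_loss < sure_loss)   # True: the gamble on the loss is worse
```

Loss aversion pushes the analyst toward exactly the opposite choices: taking the sure gain and gambling on the loss.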

Missed gains are not seen as equal to losses: Given equal outcomes for behavior A vs behavior B, the analyst will put in more effort to avoid a loss than to obtain an equal gain, whereas gains should be treated equally to losses. E.g. a $10 discount for returning a rental car on time (normal price $100, so $90 with the discount) is not as effective as an equivalent penalty for returning the car late (normal price $90, with a $10 penalty if late). In both cases returning the car on time costs $90 and returning it late costs $100.

Peak-end rule: When judging something over a period of time, the analyst will focus on a major highlight of that period and on what the end point looked like, largely ignoring events over the rest of the period. E.g. Project A had a major partnership announcement during the year and launched mainnet at the end of the year. Project B had several major partnerships during the year and launched mainnet mid-year, but was very quiet toward the end of the year. Project A may look more appealing when doing a cognitive summary of the year’s performance.

The note-taking on all project events and information points, as mentioned under ‘availability heuristic’, takes care of this.

From the optimistic bias, the planning fallacy: the tendency to overestimate benefits and underestimate costs, impelling people to take on risky projects.

The team has certainly been overly optimistic regarding planning, and so have I. The project may take significantly longer than expected to provide good returns.

In short: Systematically going through each heuristic and bias forces me to let go of the story I was developing inside my head and view the project more objectively. The result of that objectivity is a significantly less optimistic self who sees the many ways the project may not work. However, I still recognize the potential of the project to succeed and the various bits of evidence which indicate the team is on the right track.

That’s part 2. Please feel free to comment and criticise below. I welcome feedback as this broadens my understanding.
