<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Oshiogwe M. Braimah on Medium]]></title>
        <description><![CDATA[Stories by Oshiogwe M. Braimah on Medium]]></description>
        <link>https://medium.com/@oshiogwe?source=rss-4159e6ecf711------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*ORYxbm8gUioq66NdY_mHgQ.jpeg</url>
            <title>Stories by Oshiogwe M. Braimah on Medium</title>
            <link>https://medium.com/@oshiogwe?source=rss-4159e6ecf711------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 07 May 2026 19:00:22 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@oshiogwe/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Eagles and Algos]]></title>
            <link>https://oshiogwe.medium.com/eagles-and-algos-74b4cfbb5d9f?source=rss-4159e6ecf711------2</link>
            <guid isPermaLink="false">https://medium.com/p/74b4cfbb5d9f</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[data]]></category>
            <category><![CDATA[robotics]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[neural-networks]]></category>
            <dc:creator><![CDATA[Oshiogwe M. Braimah]]></dc:creator>
            <pubDate>Wed, 12 Feb 2020 21:17:04 GMT</pubDate>
            <atom:updated>2020-02-18T22:18:29.414Z</atom:updated>
            <content:encoded><![CDATA[<h3>Eagles and Algos: a Non-Technical Primer on Robot Learning</h3><p><em>This article is targeted at the casual observer looking to acquire a basic understanding of machine learning models applied to robotics.</em></p><p>Machine learning has recently become something of a buzzword. The appendage of “ML” to any capability lends it an immediate halo of sophistication. But why has it really captured the zeitgeist? The answer lies in what computers have historically been able to do. For most of their existence, computers have done just two things: <em>store</em> copious amounts of data and <em>compute</em> complex calculations, with the latter being a tightly programmed output of the former. While impressive, there was always a missing piece: <em>learning</em>.</p><p>The belief was that if computers could learn, they could help out with menial tasks around the home or office and free up human resources for higher-level tasks. After several false starts, sprinkled with spasmodic progress, the last two decades have yielded some seminal research in making this aspiration a reality. This article is an attempt to take a closer look at some of these works, particularly three emergent learning models and one novel technique being used to help computers become better learners. We will also briefly explore how these models are applied to robots and finally, examine the implications of real-world use cases.</p><p>Golf can be a challenging sport to learn, let alone master. Early forays are riddled with embarrassing swings and misses, followed by the humbling realization that you perhaps could use some coaching.
These coaching sessions usually come with a barrage of seemingly contradictory instructions: <em>keep your head down, square your shoulders, tilt your knees, be firm in your grip but loosen your wrist, follow through…and keep your head down!</em> All these directions just to hit a stationary object sitting on a perfectly manicured patch of grass. You start off by barely making contact with the ball. But after hundreds of practice shots, you begin to consistently ping the ball over 150 yards.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z2sJbxI5CmKBP8DGfJDM9A.jpeg" /><figcaption>Source: eagleriver.org</figcaption></figure><p>This evolution from clueless golf enthusiast to respectable amateur is analogous to how a decision-making entity, also known as an <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">agent</a>, acquires new skills in machine learning. Data (coaching instructions) is fed to an agent (golfer) to train it. The agent uses an <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">algorithm</a> (brain and muscles) to make sense of the data. This training process generates a <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">model</a> (muscle memory), which is subsequently utilized to complete a specific task (hitting a golf ball into a hole).</p><blockquote>It is important to point out that the analogy between human and machine learning is an overly simplistic abstraction of what is otherwise a complex subject area, with aspects that sometimes offer no human parallels. Also, a number of terms used in this article are rather nuanced, with meanings that change depending on context. A <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">glossary</a> has been provided to define them as used in the context of this article.
These definitions should NOT be assumed to be universal.</blockquote><h3>Teachable Machines</h3><p>Language has been the cornerstone of instruction since its advent. It provides a reliable connective tissue between teacher and student. However, it has its limitations. Even for a fluent speaker, there are certain things that are difficult to communicate through language. For example, let us imagine a golfer’s first class. The instructor not only explains the intricacies of an ideal golf swing but also complements her instructions with several demonstrations. From listening to and observing the instructor, the student is then able to imitate her movements with increasing degrees of precision over time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*kAPbxLmiX363j2u8xR1aCA.jpeg" /><figcaption>Source: dice.com</figcaption></figure><p>A similar conundrum exists in machine learning but the workarounds are not as straightforward. <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">Natural Language Processing</a>, the medium through which computers synthesize human language, has its unique syntactic shortcomings. These include an inability to capture visual context. For example, if I say to my iPhone virtual assistant: “hey Siri, what is that?”, I will get a meaningless response from Siri like “you got me”. The reason is that the pronoun “that” could refer to anything, and without visual input, Siri is stumped. Unlike humans, who have an inborn intuition to augment their comprehension with imitation (or demonstration), machine agents do not innately possess this level of cognition. Instead, they must be taught how to learn continuously.
While this learning can occur in myriad ways, we will focus on three models: <em>model-agnostic meta-learning, imitation learning and reinforcement learning.</em></p><h3>Model-Agnostic Meta-Learning</h3><p>The process of learning how to learn is known as <em>meta-learning</em>. The primary objective of this field is to use an algorithm to train an agent on a specific task, such that it can then perform a variety of new but related tasks with the same model. This is achieved by constantly expanding the algorithm’s key parameters to enable it to handle unfamiliarity and complexity. Researchers such as <a href="http://people.idsia.ch/~juergen/diploma.html">Jurgen Schmidhuber</a>, <a href="http://proceedings.mlr.press/v48/santoro16.pdf">Adam Santoro, Sergey Bartunov and Matthew Botvinick</a> have done some amazing foundational work in this field.</p><p>Let us return to your golf-learning journey for a moment. After acquiring your new motor skills over several classes, you are now ready to play a full 18-hole course. The instructor teaches you how and when to use <a href="https://golftips.golfweek.com/many-golf-clubs-can-carry-1416.html">the 12 different clubs in your bag</a> (Irons, Woods, Wedges and Putter). Each time the ball lands in an unfamiliar scenario, you must learn which club is ideal for advancing it. Soon enough, you become pretty adept at navigating a full course. The specific body shape you assume for a particular type of shot, coupled with the motion your hands go through, is the model. The clubs, along with their use cases, are the parameters in meta-learning; and you must constantly update your knowledge of these parameters in order to keep the model competent.</p><p>Now imagine if the instructor limited you to only 4 clubs (1 of each type), every time you played 18 holes.
Sure, it is easier to carry 4 clubs around the course, but every shot now requires more manipulation since you no longer have a specialized tool for each one. You also have to assume a different type of body shape and motion (new model). In order to do this, you must learn to adapt your limited club selection to a greater array of shots by tweaking variables such as your grip, angle of attack and force of impact. These adjustments are collectively known as <em>model-agnostic meta-learning.</em></p><p>In the paper, <a href="https://arxiv.org/abs/1703.03400"><em>Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks</em></a>, <a href="https://ai.stanford.edu/~cbfinn/">Chelsea Finn</a>, <a href="https://people.eecs.berkeley.edu/~pabbeel/">Pieter Abbeel</a> and <a href="https://people.eecs.berkeley.edu/~svlevine/">Sergey Levine</a> present an algorithm for achieving meta-learning, irrespective of the underlying model architecture. Building on the work of Schmidhuber and colleagues, their research focuses on deep neural networks as a preferred model, while also demonstrating the algorithm’s amenability to different model architectures and problem sets, including image <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">classification</a> and <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">regression</a>. In addition, the algorithm bypasses the introduction of new parameters by simply fine-tuning the existing ones to achieve optimal results. The result is a model that enables an agent to learn on its own to solve new tasks, with minimal training input.</p><h3>Imitation Learning</h3><p>Let us check in on your golf-learning journey. So far, you have had an error-free first attempt at playing a full course, save for the occasional <a href="https://en.wikipedia.org/wiki/Mulligan_(games)">mulligan</a>. All that is left is to complete the final 18th hole.
You strike the ball off the tee and make clean contact. It travels 180 yards and lands…in the <a href="https://www.dictionary.com/browse/sand-trap">sand trap</a>! Now this is one scenario you have never encountered in training. Your instructor, without saying a word, grabs your sand wedge, drops another ball right next to yours and shows you a singular demonstration of hitting a ball out of the sand trap. You follow suit with the sand wedge, perfectly lifting your ball out of the sand trap and onto the putting green, just like she did moments ago. You have just demonstrated imitation learning.</p><p><em>One-shot</em> imitation learning is a facet of the meta-learning framework, applied to robots. It equips robots with the ability to solve new tasks, upon seeing a demonstration of a different but related task. This idea was proposed in the paper, <a href="https://arxiv.org/abs/1703.07326"><em>One-Shot Imitation Learning</em></a>, from a team of researchers at the <a href="https://bair.berkeley.edu/">Berkeley AI Research Lab (BAIR)</a> and the <a href="https://openai.com/">OpenAI</a> research lab. While somewhat similar to the model-agnostic meta-learning model, one-shot imitation proposes a neural network that can learn from a single demonstration.
The research team demonstrates this proposal by training a <a href="https://www.youtube.com/watch?v=A-VR21PfJnA">Fetch robotic arm</a> (video demonstration below) to stack various numbers of cube-shaped blocks into predetermined configurations, having observed a single demonstration of block-stacking by a human trainer.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FA-VR21PfJnA%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DA-VR21PfJnA&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FA-VR21PfJnA%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/6ccc2e3a795021b65e8bda51ba16c513/href">https://medium.com/media/6ccc2e3a795021b65e8bda51ba16c513/href</a></iframe><h3>Reinforcement Learning</h3><p>One-shot imitation learning can be considered both an alternative and a complement to <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">reinforcement learning</a>. First explored in great depth by <a href="https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf">Richard Sutton and Andrew Barto</a>, reinforcement learning offers skill acquisition through trial and error, without any human intervention. Beyond the time-intensive nature of ‘trial and error’ learning, reinforcement learning requires the specification of a reward function to define the task’s optimal end state. Intuitively, this is more arduous than a simple demonstration of the task. However, the real promise is in combining both models. Imitation learning can be used to kick-start an agent’s learning process and then subsequently replaced by reinforcement learning.
This creates a favorable asymmetry between the resources invested in training an agent and the capability it acquires.</p><p>By way of analogy, remember when your golf instructor showed you how to get out of a sand trap? Next time, you might have to hit a ball out of the <a href="https://www.nationalclubgolfer.com/news/golf-gossary-rough/">rough</a>, while she is nowhere to be found. You might fail a few times before you successfully make a meaningful enough impact to dislodge the ball from the rough. Your ability to figure this out is based on your reinforcement learning model, which was kick-started by an imitation learning model. One-shot imitation learning presents an opportunity to drastically reduce the time it takes to demonstrate a task to an agent, while reinforcement learning practically eliminates the need for demonstrations altogether. Together, they maximize the expected performance of agents on a variety of tasks.</p><h3>Data and Dollars</h3><p>We have seen how agents can be trained to perform novel tasks by observing and imitating a single demonstration or a handful of them, as well as by trial and error. However, regardless of how small these requisite training samples are, they still add up to a substantial amount of data. This presents both time- and capital-intensive hurdles to adequately training agents. This issue is further compounded by the paucity of robotics-specific labeled data suitable for training agents. For example, in order to train the Fetch robotic arm to stack blocks, the research team would have had to set up hundreds of different configurations of those blocks in the physical world, for observation by the agent. This takes up a chunk of time and is far from ideal.</p><h3>Domain Randomization</h3><p>What if there was a way for an agent to learn in a simulated environment and execute the model in the physical realm?
Just like Neo learning martial arts in a simulator to fight Morpheus in <em>The Matrix</em>.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FOBxMsUxXcXU%3Ffeature%3Doembed%26end%3D66&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DOBxMsUxXcXU&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FOBxMsUxXcXU%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/77395baf2a396c48535fbe834b461282/href">https://medium.com/media/77395baf2a396c48535fbe834b461282/href</a></iframe><p>That is exactly what domain randomization does. It is a technique for agents to learn a <a href="https://medium.com/@ombraimah/glossary-for-machine-learning-92aced9bf2f7">policy</a>, in a virtual reality simulation, that seamlessly transfers to the real world. In robotics, policies can be broken down into three primary types:</p><p><strong>1. Perception: </strong>ability to see its environment<br><strong>2. State (pose) estimation:</strong> ability to accurately locate objects relative to each other<br><strong>3. Control:</strong> ability to pick, grasp and drop items</p><p>Impressive research has been done to better understand how to simulate each one of these policies and transfer the learnings to the real world. Some amazing work has come out of Berkeley and OpenAI, focusing specifically on <a href="https://arxiv.org/abs/1703.06907">perception</a>, <a href="https://arxiv.org/abs/1903.03953">state</a> and <a href="https://arxiv.org/abs/1710.06425">control</a>.</p><p>It is worth mentioning the high degree of difficulty in tuning the parameters of a simulated environment to accurately represent the real world. It is a time-intensive exercise that is also susceptible to errors. And there are some physical effects, such as fluid dynamics, that still cannot be modeled in existing simulators.
This discrepancy between a simulator and the physical realm is called a reality gap, and it presents a non-trivial hurdle to using simulated data to teach robots that have to operate in the real world. Domain randomization bridges this reality gap by exposing the agent, during training, to a wide spectrum of variations of the simulated environment. Given a high enough number of variations, the real world appears to the agent as just another instance of the simulator. When combined with the previously discussed learning models, this offers the possibility of building teachable robots that can be trained by anybody, regardless of technical competence. All you need is a VR headset and a few demonstrations, and your newly trained robot is on its way.</p><h3>Putting It All Together</h3><p>As the field of machine learning and its applications grow exponentially, there is an increasing need for a cohesive approach to training agents. The models and technique discussed in this article should be seen as key joints that make up a lattice framework for optimal learning, all connected by a need for data efficiency. It is equally important to understand their core differences to get a better appreciation for how they might complement each other. The table below offers a snapshot of these differences:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/633/1*u51Zrp3G23cPwVreo-v4tw.png" /></figure><p>Finally, we must acknowledge that despite all the progress made in machine learning over the past twenty years, there is still work to be done. Nowhere is this more evident than in environments that possess a high degree of variability. Under such circumstances, machine learning tends to break down. Additionally, while machine agents are good at observation and imitation (replicating the <em>what</em>), they still lag in the intuition (understanding the <em>why</em>) behind actions. This presents one of the next frontiers in machine learning.
<a href="http://groups.csail.mit.edu/EVO-DesignOpt/groupWebSite/uploads/Site/DSAA_DSM_2015.pdf">Max Kanter and Kalyan Veeramachaneni</a> have done fantastic work on machine intuition if you care to read more.</p><h3>Why This Matters</h3><p>According to <a href="https://www.mckinsey.com/~/media/mckinsey/industries/advanced%20electronics/our%20insights/growth%20dynamics%20in%20industrial%20robotics/industrial-robotics-insights-into-the-sectors-future-growth-dynamics.ashx">McKinsey</a>, in 2017, the global installed base for industrial robots was just over 2 million units. By 2021, that number is expected to double. From automotive to pharmaceuticals, apparel, electronics and agriculture, many industries will experience an accelerated rate of automation. This hardware demand must also be accompanied by the development of <em>brains</em> for the robots. The development of these learning models will only benefit from increased availability of training data. Meta-learning, imitation learning, reinforcement learning and domain randomization can help turbocharge this flywheel effect to bring on the fourth industrial revolution.</p><p>Up until a few years ago, most of the progress in machine learning for robots was theoretical. However, recent breakthroughs have seen these cutting-edge technologies married with real-world applications. Companies such as <a href="https://covariant.ai/">Covariant</a>, <a href="https://www.dexterity.ai/">Dexterity</a>, <a href="https://www.kindred.ai/">Kindred</a>, and <a href="https://www.righthandrobotics.com/">Right Hand Robotics</a> are all building specialized hardware and software for making robots smarter. The low-hanging fruit for these technologies is in industrial and warehousing applications. The primary reason for this is the predictability of these environments.
They involve low-variance, repeatable tasks, such as picking and stamping, performed in perfectly controlled environments.</p><p>This automation will almost certainly precipitate a need to re-skill substantial chunks of the global workforce as we know it today. We will explore the social costs and policy implications of this trend in a subsequent piece. However, the immediate focus, within the scope of this article, is at the enterprise level. Corporate leadership teams should be proactively incorporating machine learning adoption into their long-term strategy. Failure to do so will only invite competitive disadvantage. And if your business is logistics or manufacturing intensive, this takes on heightened urgency. Automation of these operations should be viewed not with hostility, but as a force multiplier with material returns on investment.</p><h3>Acknowledgements</h3><p><em>Thanks to </em><a href="https://www.linkedin.com/in/karchopra/"><em>Karan Chopra</em></a><em>, </em><a href="https://medium.com/@ogbadebo_smith"><em>Orinola Gbadebo-Smith</em></a><em>, </em><a href="https://www.linkedin.com/in/devin-guillory-78528958/"><em>Devin Guillory</em></a><em> and </em><a href="https://medium.com/@dodopat"><em>Patrick Oladimeji</em></a><em> for lending me their expertise and helping me sort out my thoughts.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=74b4cfbb5d9f" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Glossary for Machine Learning]]></title>
            <link>https://oshiogwe.medium.com/glossary-for-machine-learning-92aced9bf2f7?source=rss-4159e6ecf711------2</link>
            <guid isPermaLink="false">https://medium.com/p/92aced9bf2f7</guid>
            <category><![CDATA[algorithms]]></category>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[robotics]]></category>
            <dc:creator><![CDATA[Oshiogwe M. Braimah]]></dc:creator>
            <pubDate>Wed, 12 Feb 2020 18:50:51 GMT</pubDate>
            <atom:updated>2020-02-14T18:11:11.932Z</atom:updated>
            <content:encoded><![CDATA[<h3>Glossary for Robot Learning</h3><p><em>This is a compact list of contextual definitions for terms relevant to machine learning applied to robotics, as used in my </em><a href="https://medium.com/@ombraimah/eagles-and-algos-74b4cfbb5d9f"><em>article on robot learning</em></a><em>.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/518/1*SuZAsLnnx_0sG48oCYieng.png" /><figcaption>Source: ctemag.com</figcaption></figure><p><strong>Agent: </strong>anything that makes decisions e.g. software or a piece of machinery. The two defining characteristics of an agent are the ability to perceive its environment (through sensors) and the ability to act upon that environment (through actuators)</p><p><strong>Affordance</strong>: an object’s property that shows the possible actions a user can take with it</p><p><strong>Algorithm:</strong> a set of rules or instructions executed by an agent with the objective of solving a specific problem</p><p><strong>Classification: </strong>a type of supervised learning where the output variable is discrete e.g. “Green”, “Yellow”, “Circle”, “Triangle”</p><p><strong>Degrees of Freedom:</strong> the number of independent variables that affect the range of states in which a system can exist or the number of directions in which motion may occur</p><p><strong>Environment: </strong>the world through which an agent moves</p><p><strong>Gradient Descent:</strong> an optimization algorithm used in training a model; it iteratively adjusts the model’s parameters in the direction that most reduces error, converging toward a local minimum</p><p><strong>Imitation Learning:</strong> a machine learning model where an agent learns a general ability by observing a human demonstration</p><p><strong>Model: </strong>a mathematical representation of an agent’s learning process.
Examples include supervised and unsupervised learning</p><p><strong>Natural Language Processing:</strong> a branch of artificial intelligence that helps computers make sense of human language</p><p><strong>Policy:</strong> the strategy used by an agent to take an action based on its current state. Also known as state-action mapping, a policy can be stochastic or deterministic.</p><p><strong>Regression: </strong>a type of supervised learning where the output variable is a real value e.g. “weight”, “height”</p><p><strong>Reinforcement Learning:</strong> a machine learning model where an agent learns from its own trial and error to arrive at a goal, without any human intervention or training data.</p><p><strong>State:</strong> a representation of an agent’s idea of the world</p><p><strong>Supervised Learning:</strong> a machine learning model where an agent learns a desired task from labeled data. The agent is trained with input-output data pairs that contain the correct solution for the desired task. It can be applied to either <strong>classification</strong> or <strong>regression</strong> problems</p><p><strong>Training: </strong>the process of teaching an agent by feeding it data</p><p><strong>Visual Motor Skills:</strong> skills that require continual visual feedback to execute an action</p><p><em>Thanks to the following sites for providing source material for these definitions:<br>- </em><a href="http://www.stackoverflow.com"><em>www.stackoverflow.com</em></a><em><br>- </em><a href="http://www.stackexchange.com"><em>www.stackexchange.com</em></a><em><br>- </em><a href="http://www.interaction-design.org"><em>www.interaction-design.org</em></a><em><br>- </em><a href="http://www.lexico.com"><em>www.lexico.com</em></a><em><br>- </em><a href="http://www.geeksforgeeks.org"><em>www.geeksforgeeks.org</em></a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=92aced9bf2f7" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[FOMO: Friend or Foe]]></title>
            <link>https://oshiogwe.medium.com/fomo-friend-or-foe-6ad5364f7b29?source=rss-4159e6ecf711------2</link>
            <guid isPermaLink="false">https://medium.com/p/6ad5364f7b29</guid>
            <category><![CDATA[cognitive-bias]]></category>
            <category><![CDATA[psychology]]></category>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[fomo]]></category>
            <dc:creator><![CDATA[Oshiogwe M. Braimah]]></dc:creator>
            <pubDate>Sun, 29 Jul 2018 21:33:05 GMT</pubDate>
            <atom:updated>2018-07-30T23:04:25.798Z</atom:updated>
            <content:encoded><![CDATA[<p>In the summer of 1965, Blue Ribbon Sports, a struggling startup, had an existential issue. The company was in the business of retailing modified running shoes. Its founder/CEO had bulk-ordered a substantial number of said shoes but struggled to find buyers. In an attempt to ignite the company’s fortunes, he penned a letter to college track coaches across the US. The letter highlighted the achievements of the few athletes who had already used this shoe. He went on to articulate the structural attributes which made the shoe unique, before throwing in a ringing endorsement from…his co-founder! Concluding the letter, he delivered the clincher:</p><p><em>Each model sells for $7.95. TIGER is not only better–it’s less expensive. As one runner said, “The only people who will be left wearing German shoes will be either uninformed or idiots.”</em></p><p><em>You are no longer uninformed.</em></p><p>Sales did not exactly take off following this calculated display of bravado. However, it was enough to grab the attention of a few coaches who promptly placed orders for their athletes. In that letter, the CEO managed to strike a delicate balance, taunting its recipients while tapping into their social anxiety.</p><p>The latter is a phenomenon known as FOMO (fear of missing out). This tactic, though subtle in its application, is stunning in its efficacy as a marketing tool. If you have ever sold an idea or a product, promoted an event, or even tried to convince your 5-year-old kid to eat her vegetables, chances are you have leveraged FOMO as a persuasive strategy. And the leadership team at Theranos, the once-heralded health technology company, was no different.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/398/1*9dPi-TtpspMWYiYwz6hEDQ.jpeg" /><figcaption>Source: www.readthespirit.com</figcaption></figure><p>Theranos set out to revolutionize blood testing.
Most blood tests are performed by first diluting blood samples obtained through venous draws. This dilution is necessary to suppress the interference of substances that could distort the test reading. For example, some tests involve passing a light signal through the blood sample. It follows that any light-absorbing pigments in the sample have to be suppressed prior to testing.</p><p>Theranos claimed to have developed a cutting-edge technology that could run a battery of tests on a blood sample obtained through a finger prick. One of the challenges with this approach is volume. A finger prick does not yield anywhere near as much volume as a venous draw. As a result, blood obtained through a finger prick would first have to be substantially diluted in order for the starting work sample to be usable. The danger here is that the excess dilution throws off the subsequent chemical reactions required to yield an accurate test reading. Despite its inability to solve this conundrum, Theranos went ahead and licensed its technology.</p><p>In his book, <a href="https://www.amazon.com/Bad-Blood-Secrets-Silicon-Startup/dp/152473165X"><em>Bad Blood: Secrets and Lies in a Silicon Valley Startup</em></a>, John Carreyrou brilliantly captures the tale of the meteoric rise and subsequent fall of Theranos. <em>Bad Blood</em> weaves a narrative of an enabling board of directors, intimidating executives, skeptical employees, vindictive friends, feuding families and a journalist determined to unearth the truth.</p><p>Reading through this book, it is hard to miss the persistent thread of FOMO at every turn. When Walgreens decided to make a bet on Theranos’ untested technology, despite a lack of peer-reviewed data, FOMO was at play. In this case, Walgreens was desperate to steal a march on its larger competitor, CVS.</p><p>When Theranos raised its largest funding round (over $430 million), it was able to convince some of the most prominent businessmen and women to part with non-trivial sums of money.
It is fair to assume FOMO was also at play in many of these investors’ decision-making. By the time the round closed, the list of investors read like a pantheon of the business world: Rupert Murdoch (News Corp), John Elkann (Fiat Chrysler), Robert Kraft (Kraft Group), Carlos Slim (Grupo Carso) and the Walton family (Walmart) had all thrown in their lot with Theranos. Carreyrou also cites how an investor convinced a friend to invest in an earlier round by using the well-worn refrain of <em>this could be the next big thing</em>.</p><p>With the benefit of hindsight, it is easy to question the judgement of these individuals. But it is equally important to note that none of these people attained financial success by lacking discernment. What transpired is hardly unique to Theranos. Seasoned Silicon Valley entrepreneurs have become adept at playing the “get in now or forever live in regret” card with potential investors. Before them, investment bankers perfected this as they helped clients raise capital in blockbuster IPOs. Nightclub promoters do this as they try to sell out VIP sections for their events. Real estate brokers? They are Zen masters of the art. The list goes on. And this should give us pause because on a daily basis, we all fall prey to this tool.</p><p>As social animals, we are poor judges of our own biases. We underestimate our flaws and overestimate our competencies. Following from this, we embrace the notion that we would never be as gullible as these investors.
In the words of Charlie Murphy: <a href="https://youtu.be/ddIz-ydl6Yk?t=188">Wrong, Wrong!</a></p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FddIz-ydl6Yk%3Fstart%3D188%26feature%3Doembed%26start%3D188&amp;url=http%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DddIz-ydl6Yk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FddIz-ydl6Yk%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/42133ea95082c2f8db0dc030603f74db/href">https://medium.com/media/42133ea95082c2f8db0dc030603f74db/href</a></iframe><p>This is what psychologists call the Dunning-Kruger effect: a cognitive bias that produces an illusory sense of superiority about our own abilities. And the first step to hedging against it is acknowledging our susceptibility to it. Failure to do so can be ruinous. In the case of the aforementioned investors, while this setback did not place them on the food stamp line, it was still a costly misstep. The repercussions of making FOMO-based decisions, induced by the Dunning-Kruger effect, can land us in a heap of trouble. One of the many side effects of FOMO is that it short-circuits our otherwise trusted decision-making apparatus and replaces it with signals that imply an increased probability of a successful outcome.</p><p>Let us look at the Theranos situation closely. A rational approach to investing in a business would involve some sort of financial analysis of the company’s historical and projected performance, due diligence on the leadership team and trying out the product. All of these take time and effort.</p><p>Now, let’s introduce the concept of opportunity ephemerality, which I define as the sudden elusiveness, real or perceived, of a value-capture event. It is a critical catalyst in leveraging FOMO.
And it manifests in one of two forms:</p><ol><li><strong>Exploding Offer:</strong> <em>“this sale is only available for 3 weeks”</em></li><li><strong>Crowded Interest with Limited Supply:</strong> <em>“35 people are looking at this hotel room so book now”</em></li></ol><p>In Theranos’ case, Carreyrou does not suggest what the catalyst was. For the sake of illustration, let’s assume the use of an exploding offer as the catalyst.</p><p>As you ruminate on making an investment, you get word that the second richest person in the world, the Chairman of News Corp, and the owners of Walmart have already committed to investing. That pop sound you hear is the first fuse in your decision-making circuit getting fried. But you still have some reservations, albeit little time, so you look to the company’s fiduciaries to glean some information.</p><p>Its board contains two former United States secretaries of state, a retired military general and an accomplished litigator. At this point, any remnant concern evaporates. And just like that, a thorough analysis of the company’s financials seems redundant (narrator: it’s rarely ever redundant). So you cut a check because the last thing you want to be is the person who forfeited their chance to back a once-in-a-lifetime technology. In the blink of an eye, the short-circuit is complete.</p><p>Let’s be clear: leveraging FOMO or falling prey to it is not necessarily a bad thing on its own. However, it is important to become cognizant of its latent presence in our everyday lives. This awareness equips us to make better decisions, especially in a world where we are increasingly saturated with information.</p><p>As for Blue Ribbon, they turned out okay. They managed to stave off a few more near-death experiences and emerge as the global powerhouse now known as Nike.
That precocious CEO was a lad named Phil Knight, and it did not hurt that his co-founder was the renowned University of Oregon track coach, Bill Bowerman.</p>]]></content:encoded>
        </item>
    </channel>
</rss>