I truly believe AI is life-changing, and I want to highlight three final characteristics that AI as a technology is introducing. First, AI is disrupting the traditional IoT business model because it brings analytics to end customers instead of centralizing it.
Second, it is forcing businesses to keep customers in the loop, for several reasons:
i) it establishes trust in the product and the company;
ii) it increases client retention by building habitual behaviours;
iii) it improves the product appreciably through feedback.
This shift of focus toward the end user as part of product development is quickly becoming essential, to the point that it represents a new business paradigm, which I call the “Paradigm 37–78”. I named this pattern after the events of March 2016, when AlphaGo defeated Lee Sedol at the game of Go. At move 37, AlphaGo surprised Lee Sedol with a move that no human would have tried or seen coming, and thus it won the second game. Lee Sedol reflected on that game, grew accustomed to that kind of move, and built the habit of thinking from a new perspective. He came to realize (and trust) that the machine’s move was indeed superb, and in game four he surprised AlphaGo in turn at move 78 with something the machine did not expect any human to play.
The Paradigm 37–78 is a way to acknowledge that users are the real value driver in building an effective AI engine: we make the machine better, and it makes us better in turn.
The last thing AI is changing is the way we think about data. First, AI is pushing businesses to question whether more information is always good and whether the benefits increase linearly with volume. This matters because AI is trained on data, and that data has to be of high quality for the model to be effective (which is why Twitter users were able to turn Microsoft’s bot into a Hitler-loving sex robot).
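The point that model quality tracks data quality can be made concrete with a toy sketch. The setup below is entirely hypothetical (it is not any real system's pipeline): a trivial nearest-centroid classifier is trained twice on the same inputs, once with correct labels and once with a large fraction of one class's labels flipped, mimicking a polluted data source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: two well-separated clusters, one per class.
n = 200
X = np.concatenate([rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def train_centroids(X, y):
    # Nearest-centroid "model": just the mean of each class.
    return {c: X[y == c].mean() for c in (0.0, 1.0)}

def accuracy(model, X, y):
    preds = np.array([min(model, key=lambda c: abs(x - model[c])) for x in X])
    return float((preds == y).mean())

acc_clean = accuracy(train_centroids(X, y), X, y)

# "Poison" the training labels: flip 80% of class-0 labels,
# mimicking a low-quality or manipulated data source.
y_bad = y.copy()
zeros = np.flatnonzero(y == 0)
y_bad[rng.choice(zeros, size=int(0.8 * n), replace=False)] = 1.0
acc_noisy = accuracy(train_centroids(X, y_bad), X, y)

print(f"clean labels: {acc_clean:.2%}   noisy labels: {acc_noisy:.2%}")
```

The poisoned labels drag one class centroid toward the other, shifting the decision boundary and costing accuracy even though the underlying inputs never changed, which is the sense in which more data is only as valuable as its quality.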
It is also forcing us to reflect on storing data that matter (rather than storing them for the sake of it), and on using data exhaust correctly, i.e., data generated as a by-product of online actions. These are not core business data, and they are by definition multiplicative with respect to the initial information (and thus much larger in volume).
Finally, AI requirements clearly underline the cost-benefit trade-off of the inverse relationship between accuracy and implementation time (whether time to train the model, or time to produce results and provide answers). The discussion on this specific topic is highly dependent on the sector and the problem tackled: in some cases the dollar cost of learning is largely outweighed by the higher accuracy it buys, while in others a fast, responsive answer is worth far more than an extremely accurate one.
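The accuracy-versus-training-time trade-off can be illustrated with a minimal sketch, assuming a deliberately simple setup (plain gradient descent on a synthetic regression task, with iteration count standing in for compute cost): the same model fitted with a small and a large training budget lands at very different errors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic task: y = 3x + 1 plus a little noise.
X = rng.uniform(-1, 1, 500)
y = 3 * X + 1 + rng.normal(0, 0.1, 500)

def fit(n_iters, lr=0.1):
    """Plain gradient descent on mean squared error.

    n_iters is a stand-in for training cost: more iterations,
    more compute and wall-clock time.
    """
    w, b = 0.0, 0.0
    for _ in range(n_iters):
        err = w * X + b - y
        w -= lr * 2 * (err * X).mean()
        b -= lr * 2 * err.mean()
    return ((w * X + b - y) ** 2).mean()  # final training MSE

cheap_mse = fit(5)      # fast but rough answer
costly_mse = fit(500)   # 100x the compute, far lower error

print(f"5 iters: MSE={cheap_mse:.4f}   500 iters: MSE={costly_mse:.4f}")
```

Which point on that curve to pick is exactly the sector-dependent question above: a fraud detector may justify the 100x budget, while a latency-bound recommender may happily live with the rough fit.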
Data is by far the perfect good: it does not deteriorate over time and can be reused; it is multipurpose; and it multiplies through use and sharing. To date, it is clearly one of the greatest sources of competitive advantage for any machine learning firm, which is also a problem: data polarization might result in a few companies channeling and attracting most of the data traffic, with the others being (almost) completely excluded. Within a few years, this exponential trend might raise an enormous barrier to entry for the sector, compelling new entrants to form strategic partnerships with incumbents.
Fortunately, there are already stealth-mode companies working on reducing AI’s dependency on extremely large datasets (Vicarious or Geometric Intelligence, for example): machines should indeed be able to learn from just a few instances, as humans do. It is also no coincidence that these companies are led by academics: where the business solution is to feed the model more data (narrowing the bottleneck), academics instead focus on transforming the algorithms for the better, laying the foundation for the next evolutionary step.