Navigating the AI Landscape: Insights from McKinsey and Lessons for AI Startups

Mark Monfort
Published in NotCentralised
4 min read · Aug 3, 2023

McKinsey’s recent report, “The state of AI in 2023: Generative AI’s breakout year”, provides valuable insights into the current state of AI, particularly generative AI. As the report highlights, the adoption of AI technologies is rapidly increasing, with one-third of surveyed organisations using generative AI in at least one business function. However, the report also raises concerns about the risks associated with AI, including inaccuracy, cybersecurity, and regulatory compliance.

For AI startups, these insights are crucial. The AI landscape is evolving rapidly, and while the opportunities are immense, so are the challenges. The recent layoffs at major tech companies, as well as the growing scepticism around AI hype, underscore the need for caution.

One key takeaway from the report is the importance of expertise. In the current market, many organisations are attempting to build products that are essentially thin wrappers over existing APIs. However, without specific expertise, these solutions are easily copied, leading to a lack of differentiation and competitive advantage.

At NotCentralised, we believe the key to success in the AI space is to build solutions with inherent intellectual property (IP) and deep expertise. The products we are building for clients are a testament to this approach: AI solutions grounded in a deep understanding of financial operations, AI and blockchain technology.

The McKinsey report also highlights the impact of AI on the workforce, with organisations anticipating workforce cuts and large reskilling efforts. This underlines the importance of human expertise in the AI space. While AI can automate certain tasks, it cannot replace the need for deep domain knowledge and strategic thinking.

The report also highlighted some key business concerns when it comes to AI.

Inaccuracy and cybersecurity stand out as two of the most significant of these concerns. At NotCentralised, we take them seriously, and below we suggest some strategies to address them.

Inaccuracy: Inaccurate AI outputs can lead to incorrect predictions or decisions, with significant implications in fields like finance and healthcare. To mitigate this risk, we focus on the following (each point is illustrated with a brief sketch after the list):

1. Data Quality: We ensure the data used to train our AI models is of high quality and relevant to the problem at hand. This includes rigorous data cleaning and preprocessing steps.

2. Model Validation: We employ robust validation techniques to evaluate the performance of our AI models. This includes using separate datasets for training and testing, cross-validation, and other statistical techniques to ensure our models generalise well to unseen data.

3. Continuous Monitoring: AI models can drift over time as the data they interact with changes. We have systems in place to continuously monitor model performance and retrain models as needed.
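
To make the first point concrete, here is a minimal sketch of the kind of cleaning and preprocessing we mean. The data and column names are invented for the example; real pipelines are considerably more involved.

```python
import pandas as pd

# Toy stand-in for a real dataset; the columns are invented for this example.
df = pd.DataFrame({
    "amount": ["120.50", "120.50", "not_a_number", "9800", "250000"],
    "repaid": [1, 1, 0, None, 1],
})

# Remove exact duplicates and rows missing the label we want to predict.
df = df.drop_duplicates()
df = df.dropna(subset=["repaid"])

# Coerce the numeric column, drop unparseable rows, and cap extreme
# outliers that would otherwise distort training.
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["amount"])
df["amount"] = df["amount"].clip(upper=df["amount"].quantile(0.99))
```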
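
On the second point, a standard pattern is to hold out a test set the model never sees during development and cross-validate on the remainder. The sketch below uses scikit-learn with synthetic data purely for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic classification data standing in for a real problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out a test set that plays no part in model development.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42)

# 5-fold cross-validation on the training portion estimates how well
# the model generalises before we touch the held-out set.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Final, one-time check on unseen data.
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```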
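
And on the third point, one simple drift signal among many is a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution with what the live system is receiving. The data and threshold here are illustrative; production monitoring would track many features and metrics.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    """Flag a feature whose live distribution differs significantly
    from its training distribution (two-sample KS test)."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Illustrative data: live inputs have shifted relative to training.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)
live = rng.normal(0.4, 1.0, 5000)

if feature_drifted(train, live):
    print("Drift detected - flag the model for review and retraining.")
```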

Cybersecurity: AI systems, like any digital system, are vulnerable to cyber threats. At NotCentralised, we address this risk through the measures below (the encryption and access-control points are illustrated with sketches after the list):

1. Secure Development Practices: We follow secure coding practices and conduct regular security audits of our codebase to identify and fix potential vulnerabilities.

2. Data Encryption: We use strong encryption for data at rest and in transit, ensuring that even if a breach occurs, the data is unreadable to unauthorised individuals.

3. Access Control: We implement strict access control measures, ensuring only authorised individuals can access sensitive data and systems.

4. Incident Response Plan: We have a robust incident response plan in place to quickly identify, respond to, and recover from any potential security incidents.
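
To illustrate the encryption point, the sketch below uses the Python cryptography library's Fernet recipe for symmetric encryption of data at rest; encryption in transit is typically TLS, handled at the infrastructure layer. Key handling is deliberately simplified here.

```python
from cryptography.fernet import Fernet

# In production the key would come from a secrets manager or KMS,
# never generated inline or committed to source control.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"account=12345;balance=9870.00"  # illustrative sensitive record
token = fernet.encrypt(record)             # unreadable without the key

assert fernet.decrypt(token) == record     # round-trips with the right key
```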
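
For access control, one minimal pattern is a role-based permission check at the function boundary. The roles, permissions and helper below are hypothetical stand-ins for a real identity provider or policy engine.

```python
from functools import wraps

# Hypothetical role-to-permission map; a real system would load this
# from an identity provider or policy engine, not hard-code it.
ROLE_PERMISSIONS = {
    "analyst": {"read_reports"},
    "admin": {"read_reports", "read_pii"},
}

def requires(permission):
    """Reject callers whose role lacks the given permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_role, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"{user_role!r} may not {permission}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires("read_pii")
def fetch_customer_record(user_role, customer_id):
    return {"id": customer_id}  # stand-in for a real data-store lookup

print(fetch_customer_record("admin", 42))   # allowed
# fetch_customer_record("analyst", 42)      # raises PermissionError
```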

By addressing these concerns proactively, we aim to build trust with our users and stakeholders, and ensure our AI solutions are both effective and secure.

Additionally, here are some further thoughts from us:

  • The rapid adoption of gen AI is a testament to its potential. However, organisations must be mindful of the risks associated with it, particularly inaccuracy. Robust validation and verification processes should be in place to ensure the reliability of gen AI outputs (a minimal sketch of such a check follows this list).
  • The impact on the workforce is a critical aspect to consider. While generative AI can automate certain tasks, it’s essential to have a plan for reskilling employees whose roles may be affected. This not only helps in managing the transition but also ensures that the organisation can fully leverage the potential of gen AI.
  • The report highlights that high performers are leading the way in generative AI adoption. This suggests that having a strong foundation in AI capabilities can provide a competitive edge in leveraging emerging technologies like gen AI.
  • Lastly, the fact that generative AI has become a focus for company leaders indicates its strategic importance. This underscores the need for leaders to understand AI and its implications to make informed decisions.
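
On the validation point above, a verification layer can be as simple as refusing any gen AI output that fails structural and sanity checks before it reaches a user. The expected schema and keys below are invented for illustration; real pipelines would add domain-specific checks and, where stakes are high, human review.

```python
import json

REQUIRED_KEYS = {"summary", "confidence", "sources"}

def validate_output(raw_text):
    """Gate a (hypothetical) structured gen AI response: it must parse as
    JSON, contain the expected keys, report a sane confidence value and
    cite at least one source. Anything else is rejected, not passed on."""
    try:
        payload = json.loads(raw_text)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict):
        return None
    if not REQUIRED_KEYS <= payload.keys():
        return None
    if not 0.0 <= payload.get("confidence", -1.0) <= 1.0:
        return None
    if not payload["sources"]:
        return None
    return payload

good = '{"summary": "Revenue rose 4%", "confidence": 0.82, "sources": ["Q2 report"]}'
print(validate_output(good) is not None)   # True
print(validate_output("Revenue rose 4%"))  # None: free text is rejected
```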

In conclusion, while AI will continue to be a critical driver of innovation, startups need to be mindful of the risks and challenges. Building solutions with inherent IP, deep expertise, and a focus on addressing real-world problems will be key to navigating the evolving AI landscape.

Mark Monfort
Co-Founder NotCentralised — data analytics / web3 / AI nerd exploring the world of emerging technologies