The Evolution of Modern AI Tools

Samy
3 min read · Jun 13, 2023


AI has always been the holy grail of Computer Science. Since the early days, researchers have been striving to create intelligent machines that could mimic human cognition.

[Image: Warren McCulloch and Walter Pitts]
  • 1943: Warren McCulloch and Walter Pitts introduced the concept of artificial neurons and proposed a computational model for neural networks.
  • 1956: The Dartmouth Workshop, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, marked the birth of the field of AI. Researchers explored topics like problem-solving, learning, and neural networks.

For a long time, AI was limited to university researchers and a few R&D labs.

That was until AI models (based on deep neural networks) became so advanced that they surpassed human performance on many tasks (image recognition, speech recognition and, lately, language understanding). Then a huge community of software engineers started to get involved with AI.

[Figure: Evolution of AI performance]

Many engineers enrolled in university courses (online and offline) to earn the newly hyped “data science” and “machine learning” degrees.

Powerful deep learning frameworks like Keras, TensorFlow (2015) and PyTorch (2016) saw the light of day and were quickly adopted by the community.
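To give a sense of why these frameworks caught on, here is a minimal sketch of an image classifier in Keras. The layer sizes and the dataset hinted at in the comments are illustrative assumptions, not taken from any specific project.

```python
from tensorflow import keras

# A complete (if tiny) image classifier in a handful of lines.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),    # e.g. MNIST-sized grayscale images
    keras.layers.Dense(128, activation="relu"),    # one hidden layer
    keras.layers.Dense(10, activation="softmax"),  # 10 output classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=5)  # training data not shown here
```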

Almost every company quickly became interested and recruited Data Scientists and Machine Learning Engineers to keep up with the race.

New data platforms like DataRobot and Dataiku became the new thing to use in production, reaching billion-dollar valuations by 2018–2019.

But companies trying to adopt AI ran into hard problems, failing about 85% of the time (Gartner) due to:

  • Lack of data / poor quality: it took weeks to clean data (when it existed at all) before anything useful could be done with it
  • Lack of qualified profiles: companies tried to do everything with a single “data scientist” who was expected to run a POC in one or two weeks, hit 95% precision and deploy to production right after that
  • Lack of understanding: management simply did not understand how AI models work, expecting “magic” to happen and the whole A-to-Z process to be handled for them

Some data labeling and MLOps tools are trying to address parts of these problems, but today the most promising way to handle them seems to be the “AI APIs” that are popping up like mushrooms. They offer a promising alternative for streamlining AI adoption.

These APIs are developed by companies that train deep learning models on their own data and expose them through APIs for anyone to use. No need for a degree in AI or Machine Learning: you just need to know how to make an API call. It’s the era of AI as a Service.
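As an illustration of how low that barrier is, here is what such a call might look like. The provider URL, endpoint and response fields below are hypothetical placeholders, not any specific vendor’s API.

```python
import requests

API_KEY = "your-api-key"  # issued by the provider, no ML degree required

# Hypothetical sentiment-analysis endpoint exposed by an AI API provider.
response = requests.post(
    "https://api.example-ai-provider.com/v1/sentiment",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"text": "The new release is impressively fast."},
    timeout=30,
)
response.raise_for_status()

# e.g. {"sentiment": "positive", "confidence": 0.97}
print(response.json())
```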

Recently, AI API providers like Cohere, DeepL and OpenAI surpassed billion-dollar valuations.

Hundreds of AI APIs exist on the market, exposing hundreds of different AI models (speech recognition, image processing, natural language processing, document parsing, translation, etc.) so that anyone can add AI capabilities to their apps.

Multiple providers offering the same AI feature can differ in terms of:

1/ Performance: since they are trained on different data:

  • Data type specialization
  • Domain Specialization
  • Languages handled

2/ Cost: usually pay-per-use models, but with different overall pricing

3/ Regulatory compliance: different server locations and data privacy policies

4/ User experience: some are easier to use than others
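To make these differences concrete, here is an illustrative sketch of querying two providers for the same feature (translation here) and comparing their latency and output. The provider names, endpoints and response shapes are made up for the example; real APIs will differ.

```python
import time
import requests

# Hypothetical endpoints for two providers offering the same feature.
PROVIDERS = {
    "provider_a": "https://api.provider-a.example/translate",
    "provider_b": "https://api.provider-b.example/translate",
}

payload = {"text": "Bonjour tout le monde", "source": "fr", "target": "en"}

for name, url in PROVIDERS.items():
    start = time.perf_counter()
    resp = requests.post(url, json=payload, timeout=30)
    elapsed = time.perf_counter() - start
    # Compare output quality, latency and price per call across providers.
    print(f"{name}: {resp.status_code}, {elapsed:.2f}s, {resp.json()}")
```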

This huge number of providers is great news, as it encourages competition and innovation. There is a fair chance that the next “GPT” won’t come from OpenAI.

Now you just need to try them all and make a choice! (Or do you?)
