LLM API Development vs. Traditional ML Development: A Practical Perspective

Rahatara Ferdousi
4 min read · Jun 26, 2024


This post compares traditional ML development with LLM API development, drawing on personal experience, practical analogies, and technical detail.

Over the last four years, I have had the privilege of working on various AI projects, experiencing firsthand the challenges and advantages of both traditional machine learning (ML) development and leveraging large language models (LLMs) through APIs.

Traditional ML Development: Challenges and Drawbacks

Expertise and Training Data

Traditional ML development demands significant expertise in machine learning. Developing a robust model involves understanding complex algorithms, fine-tuning hyperparameters, and often, creating custom solutions tailored to specific problems. One of the biggest hurdles is the requirement for vast amounts of labeled training data.

For instance, in a research project aimed at detecting anemia from conjunctival pallor images, our team faced substantial challenges in gathering enough training examples. The scarcity of labeled data significantly slowed down our progress and impacted the model’s performance. This is akin to trying to write a comprehensive book with only a handful of references – the quality and depth of content are inevitably compromised.

Overfitting was a recurring problem, driven by the lack of training examples.
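To make the failure mode concrete, here is a minimal, self-contained sketch (with illustrative toy data, not our anemia dataset): a model flexible enough to fit five noisy training points exactly achieves zero training error but a large error away from the training set.

```python
# Illustrative sketch: a model flexible enough to fit a tiny, noisy
# training set perfectly -- here, an exact polynomial interpolant in
# pure Python -- memorizes the noise and fails badly off the training set.

def lagrange_predict(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# True relationship is y = x; the last point is mislabeled (5 instead of 4).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 5.0]

train_error = max(abs(lagrange_predict(xs, ys, x) - y) for x, y in zip(xs, ys))
test_error = abs(lagrange_predict(xs, ys, 5.0) - 5.0)  # true y at x=5 is 5

print(f"train error: {train_error}")   # 0.0 -- a perfect fit to the noise
print(f"error at x=5: {test_error}")   # 5.0 -- the fit does not generalize
```

With more (and cleaner) training data the interpolant's wild swings would be averaged out, which is exactly why data scarcity hurt the model's performance.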

Computational Resources

Another major drawback is the computational power needed to train ML models. During an internship, I worked on a project to estimate the distance between poles from a moving car using live-stream video. Training the models required high-performance GPUs, leading to significant expenses for computational resources. This financial burden can be a barrier for many organizations, especially startups and research groups with limited budgets. Imagine trying to run a marathon but having to pay for every mile you run; the costs add up quickly, and it becomes a considerable strain.

Deployment Difficulties

Even when you overcome data and computational challenges, deploying traditional ML models can be cumbersome. In an ongoing project focused on identifying railway defects, we spent three years developing advanced technical solutions, yet struggled to deploy the model due to various technical restrictions, rendering our innovative approach less effective in practice. It’s like building a high-tech vehicle that can’t be driven on existing roads – despite its potential, practical use becomes limited.

LLM API Development: Benefits

Pre-trained Models and Generative Capabilities

LLM API development offers a stark contrast to traditional ML. Pre-trained LLMs, such as those provided by OpenAI or Google (Gemini), are trained on massive datasets and possess impressive generative capabilities. This means there is no need for extensive training examples when using these APIs, unless you are fine-tuning the model for a specific task. Even then, adapting a model’s behavior is often handled through system messages and instruction adjustments rather than a full training run. Think of it as having a well-stocked library at your fingertips, where you can pull out any book you need without having to write it yourself.
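As a sketch of that workflow, the snippet below builds a request in the widely documented chat-completions message format, steering a pre-trained model with a system message alone. The model name and the instructions are illustrative assumptions; no labeled training data is involved.

```python
import json

def build_request(system_instructions: str, user_input: str,
                  model: str = "gpt-4o-mini") -> dict:
    """Adapt a pre-trained model's behavior via a system message --
    no labeled dataset or GPU training run required."""
    return {
        "model": model,  # illustrative model name
        "messages": [
            {"role": "system", "content": system_instructions},
            {"role": "user", "content": user_input},
        ],
    }

# Hypothetical task: turn a general-purpose model into a domain assistant.
payload = build_request(
    "You are a railway inspection assistant. Answer concisely and "
    "classify any described defect as minor, major, or critical.",
    "We found a 3 mm surface crack on the rail head.",
)
print(json.dumps(payload, indent=2))
```

Changing the system message redefines the assistant’s role instantly, which is what makes this so much lighter-weight than retraining a model.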

Reduced Need for Computational Power

One of the most significant advantages of using LLM APIs is the reduced need for computational power. When integrating an API like OpenAI’s into your custom application, the heavy lifting is done on the provider’s end. This eliminates the need for high-performance GPUs and reduces operational costs. For instance, if we had used an LLM API for the pole detection project, we would have avoided the hefty expenses associated with GPU usage. It’s similar to outsourcing a heavy-duty task to an expert, saving you both time and resources.
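To see why no local GPU is needed, here is a minimal client sketch: the application only packages JSON and sends it over HTTPS, while all model computation happens on the provider’s servers. The endpoint is OpenAI’s documented chat-completions URL; the model name and key are placeholders, and actually calling `complete` requires a real API key.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_http_request(payload: dict, api_key: str) -> urllib.request.Request:
    """Package the chat payload as a plain HTTPS POST. Note what is absent:
    no GPU code, no model weights -- the provider does the heavy lifting."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def complete(payload: dict, api_key: str) -> dict:
    """Send the request and decode the JSON reply (needs a real API key)."""
    with urllib.request.urlopen(build_http_request(payload, api_key)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Placeholder key for illustration only; no network call is made here.
req = build_http_request(
    {"model": "gpt-4o-mini",
     "messages": [{"role": "user", "content": "Hello"}]},
    "sk-example-key",
)
print(req.full_url, req.get_method())
```

The client is ordinary standard-library HTTP code that runs on any machine, which is the whole point: the operational cost shifts from owning GPUs to paying per API call.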

Ease of Deployment and Innovation

Adopting generative AI technology has revolutionized our approach to AI projects. Since last year, we have successfully integrated multimodal LLM chatbots into our systems and even generated synthetic samples using the LLM itself. This shift has not only streamlined our processes but also opened up new avenues for innovation. The ease of deployment and flexibility offered by LLM APIs have been game-changers, allowing us to bring our solutions to market faster and more efficiently. It’s like switching from a manual typewriter to a modern computer – tasks that once took hours can now be completed in minutes.

Conclusion

In summary, traditional ML development, while powerful, comes with substantial challenges in terms of expertise, data requirements, and computational demands. On the other hand, LLM API development offers a more accessible and efficient approach, leveraging pre-trained models and reducing the need for extensive resources. My journey over the past four years has shown that adopting generative AI technology and LLM APIs can significantly enhance productivity and innovation in AI projects.

During my Master’s thesis, I worked mostly on AI in healthcare, specifically early-stage disease risk prevention. Those projects used simple textual and structured data, yet even then, building a handy tool was not feasible because of the complex requirements of traditional ML development. That tool now exists, built on an LLM. Try out this symptom checker for your well-being.

Which approach do you prefer? The detailed grind of traditional ML or the sleek efficiency of LLM APIs? Feel free to share your thoughts in the comments!


Rahatara Ferdousi

Doctoral Researcher at the University of Ottawa. Exploring AI-integrated Digital Twins to automate railway defect inspection and maintenance.