“1 to n” Innovation Strategy: A Paradigm Shift in GenAI

0 to 1 → 1 to n [Image by the author]

China’s approach is often contrasted with the “0 to 1” innovation philosophy, which focuses on radical breakthroughs. Instead, China follows “1 to n” innovation, meaning it takes proven technologies and optimizes them for mass production, cost reduction, and market expansion.

This strategy is easy to spot in the key sectors that China currently dominates or is poised to dominate. For example, in the automotive industry, China’s “1 to n” strategy is evident in its rapid advancement in electric vehicles (EVs) and software-defined vehicles (SDVs). Instead of pioneering entirely new vehicle concepts, China has focused on scaling, refining, and commercializing existing automotive technologies to add value and achieve dominance.

This “1 to n” approach is also evident in GenAI, where DeepSeek’s R1/V3 models exemplify the strategy to a certain extent. Their success with model distillation, achieving strong performance with smaller 7B and 14B parameter models, is particularly noteworthy. As part of this incremental innovation, DeepSeek-R1 leverages large-scale reinforcement learning (its precursor, R1-Zero, was trained with pure RL, without any supervised fine-tuning), enabling models to solve problems step by step through chain-of-thought reasoning. Moreover, their open-source approach will enable the research community to develop more efficient and smaller models in the future. This contrasts with the proprietary approach taken by OpenAI and others, which, while effective, limits transparency and collaboration.
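
To make the reinforcement-learning side concrete, below is a minimal sketch of the group-relative advantage computation at the heart of GRPO, the algorithm DeepSeek reports using for R1: several answers are sampled for the same prompt, scored with a simple verifiable reward, and each answer is reinforced in proportion to how much it beats the group average. The function names and toy reward here are illustrative, not DeepSeek’s actual code.

```python
import statistics

def rule_based_reward(answer: str, gold: str) -> float:
    # Toy reward: 1.0 for an exact-match final answer, else 0.0.
    # DeepSeek-R1 similarly used verifiable, rule-based rewards
    # (answer correctness, output format) rather than a learned reward model.
    return 1.0 if answer.strip() == gold.strip() else 0.0

def group_relative_advantages(answers: list[str], gold: str) -> list[float]:
    # GRPO core idea: score a group of sampled answers for the same prompt,
    # then normalize each reward by the group's mean and standard deviation.
    rewards = [rule_based_reward(a, gold) for a in answers]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in rewards]

# Example: four sampled answers to "What is 17 * 3?"
advantages = group_relative_advantages(["51", "52", "51", "34"], gold="51")
print(advantages)  # correct answers receive a positive advantage
```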

However, the training data for DeepSeek models remains unknown, and OpenAI claims to have evidence that DeepSeek distilled knowledge from its models, violating its terms of use and infringing on its intellectual property.

In large language models (LLMs), knowledge distillation is a machine learning technique where a smaller “student model” learns from a larger “teacher model” to replicate its capabilities.
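
As a concrete illustration, here is a minimal PyTorch sketch of the classic soft-label distillation loss (Hinton et al.): the student is trained to match the teacher’s temperature-softened output distribution over the vocabulary. The tensor shapes and temperature value are illustrative assumptions, not any specific model’s setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature so the teacher's
    # "dark knowledge" (relative probabilities of wrong tokens) is preserved.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence between teacher and student, scaled by T^2 so
    # gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 positions over a 32k-token vocabulary
student_logits = torch.randn(4, 32_000, requires_grad=True)
teacher_logits = torch.randn(4, 32_000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```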

While many LLMs — such as OpenAI’s GPT and Google’s Gemini — are proprietary, distilling them without permission may breach terms of service or copyright laws. Even open-source models, like Meta’s LLaMA, maintain a balance between openness and proprietary control, often enforcing strict licensing terms that may prohibit distillation for commercial use.

This creates a moral paradox of mutual wrongdoing: while DeepSeek faces allegations of unauthorized knowledge transfer, OpenAI, Google, Meta, and others have themselves been accused of misusing copyrighted data to train their own models.

DeepSeek’s developers showcased that AI models can operate on less advanced chips at just 1/30th of the typical cost. However, the reported $5–6 million training cost is somewhat misleading: it covers only the compute for the final training run, based on the claim that 2,048 H800 GPUs were used. The hardware itself, 2,048 H800s, costs between $50 million and $100 million, raising questions about the true scale of DeepSeek’s resources, given that the company is backed by a large Chinese hedge fund that likely owns far more than 2,048 H800 GPUs.
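
A back-of-the-envelope calculation makes the gap concrete. The headline figure is a rental-style compute cost (GPU-hours times an hourly rate), not the capital cost of owning the cluster; the GPU-hour count below is the one reported in the DeepSeek-V3 paper, and the unit prices are rough assumptions.

```python
# Back-of-the-envelope: rented-compute cost vs. hardware capital cost.
# GPU-hour count as reported by DeepSeek; prices are rough assumptions.

gpu_hours = 2.788e6          # H800 GPU-hours reported for V3's final run
rate_per_gpu_hour = 2.00     # assumed $/GPU-hour rental rate
compute_cost = gpu_hours * rate_per_gpu_hour
print(f"Reported training (compute) cost: ${compute_cost / 1e6:.1f}M")  # ~$5.6M

num_gpus = 2_048
price_per_h800 = 30_000      # assumed $/unit; market prices vary widely
capital_cost = num_gpus * price_per_h800
print(f"Capital cost of 2,048 H800s:      ${capital_cost / 1e6:.1f}M")  # ~$61M
```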

In summary, the recent advances demonstrated and shared by DeepSeek have opened Pandora’s box, potentially triggering a paradigm shift: from Large Language Models (LLMs) to Small Language Models (SLMs), from proprietary to open-source approaches, and from the “0 to 1” innovation strategy to the “1 to n” strategy.

Written by Alaa Khamis
AI and Smart Mobility Professor at KFUPM | Ex-GM Technical Leader