Nim Network: Bridging the Funding Gap of Open-Source AI

NIM Network
May 13, 2024 · 7 min read

Open-source AI is a powerhouse of brilliant engineers working collaboratively to tackle AI’s complexities. Their goal? To offer a competitive alternative to closed-source options, build unrestricted applications, and make AI more useful for everyday users and application builders.

But there’s a catch. Despite their technological and social contributions, open-source AI often lacks the robust funding and business models needed to expand and reach a broader audience.

This is where Nim steps in. Our mission is to fill this funding void and become the go-to funding source for open-source AI: making it easier to onboard talent at scale, and supporting that talent with our ecosystem partners, applications, and community.

We’re rolling out a novel framework for AI ownership and funding. This framework unites the Nim community, crypto builders, and AI innovation into a single, cohesive entity. And the excitement is building, with the first yield sale of an AI model scheduled for next month.

Yield Sale

To create a renewable source of liquidity and ensure fair distribution, AI model ownership tokens will be distributed based on staked asset deposits. For example, users can exchange their stETH yield for AI ownership tokens, leveraging an underutilized source of liquidity.

The yield from these deposits will be reinvested in low-risk on-chain strategies, including re-staking and lending, to optimize yield generation. This yield can then cover the operational costs of the AI models, token buy-backs, and general liquidity provision.
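
As a rough illustration of how that yield could be routed once generated (the bucket names and split ratios below are placeholder assumptions, not announced parameters), consider the following sketch:

```python
# Illustrative only: route the yield generated by deposited assets into the
# buckets described above. The split ratios are placeholder assumptions.
YIELD_SPLIT = {
    "model_operations": 0.50,     # operational costs of the AI models
    "token_buybacks": 0.30,       # buy-backs of the model ownership token
    "liquidity_provision": 0.20,  # general liquidity provision
}

def route_yield(yield_amount: float) -> dict[str, float]:
    """Split one period's generated yield across the buckets defined above."""
    assert abs(sum(YIELD_SPLIT.values()) - 1.0) < 1e-9
    return {bucket: yield_amount * share for bucket, share in YIELD_SPLIT.items()}

print(route_yield(12.5))  # e.g. 12.5 stETH of yield earned this period
```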

The conversion of deposited assets into AI ownership tokens should follow these parameters:

  • The size of the deposit
  • The lock time or time until withdrawal
  • The amount of NIM staked by the user at snapshot time

Final details will be shared in the upcoming weeks.
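
To make these parameters concrete, here is a minimal sketch of one possible weighting function. The formula, coefficients, and the shape of the NIM-staking boost are our own assumptions; the final details above will define the actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class Deposit:
    amount: float       # size of the deposit (e.g. in stETH)
    lock_days: int      # lock time, or time until withdrawal
    nim_staked: float   # NIM staked by the user at snapshot time

def allocation_weight(d: Deposit, max_lock_days: int = 365) -> float:
    """Hypothetical weighting: bigger, longer-locked deposits with more NIM
    staked receive a larger share of the AI ownership token distribution."""
    lock_factor = 1.0 + d.lock_days / max_lock_days       # up to 2x for a full lock
    nim_boost = 1.0 + min(d.nim_staked / 10_000, 0.5)     # capped, illustrative boost
    return d.amount * lock_factor * nim_boost

deposits = [Deposit(10, 90, 0), Deposit(10, 365, 5_000)]
total = sum(allocation_weight(d) for d in deposits)
print([allocation_weight(d) / total for d in deposits])   # pro-rata share of the sale
```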

The Hub for Open-Source AI

Tapping into the open-source innovation cycle

Open-source AI is growing rapidly. According to the AI Index report, the number of AI-related projects on GitHub has grown consistently since 2011, from 845 to about 1.8 million in 2023. In 2023 alone, the total number of GitHub AI projects rose sharply by 59.3%. The projects also gained popularity, with the total number of stars more than tripling from 4.0 million in 2022 to 12.2 million in 2023.

In the same year, 149 foundation models were released, which is more than double the number in 2022. Interestingly, 65.7% of these models were open-source, compared to 44.4% in 2022 and 33.3% in 2021.

Open-source models can compete with, or even outperform, larger, more expensive closed-source models. For example, fine-tuned open models like OpenChat and Mistral-Hermes are on par with larger closed-source models like GPT-3.5. The flexible nature of these models allows us to combine components from different models, improve fine-tuned models with additional training, and create specialized AI.
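
To make the point about combining components concrete, here is a minimal sketch of a naive weight-space merge between two fine-tunes that share the same base architecture. The checkpoint file names and the interpolation weight are placeholders; production-grade merges usually use more careful techniques.

```python
import torch

# Naive weight-space merge of two fine-tunes built on the same base architecture.
# Checkpoint paths are placeholders for illustration.
state_a = torch.load("finetune_a.pt", map_location="cpu")
state_b = torch.load("finetune_b.pt", map_location="cpu")

alpha = 0.5  # interpolation weight between the two checkpoints
merged = {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
          for name in state_a}

torch.save(merged, "merged_model.pt")  # can now be loaded like any other checkpoint
```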

At the same time, the business model for open-source AI faces significant challenges. The primary issue is distribution, i.e. getting the models actually used. While open-source AI models provide excellent accessibility and foster swift technological advancement, closed-source counterparts have built strong distribution moats by partnering with cloud providers and enterprises and by offering inference at an operational loss.

The second issue is the resource requirements for inference serving and training models. Open-source AI models often require significant computational resources for training and inference, which can be a barrier for smaller organizations or individual contributors who may not have the necessary resources or funding.

Lastly, the open-source nature of these AI models means they can be easily copied. This poses a challenge in terms of protecting intellectual property rights and, consequently, in monetizing the models. The creators of these models may not receive adequate compensation for their work if the models can be freely copied and used without any restrictions.

The key to building an ecosystem of AI applications is increasing the supply of specialized AI models

As a user, you might have come across different AI interfaces and platforms such as ChatGPT for text and code tasks, or DALL·E for image generation. However, a key aspect that's often missing from these models is customization: the ability to run them reliably and consistently for a specific task. Too often, the user experience is filled with hacks and endless iterations, which frustrates end users and makes it challenging for applications to integrate these sophisticated models.

While closed-source platforms like OpenAI do offer model customization via fine-tuning (a type of training that adapts the model to a specific domain), or let you insert external data such as a knowledge base or interactive search into the model, they are inherently limited in what they offer. In contrast, fully open-source models give AI builders a canvas to merge, re-train, and feed new data to the model without limits, using cutting-edge customization techniques.
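
As one concrete illustration of that flexibility (not a description of any specific Nim workflow), a parameter-efficient LoRA fine-tune of an open checkpoint can be set up in a few lines. The model name and hyperparameters below are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative only: adapt an open model to a specific domain with LoRA adapters.
base = "mistralai/Mistral-7B-v0.1"  # example of a popular open checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights will be trained

# From here the model can be trained on domain data (knowledge bases, chat logs,
# community content) with any standard training loop or the Trainer API.
```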

For applications, the future lies in fine-tuned models which have been adapted to specific use-cases. Open-source AI enables the fastest innovation cycle, with state-of-the-art research and engineering being utilized instantly and tested with models.

To truly empower crypto AI applications, we believe there must be an influx of AI model supply. This will channel innovation and specialized models from Web2 AI engineers directly to crypto applications, creating a custom-made funnel that incentivizes model creation in a permissionless manner.

Specialized AI models

Providing ownership via funding

Crypto offers an unparalleled pathway for monetizing open-source AI, thanks to its unique properties. These properties foster a compounding interest in software, network effects, and liquidity, opening up new possibilities for AI development and deployment.

One of the key properties is the composability of AI models and software. By making exceptionally useful AI software available for on-chain use, we can drive its implementation and usage across applications. AI core components can be utilized by other apps even without an in-depth understanding of their inner workings. This composability facilitates reuse and allows functions to be fulfilled in ways that were previously impossible.

Consider an example where a highly skilled AI engineer creates a stable diffusion model for personalized community image generation. Providing access to such a model on-chain can spur a multitude of applications. Anyone is able to build on top of the model to power their decentralized app (dapp) or protocol. This could pave the way for the creation of virtual artists, AI NFT collections, or even games. In essence, this creates a permissionless discovery engine around cutting-edge AI models, enhancing their accessibility and usability.
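
To sketch what consuming such a model could look like from a dapp's side, the snippet below requests an image generation from a hypothetical on-chain AI oracle contract. The RPC URL, contract address, ABI, method name, and fee are all placeholders, not a real deployment.

```python
from web3 import Web3

# Hypothetical example: a dapp backend requesting an image-generation job from an
# AI model exposed through an on-chain oracle contract.
w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))

ORACLE_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
ORACLE_ABI = [{
    "name": "requestGeneration",
    "type": "function",
    "stateMutability": "payable",
    "inputs": [{"name": "prompt", "type": "string"}],
    "outputs": [{"name": "requestId", "type": "uint256"}],
}]

oracle = w3.eth.contract(address=ORACLE_ADDRESS, abi=ORACLE_ABI)
tx = oracle.functions.requestGeneration("pixel-art fox mascot").build_transaction({
    "from": w3.eth.accounts[0],
    "value": w3.to_wei(0.001, "ether"),  # per-call fee routed to the model's fee pool
})
# ...sign and send the transaction, then wait for the oracle callback with the result.
```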

Furthermore, establishing a liquidity layer and economic value is vital to bootstrap AI. Early ecosystem participants provide liquidity in return for ownership, usually through token distribution. The initial liquidity pool is then used to cover the operational costs of developing and deploying the model for on-chain use. As the ownership token accrues value, liquidity can increase. This is further amplified by the fees collected from the growth in usage and the increasing number of applications powered by the model, thereby driving more demand.

AI access on-chain

A vital aspect of tokenizing AI models and ownership is the capacity to confirm that AI models are being utilized to generate on-chain responses. For this purpose, we are adopting an oracle approach. Although running AI programs on-chain may not be feasible due to their intensive computational requirements, the blockchain can function as a source for verifying the authenticity of the computations.
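
Conceptually, the data flow looks like the sketch below: the heavy model inference runs off-chain, and the chain only checks a succinct attestation. The prove and verify helpers here are hypothetical stand-ins (a plain hash, not a real proof) for a ZK or optimistic backend.

```python
import hashlib

# Conceptual sketch of the oracle pattern: AI compute happens off-chain, and only a
# succinct commitment plus proof is checked on-chain. The "proof" below is just a
# hash used as a stand-in; a real system would use ZK or optimistic verification.

def run_model_off_chain(prompt: str) -> str:
    return f"response to: {prompt}"  # placeholder for real model inference

def prove(model_id: str, prompt: str, output: str) -> bytes:
    # Stand-in for producing a proof that `output` is the model's genuine result.
    return hashlib.sha256(f"{model_id}|{prompt}|{output}".encode()).digest()

def verify_on_chain(model_id: str, prompt: str, output: str, proof: bytes) -> bool:
    # The chain re-checks only the succinct proof; it never reruns the model.
    return proof == hashlib.sha256(f"{model_id}|{prompt}|{output}".encode()).digest()

output = run_model_off_chain("summarize this proposal")
proof = prove("llama-2-7b", "summarize this proposal", output)
assert verify_on_chain("llama-2-7b", "summarize this proposal", output, proof)
```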

Our AI oracle approach is rooted in three different components:

Ability to validate off-chain AI compute: We've been working to create the world's best benchmark for ZK verification of AI with Ligero. We achieved the world's first proof for the Llama 2 7B model, one of the most popular open-source models, with a 1024x improvement in memory and a 66x improvement in speed compared to previous attempts. Other solutions, such as opML championed by ORA Protocol, provide a low-cost way to bring AI on-chain through an optimistic approach.

Optional privacy: While the rapid innovation cycle of open-source AI makes it virtually impossible for a single model configuration to stay ahead for long without alternatives, there is still room for model privacy, which gives specific fine-tuned models better initial protection of their IP. A solution can be found in ZK verification that lets parts of the model run privately without fully revealing it.

Compatibility of software and revenue: Using an oracle can enforce economic rules for how AI models are used on-chain across different applications. For example, each AI call can carry a fee that is directed into a fee pool governed by the model's token holders, or factory NFT contracts can be required to pay royalties into the pool of a virtual artist created on-chain. Many other plug-ins can be added and customized, for example using standards such as ERC-7007.
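
A simplified, purely illustrative sketch of such a fee pool, distributing accumulated per-call fees pro-rata to the model's token holders:

```python
# Illustrative fee-pool accounting: every AI call pays a fee into the pool, and the
# pool is distributed pro-rata to holders of the model's ownership token.
class ModelFeePool:
    def __init__(self, holders: dict[str, float]):
        self.holders = holders  # address -> ownership token balance (placeholder data)
        self.pool = 0.0         # accumulated fees, e.g. in ETH

    def record_ai_call(self, fee: float) -> None:
        self.pool += fee

    def distribute(self) -> dict[str, float]:
        supply = sum(self.holders.values())
        payouts = {addr: self.pool * bal / supply for addr, bal in self.holders.items()}
        self.pool = 0.0
        return payouts

pool = ModelFeePool({"0xalice": 600.0, "0xbob": 400.0})
for _ in range(1_000):
    pool.record_ai_call(0.001)   # 1,000 AI calls paying a 0.001 ETH fee each
print(pool.distribute())         # ~60% of collected fees to 0xalice, ~40% to 0xbob
```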

We will break down all the components of our vision and framework in the following weeks.

Follow us on X @nim_network and Discord to get the latest on NIM's partnerships, launch timing, and other updates.
