Theta EdgeCloud:
Ushering in a new era of AI Computing.

Theta Labs · Published in Theta Network
4 min read · Feb 26, 2024

The Theta engineering team has been heads down for the greater part of the past year developing Theta EdgeCloud, one of the most advanced decentralized software platforms in edge computing. Its mission is to provide developers, researchers and enterprises large and small with unlimited access to GPU processing power for any AI or video task, at optimal cost. This approach brings the best of cloud computing to a decentralized system, powered by the Theta Edge Network.

This blog shares a sneak peek of the upcoming EdgeCloud Phase 1, estimated to launch on May 1, 2024. The foundation supporting EdgeCloud’s AI computing infrastructure has been in development for many years.

1. The Theta Edge Network launched in 2021 with Mainnet 3.0, focused on the GPU-intensive video processing required for encoding, transcoding and distribution. Today, Theta’s global network of nearly 10,000 active edge nodes run by community members comprises one of the largest clusters of distributed GPU computing power in the world. High-performance GPUs (~1,000 nodes) deliver 36,392 TFLOPS, medium-tier GPUs (~2,000 nodes) deliver 28,145 TFLOPS, and low-end GPUs (~7,000 nodes) deliver an additional 13,002 TFLOPS, for a total of approximately 77,538 TFLOPS, or about 80 PetaFLOPS, roughly equivalent to 250 NVIDIA A100s, always available.

This vast processing power, along with Theta’s access to an additional 800+ PetaFLOPS through our strategic cloud partners, can deliver the equivalent of upwards of 2,500 NVIDIA A100s, enough to train and serve some of the largest language models (LLMs). Generative AI models such as Stable Diffusion (text-to-image), Llama 2 (an LLM) and cutting-edge new text-to-video and text-to-3D models can easily run on EdgeCloud’s hybrid cloud decentralized architecture.
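As a rough illustration, the sketch below reproduces the capacity arithmetic above in Python. The per-GPU figure of roughly 312 TFLOPS of FP16 tensor throughput per NVIDIA A100 is our assumption, not a number given in this post.

```python
# Back-of-the-envelope capacity math for the Theta Edge Network figures above.
# Assumption: ~312 TFLOPS of FP16 tensor throughput per NVIDIA A100 (not stated in the post).
A100_TFLOPS = 312

edge_tiers_tflops = {
    "high_performance (~1,000 nodes)": 36_392,
    "medium_tier (~2,000 nodes)": 28_145,
    "low_end (~7,000 nodes)": 13_002,
}

edge_total_tflops = sum(edge_tiers_tflops.values())  # ~77,500 TFLOPS
partner_tflops = 800_000                             # 800+ PetaFLOPS via strategic cloud partners

print(f"Edge network total: {edge_total_tflops:,} TFLOPS (~{edge_total_tflops / 1000:.0f} PFLOPS)")
print(f"A100 equivalents (edge only): ~{edge_total_tflops / A100_TFLOPS:.0f}")
print(f"A100 equivalents (edge + partners): ~{(edge_total_tflops + partner_tflops) / A100_TFLOPS:.0f}")
```

Running this reproduces the figures quoted above: roughly 250 A100 equivalents from the edge network alone, and upwards of 2,500 once partner capacity is included.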

2. Access to distributed GPU processing power is not, by itself, sufficient to transform the AI computing landscape. In 2021, Theta first filed its patent for the “Edge Computing Platform supported by Smart Contract enabled blockchain network”. This set in motion the capability to build a next-generation hybrid computing architecture, in which computation tasks are registered on a blockchain and assigned to an edge computing node within a decentralized computing network through a secure peer-to-peer connection. Further, the solution may be verified on-chain by a smart contract and a token reward may be given to the participating node; a simplified sketch of this task lifecycle is shown below.

This patent was granted in September 2023, while the Theta core engineering team was already building the core foundation of EdgeCloud.
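Purely as an illustration of the architecture described in point 2, here is a minimal Python sketch of such a task lifecycle. This is not Theta’s actual smart contract, protocol or API; the class and field names are hypothetical.

```python
# Illustrative sketch only: a simplified model of the patented flow described above,
# not Theta's actual smart contract, protocol, or APIs.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ComputeTask:
    task_id: int
    payload_hash: str                     # hash of the task spec registered on-chain
    reward_tfuel: int                     # token reward offered for completing the task
    assigned_node: Optional[str] = None
    result_hash: Optional[str] = None
    verified: bool = False

@dataclass
class TaskRegistry:
    """Stand-in for the on-chain contract that registers, verifies and rewards tasks."""
    tasks: Dict[int, ComputeTask] = field(default_factory=dict)

    def register(self, task: ComputeTask) -> None:
        self.tasks[task.task_id] = task

    def assign(self, task_id: int, node_address: str) -> None:
        # In EdgeCloud the task would be delivered over a secure peer-to-peer connection.
        self.tasks[task_id].assigned_node = node_address

    def submit_and_verify(self, task_id: int, result_hash: str, expected_hash: str) -> int:
        """Verify the submitted result on-chain and release the reward if it checks out."""
        task = self.tasks[task_id]
        task.result_hash = result_hash
        task.verified = (result_hash == expected_hash)
        return task.reward_tfuel if task.verified else 0

# Example: register a task, assign it to an edge node, verify the result, pay the reward.
registry = TaskRegistry()
registry.register(ComputeTask(task_id=1, payload_hash="0xabc", reward_tfuel=100))
registry.assign(1, node_address="edge-node-0x123")
reward = registry.submit_and_verify(1, result_hash="0xdef", expected_hash="0xdef")
print(f"Reward paid: {reward} TFUEL")
```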

3. Lastly, the Theta team has always believed in building technology to solve immediate customer and market needs. As early as 2022, Theta began working with a number of AI partners and experts in natural language processing (NLP) and machine learning. This led to collaborations with Lavita.AI, FedML and, most recently, with Google Cloud on an “AI Model Pipeline for Video-to-Text Applications”, with uses in semantic video search, esports game highlight video generation and many other AI applications.

The Theta team is thrilled to share a sneak peek into EdgeCloud’s first release this Spring.

In the screenshots below, AI developers can easily select and deploy popular models such as Stable Diffusion, Llama 2 and many other out-of-the-box models with just a few clicks, and build their AI-powered apps on top. This library of templates can be expanded over time to cover the most popular AI models, including today’s chatbots and generative AI, as well as any new models in the future. AI developers can use familiar tools, including Jupyter notebooks, to launch their AI development.
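To give a sense of what building on such a deployment might look like, here is a minimal sketch of running a Stable Diffusion model from a Jupyter notebook using the open-source Hugging Face diffusers library as a stand-in; EdgeCloud’s own deployment interface is not described in this post and may differ.

```python
# Minimal sketch: generating an image with Stable Diffusion via Hugging Face diffusers.
# This is a generic open-source example, not EdgeCloud's own API or deployment flow.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint onto a GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt and save it locally.
image = pipe("a futuristic decentralized GPU data center, digital art").images[0]
image.save("edgecloud_demo.png")
```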

In a second release in the Spring-Summer timeframe, Theta will unveil an upgrade to the Edge Node software with an Elite+ Booster feature for Elite Edge Nodes (EENs) that have the full 500,000 TFUEL staked. This enables node operators to participate and share in the rewards from all EdgeCloud AI compute tasks.
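As a purely illustrative sketch, eligibility for the Elite+ Booster could be checked as below. The 500,000 TFUEL threshold comes from the announcement above; the pro-rata reward split is a hypothetical assumption, since the post does not specify how rewards are divided among nodes.

```python
# Illustrative only: the 500,000 TFUEL threshold is from the announcement above;
# the pro-rata reward split is a hypothetical assumption, not Theta's stated mechanism.
ELITE_PLUS_STAKE_TFUEL = 500_000

node_stakes = {"node-a": 500_000, "node-b": 750_000, "node-c": 120_000}

# Only nodes with the full 500,000 TFUEL staked qualify for Elite+ Booster rewards.
eligible = {n: s for n, s in node_stakes.items() if s >= ELITE_PLUS_STAKE_TFUEL}

task_reward_tfuel = 1_000
total_eligible_stake = sum(eligible.values())
payouts = {n: task_reward_tfuel * s / total_eligible_stake for n, s in eligible.items()}
print(payouts)  # e.g. {'node-a': 400.0, 'node-b': 600.0}
```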

Moving into H2 2024, advanced AI developers will be able to manage their entire AI pipeline, from prototyping in Jupyter notebooks to training AI models, including hyperparameter tuning, neural architecture search and model fine-tuning. These models can then be easily deployed and served across EdgeCloud’s network of GPUs: optimized, immediately accessible and scalable. These can eventually be run on Ray clusters or raw machines, for AI experts looking for even more control.
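As a small illustration of what hyperparameter tuning on a Ray cluster can look like in general, here is a sketch using Ray Tune’s classic tune.run API as an assumed stand-in; it is not an EdgeCloud-specific interface, and the toy objective function is purely a placeholder.

```python
# Generic Ray Tune example of hyperparameter tuning on a Ray cluster.
# Uses Ray's open-source API as a stand-in; EdgeCloud's own tooling is not shown here.
from ray import tune

def train_model(config):
    # Placeholder training loop: in practice this would train a real model with the
    # sampled learning rate and report a real validation metric.
    accuracy = 1.0 - abs(config["lr"] - 0.01)
    tune.report(accuracy=accuracy)

analysis = tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # search space for the learning rate
    num_samples=20,                              # number of trials to launch on the cluster
    metric="accuracy",
    mode="max",
)
print("Best config:", analysis.best_config)
```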

We are excited to share updates along the way as we push toward the Phase 1 release of EdgeCloud. Join us on Discord and Twitter for further discussion.


Creators of the Theta Network and EdgeCloud AI — see www.ThetaLabs.org for more info!