EdgeCloud for Mobile — unleashing AI computing on 3.9 billion Android devices

Theta Labs
Published in Theta Network · Mar 4, 2024

July 15, 2024 Update: In one of the biggest leaps forward in Theta’s history, the mobile version of the Theta Edge Node for Android devices is scheduled to launch on Sept 25, 2024. For the first time ever, the Theta team has implemented a video object detection AI model (VOD_AI) that runs on consumer-grade Android mobile devices, delivering true computation at the edge and enabling unparalleled scalability and reach. VOD_AI is a computer vision technique that scans video frames, identifies potential objects, and draws bounding boxes around them, a process similar to how the human visual cortex works.

In the example above, VOD_AI can be used to label objects in a video, such as “cup”, “sandwich”, or “fork and dining table”. In the future, each label could include the object’s location in the frame, as well as a timestamp indicating when the object appeared in the video. VOD_AI can also be used for video object tracking: following an object’s position throughout a video by analyzing each frame and drawing a bounding box around it. This technology has significant applications in the media and entertainment industry, particularly for user-generated content. Users who run the Edge Node mobile app will earn TFUEL rewards for sharing their mobile computation power.
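The post does not say which framework or model powers VOD_AI on Android, so the following is only a minimal sketch of what per-frame detection with labels, bounding boxes, and timestamps can look like on-device, using Google’s MediaPipe Tasks ObjectDetector in video mode. The model file name, score threshold, and frame interval are placeholders.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.core.RunningMode
import com.google.mediapipe.tasks.vision.objectdetector.ObjectDetector

// One labeled detection: what the object is, where it is in the frame, and when it appeared.
data class LabeledObject(
    val label: String, val score: Float,
    val left: Float, val top: Float, val right: Float, val bottom: Float,
    val timestampMs: Long
)

// Run frame-by-frame detection over decoded video frames.
// "efficientdet_lite0.tflite" is a placeholder model bundled in the app's assets.
fun detectObjects(context: Context, frames: List<Bitmap>, frameIntervalMs: Long): List<LabeledObject> {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setBaseOptions(BaseOptions.builder().setModelAssetPath("efficientdet_lite0.tflite").build())
        .setRunningMode(RunningMode.VIDEO)   // stateful, timestamped, frame-by-frame mode
        .setScoreThreshold(0.5f)
        .setMaxResults(5)
        .build()
    val detector = ObjectDetector.createFromOptions(context, options)

    val results = mutableListOf<LabeledObject>()
    frames.forEachIndexed { i, frame ->
        val timestampMs = i * frameIntervalMs
        val detections = detector.detectForVideo(BitmapImageBuilder(frame).build(), timestampMs).detections()
        for (d in detections) {
            val category = d.categories().firstOrNull() ?: continue
            val box = d.boundingBox()
            results += LabeledObject(
                category.categoryName(), category.score(),
                box.left, box.top, box.right, box.bottom, timestampMs
            )
        }
    }
    detector.close()
    return results
}
```

Object tracking would layer on top of this output, for example by associating boxes across consecutive timestamps.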

Users will also have the option to run mobile edge jobs only when the device is on WiFi and plugged in and charging, for instance overnight while they sleep. A global footprint of thousands of Android devices effectively means 24x7 coverage to complete even the largest video object detection jobs. For example, if 30,000 mobile devices around the world join the Theta edge network and each device works for 8 hours overnight, this yields 240,000 hours of computation power in a single day. By splitting a source video or a set of videos into as many as 14 million segments and parallelizing across these 30k devices, even ultra-high-resolution, complex and lengthy video can be processed seamlessly and effectively. This is truly groundbreaking computation at the edge.
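A back-of-the-envelope sketch of that arithmetic, using the 30,000-device, 8-hour, and 14-million-segment figures above; the even split of segments across devices is an assumption made only for illustration:

```kotlin
// Rough capacity math for overnight mobile edge jobs, using the figures in the post.
fun main() {
    val devices = 30_000              // Android devices opted in worldwide
    val hoursPerDevice = 8            // each works overnight while charging on WiFi
    val computeHours = devices * hoursPerDevice
    println("Aggregate compute: $computeHours hours/day")      // 240,000 hours

    // Splitting source video into small segments lets the work fan out across devices.
    val totalSegments = 14_000_000    // "as many as 14 million segments"
    val segmentsPerDevice = totalSegments / devices
    println("Segments per device: ~$segmentsPerDevice")        // ~466 segments each
}
```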

The Theta team invites you to participate in the pilot launch of the Mobile Edge Node beginning in September, and to share your feedback across our social channels. In the meantime, we encourage you to download and run the latest version of the Edge Node for Windows, Mac and Linux, and join the Theta EdgeCloud community.

Theta EdgeCloud and the 10,000+ active Edge Nodes around the world are the backbone that allows Theta to deliver fast, cost-effective compute work for AI and video applications over a decentralized network.

The goal of EdgeCloud for Mobile is to leverage up to 3.9 billion active Google Android devices for certain types of AI computation workloads. Android today accounts for 70% of the mobile operating system market share across 190 countries, and provides an ideal platform to evaluate mobile CPU/GPU capabilities.

This also sets up the opportunity to target over 150 million active Android TVs globally running the Android TV operating system from Sony, Sharp, Philips and many more brands. Average daily television viewing today is around 3 hours, which leaves the device available the majority of the time for other types of computation and data sharing, especially as these always-connected Smart TVs gain CPU and GPU capability over the next decade.

While intensive compute work is generally not feasible on mobile devices today, that is changing quickly. For example, TensorFlow Lite now enables on-device machine learning by allowing developers to run their trained models on mobile, embedded, and IoT devices, supporting platforms such as embedded Linux, Android, iOS, and microcontrollers (MCUs). In the near future, these devices will be able to process more and more complex computations, and therefore become significantly more valuable as nodes on Theta EdgeCloud. Today’s devices can already run certain types of jobs on the Theta Edge Node, making the 3.9 billion Android devices worldwide a massive potential addition to Theta EdgeCloud. In the future, with the ability to serve GenAI, LLM inference, and text-to-image among other use cases, mobile devices running Edge Nodes can serve an AI market valued at $200 billion today and projected to be worth more than $1 trillion in 2028.
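As a concrete illustration of the kind of on-device job TensorFlow Lite already enables, the sketch below loads a bundled .tflite model on Android and runs a single inference. The model file name and the [1, 224, 224, 3] to [1, 1000] tensor shapes are placeholders; real shapes depend on whatever model is actually deployed.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a .tflite model bundled in the app's assets.
// Note: the asset must be stored uncompressed for memory-mapping to work.
fun loadModel(context: Context, assetName: String): MappedByteBuffer {
    val fd = context.assets.openFd(assetName)
    FileInputStream(fd.fileDescriptor).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    }
}

// Run one inference on a decoded frame. "model.tflite" and the shapes are placeholders.
fun classifyFrame(context: Context, pixels: Array<Array<FloatArray>>): FloatArray {
    val interpreter = Interpreter(
        loadModel(context, "model.tflite"),
        Interpreter.Options().setNumThreads(4)
    )
    val input = arrayOf(pixels)             // batch of 1, shape [1, 224, 224, 3]
    val output = arrayOf(FloatArray(1000))  // class scores, shape [1, 1000]
    interpreter.run(input, output)
    interpreter.close()
    return output[0]
}
```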

The opportunity is to shift compute-intensive AI workloads to the edge as the cost of centralized GPU resources from all the major cloud service providers skyrockets, but this remains a challenging technical problem. For example, NVIDIA recently announced a private AI chatbot that runs locally on Windows PCs equipped with NVIDIA GeForce RTX 30 and 40 Series GPUs with at least 8GB of VRAM. Theta EdgeCloud, when fully launched in 2025, aims to seamlessly integrate high-end GPU capacity from cloud providers with billions of mobile and smart devices, in addition to desktops and laptops, into a unified AI infrastructure layer. This opens up a vast market to process on-demand AI workloads optimized for cost, quality of service, privacy and device capabilities.
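Theta has not published how that matching of workloads to devices will work; purely as an illustration of “optimized for cost, quality of service, privacy and device capabilities”, here is a hypothetical job descriptor and selection rule, with every name and field invented for this sketch.

```kotlin
// Hypothetical descriptors invented for this sketch; not Theta's actual scheduling model.
data class DeviceProfile(
    val memoryGb: Int, val hasGpu: Boolean,
    val onWifi: Boolean, val charging: Boolean,
    val tfuelPricePerHour: Double        // what the device owner asks to be paid
)

data class EdgeJob(val minMemoryGb: Int, val needsGpu: Boolean)

// Toy rule: a device is eligible if it meets the job's capability requirements
// and the owner's WiFi/charging policy; among eligible devices, pick the cheapest.
fun isEligible(job: EdgeJob, d: DeviceProfile): Boolean =
    d.memoryGb >= job.minMemoryGb && (!job.needsGpu || d.hasGpu) && d.onWifi && d.charging

fun cheapestDevice(job: EdgeJob, devices: List<DeviceProfile>): DeviceProfile? =
    devices.filter { isEligible(job, it) }.minByOrNull { it.tfuelPricePerHour }
```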

The Theta engineering team is excited to initially test Google’s new experimental MediaPipe on-device text-to-image generation solution for Android in the pilot launch of the mobile Theta Edge Node. The team is evaluating both standard diffusion-based text-to-image generation from text prompts and customized text-to-image generation using Low-Rank Adaptation (LoRA) weights. “Creating LoRA weights requires training a foundation model on images of a specific object, person, or style, which enables the model to recognize the new concept and apply it when generating images,” according to Google’s developer site.
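The MediaPipe image generation API is experimental and its Android surface may change, so rather than quoting it, here is a hypothetical wrapper interface sketching the two flows the team is evaluating: plain diffusion from a text prompt, and generation customized with LoRA weights. Every name, parameter, and path below is invented for illustration.

```kotlin
import android.graphics.Bitmap

// Hypothetical on-device text-to-image interface; NOT the actual MediaPipe API.
interface OnDeviceImageGenerator {
    // Standard diffusion: iteratively denoise random noise toward the text prompt.
    fun generate(prompt: String, iterations: Int, seed: Long): Bitmap

    // LoRA customization: weights trained on images of a specific object, person,
    // or style adapt the foundation model to that new concept.
    fun loadLoraWeights(path: String)
}

// Example usage under these assumptions (all names and paths are placeholders).
fun generateCustomImage(generator: OnDeviceImageGenerator): Bitmap {
    generator.loadLoraWeights("/models/custom_style_lora.bin")
    return generator.generate(
        prompt = "a teapot rendered in the custom training style",
        iterations = 20,
        seed = 42L
    )
}
```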


Creators of the Theta Network and EdgeCloud AI — see www.ThetaLabs.org for more info!