EdgeCloud GenAI Showcase adds Image-to-Image Art Transfer and Editing

Theta Labs · Published in Theta Network · 3 min read · Jul 29, 2024

The Theta engineering team continues to deliver cutting-edge GenAI models with today’s addition to the EdgeCloud GenAI Showcase: Image-to-Image editing for art transfer, image upscaling, background removal and more. Since the launch of Theta EdgeCloud about a month ago, the platform has empowered AI teams ranging from large academic institutions such as KAIST in Korea to indie hackathon developers with unrivaled GPU price-to-performance. Theta’s long-term vision remains to deliver the first intelligent hybrid cloud-edge computing platform for AI, video, rendering and other GPU-intensive jobs, powered by over 30,000 globally distributed edge nodes in addition to cloud partners like Google Cloud and Amazon Web Services.

Today’s Image-to-Image AI model has been added to the EdgeCloud GenAI Showcase and Model Template Explorer, featuring several cutting-edge GenAI techniques for image editing:

  • Art style transfer
  • Background removal
  • Object erasing
  • In-painting
  • Image upscaling

Art style transfer allows you to take a content image and a style reference image, and combine the two into a pastiche of the first image in the style of the second. This can be used to emulate the style of your favorite cartoon, Renaissance painter, or any other style you choose. The technique uses neural style transfer (NST) to capture the statistical properties of each image and quantify how well the style is being applied.
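
To make the idea concrete, here is a minimal NST sketch in PyTorch, assuming torchvision is available. It is illustrative only, not the model served on EdgeCloud; the file names, layer choices and loss weights are placeholders. Style is captured with Gram matrices of VGG features, and the target image is optimized to match the content features of one image and the style statistics of the other.

```python
# Minimal neural style transfer (NST) sketch. Assumes PyTorch + torchvision;
# "content.jpg" and "style.jpg" are hypothetical input files.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram(feat):
    # The Gram matrix captures feature correlations, i.e. the "style" statistics.
    b, c, h, w = feat.shape          # batch size is 1 in this sketch
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Indices of VGG19 feature layers used for style and content losses.
layers = {"0": "style", "5": "style", "10": "style", "19": "style", "21": "content"}

def extract(x):
    feats = {"style": [], "content": []}
    for name, layer in vgg.named_children():
        x = layer(x)
        if name in layers:
            feats[layers[name]].append(x)
    return feats

content = load_image("content.jpg")
style = load_image("style.jpg")
target = content.clone().requires_grad_(True)
opt = torch.optim.Adam([target], lr=0.02)

c_feats, s_feats = extract(content), extract(style)
for step in range(300):
    t_feats = extract(target)
    content_loss = F.mse_loss(t_feats["content"][0], c_feats["content"][0])
    style_loss = sum(F.mse_loss(gram(t), gram(s))
                     for t, s in zip(t_feats["style"], s_feats["style"]))
    loss = content_loss + 1e5 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()

transforms.ToPILImage()(target.detach().cpu().squeeze(0).clamp(0, 1)).save("pastiche.jpg")
```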

Image upscaling uses algorithms to sharpen and add detail to lower-resolution images by predicting additional pixels, based on patterns learned from large sets of images. This is typically done with a convolutional neural network (CNN), a form of artificial neural network that operates on three-dimensional image data and is widely used for image classification and object recognition tasks.
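
The sketch below shows the shape of a simple CNN-based upscaler (in the spirit of SRCNN): interpolate first, then let a small CNN predict the high-frequency detail that interpolation cannot recover. This is an untrained placeholder for illustration, not the upscaling model hosted on EdgeCloud.

```python
# Minimal SRCNN-style super-resolution sketch in PyTorch (illustrative only;
# the weights here are random placeholders, not a trained model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer CNN that refines a bicubically upscaled image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),   # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, low_res, scale=2):
        # Enlarge with plain interpolation, then add the CNN's predicted detail.
        upscaled = F.interpolate(low_res, scale_factor=scale,
                                 mode="bicubic", align_corners=False)
        return upscaled + self.net(upscaled)              # residual refinement

model = SRCNN().eval()
low_res = torch.rand(1, 3, 128, 128)   # stand-in for a real image tensor
with torch.no_grad():
    high_res = model(low_res)          # shape: 1 x 3 x 256 x 256
```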

Background removal uses computer vision techniques to detect and separate the main subject from its background, instantly completing a task that used to be done manually in image editing software.
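
One common way to do this is with a pretrained segmentation network: classify every pixel, keep the foreground classes, and turn the result into an alpha mask. The sketch below uses torchvision's DeepLabV3 as an example; the Showcase model may use a different architecture, and the input path is a placeholder.

```python
# Segmentation-based background removal sketch (torchvision's DeepLabV3 used as
# an example model; "portrait.jpg" is a hypothetical input file).
import torch
from torchvision import models
from torchvision.models.segmentation import DeepLabV3_ResNet50_Weights
from PIL import Image

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("portrait.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]        # [num_classes, H, W]

# Keep every pixel the model does not label as background (class 0).
mask = logits.argmax(dim=0) != 0

# Resize the mask back to the original image and use it as an alpha channel.
mask_img = Image.fromarray((mask.numpy() * 255).astype("uint8")).resize(image.size)
cutout = image.convert("RGBA")
cutout.putalpha(mask_img)
cutout.save("portrait_no_background.png")
```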

Object erasing uses algorithms to segment an image into distinct regions for each object and the background. Once the unwanted object is removed, the more difficult task of filling in the missing pixels is done by convolutional neural networks (CNNs) performing complex calculations and pattern recognition, taking into account lighting, perspective, and depth.
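
The two-step pipeline (mask the object, then fill the hole) can be sketched as follows. For simplicity this uses OpenCV's classical inpainting as a stand-in for the CNN-based fill described above, with a hand-drawn rectangular mask instead of a segmentation model; paths and coordinates are placeholders.

```python
# Illustrative object-erasing pipeline: mask the unwanted region, then fill it.
# cv2.inpaint is a classical fill used as a simple stand-in for learned inpainting.
import cv2
import numpy as np

image = cv2.imread("street.jpg")                 # hypothetical input file
mask = np.zeros(image.shape[:2], dtype=np.uint8)

# In practice a segmentation model produces this mask automatically;
# here we simply mark a rectangular region covering the unwanted object.
x, y, w, h = 120, 80, 60, 90
mask[y:y + h, x:x + w] = 255

# Fill the masked pixels from the surrounding context.
erased = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("street_erased.jpg", erased)
```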

In-painting works similarly to object removal, by segmenting and inserting new objects into a given area of an image. Generative Adversarial Networks (GANs) are useful here: one network generates new pixels while a second, competing network “critiques” them, refining the output until the object is seamlessly integrated into the original image.
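
The toy PyTorch sketch below shows that adversarial setup in miniature: a generator proposes pixels for the masked region and a discriminator scores how realistic the result looks. Network sizes, mask coordinates and the training loop are placeholders, not a production inpainting model.

```python
# Toy generator/discriminator sketch of GAN-based in-painting (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fills the masked region of an image (4th input channel = mask)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        x = torch.cat([image * (1 - mask), mask], dim=1)
        filled = self.net(x)
        # Keep known pixels; use generated pixels only inside the mask.
        return image * (1 - mask) + filled * mask

class Discriminator(nn.Module):
    """Scores how realistic a (possibly inpainted) image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()

image = torch.rand(1, 3, 64, 64)                        # stand-in for a real photo
mask = torch.zeros(1, 1, 64, 64); mask[..., 20:40, 20:40] = 1.0

fake = gen(image, mask)
# The generator tries to convince the discriminator the filled image is real...
g_loss = bce(disc(fake), torch.ones(1, 1))
# ...while the discriminator learns to tell real images from inpainted ones.
d_loss = bce(disc(image), torch.ones(1, 1)) + bce(disc(fake.detach()), torch.zeros(1, 1))
```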

While these techniques are incredibly useful, the GPU power required to run them continues to grow. EdgeCloud is positioned to provide that GPU power through a seamless network of decentralized edge nodes and data centers. This allows EdgeCloud’s GenAI Showcase to keep growing in depth and capabilities, adding the new features and models that AI customers need.

Creators of the Theta Network and EdgeCloud AI — see www.ThetaLabs.org for more info!