(In)Dependency of an Art-Tool with Z-Workstations

Based on real events

Merzmensch
Mer-Z-perience

--

Latent Space Browser, RunwayML, ML Lab (RIP)

Can you sneak into the memory of a machine? You can. Or rather: you could. Back in 2020, a new epoch of creative collaboration with machines emerged: with prominent platform-based tools like ArtBreeder or RunwayML, one could benefit from Generative AI even without a hardware setup or Python knowledge.

ArtBreeder was based on GAN models and quickly gained huge popularity among artists and creatives (see my detailed essay). The application has evolved and is still under active development.

Another platform was RunwayML, a company that recently became famous for its video models Gen-1, Gen-2 and, since June 2024, Gen-3. These models allow users to create convincing video sequences via textual prompts.

But that is recent history. Before that, RunwayML was very popular among creatives thanks to its ML Lab.

What is (was) ML Lab?

This platform provided a huge, growing collection of open-source models, which were implemented bit by bit as they were published on GitHub.

Over time, it evolved into a profound tool for artists working with images, texts, videos, object detection, etc. (I wrote a lot about the features of RunwayML here.)

And artists appreciated its functions, especially the possibility of training GAN models on their own datasets.

s.myselle did it with Schiele's works, which became her long-term art project:

Eryk Salvaggio worked with historical photography using GANs:

The multimedia artist Scizors_Eth even provided an interactive way to explore the latent space of a GAN with his famous Latent Walk:

The possibility of looking directly into the latent space of a GAN checkpoint was the main strength of RunwayML, until…
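Runway's Latent Walk was a closed platform feature, but the underlying idea is simple and reproducible offline: a GAN generator maps latent vectors to images, so moving along a path between two latent vectors and rendering each intermediate point yields a smooth morph. A minimal NumPy sketch of that interpolation step (the function name and the 512-dimensional latent size are my assumptions; 512 is typical for StyleGAN-family models):

```python
import numpy as np

def latent_walk(z_start, z_end, steps=10):
    """Linearly interpolate between two latent vectors.

    Feeding each intermediate vector to a GAN generator would
    yield one frame of a smooth morphing animation.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_start + t * z_end for t in ts])

# Two random points in a hypothetical 512-dimensional latent space.
rng = np.random.default_rng(seed=0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)

frames = latent_walk(z_a, z_b, steps=24)
print(frames.shape)  # (24, 512): one latent vector per video frame
```

In a real pipeline each row of `frames` would be passed through the generator network; chaining several such segments between multiple latent points gives the continuous "walk" the artists used.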

RunwayML sunsetted their ML Lab.

They simply decided to delete it, probably to refocus on their revolutionary Gen video models.

But artists were devastated to lose the integral tools their art projects depended on.

A way out of dependency: my Z-Art vision.

As I wrote previously, there are two types of enablers: online services and hardware.

Meanwhile, there are many online services implementing GenAI models. Some of these models are proprietary (like Midjourney or DALL-E), but most are available as open source.

Artists pay hundreds of US dollars in subscriptions every month, and they cannot be sure that the service will still be available the next morning.

And here our story comes into play.

Since joining the Hewlett Packard Ambassador program, I have been focusing on liberating artists from their dependence on online software.

My goal is to co-develop, together with Z by HP, an ultimate art workstation:

  • which can work offline
  • which uses GPU power to speed up AI workflows
  • which allows even non-tech-savvy creatives to work, train, and generate without issues
  • which brings tech and art together.

And then we can realize Umberto Eco's vision of creating a new cultural epoch.

--


Merzmensch
Mer-Z-perience

Futurist. AI-driven Dadaist. Living in Germany, loving Japan, AI, mysteries, books, and stuff. Writing since 2017 about creative use of AI.