Introducing: MIXONIUM Mind’s Eye 24.032

MIXology · Published in MIXONIUM Blogverse · June 5, 2024

The Intermodal AI Maker Space

June 2024: MIXONIUM's exceptional file-type handling and information-curation capabilities have recently moved to center stage in the global push to advance AI toward a new generation of enterprise productivity.

As the footrace toward LLM foundation models and application stacks unfolds around the world, one thing has become clear: the need for a UX paradigm that reimagines the iterative query and knowledge-building potential of these systems.

At MIXONIUM we have seen this wave coming for several years. IBM's Deep Blue beat Garry Kasparov at chess in 1997, and IBM's Watson, a question-answering and language-analysis system built on early Natural Language Processing science, beat Jeopardy! champion Ken Jennings in 2011. By 2018, Watson could evaluate a web page's content, summarizing its intent and substance. Then came Machine Learning applications that could build a 3D model of terrain by analyzing video from a drone. Today, we have systems that can take voice audio as input and generate text, audio, or even images in response.

The trend runs from tailored, task-focused code systems toward broader "General Intelligence" systems with a much wider scope of parameter processing.

Today, General Intelligence systems are trending toward what has been labeled "Multimodal" processing. MIXONIUM has been Multimodal since day one; that's why we called it "Ultra Media."

But here's the amazing thing: today's systems are configured at their core to handle multiple file types simultaneously. GPT-4o doesn't process voice to text in one module and then pass the result to another process; it tokenizes the audio directly. This is a new level of synthetic cognition we call "Intermodal" Language Processing (IMLP).
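
To make that distinction concrete, here is a minimal Python sketch contrasting the two designs. Every function in it is an illustrative stand-in we invented for this post, not OpenAI's internals or any real vendor API.

```python
# Illustrative sketch only: these stubs are stand-ins, not real APIs.

def speech_to_text(audio: bytes) -> str:
    """Stub ASR module; a real pipeline would transcribe here."""
    return "<transcript of audio>"

def pipelined_response(audio: bytes) -> str:
    """Older design: modalities handled by separate modules in sequence.
    Tone, timing, and emphasis are lost at the transcript boundary."""
    text = speech_to_text(audio)
    return f"LLM answer based only on: {text}"

def intermodal_response(audio: bytes) -> str:
    """IMLP-style design: the audio itself is tokenized into the same
    stream the model reasons over, so nothing is dropped in translation."""
    audio_tokens = list(audio)  # stand-in for a learned audio tokenizer
    return f"Answer conditioned on {len(audio_tokens)} raw audio tokens"

if __name__ == "__main__":
    clip = bytes(range(16))  # pretend this is a recorded spoken question
    print(pipelined_response(clip))
    print(intermodal_response(clip))
```

The pipelined version can only reason about the words; the intermodal version keeps the full signal in one token stream.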

These systems all align with the core value of MIXONIUM: what matters is the Story of an information dataset, not the file types. This is how humans interact, how we think, communicate, and flourish. It is a new world beyond segmented life in file-type silos like YouTube and SoundCloud, or in 3D warehouses like Sketchfab. It is a sort of meme-ification of media, if you will: ideas, concepts, and narrative win the race over static formats.

So we have created a new construct, something we call Ultra Media, in which the same Persistent Iterative Relational Dashboard (PIRD) used to store generative outputs can then serve as a structured prompt token index for further enrichment.
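
As a rough illustration of the PIRD idea, here is a hypothetical sketch in Python. MIXONIUM's actual schema is proprietary, so every class and field name below is an assumption chosen for clarity.

```python
# Hypothetical sketch of the PIRD idea; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Cell:
    media_type: str                 # "image", "audio", "3d", "text", ...
    content: str                    # a stored output, or a URI to one
    tags: list[str] = field(default_factory=list)

@dataclass
class Mix:
    """A persistent dashboard whose stored outputs double as prompt context."""
    title: str
    cells: list[Cell] = field(default_factory=list)

    def store(self, cell: Cell) -> None:
        """Persist a generative output into the grid."""
        self.cells.append(cell)

    def as_prompt_context(self) -> str:
        """Re-serialize the grid as structured prompt material for the
        next round of enrichment: the 'iterative' in PIRD."""
        return "\n".join(
            f"[{c.media_type}] {c.content} (tags: {', '.join(c.tags)})"
            for c in self.cells
        )
```

The loop is the point: what the dashboard stores today becomes structured prompt material for tomorrow's session.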

It is important to emphasize that the Ultra Media MIX construct retains the same search-based, drag-and-drop, no-code UX the platform has had since day one. The only change is that the "Fetch" function (our term for "search") is now an option next to "Fab," our term for "Generate."
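
In code terms, that toggle might look something like the following sketch. "Fetch" and "Fab" are our terms, but these function bodies are invented placeholders, not the platform's code.

```python
# Hedged sketch: the real back ends are search and generation services.

def fetch(query: str) -> str:
    """Search path: return an existing asset matching the query."""
    return f"<existing asset matching '{query}'>"

def fab(prompt: str) -> str:
    """Generative path: synthesize a new asset from the prompt."""
    return f"<newly generated asset for '{prompt}'>"

def fill_cell(mode: str, text: str) -> str:
    """One drag-and-drop gesture, two back ends."""
    return fetch(text) if mode == "fetch" else fab(text)

print(fill_cell("fetch", "supply chain dashboard"))
print(fill_cell("fab", "supply chain dashboard"))
```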

But here's the kicker: with this MIX Dashboard structure, a post can transport not only static data files out of Fetch and Fab; each cell can also contain active application structures, via API or via native scripting, and that scripting can be empowered by AI code-generation tools.

In generative mode, the MIX lets the user select not only which cell to publish to, but also whether AI should generate code that executes when the published cell is touched. AI Minority Report touch panels, in your own pocket.
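
Here is a speculative sketch of what a touch-activated cell could look like. The LiveCell class and the handler factory are hypothetical, not MIXONIUM's implementation; in production, generated code would of course run sandboxed with scoped permissions.

```python
# Speculative sketch of a touch-activated cell; names are hypothetical.
from typing import Callable

def make_greeting_handler(name: str) -> Callable[[], str]:
    """Stand-in for AI-generated code attached to a cell at publish time."""
    def on_touch() -> str:
        return f"Hello from the {name} cell!"
    return on_touch

class LiveCell:
    def __init__(self, label: str, handler: Callable[[], str]):
        self.label = label
        self._handler = handler  # generated code, vetted before publishing

    def touch(self) -> str:
        # A production system would run this in a sandbox with scoped
        # permissions; here we simply call it.
        return self._handler()

cell = LiveCell("virtual-artist", make_greeting_handler("virtual-artist"))
print(cell.touch())  # the published cell fires its generated behavior
```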

Of course, a completed MIX can be published to social media, to proprietary MIX galleries, to email, or as a web-page embed, and each post can be tagged for future classification.

Add this all together and you get a patented, proprietary platform.

And as a reminder, our MIXONIUM Studios team is fully up-skilled in the use of Generative AI tools to support our service offerings in content creation, curation, migration, and publishing, helping our clients explore what is possible. It is Imagineering of the most agile and exciting order, resulting in a new kind of media, and all of it can be published to any mobile device today. No need for an app: MIXONIUM lives in the browser.

And that's when the fun begins: users gain access to new and "Live" content packages, engaging with Ultra Media in settings ranging from warehouses, hospitals, and boardrooms to homes, schools, and even new kinds of resorts.

As the agile grid format evolves, individual cells will be able to run entire applications via API intercommunication, taking the dialog with AI systems to new heights and driving automation, productivity, and just-in-time relevance to the circumstances of the moment.

But stay tuned for further developments in this area. The pace of cognitive acceleration and collaborative synthesis is poised to change industries, from military intelligence and planning packets to entertainment publishing and product or service marketing. Imagine a MIX showcasing a home for sale. Imagine a MIX with Taylor Swift's latest song and video. Imagine a new virtual AI singer-songwriter who performs her song "live" when you click on that cell of the MIX.

Our dev team is cooking away on this vision, one that makes today's AI interfaces look like MS-DOS. We like games, great characters, music, and immersive experiences, and we're bringing that sensibility to this challenge. Wanna know more? Contact us, see the demos, and be a part of the revolution.

We are using AI to build this system. Sounds recursive? Well, that's because, you know, it sort of is.

We are not in Kansas anymore, folks — welcome to Ultra Media c. 2024!

Point of contact: hannah@mixonium.com
