Runway could forever change the landscape of film and video production.

Vipin Singh · Published in B8125-Spring2023 · Apr 10, 2023 · 4 min read

In 2018, three visionaries — Cristóbal Valenzuela, Alejandro Matamala-Ortiz, and Anastasis Germanidis — founded Runway, a trailblazing company specializing in generative AI. What began as a small thesis project at New York University’s Tisch School of the Arts has since grown into a formidable player in the AI industry.

Unlike most AI companies, which flock to California’s tech hubs, Runway is headquartered in New York. Despite the unconventional location, the company has gained substantial recognition for Stable Diffusion, the text-to-image model it co-developed, which generated some ten million images within just a few months of its 2022 launch. Users type a text prompt, and the model turns it into an image almost instantly.

However, Runway’s success did not come without challenges. To scale the product and bring it to more people, the company relied on Stability AI for the necessary computing power. This partnership fueled Stable Diffusion’s popularity, but Runway’s decision to open-source the model also drew scrutiny. Although transparency in research is essential, not all companies in the AI space follow this principle. OpenAI, for instance, has shifted away from openly publishing and sharing code, with its chief scientist and co-founder, Ilya Sutskever, stating that the earlier idea of sharing research was “flat out… wrong.”

Despite this divergence in research philosophy, Runway continues to prioritize transparency, an approach that has been vital to its success. The company’s innovative solutions and dedication to openness have earned it a well-deserved reputation as one of the leading AI companies globally. With a team of brilliant minds, Runway is poised to continue breaking barriers in the AI industry.

Although Runway gained widespread attention for the Stable Diffusion text-to-image model, the company’s real power lies in its AI Magic Tools editing suite. The toolset is remarkably versatile: users can generate subtitles, modify images with text prompts, and even apply a bokeh filter to videos. While most users are individuals creating art from the comfort of their homes, Runway’s tools have also been used on major productions, including ‘The Late Show with Stephen Colbert’ and the film ‘Everything Everywhere All at Once.’

Despite all of these impressive achievements, the real reason for discussing Runway lies in a particular event that occurred in December 2022, one month after ChatGPT’s launch.

That month, Runway reached a significant milestone, raising $50M in Series C funding. The round was led by Felicis Ventures, with several previous investors returning: Lux Capital, Compound, and Amplify Partners each participated in every round from Series A through C, a level of sustained support that is rare in venture capital and a sign of how compelling investors found Runway’s offering. The round valued the company at $500M, a figure expected to rise quickly.

Following the investment, Runway introduced its next product, Gen-1, in February. Gen-1 is a video-to-video tool with five major modes: Stylization, Storyboard, Mask, Render, and Customization. Stylization mode transfers the style of a reference image onto an existing video. Storyboard mode turns mockups into animated renders, while Mask mode isolates a subject in a video and modifies it (adding spots to a dog, for example). Render mode turns untextured renders into realistic outputs by applying an input image or prompt. Finally, Customization mode lets users fine-tune the model for greater flexibility and control. With Gen-1, Runway took a significant step forward, offering a suite of tools that could transform how videos are created and edited. That alone would count as innovation, but Runway wasn’t done: just weeks later, it released Gen-2.

Generative AI has made the impossible possible, with breakthroughs arriving at an astounding pace. The observation is not new, but the speed of progress remains remarkable, especially in Gen-2’s case. These systems have been in development for years, yet from the public’s perspective the rate of change is breathtaking: where roughly three and a half months separated ChatGPT’s launch from GPT-4’s release, the gap between Gen-1 and Gen-2 was a mere forty-two days.

But what makes Gen-2 so important? It is the first publicly available text-to-video tool, and it has the potential to revolutionize the way we approach video production. Words alone can now bring a scene to life: a tranquil mountain landscape with a car driving through it, or a blue fox invading a fifth-floor apartment in New York City. Gen-2 can turn these ideas into footage.

While these scenarios may seem far-fetched, the fact remains that the technology is advancing rapidly, and the possibilities are limitless. With such a short development cycle, it’s only a matter of time before we see even more incredible breakthroughs that will forever change the landscape of film and video production.
