Last Week on AI — no. 38

Leniolabs_
3 min read · Apr 23, 2024

This week’s AI newsletter is packed with breakthroughs!

Meta AI has launched Llama-3, hailed as the most advanced open-source model to date, less than a week after the release of Mixtral-8x22B. Microsoft swiftly followed with the announcement of Phi-3, its family of lightweight open models. On top of that, Microsoft Research revealed VASA-1, a new step toward real-time deepfakes. Meanwhile, Boston Dynamics has introduced an all-electric version of its Atlas robot, and Adobe has presented VideoGigaGAN, capable of upscaling videos by 8x.

🚀👇🏽 Dive into our weekly selected AI news!

Meta AI has released Llama-3, and it has made waves in the open-source space. Meta describes it as “the most capable openly available LLM to date”. It comes in two sizes: 8B and 70B parameters.
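
Because the weights are openly available, you can already try it locally or on a single GPU. The snippet below is a minimal sketch using Hugging Face transformers, assuming the instruction-tuned 8B checkpoint is published under the `meta-llama/Meta-Llama-3-8B-Instruct` ID and that you have accepted Meta's license on the Hub; it is an illustration, not part of Meta's announcement.

```python
# Minimal sketch (not from the official announcement): prompting the assumed
# meta-llama/Meta-Llama-3-8B-Instruct checkpoint with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Hub model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
    device_map="auto",           # place layers on available GPUs/CPU (needs accelerate)
)

# The instruct variants expect a chat-formatted prompt; the tokenizer's
# chat template inserts the special tokens for you.
messages = [{"role": "user", "content": "Summarize this week's AI news in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 70B model follows the same API but will generally need multiple GPUs or quantization to fit in memory.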

Microsoft Research released VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time. Read the paper 👇🏽

TL;DR: single portrait photo + speech audio = hyper-realistic talking face video with precise lip-audio sync, lifelike facial behavior, and naturalistic head movements, generated in real time.

https://www.microsoft.com/en-us/research/project/vasa-1/

Adobe Research dropped VideoGigaGAN: Towards Detail-rich Video Super-Resolution

Read the paper 👇🏽

It can upscale video by up to 8x while adding rich detail. Prior video super-resolution (VSR) approaches achieve impressive temporal consistency in upsampled videos but tend to produce blurrier results than image upscalers; VideoGigaGAN builds on the GigaGAN image upsampler to deliver both temporal consistency and high-frequency detail.

Microsoft just dropped Phi-3, less than a week after the release of Llama-3 from Meta. It comes in three sizes: mini (3.8B), small (7B), and medium (14B).

It is trained on 3.3 trillion tokens and is reported to rival Mixtral 8x7B and GPT-3.5. The default context length is 4K tokens, with a variant extended to 128K.
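
Since both a default 4K context and an extended 128K variant are mentioned, one lightweight way to check those windows is to read the model configs from the Hub without downloading the weights. The sketch below assumes the checkpoints live under `microsoft/Phi-3-mini-4k-instruct` and `microsoft/Phi-3-mini-128k-instruct` (Hub IDs not quoted in this issue), so treat it as an illustration rather than official documentation.

```python
# Minimal sketch: inspect the configured context windows of the assumed
# Phi-3-mini checkpoints without downloading the full weights.
from transformers import AutoConfig

for model_id in (
    "microsoft/Phi-3-mini-4k-instruct",    # assumed Hub ID, default 4K context
    "microsoft/Phi-3-mini-128k-instruct",  # assumed Hub ID, extended 128K context
):
    config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
    print(f"{model_id}: {config.max_position_embeddings} tokens of context")
```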

Last Week on AI is a weekly recap of the most significant #AI news from the past week, curated by the team at Leniolabs_.

👇🏽 Learn what we can do for you:
