The story of SMPL

Naureen Mahmood
Meshcapade
6 min read · Jun 8, 2024
Taking a little break from the paper writing to celebrate Aggeliki Tsoli’s PhD defense! Taken at the Max Planck Institute for Intelligent Systems, Perceiving Systems Department.

In the summer of 2014 I was the one scheduled to give a talk at our weekly department meeting. I used to dread these meetings. I often got away without having to present because a PhD student or PostDoc would want the spot to practice their presentation for a rapidly approaching paper submission deadline. I was happy being an engineer with no rapidly approaching deadlines. It wasn’t because I didn’t have things to talk about. I had tons. I was always on 3 different projects at any given time! I just really hated public speaking (maybe I still do. Can you tell? 🙂). Michael, who was our department director, insisted that everyone working on a project should have a chance to talk about their work.

Well, it finally happened. This time I couldn’t get out of it. Darn it.

The dreaded moment arrived and I talked about the beginnings of an idea that Matt Loper, Michael and I had been noodling around in our heads: a standard 3D body mesh with a standard skeleton, blend weights and pose-corrective blendshapes, all learned from 3D scans. The novelty was in learning the deformations on vertices instead of triangles, as done in older models like SCAPE and BlendSCAPE (our lab’s own, very heavily used parametric body model). Switching to vertices meant the model could finally work with the standard blend-weight rigging used in all graphics engines. The second important decision was to make the deformations linear, so the model could also work with the standard blendshape deformations those engines already support.
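To make the vertex-level, linear idea a bit more concrete, here is a rough numpy sketch of that kind of formulation. The names, array shapes and random placeholder data below are mine for illustration, not the released SMPL code: a rest-pose template gets additive, linear shape and pose-corrective offsets per vertex, and the result is posed with the same linear blend skinning that every game engine already implements.

```python
import numpy as np

# Rough sketch of the vertex-level, linear formulation (illustrative sizes and
# random placeholder arrays, not the released SMPL assets).
V, J, S, P = 6890, 24, 10, 207      # vertices, joints, shape and pose basis sizes

rng = np.random.default_rng(0)
template   = rng.standard_normal((V, 3))       # rest-pose mesh
shape_dirs = rng.standard_normal((V, 3, S))    # learned shape blendshapes
pose_dirs  = rng.standard_normal((V, 3, P))    # learned pose-corrective blendshapes
weights    = rng.random((V, J))
weights   /= weights.sum(axis=1, keepdims=True)  # per-vertex skinning weights

def pose_mesh(betas, pose_feature, joint_transforms):
    """betas: (S,) shape coefficients; pose_feature: (P,) pose-dependent features;
    joint_transforms: (J, 4, 4) joint transforms relative to the rest pose."""
    # 1. Everything before skinning is just additive vertex offsets (linear).
    rest = template + shape_dirs @ betas + pose_dirs @ pose_feature
    # 2. Standard linear blend skinning, the same math graphics engines already use.
    rest_h = np.concatenate([rest, np.ones((V, 1))], axis=1)        # homogeneous coords
    vertex_T = np.einsum('vj,jab->vab', weights, joint_transforms)  # blended transforms
    return np.einsum('vab,vb->va', vertex_T, rest_h)[:, :3]         # posed vertices
```

In the actual model the pose features are derived from the joint rotations and all of these arrays are learned from registered 3D scans, but the point of the linearity and the vertex-level parameterization is exactly this: the result drops straight into a standard blend-weight and blendshape pipeline.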

We called this concept the “character compiler”. This was the first time we talked to our group about the idea that evolved into what we now call SMPL. We wanted to train a 3D shape space of humans and the deformations due to motion, make it differentiable and also make it compatible with standard graphics so that we could animate it easily in Maya or create a game-compatible rig for Unity and Unreal.

The teaser image for our SMPL: Skinned Multi-Person Linear Model publication. The grey side of each body is a 3D scan registration, the bronze side is the SMPL model’s reconstruction of the scan.

My talk was just focused on the 3D character creation aspect: how 3D rigs are used in games, how blend weights work, how blendshapes work. To show our team how the eventual SMPL rig would be used in a 3D engine, I rigged a couple of our 3D body meshes from our older BlendSCAPE model. Fun fact: I remember one of the 3D meshes was Eric Rachlin, co-founder of Body Labs and the ever-willing participant for cool projects! I loaded some motions onto these rigged meshes using Mixamo and put them in a dummy game level in Unity to show everyone how a 3D animation with SCAPE-like models would work. “3D Eric” could run around, climb up stairs, jump off great heights with a somersault and walk through walls. Only his blend weights were a bit off.

Rocko, our beloved department dog, reminding me always to take lots of breaks. He should have been a co-author!

In total, it took Matt just around 2 months to train the core model. Yes, Matt is really quite brilliant. But it wasn’t a straight line. We were aiming for a SIGGRAPH submission. For the uninitiated, a SIGGRAPH submission means missing Christmas and working in a sad, cold lab through December. The first version of the model, which we still called “character compiler”, was completed by early January 2015. Matt, Michael and I worked on it through the new year doing and re-doing all the math, testing, running evaluations, putting together all the paper sections, creating the videos. It was a race. SIGGRAPH submissions are no walk in the park!

Dec 16, 2014 — just a few weeks of tests and renders. Poof :’(

But then came the lows. Matt realized the results weren’t good enough yet when testing the new model with a new dataset. With one week to go until the submission, we decided to pull out of SIGGRAPH. Matt took a weekend to be depressed. Another week to think it over. By mid-February, we had a new model, performing better than anything before! Now we had more time to try even more things. We pulled in Javier Romero for even more meticulous evaluation and testing. We also brought in Gerard Pons-Moll to learn soft-tissue dynamics from his earlier work, DYNA, so that SMPL models not only had pose-dependent soft-tissue motion, but also tertiary soft-tissue movement — you know, jiggliness!

“Okay, here’s the plan!” — Our storyboard of the video accompanying the SMPL submission.

At this point, Michael had a brilliant idea: let’s lean into the fact that we’re trying to make this model as “simple” as possible for 3D graphics engines. We were literally adding a machine learning component into the super-standard 3D graphics equation that had been used inside Maya, Blender and other graphics engines for years. So we decided to give the model a new name: SIMPLE — Skinned Multi-Person Linear model. We all loved the idea and we high-fived each other for such a perfect name to describe what we wanted. It stuck for just about a couple of days. We couldn’t easily find each other’s emails about this project when using “SIMPLE” as a search term. It was too common a word.

So, SIMPLE then became SMPL just to make it easier to search :)

My favorite set of images in the paper. I spent far too much time rendering them. The images show which vertices on the surface influence which joints, and how we’re able to reduce them to a sparse set after optimization.

With the new name and incredible performance, we decided to submit it to SIGGRAPH Asia. Out of 5 reviewers, only 2 voted it as a clear accept. Others were on the fence. We managed to convince them with the rebuttal, and so SMPL finally made it to the ACM Transactions on Graphics (TOG), the foremost peer-reviewed journal in the field of graphics.

Matt and his PhD celebration hat after his own PhD defense a few years later. SMPL was the last publication in his PhD.

And now, as I write my little history of the model, SMPL has moved into the top 3 most cited publications in the history of all ACM TOG publications! https://dl.acm.org/action/doSearch?fillQuickSearch=false&SeriesKey=tog&sortBy=cited

This may not mean much to most people, but it’s a pretty big deal to me. ACM’s publications include SIGGRAPH, SIGGRAPH Asia and many other top-rated 3D graphics venues, publishing research since 1982 from giants in the 3D graphics field and from companies like Disney, Pixar, Weta and Epic.

When we started working on the character-compiler project, little did we know we’d be starting a whole new movement of 3D technologies built around our work for 3D human understanding!

The first tutorial about SMPL at ICCV 2015 in Santiago, Chile.

If you’d like to read more about the paper, you can find the project website here: https://smpl.is.tue.mpg.de/

If you want to see the latest capabilities built on SMPL for commercial use, try out the Meshcapade platform here: http://meshcapade.me/

There are also a few newer versions of SMPL; you can find them here:

MANO & SMPL+H: http://mano.is.tue.mpg.de/

SMPL-X: https://smpl-x.is.tue.mpg.de/

STAR: https://star.is.tue.mpg.de/

SUPR: https://supr.is.tue.mpg.de/
