Crunchyroll’s Eurekathon: A Story From a Participant’s Perspective

Ivan Parmaclii · Crunchyroll · 5 min read · Sep 14, 2021

Hey everyone! My name is Ivan, and I am a software engineer at Crunchyroll. Today I will be sharing a story with you about an internal hackathon I participated in with my friend Ilya. The event ran from 4/20/21 at 9am PST through 4/22/21 at 8:59am PST. Keep in mind that the event was global, spanning multiple time zones, and we happened to be in EET.

It all started earlier in April, when Ilya asked me to join Eurekathon: Crunchyroll’s two-day internal hackathon and coding challenge. Over 30 teams and more than 120 participants took part. Since we’ve been working remotely for over a year now, the whole event was designed to be remote as well. The only reliable way to present each idea was via a pre-recorded video, with a time limit of 3 minutes per presentation.

Though I tried a few times to think about what I really wanted to implement, I had no idea. Right up until the start of the Eurekathon, we had only one objective: have fun.

It turned out that time was our scarcest resource. It was 11am local time (EET) on 4/21, and the event had already been running for 16 hours. We had to skip the first night of coding, as there was no way we could have stayed awake that long. So, before the first caffeine crash, we were already tossing some ideas around. I had suggested a “meme button”: a tool that would let you post screenshots with captions to the subreddit or somewhere similar, right from our player. Imagine watching JoJo’s Bizarre Adventure and seeing something clearly worth a meme:

You would click on this button and a preview pop-up would appear with a share icon, just like the macOS screenshot window…

Well, I went on making my second cup of coffee. Meanwhile, it seemed like we had moved on to discussing NFTs, trying to fit them into the context of anime. This was a mood killer, so we went back to the original idea that Ilya had suggested as a joke just to get me into this whole event.

“Let’s have a face swap!”

The idea goes as follows. While watching a video, you would inevitably want to impersonate the main character, for various unspoken reasons.

In recent years, a technology called deepfakes has emerged. It is capable of replacing one face with another in a photo or video. It is so effective that it’s actually considered dangerous, as it can be misused in ways you wouldn’t imagine.

Here we go, we have finally agreed on something. We want to achieve the same kind of face swap, just with anime.

First, we tried the state-of-the-art deepfakes/faceswap. It is a deep learning tool made to simplify the use of a really complicated machine learning algorithm. After a few more cups of coffee, reading through the instructions back and forth, and installing countless additional pieces of software, we were able to run the library… which, it turns out, could produce a good-quality image only if you ran it on a machine with a good video card… and even then it would take 12–48 hours. Needless to say, we had no video card (it is 2021, after all!), nor time for that matter. We had no idea how it would work with anime, and given that it takes forever to iterate, we moved on to a different technology.

We found NVlabs/SCOPS: Self-Supervised Co-Part Segmentation (CVPR’19). Now, this library is not specialized for face swapping, but it can detect different parts of a face, and it is much less time-consuming. Here you can see how it works on my face:

Next, we had to do the same for the anime character.

As you can see, the segmentation isn’t quite accurate. But hey, it detected the face, and the yellow part looked promising! At this point it was way too late in the day, so we decided to continue in the morning.
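If you are curious what “the yellow part looked promising” means in practice, here is a minimal sketch of the kind of inspection we were doing, assuming the segmentation model outputs a per-pixel part-label map the same size as the input photo. SCOPS’s actual interface and file formats may differ, and the file names and part ids below are just placeholders:

```python
import numpy as np
import cv2

# A minimal sketch, assuming we have a part-label map from a model like SCOPS.
# "photo.png" and "parts.npy" are hypothetical placeholder names.
photo = cv2.imread("photo.png")
parts = np.load("parts.npy")           # shape (H, W), integer part id per pixel

# Give each part id a color and overlay it on the photo to eyeball the result.
palette = np.array([
    [0, 0, 0],        # 0: background
    [0, 255, 255],    # 1: the "promising" yellow region (the face, in our case)
    [255, 0, 0],      # 2: some other part
    [0, 255, 0],      # 3: some other part
], dtype=np.uint8)
overlay = palette[np.clip(parts, 0, len(palette) - 1)]
blended = cv2.addWeighted(photo, 0.6, overlay, 0.4, 0)
cv2.imwrite("parts_overlay.png", blended)

# The binary mask of the face part is what the swap step reuses later.
FACE_PART_ID = 1                       # hypothetical id of the face region
face_mask = (parts == FACE_PART_ID).astype(np.uint8) * 255
cv2.imwrite("face_mask.png", face_mask)
```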

Around 10:30am the next day, we met up again. While having coffee together, we made our plan for the day. We had around 6 hours left before we needed to submit the presentation. We had to make a short video with a replaced face, then improve it as much as possible until 2pm. The rest of the time would be spent shooting the presentation!

We were very lucky that using SCOPS was pretty straightforward. We were able to detect parts of the anime character’s face. Then we replaced them with our own face and were officially impersonating one of the most famous memes.

is this an anime?
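The paste step itself is simple in principle. Here is a minimal sketch of it, assuming we already have a binary mask of the character’s face region (like the face_mask.png above) and a photo of our own face; the file names are placeholders, and our real hackathon code was considerably messier:

```python
import cv2
import numpy as np

# A minimal sketch of the paste step. Assumes we already have:
#   frame.png     - a video frame with the anime character
#   face_mask.png - a binary mask of the character's face region
#   my_face.png   - a photo of the face we want to paste in
# All file names are hypothetical placeholders.
frame = cv2.imread("frame.png")
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)
my_face = cv2.imread("my_face.png")

# Bounding box of the detected face region in the frame.
ys, xs = np.where(mask > 0)
y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()

# Resize our face to fit that region and paste it wherever the mask is set.
patch = cv2.resize(my_face, (x1 - x0 + 1, y1 - y0 + 1))
region = frame[y0:y1 + 1, x0:x1 + 1]
region_mask = (mask[y0:y1 + 1, x0:x1 + 1] > 0)[..., None]
frame[y0:y1 + 1, x0:x1 + 1] = np.where(region_mask, patch, region)

cv2.imwrite("swapped_frame.png", frame)
```

Run something like this on every frame of a short clip and you get the meme above; everything else was cropping, timing, and coffee.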

At this point there was no time left for improvement. Any new solution could easily take us back down the rabbit hole, so we had to stop. Once again we had a coffee while discussing how we would film our video. I must say we usually don’t drink as much coffee as we did during those two days :)

In hindsight, it is obvious to me now that having fun was the right decision. We weren’t fixated on one technology, and there was no bad part that we had to push through. Everything we did was from the perspective of having fun, which I wish I could do more of!
