The Annual Blue Sky Hackathon

Rebecca Hallac
Published in Blue Sky Tech Blog
4 min read · Oct 16, 2018

Growing and maintaining a culture of creativity and innovation is crucial to movie making. One way the Blue Sky technology departments foster this culture is through our annual hackathon. A hackathon is a social coding event in which engineers come together to create and build out-of-the-box ideas in a short amount of time. At Blue Sky, we hold a 48-hour event filled with snacks, food, caffeine, more snacks, and fun!

Starting on a Wednesday afternoon, our teams worked well into the night — and the following night — all with the goal of demoing their work on Friday. Teams were made up of members from Production Engineering, the Technical Directors, Research and Development, and Systems. Together we collaborated to create some really fun and innovative projects.

Overall, eight teams participated and demonstrated their work. Here’s a look at three award-winning entries.

ComicAI

Eszter Offertaler, Grace Kumagai, Michael Reed, Adam Burr

Our team identified a comic dilemma: we want to write comics, but there’s one minor problem: we can’t draw. Luckily, there’s a lot of great artwork already out there, such as the graphic novel Nimona, the basis for our 2020 release! What if we could leverage that preexisting art and throw our dialogue on top of it? On the night of the Blue Sky hackathon we did just that!

Our comic art retargeting application had two phases: offline preprocessing, and online generation.

During preprocessing, we built a library of edited and labeled comic panels from the original Nimona pages. We used OpenCV to segment the pages into individual panels, and then a combination of Mathematica and the Google Cloud Vision API to locate, extract, and mask the text from the speech bubbles. To maintain comic coherence, we gauged the emotional tone of each bubble using the IBM Watson Tone Analyzer, which returns a weighted list of possible sentiments, e.g. ‘Confident’: 0.96, ‘Joy’: 0.54, etc. This static library of panels and metadata was stored and reused for fast, online comic generation.
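The output of preprocessing is essentially a list of records pairing each masked panel with its metadata. A minimal sketch of what one record might look like (the `PanelRecord` name and the hard-coded sentiment values are illustrative, not the actual hackathon code):

```python
from dataclasses import dataclass, field

@dataclass
class PanelRecord:
    """One masked comic panel plus the metadata needed for retargeting."""
    image_path: str       # panel image with speech-bubble text masked out
    bubble_count: int     # how many empty speech bubbles the panel offers
    sentiments: dict = field(default_factory=dict)  # tone name -> weight

# Building the static library: in the real pipeline the sentiment weights
# came from the IBM Watson Tone Analyzer; here they are example values.
library = [
    PanelRecord("panels/p001.png", 2, {"Confident": 0.96, "Joy": 0.54}),
    PanelRecord("panels/p002.png", 1, {"Sadness": 0.81}),
]
```

Keeping this library static means the expensive API calls happen once, and comic generation only has to read the precomputed metadata.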

During comic generation the user supplied a text-only script file. Each line of dialogue from the new script was again sent to the Watson Tone Analyzer. Using the resulting sentiments, we selected panels from our library, row by row, that collectively matched the sentiments of the new script and had the correct number of speech bubbles. We then composited the new dialogue into the empty speech bubbles using Pillow, wrapping and aligning the text to match the bubble shapes. To preserve the feel of the original comic, we used a custom Nimona font made in Calligraphr. Finally, we assembled the panels into a cohesive page, again using Pillow, applying geometric constraints to maintain visual appeal.
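The panel-selection step boils down to comparing two tone-weight vectors. A rough sketch of the idea, assuming a simple cosine-style similarity (the scoring function and example data are ours, not the hackathon implementation):

```python
import math

def sentiment_score(a, b):
    """Cosine-style similarity between two tone -> weight dicts."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_panel(library, line_sentiments, bubbles_needed):
    """Choose the library panel whose tone best matches a script line
    and which has the right number of empty speech bubbles."""
    candidates = [p for p in library if p["bubbles"] == bubbles_needed]
    return max(candidates,
               key=lambda p: sentiment_score(p["sentiments"], line_sentiments),
               default=None)

library = [
    {"id": "p001", "bubbles": 1, "sentiments": {"Joy": 0.9}},
    {"id": "p002", "bubbles": 1, "sentiments": {"Sadness": 0.8}},
]
best = pick_panel(library, {"Sadness": 0.7, "Joy": 0.1}, 1)
print(best["id"])  # p002
```

With a panel chosen per line, compositing the new dialogue is then a per-bubble text-wrapping job, which Pillow handles well.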

Dark Sky Yolo

Pranay Patel, Danny Rerucha, Georgi Todorov

A lot of images are generated at our studio every day, and regular review of these images is the engine that propels our production forward. It’s important for us to always be able to find the imagery we need whenever we need it. In service of this, all frames that are rendered are automatically registered with our media service with the appropriate metadata, and users may optionally add arbitrary tags to any media items they like.

This system works well, but it has a few shortcomings. Most notably, adding tags is a manual process, so if someone forgets to add relevant tags, expected media items might not appear in search results.

Dark Sky Yolo is a project that uses machine learning to automatically identify objects within a frame and apply the appropriate tags for more robust media searchability. We achieved this by incorporating an open source real-time object detection system called YOLO into our media submission process. YOLO comes pre-trained with a small number of classifiers that represent basic nouns like person, car, dog, and bicycle. Thanks to this pre-trained dataset and a very simple API, we were able to start generating automatic tags almost immediately.
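The glue between the detector and the media service is straightforward: collapse per-frame detections into a clean tag set. A minimal sketch of that step, assuming YOLO-style `(class, confidence, box)` tuples (the function name and example values are ours):

```python
def detections_to_tags(detections, min_confidence=0.5):
    """Turn raw object detections into a deduplicated list of media tags.

    `detections` mimics YOLO output: (class_name, confidence, bounding_box).
    Bounding boxes are ignored here; only confident class names become tags.
    """
    tags = set()
    for class_name, confidence, _box in detections:
        if confidence >= min_confidence:
            tags.add(class_name)
    return sorted(tags)

# Example detections for a single rendered frame (illustrative values):
frame_detections = [
    ("person", 0.92, (34, 20, 120, 260)),
    ("dog",    0.81, (200, 150, 90, 70)),
    ("car",    0.31, (10, 10, 40, 30)),     # below threshold, dropped
    ("person", 0.77, (300, 25, 110, 250)),  # duplicate class, deduplicated
]
print(detections_to_tags(frame_detections))  # ['dog', 'person']
```

The resulting tags can then be attached to the media item exactly as if a user had typed them by hand, so search works unchanged.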

Generic tags like “car” applied consistently across an entire show can be really useful, but a cool thing about working in CG animation is that we have perfect data for training new Blue Sky-specific classifiers capable of detecting and tagging specific assets (e.g.,“ferdinand” instead of just “bull”), or even specific revisions of assets. New classifiers are trained by curating many images that contain the object you want the algorithm to detect, and the turntable images that are generated for assets during our asset publish process are perfect for this task.

In the end, we were able to create a simple and non-intrusive automated tagging system that has a lot of potential to be expanded on in the future!

Slack Shot

Alena Volarevic, Jake Richards, Steven Song, Tracy Priest

We are constantly sending each other feedback on the imagery that we create at Blue Sky. Beyond written comments, this feedback is often most helpful in the form of drawovers on top of imagery or video captures. Having recently integrated Slack as a key part of our communication pipeline, we wanted to see how we could harness the tool for our visual feedback needs.

Before this tool, artists had to screenshot, annotate, and upload each image in separate steps, which was time-consuming and inconvenient. Slackshot is an application that streamlines the process, sending visual feedback directly to Slack from RV, the software artists use to create their drawovers.
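Collapsing those separate steps amounts to bundling the annotated image and its comment into one Slack Web API call. A rough sketch of that single-step idea, assuming Slack’s `files.upload` endpoint (the helper name, token, and channel are placeholders, and the real Slackshot hooks into RV rather than building requests by hand):

```python
def build_slack_upload(token, channel, image_path, comment):
    """Assemble the pieces of a Slack `files.upload` request.

    Illustrative only: it gathers the endpoint, auth header, and form
    fields that an HTTP client would send, so the artist's capture,
    annotation, and comment travel to Slack as one post.
    """
    return {
        "url": "https://slack.com/api/files.upload",
        "headers": {"Authorization": f"Bearer {token}"},
        "fields": {
            "channels": channel,
            "initial_comment": comment,
            "filename": image_path.rsplit("/", 1)[-1],
        },
        "file_path": image_path,  # read and attached by the HTTP client
    }

request = build_slack_upload(
    "xoxb-placeholder-token",
    "#lighting-review",
    "/tmp/drawover.png",
    "Note the rim light on frame 1042",
)
```

An actual send would hand this to an HTTP client (or Slack’s official SDK); the point is that the artist triggers it once, from inside RV.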

We are planning on open sourcing more of our hackathon projects, so stay tuned for a later blog post!
