GSoC 2017 with AppleseedHQ

First of all, I would like to thank my mentors François Beaune (franz) and Esteban Tovagliari (est77) for guiding me through the process, pointing out ways to improve, providing me with priceless input and, above all, maintaining the incredibly friendly and positive atmosphere within the whole Appleseed community. I would like to thank Nathan Vegdahl (cessen), author of the Psychopath rendering engine and of the algorithm being implemented, for staying in touch with us, monitoring my progress and providing very useful comparisons with Psychopath when needed. I would also like to thank my boss Markus Rauhut and professor Hans Hagen for letting me fully devote myself to Appleseed instead of working on my PhD over these three months.


tl;dr

If you are reading this because it is GSoC 2018+ and you are considering whether to submit your proposal to Appleseed, I will tell it to you straight: apply as fast as you can! Having the chance to work alongside these guys will be an invaluable experience!

If you are reading this because you have to read my report, and you don't feel like reading the whole wrap-up of my impressions, just hop directly here.

Hitting it Off With Appleseed

It was early March 2017 when I realized I had missed the organization announcement, which had gone out a few days earlier, so I hurried to see which projects were on this year's menu. As I scrolled through the sea of projects I had little interest in, I started to lose hope that there would be anything for me; I was looking for computer graphics and computer vision projects. Then…there it was: Blender, Appleseed and OpenCV…my top choices. Google allows up to five project applications; however, I knew that spreading myself across several would prevent me from fully immersing myself in the code base and working on the proposal. Applying to just one cut down my chances of being accepted, but again…quality over quantity.

I was already familiar with both Blender and OpenCV, whereas Appleseed was completely new to me. I decided to focus on computer graphics because of my PhD topic, so the next natural step was to google the offered projects. After a couple of hours of clicking and reading every possible link, I made up my mind for Appleseed: everything about them sounded so right. They are an aspiring rendering engine developed by a group of really talented people, and their online presence had a very positive ring to it. Blender, of course, had an impressive record and a giant community around it, but it was mostly the GSoC reports that made up my mind: they seemed strictly focused on the code, with only seldom mentions of the community, the development process or learning. On one or two occasions I even found someone mentioning that they hadn't communicated with their mentor for a long period of time. Overall, Appleseed sounded better for me.

I introduced myself in Appleseed's Google Group and got an instant invitation to their Slack channel. I couldn't respond and start working right away because of my job, so when I checked in about a week and a half later, I was overwhelmed by the number of applicants. There was some serious competition. However, franz and est77 were handling everything perfectly, providing the applicants with very positive reinforcement and help when needed, paying attention to everyone without showing preference. Really professional!

Application Period Coding

Every organization has its own application rules. Appleseed requires you to contribute by working on an issue or a feature before submitting the project proposal. IMHO, it is a good approach, because that way you get acquainted with the code base before actually trying to propose your implementation, which is very important. Though huge, Appleseed's code base is extremely neat and consistent, making it really easy for a newcomer to find their way around. The code base is also written in a very modular fashion, making it easy to reuse something that was developed for another purpose.

I took a week of vacation to come back from Germany to Croatia and spend it with my boyfriend Damjan in the mountains; it ended up with me coding most of the week on the couch in front of the fireplace. At least the view was amazing ;). My contribution was an implementation of the isotropic STD microfacet distribution function (Issue #1262). There was also a celebratory dance around the living room when I got my first PR merged (I had never contributed to an open source project before).

The view I got to enjoy during the coding breaks :)

With the feature successfully implemented, I was good to go and could work on my project proposal. I chose to work on the implementation of a really cool many-light sampling algorithm proposed by cessen. My knowledge of computer graphics was somewhat limited, as I had only introduced myself to it with the start of my PhD half a year earlier; before that, I came from the computer vision field. However, I understood the need for this algorithm, as well as (what I then thought was) most of the implementation requirements. It took me about two weeks to get the proposal right. I carefully read everything there was on cessen's algorithm, went through the code base to try and identify the code I would be working on, contacted Nathan with questions about his algorithm (and quickly received a really friendly reply) and arranged a Slack discussion between cessen, est77, aytekaman (Aytek Aman, also an applicant) and me.

cessen, 16 spp, before LightTree
cessen, 16 spp, after LightTree

Waiting For The Results

Even though I had planned on contributing to Appleseed during the application review period, I wasn't able to because of some very nasty deadlines at work, and I spent every conscious moment thinking about those. The GSoC competition was tough: over the few weeks I had spent on Slack, I had seen a lot of quality work done by the applicants, so it was surely not going to be easy for the organization to decide. Even though I really wanted it, I was somewhat reluctant to anticipate my acceptance into the program, as I felt I was lacking knowledge.

Imagine my happiness when the results came! IIRC, Appleseed was supposed to get only two student slots this year; however, they accepted three! :D HOOORAAAY!

Code Away

Being a member of the GSoC WhatsApp group, I had a chance to hear about many different experiences with other organizations. Some good, some bad. What came across as the most important difference between the Appleseed project and most of the others was workflow organization, consistency and, above all, a humane approach. Those traits encourage you to give your best at all times, which is exactly what I did.

I communicated mostly with franz, on a daily basis. He seems to be around 24/7, or at least whenever anyone needs him; in fact, I am not sure the guy ever sleeps :). Appleseed development has a practice of making smaller but self-contained pull requests, rather than one giant PR. I find it to be a really good practice, as it makes it easier to track progress and plan ahead.

Throughout the whole coding period, I tried to keep a weekly progress report. It is not particularly pretty, but it helped me clearly communicate to my organization what I had done, what I was working on and what I planned to do next. Artem Bishev, a fellow GSoC student, tracked his work through a GitHub project, which is also a great idea. If I had the chance to do it all over again, I would keep both the weekly reports and a GH project. (Yes, I really like progress tracking and organizing tools.)

I started off easy, by reading different articles on the topic, learning the terminology and trying to pinpoint the place to start the implementation. My task was to improve Appleseed's light sampling by replacing the existing CDF-based sampling with cessen's light tree method.
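To give a rough idea of the difference: CDF sampling picks a light in proportion to its total intensity alone, regardless of where the shaded point is, while the light tree descends a BVH-like hierarchy and, at each level, picks the child cluster that looks more important from the shaded point, multiplying the probabilities along the way. Below is a toy sketch of that traversal in plain Python; all the names (leaf, inner, sample_tree, the dict-based nodes) are made up for illustration, and this is not Appleseed's actual code.

    import random

    def leaf(light):
        # A light is (position, intensity).
        pos, intensity = light
        return {"light": light, "center": pos, "intensity": intensity}

    def inner(a, b):
        # Inner node: intensity-weighted center, summed intensity.
        total = a["intensity"] + b["intensity"]
        center = tuple((ca * a["intensity"] + cb * b["intensity"]) / total
                       for ca, cb in zip(a["center"], b["center"]))
        return {"children": (a, b), "center": center, "intensity": total}

    def importance(node, point):
        # Crude cluster importance: total intensity over the squared
        # distance from the shaded point to the cluster center.
        d2 = sum((c - p) ** 2 for c, p in zip(node["center"], point))
        return node["intensity"] / max(d2, 1e-6)

    def sample_tree(node, point, u):
        # Descend the tree, choosing a child in proportion to its
        # importance as seen from the shaded point; accumulate the pdf.
        pdf = 1.0
        while "children" in node:
            left, right = node["children"]
            w_l, w_r = importance(left, point), importance(right, point)
            p_l = w_l / (w_l + w_r)
            if u < p_l:
                node, u, pdf = left, u / p_l, pdf * p_l
            else:
                node, u, pdf = right, (u - p_l) / (1.0 - p_l), pdf * (1.0 - p_l)
        return node["light"], pdf

    # Three point lights; the tree is built by hand for brevity.
    tree = inner(inner(leaf(((0, 0, 0), 1.0)), leaf(((1, 0, 0), 1.0))),
                 leaf(((10, 0, 0), 5.0)))
    print(sample_tree(tree, (0.5, 1.0, 0.0), random.random()))

The payoff is that the returned pdf adapts to the shaded point: nearby clusters get sampled more often, which is exactly what a flat, intensity-only CDF cannot do.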

By the beginning of the third week I had added the LightTree to the LightSampler. It was able to handle non-physical lights (NPLs), more precisely point lights, with a bug which wasn't a bug at all, but a faulty test scene. I didn't know that at the time, which caused me to spend almost two weeks trying to figure out what was wrong with the probabilities.

Buggy (left) and fixed (right) renders, both at 1 spp

In general, there were two really nasty bugs over the whole coding period: one was caused by the faulty test scene, and the other, an actual bug, was caused by my lack of understanding of how MIS (multiple importance sampling) works.

The second month came quickly. One of the painful areas for me was test scene creation: Appleseed is a rendering engine, not a scene editor, and without Maya or 3ds Max I wasn't able to create my own test scenes. Rather than bugging others to create scenes for me, I wrote a Python script which used the existing Appleseed Python bindings to generate test scenes.
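The sketch below shows the kind of generator I mean: a regular grid of point lights written out as a project file. I am recalling the asr.* calls (asr.Project, asr.Scene, asr.Assembly, asr.Light, asr.ColorEntity, asr.ProjectFileWriter) from Appleseed's bundled Python samples, so treat the exact names and parameters as assumptions rather than a verbatim excerpt of my script.

    # Sketch of a grid-of-point-lights scene generator.
    # NOTE: the asr.* calls are recalled from appleseed's Python samples
    # and may not match the current bindings exactly.
    import appleseed as asr

    def build_grid_scene(n=100, spacing=1.0):
        project = asr.Project("many-point-lights")
        scene = asr.Scene()
        assembly = asr.Assembly("assembly")

        # A white color entity the lights reference as their intensity.
        assembly.colors().insert(asr.ColorEntity(
            "light_intensity", {"color_space": "linear_rgb"}, [1.0, 1.0, 1.0]))

        for i in range(n):
            for j in range(n):
                light = asr.Light("point_light", "light_%d_%d" % (i, j),
                                  {"intensity": "light_intensity"})
                light.set_transform(asr.Transformd(
                    asr.Matrix4d.make_translation(
                        asr.Vector3d(i * spacing, 1.0, j * spacing))))
                assembly.lights().insert(light)

        scene.assemblies().insert(assembly)
        project.set_scene(scene)
        asr.ProjectFileWriter().write(project, "many_point_lights.appleseed")

A camera, an environment and render configurations are omitted for brevity; the bundled samples show how to add those.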

CDF (top-left triangle) vs. LightTree (bottom-right triangle) performance comparison for a 10,000 point light scene generated by the Python script. Left: 25 spp. Right: 1 spp.

It is interesting to see the light cluster patterns appearing in the LightTree render. For a long time we believed they were due to a light probability error. However, they are actually a pattern coming from our BVH tree structure, as we used only a squared-distance approximation for the node probability (from the point being illuminated to the center of the node's bounding box). By approximating the node as a hemispherical light source instead, the pattern was greatly reduced.
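In code terms, the difference between the two estimates is roughly the following (a sketch; the solid-angle formula for a sphere is a standard closed form, but the actual formula used in Appleseed differs in its details):

    import math

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def importance_point(intensity, center, point):
        # Naive estimate: the whole cluster is a point light at the
        # center of its bounding box. Cheap, but the extent of the box
        # is ignored, which is what caused the visible patterns.
        return intensity / max(sq_dist(center, point), 1e-6)

    def importance_sphere(intensity, center, radius, point):
        # Area-aware estimate: weight by the solid angle subtended by
        # the node's bounding sphere as seen from the shaded point.
        d2 = sq_dist(center, point)
        if d2 <= radius * radius:
            return intensity  # shaded point is inside the cluster
        solid_angle = 2.0 * math.pi * (1.0 - math.sqrt(1.0 - radius * radius / d2))
        return intensity * solid_angle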

During the second month, I worked on the emitting triangle (EMT) integration, chased the aforementioned MIS bug and refactored the LightSampler into two different samplers based on the tracing method used: SPPM uses the ForwardLightSampler, whereas path tracing uses the BackwardLightSampler. That also allowed me to implement a LightTree/CDF switch in the rendering settings of appleseed.studio. That way we can develop the LightTree without breaking master, integrate it carefully and use whichever method is more appropriate at a given moment.

The MIS bug happened because I had passed the shading point on the light itself to the MIS evaluation, rather than the point being illuminated. It was a nasty bug to catch because both are valid shading points, and one has to keep track of what is actually stored in the tracer's shading point; I had missed it. Like many other times, franz came to the rescue: he actually caught it while guiding me through the code on how to make the appropriate round-trip test.
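For context, here is where the distinction matters, as generic next-event-estimation pseudocode (not Appleseed's actual code): the light-sampling pdf is measured from the illuminated point, so feeding in the light's own shading point silently skews the weights.

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def power_heuristic(pdf_a, pdf_b):
        # Standard MIS power heuristic (beta = 2).
        a2, b2 = pdf_a * pdf_a, pdf_b * pdf_b
        return a2 / (a2 + b2)

    def light_sample_weight(illuminated_point, light_point,
                            pdf_light_area, cos_on_light, pdf_bsdf):
        # The area pdf on the light is converted to a solid-angle pdf
        # as seen from the ILLUMINATED point. Passing the light's own
        # shading point as the first argument (my bug) makes d2 ~ 0
        # and the resulting weight meaningless.
        d2 = sq_dist(illuminated_point, light_point)
        pdf_light_sa = pdf_light_area * d2 / max(cos_on_light, 1e-6)
        return power_heuristic(pdf_light_sa, pdf_bsdf)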

Even though my implementation of the LightTree showed significant improvements over CDF sampling for NPL scenes, with EMTs this was not the case: while we could see noise reduction when comparing renders at the same spp, the LightTree took longer to render, even when evaluating the node probability by squared distance only. During the third month, I added the hemispherical light source approximation for LightTree nodes. It made the node probability computation even heavier and introduced new rendering overhead; it did, however, improve the render at the same number of spp. franz has no worries about this, as he claims we can optimize it to work faster.

A cool thing is that, during my GSoC work period, blenderseed, the Appleseed plugin for Blender, got resurrected. It was a game-changing moment and a lifesaver: I was finally able to produce my own many-light scene that is not particularly repetitive! I got to work with blenderseed and had fun creating a test scene containing 50,340 EMTs.

Appleseed basement workshop scene. Table by sniefly, shelf by Ernesto Becerra
Left: CDF at 16 spp. Right: LightTree at 16 spp.

As I implemented cessen's approach of approximating a light tree node as a hemispherical light source, I noticed a line of code that offered a potential improvement. In the proposed approach, we treat each node as a hemispherical light source of finite radius, so we first check whether the point lies within the node at all. If it does, the hemispherical light contribution is computed and weighted by the inverse node surface area. If it does not, however, the contribution was simply set to 1.0, leaving only the inverse node surface area as the weight of the node intensity. I decided to replace the inverse surface area with the squared distance we had used before for that special case, making the node contribute based only on its distance. It led to a significant noise reduction.
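A hypothetical reconstruction of that logic (the names and structure are illustrative, not Appleseed's actual code):

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def hemi_contribution(center, radius, point):
        # Placeholder for the hemispherical-light contribution term;
        # the real formula lives in the light tree code.
        return 1.0

    def node_weight(intensity, center, radius, surface_area, point):
        d2 = sq_dist(center, point)
        if d2 <= radius * radius:
            # Point within the node: hemispherical contribution,
            # weighted by the inverse node surface area.
            return intensity * hemi_contribution(center, radius, point) / surface_area
        # Special case (point not within the node):
        #   before: return intensity * 1.0 / surface_area
        #   after:  weight by the squared distance instead.
        return intensity / max(d2, 1e-6)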

100x100 NPLs at 16 spp. Left: before using the squared distance. Right: after using the squared distance as the intensity weight.

cessen tested the proposed improvement in Psychopath and confirmed we are onto something here. However, it produces fireflies which disappear with clamping, leading us to believe it is just a good approximation of another mathematical function. Definitely worth investigating further.

Original Psychopath test scene, 64 spp
Improved Psychopath test scene, 64 spp

Even though my GSoC work and implementation increased the efficiency of renders of scenes with many NPLs and reduced the noise at lower spp counts for scenes with many EMTs, there is a lot of room for further improvement, which I plan to work on.

Future Work

I would single out five areas for improvement:

  1. Optimization
  2. Node probability evaluation based on the overall light orientation within the node
  3. Implementation of an enhanced dedicated LightTree partitioner (currently we always partition in the middle)
  4. Approximation of a tree node as a volumetric, rather than hemispherical, light source
  5. Extension of LightTree for handling textured lights and use with OSL shaders

Optimization

The tree should work faster and with greater accuracy; I didn't manage to get it to perform as well as Psychopath's implementation. Besides going through the code to find potential room for optimization, we should make the tree nodes aware of their own bounding boxes. franz also suggested storing the lights within the light tree nodes.

Light Orientation Based Probability Evaluation

The current implementation has no knowledge of light direction and thus assumes every light source emits equally in all directions when evaluating the probability of a node's contribution.

The tricky part is to get the orientation approximation right for higher-level nodes. It can easily happen that a single parent node contains several lights pointing in opposite directions. Therefore, if we simply averaged the directions and described a node with only one of them, we would soon end up evaluating such nodes with zero contribution for every point of the scene.

est77 had a great idea on how to tackle the problem: using a cone of normals as the orientation descriptor, or K-means of the children's vertices.
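A tiny example of why naive averaging fails (plain Python):

    import math

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v) if n > 0.0 else (0.0, 0.0, 0.0)

    # Two lights in one node, facing opposite directions:
    up, down = (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)
    print(normalize(tuple((a + b) / 2.0 for a, b in zip(up, down))))
    # (0.0, 0.0, 0.0): the averaged direction is degenerate, so any
    # cosine-based importance built on it would report zero
    # contribution everywhere.

A cone of normals avoids this by storing an axis plus a half-angle wide enough to bound all child directions; a node containing opposing lights simply gets a half-angle of pi and is treated conservatively instead of being discarded.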

Dedicated LightTree Partitioner

By implementing a partitioner with only the LightTree in mind, we can use light attributes for partitioning, such as orientation in combination with bounding box surface area and position, and build trees which produce even more accurate results.

Volumetric Light Source Approximation

Currently, a LightTree node is approximated as a hemispherical light whose intensity is the sum of all the child lights' intensities. If we approximated the node as a volumetric light, as cessen proposed, with the child light sources as light particles, we would likely be much more precise.

LightTree Extension

All lights are assumed to have constant intensity over their surface. For NPLs that is fine; however, an EMT's intensity can vary over its surface, for instance when the emission is textured or driven by an OSL shader. That case is currently not handled.

What I Will Bring With Me From This

  - Experience of working remotely with a group of extremely talented people
  - Code formatting (thank you, franz, for making me more precise and detail-oriented)
  - Reusing preexisting code for my own needs
  - Identifying refactoring needs
  - Writing code with performance in mind
  - Working with light sampling
  - Extended knowledge about lights in computer graphics
  - A great starting ground for many more contributions