Integrating Conduit in Houdini

Wayne Wu
Published in Blue Sky Tech Blog
Sep 24, 2020 · 10 min read


(Need a refresher on Conduit? Check out: Conduit: Introduction)

By the time I joined Blue Sky Studios in the summer of 2019, Conduit was already a mature piece of software: it had a robust API, Git-like CLI tools, and a set of established rules for how pipeline data should be managed. However, the workflows around production work, and Conduit’s integration into DCCs, were still works in progress. Even today, we strive to make Conduit easy for artists and TDs.

In the beginning, as part of the FX team, I was using the early Conduit tools in Houdini to try to develop FX workflows. However, it quickly became apparent that those tools didn’t exploit the power or proceduralism of either tool. Therefore, I decided to get my feet wet and untangle the bridge between Conduit and Houdini.

So what exactly was the problem?

The problem stems from the systemic change that Conduit introduces: no more file paths; everything is a Product and version controlled. In practice, this means that any node dealing with file I/O has to convert a Product to a path. For example, if you want to write a USD file, you select a Product (i.e. a PRI), declare it as an Output of your Workspace, and the Output contains the actual path where the USD file should be written. While artists can technically work with file paths directly, doing so requires extra work to manage the dependencies explicitly and correctly.
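To make the Product-to-path conversion concrete, here is a minimal sketch of the workflow described above. Conduit is Blue Sky’s proprietary system, so every class and method name here (`Product`, `Workspace.declare_output`, the path layout) is a hypothetical stand-in, not the real API:

```python
from dataclasses import dataclass

@dataclass
class Product:
    pri: str  # Product Resource Identifier selected by the artist

@dataclass
class Output:
    product: Product
    path: str  # resolved location where the data should be written

class Workspace:
    """A working area that tracks declared Outputs instead of raw file paths."""
    def __init__(self, root):
        self.root = root
        self.outputs = {}

    def declare_output(self, product):
        # The workspace, not the artist, decides where the file lives.
        out = Output(product, f"{self.root}/{product.pri}/geo.usd")
        self.outputs[product.pri] = out
        return out

ws = Workspace("/work/shot010")
out = ws.declare_output(Product("char.bunny.model"))
print(out.path)  # → /work/shot010/char.bunny.model/geo.usd
```

The point is that the node writing the USD file never sees a hand-typed path; it asks its Output for one, and the dependency is recorded as a side effect.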

It was forgiving at first: there were only a few nodes that dealt with file I/O, so the solutions at the time were tool-specific. But as the complexity of workflows grew, especially with Solaris being able to read and compose USD files directly, more nodes had to communicate with Conduit. The process required first creating a wrapper HDA, then a Python wrapper class that uses the Conduit API. That was a lot of unnecessary coding, and it could only be done by TDs. Different departments also had their own takes on how to talk to Conduit in Houdini, which only meant more pipeline divergence.

So I began the quest to unify all the Conduit tools in Houdini. While USD is a vital part of Conduit, I tried to focus on Conduit’s core principles, with the goal of building a system that is ultimately agnostic to both file format and Houdini context.


Before addressing the actual I/O issues, there was something more fundamental that I wanted to take care of. For a while, manipulating Products required using the Conduit API or CLI directly. Things like committing and publishing assets (i.e. Products) were all done by artists through predefined GUIs, with little ability to customize.

I wanted to change that as a first step to understanding the system. I wanted to bring Conduit closer to technical artists, and provide them a mechanism to manage Products at their own will, using PDG.

One of our GUIs already used PDG internally to export USD files and commit Products in parallel. However, all the PDG nodes and the PDG graph itself were set up in Python. At the time, I was studying PDG for some of our FX workflows, like wedging. Since they were all intertwined with Products, I thought: why not expose the PDG logic in a more user-facing way?

So we started exposing the few existing PDG nodes, such as commit and checkout, and added a variety of new Product nodes to complete them as a useful toolkit, known as ProductPDG. Each PDG node was wrapped as a TOP HDA, allowing the graphs to be built in Houdini directly. Furthermore, we established a definition of the relationship between PDG and Conduit: each work item represents a type of Product item, and the Product’s data is serialized into the work item’s attributes. These attributes can then be used to build the dependency graph, just like any other attributes in PDG.
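The work-item-per-Product mapping can be sketched in plain Python. This is only an illustration of the serialization idea, assuming a simple dict-based Product record; the real ProductPDG nodes and attribute names are Conduit-specific and not shown here:

```python
# Hypothetical mapping: one PDG work item per Product item, with the
# Product's data flattened into the work item's attributes.
def make_work_item(product):
    """Serialize a Product record into plain work-item attributes."""
    return {
        "pri": product["pri"],
        "version": product["version"],
        "asset": product["pri"].split(".")[1],  # e.g. group by asset name
    }

products = [
    {"pri": "char.bunny.model", "version": 3},
    {"pri": "char.bunny.fur", "version": 5},
    {"pri": "char.fox.model", "version": 2},
]
work_items = [make_work_item(p) for p in products]

# Downstream TOP nodes can then build dependencies on these attributes like
# any other PDG attribute, e.g. partitioning work items per asset:
per_asset = {}
for wi in work_items:
    per_asset.setdefault(wi["asset"], []).append(wi)
```

Once the Product data lives in ordinary attributes, everything PDG already knows how to do (partitioning, wedging, mapping dependencies) applies to Products for free.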

Users can easily traverse a graph and see the Product data by double-clicking on a work item.

The result was very promising. Departments like Crowds and Fur were able to build PDG graphs in Houdini to automate processes like fur attachments, scaled and parallelized across assets, without writing any Conduit-related code. Meanwhile, our farm team was developing a custom PDG Scheduler. That meant that, once the custom Scheduler was complete, sending those graphs to the farm would be as simple as switching from the Local Scheduler to ours.

An example PDG Graph that uses ProductPDG to create, process, and commit multiple Products.


To properly refactor and converge our artist-facing tools in Houdini, it came down to these basic principles of Conduit:

  • A hipfile corresponds to a Workspace
  • A hipfile will read data from Input Products
  • A hipfile will write data to Output Products

The goal is to provide flexibility in reading and authoring any kind of data (e.g. USD, VDB, bgeo, etc.) for multiple Products. We therefore needed a solution that makes it easy to establish the input and output connections to Conduit from Houdini: hence the ProductIO framework. We started by defining what Input and Output should look like in Houdini.

An Input contains version information indicating which version of the Product it is reading from. An Output has no version, but it is typically associated with some output process that generates data for the Output, which is then committed. In both cases, they inherit from a ProductItem object, which means they are associated with a Product and therefore share many similarities. This is a simple yet critical observation, since they were previously treated and implemented as two separate entities. In Conduit, they are derived from the same object, and ProductIO implements them as such. Back in Houdini, a node would be either an Input or an Output, and we call such a node a Productized node.
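The shared-base observation above can be sketched as a tiny class hierarchy. The class names mirror the article’s terminology, but the fields and constructors are hypothetical simplifications of whatever Conduit actually stores:

```python
class ProductItem:
    """Base: anything tied to a Product, whether it is read or written."""
    def __init__(self, pri):
        self.pri = pri

class Input(ProductItem):
    """Reads from a Product, pinned to a specific version."""
    def __init__(self, pri, version):
        super().__init__(pri)
        self.version = version

class Output(ProductItem):
    """Writes to a Product; a version is assigned only at commit time."""
    pass

read = Input("char.bunny.model", version=7)
write = Output("char.bunny.fur")
```

Because both sides derive from `ProductItem`, any tooling that only needs the Product association (browsing, caching, dependency display) can treat Inputs and Outputs uniformly.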

So how does one Productize a node?

Product Properties

Initially, I looked at creating a low-level node that could be added to any HDA to Productize it. This is analogous to inheritance in programming, where Productization would be inherited by higher-level nodes. However, this does not solve the problem of having to wrap nodes just to Productize them, and we would also have to create low-level Product nodes for each Houdini context (i.e. LOP, SOP, etc.). The alternative we ended up with is to make Productization a mixin rather than inheritance. This means that Productization is an add-on feature for a node, which is done using Product Properties.
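The Python mixin idiom makes the design choice concrete. Note that this is only an analogy: the real mechanism is spare parameters on nodes, not Python classes, and all names below are hypothetical. Inheritance would need a Productized base node per Houdini context, while the mixin bolts Productization onto any existing node type as an add-on:

```python
# Two unrelated node "contexts", standing in for Houdini's SOP and LOP nodes.
class SopNode:
    context = "SOP"

class LopNode:
    context = "LOP"

class ProductizeMixin:
    """Add-on feature: a PRI field plus space for a cached Input/Output."""
    pri = None
    def set_product(self, pri):
        self.pri = pri

# The same mixin Productizes nodes from any context, with no per-context
# base node required.
class ProductizedFileCache(ProductizeMixin, SopNode):
    pass

class ProductizedSublayer(ProductizeMixin, LopNode):
    pass

node = ProductizedSublayer()
node.set_product("char.bunny.model")
```

With inheritance, `ProductizedSublayer` and `ProductizedFileCache` would each need a context-specific Product base; with the mixin, one add-on covers them all, which is exactly the property Product Properties provide.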

Product Properties are a collection of parameters that can be added to any node in Houdini to Productize it as an Input node or an Output node. They work the same way as Houdini’s built-in Node Properties, which let users add spare parameters to expose a certain feature. In our case, by adding the Product Properties, users gain access to the common Product parameters, like a PRI field coupled with a Product Browser for easily finding the desired asset in the Conduit database. Once a Product is selected, the I/O connection is managed automatically behind the scenes, and an instance of the Input or Output is cached on the node to avoid unnecessary service calls when the Product has not changed. Using Product Properties, anyone can easily make a Productized HDA, or Productize built-in Houdini nodes, regardless of context.

Example of adding Product Properties (Input) to the Sublayer LOP.

The only attribute typically needed from an Input or Output is the path to where the Product data lives. However, a Product can also store additional custom attributes that may be useful for specific workflows. That’s why, instead of just caching the path, we store the whole PyObject on the node, giving access to any attribute at any point. One of the default parameters added to a node is an expression that determines the path dynamically from the Product cache. It is important that the path, as well as any attributes, are determined dynamically so that they resolve correctly when the PRI field is driven by an expression. This makes parameters programmable and enforces proceduralism, which we lacked before.
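A rough sketch of the caching behavior described above, assuming a dict stands in for the cached Product object and a function stands in for a Conduit service call (all names hypothetical): the node caches the whole object, and a default path expression reads from that cache at evaluation time, so the path re-resolves whenever the PRI changes.

```python
SERVICE_CALLS = 0

def fetch_product(pri):
    """Stand-in for a Conduit service call returning the full Product object."""
    global SERVICE_CALLS
    SERVICE_CALLS += 1
    return {"pri": pri, "path": f"/repo/{pri}/latest", "fps": 24}

class Node:
    def __init__(self):
        self._cache = None  # the whole cached Product, not just a path

    def product(self, pri):
        # Only hit the service if the Product changed.
        if self._cache is None or self._cache["pri"] != pri:
            self._cache = fetch_product(pri)
        return self._cache

    def eval_path(self, pri):
        # What a default path expression would do on every evaluation:
        # pull the path from the cache, so other attributes stay reachable too.
        return self.product(pri)["path"]

node = Node()
node.eval_path("char.bunny.model")
node.eval_path("char.bunny.model")       # cache hit: no extra service call
path = node.eval_path("char.fox.model")  # PRI changed: re-resolve
```

Because the expression evaluates against the cache rather than a baked string, a PRI field driven by another expression still yields the right path and the right custom attributes.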

Productized nodes have custom node info related to the Product they are associated with.

Output Graph

As mentioned earlier, an Output typically has an output process associated with it, which can be anything from writing a USD file to rendering a sequence of frames. This is an optional but important characteristic of a Productized Output node that connects to the concept of the Output Graph. If an Output node has an output process, that process is defined by a TOP graph within the node. We leverage the Target TOP Network parameter (a Node Properties collection shipped with Houdini) to promote a TOP graph, with its fancy PDG UI, and associate it as the output process of the node. In short, each Output node has a self-contained PDG graph that writes out the desired data.

In addition to running the output process per node, we also needed a way to run multiple output processes in parallel with specific execution dependencies. Indeed, this sounds exactly like what PDG is built for. Unfortunately, most artists found it overwhelming to use TOPs directly, so we were given the challenge of letting artists execute them in the context of where they were working (mostly LOPs). This means we had to implicitly translate a non-PDG interface into a PDG graph, which we call the Output Graph.

An overview of how different components interact with each other.

As a high-level concept, the Output Graph is a generated master graph that runs multiple Output nodes together based on specific dependencies. Since each Output node already contains the PDG network it needs, we reuse that same network inside the Output Graph. Thus, it is basically a graph that contains multiple sub-graphs. The dependencies of the Output Graph are determined first by the implicit dependencies between outputs (i.e. what can or cannot be run together), and then by explicit orders defined by users through an interface.
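Deriving an execution order from the two dependency layers is essentially a topological sort. Here is a minimal sketch using the standard-library `graphlib` module; the output names and the particular implicit/explicit constraints are made up for illustration:

```python
from graphlib import TopologicalSorter

# Implicit constraints: e.g. a render cannot run until its USD export exists.
implicit = {"render": {"usd_export"}}
# Explicit constraints defined by the user through the interface:
explicit = {"publish": {"render", "sim_cache"}}

# Merge both layers into one predecessor map.
deps = {}
for layer in (implicit, explicit):
    for node, preds in layer.items():
        deps.setdefault(node, set()).update(preds)

# A valid execution order for the master Output Graph; anything without a
# dependency between it and another output (e.g. sim_cache vs render) is
# free to run in parallel.
order = list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` also raises on cycles, which is a useful check when user-defined orders could contradict the implicit ones.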

ProductIO Panel

By standardizing all the Productized nodes and channeling them through the same framework, we were then able to build higher-level tools to manage them. One such tool is the ProductIO Panel. As more nodes were Productized, by artists and tool developers alike, we needed a way to see and manage all the Product-node connections in one place. For this, we built a spreadsheet-style Python Panel, based on Houdini’s node search functionality, that displays all the Productized nodes in the scene, with options to filter on properties like node type, node name, and PRI.

For Output nodes, the panel lets users see all the nodes writing to an Output and build the Output Graph to run the output processes of selected nodes. It is where users manage the basic dependencies between nodes that are translated into the final Output Graph. To make it easier to tweak a few parameters on the nodes upon execution, we implemented a mechanism that allows parameters on Productized nodes to be tagged and dynamically added to the panel without having to extend the Qt code.
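The tagging mechanism can be sketched as follows. The tag name and the dict-based node layout are hypothetical; in Houdini this would go through the node’s parameter templates, but the discovery idea is the same: the panel finds tagged parameters at runtime rather than hard-coding a Qt widget for each one.

```python
PANEL_TAG = "productio_panel"  # hypothetical tag name

def panel_parms(node):
    """Return the parameters a node wants exposed in the ProductIO Panel."""
    return [p["name"] for p in node["parms"] if PANEL_TAG in p.get("tags", ())]

node = {
    "name": "usd_export1",
    "parms": [
        {"name": "frame_range", "tags": (PANEL_TAG,)},
        {"name": "purpose", "tags": (PANEL_TAG,)},
        {"name": "internal_cache", "tags": ()},  # stays off the panel
    ],
}
exposed = panel_parms(node)  # → ["frame_range", "purpose"]
```

Adding a new tweakable parameter to the panel then only requires tagging it on the node; no Qt code changes.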

ProductIO Panel for Outputs (Top) and Inputs (Bottom).

Finally, since version control is fundamental to Conduit workflows, the panel also allows users to commit multiple outputs and manage the versions of all Input nodes. While we have a Qt interface built for Conduit as a standalone application to manage a user’s workspace, ProductIO Panel is specifically meant to allow users to visualize and manage the Conduit dependencies at the node level within the scope of a scene (i.e. hipfile).

Future Work

ProductIO has greatly simplified the process of reading and writing in Conduit from Houdini, and streamlined most of the tools now used in production for our upcoming feature film. Nevertheless, it is actively evolving as we look for areas to improve. A lot of the current design is driven by hiding the unfamiliarity of new technologies and concepts from users and bringing back as much of what artists were used to as possible, in hopes of a smoother transition, but sometimes at the cost of a truly elegant system. This translates into technical debt that can be unhealthy for a pipeline.

As artists become more comfortable with Conduit, we may look into reducing the parameters in Product Properties to reduce front-end complexity. Ultimately, the fewer parameters users have to worry about, the better. Many of the parameters are also currently tied together implicitly behind the scenes, which makes the implementation rigid. On the other hand, PDG will continue to be an integral part of our Houdini workflows, so we may start transitioning our workflows to be more TOP-focused and empower artists to take full control with the help of ProductPDG.

We look forward to sharing more fun stuff that we’ve been doing in Houdini, especially with Solaris/USD! Stay tuned!


Much love for all the artists and developers who have helped me design and test the system, especially my awesome teammates Kenji Endo and Chris Rydalch!