Technically Speaking: Building B.I.L.L. — pt. I

Last year we launched one of our most ambitious projects to date, the Bot Initiated Longevity Lab (B.I.L.L.), a robot-augmented system to clean and repair sneakers.

PCH Innovations
10 min read · Feb 14, 2023

The culmination of two years of close collaboration with Nike’s innovation team, B.I.L.L. was built as much to extend the life of pre-worn sneakers as to bring the topic of circularity closer to consumers in an exciting way. As B.I.L.L. continues its journey from Niketown London to Nike’s European Headquarters in Hilversum, we sat down with two of the PCH engineers behind its development, Florian Born (hardware extraordinaire) and Christian Kokott (software superpower) to talk about their unique processes, challenges, and the tech that made it happen.

Florian, you’re a roboticist with an arts background. Does that different perspective influence how you work?

FB | I think coming from a different background keeps you more open-minded about custom solutions, and keeps you from restricting yourself from the first sketch on.

With many of our projects, I think this kind of openness has led to more playful, unique and multi-purpose ideas that are maybe not so common in more traditionally trained engineering studios.

Can you tell us a bit more about the creative process with the B.I.L.L. project specifically?

FB | Right from the beginning, it was very hands-on and experimental. We started by diving into deconstruction: like, what is the anatomy of a sneaker? How does it come together and what are its weaknesses?

We sourced a lot of used shoes from second-hand retailers and local shops, and asked our team to bring in their old beat-up sneakers as well, and we started identifying a lot of common flaws just by visual inspection. But by disassembling these sneakers in our workshop and trying to put them back together again using traditional cobbler methods and new techniques (like hot glue for patching), we started to see new and interesting ways of repairing a shoe.

And, as we did these manual, creative interventions, it became clear to us that the project wouldn’t be about recreating a new shoe, but about augmenting the sneaker so the customer would value it more — to embrace the repair and not hide it.

An early exploration of what would become B.I.L.L. (Sketch and animation by PCH Industrial Designer + Robotics Engineer, Ingur Boettger)

Once we selected the interventions we wanted to focus on, it was an iterative prototyping process: quickly sketching an idea out, seeing how we could achieve it and what tools we would need, then starting on the digital prototype in Fusion 360, our 3D CAD software, sketching out the key mechanisms and getting a better understanding of how feasible the solution could be.

“[It was] about augmenting the sneaker so the customer would value it more — to embrace the repair and not hide it.”

Then we started machining simple components and 3D-printing to validate the digital prototype, test it, adapt it, and then repeat this optimization process. It helped to have the machining capabilities in-house because we kept evolving each version throughout the project: both to achieve the desired repeatability and stability of the system, and to be able to add our weird design-y touch to it, which might not happen in a normal engineering studio. The special location where we built the B.I.L.L. prototypes [our Berlin ex-Kindergarten studio] was also great for keeping this element of modularity and compactness, because the space is limited. So our machines are sort of always a reflection of who we are, but also where we are.

Within such an experimental process, do you have to kill a lot of darlings? Or is it very clear which intervention or idea you’re going to pursue?

FB | Mmm. I mean, it happens sometimes, but usually we all manage to sneak in our favorite features. Or, if we don’t incorporate it in the first version, we keep it in the back of our minds to maybe add later to a different version. Because of the modular concept, we could always isolate systems and experiment with different approaches (for example, a way of cleaning that died during iterative prototyping) and bring it back to life at a later stage. But, in the end, what makes it into the final system is what delivers the best outcome in terms of function.

CK | There’s still an exciting, dryer concept in the drawer…

Manual tools and interventions

Christian, as the mastermind of the software architecture, could you tell us a bit more about what informed the architecture?

CK | It was really a mix of requirements. On one hand, we have an industrial automation setup with a PLC [Programmable Logic Controller]: it’s controlling lots of inputs/outputs and motors; it’s getting a lot of information from sensors, including the two robotic arms it’s instructing; and all this is a fairly standard robotic work cell setup. But on the other hand, we wanted to have a smart, modern web UI, because the person operating it will not necessarily be a mechatronics engineer, right? It was going into a retail space, so the operator might be a person from that retail space who, like a customer, is used to snazzy app UIs with a certain standard of flexibility and responsiveness. So, yeah, it was kind of like two worlds coming together in this project.

“…it was kind of like two worlds coming together in this project.”

From a software architecture point of view, what was super important was that, while we have a machine with a lot of set modules [scanning, cleaning, drying, patching], the path the shoe takes through the whole machine has to be very dynamically configurable. So, it’s a bit different to more traditional automation setups where the object being worked on goes through the same sequence of steps over and over again and you can optimize for that.

In our case you can put in a shoe and say, ‘This shoe only needs to be cleaned and not patched,’ or the drying time can be adapted for different shoe materials; these options are configurable by the customer. So that meant we needed the system to be extremely flexible.
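To make that concrete, a per-shoe journey can be modeled as nothing more than an ordered, editable task list. Here is a minimal sketch in TypeScript; the station names and fields are illustrative assumptions, not B.I.L.L.’s actual schema:

```typescript
// Hypothetical per-shoe job configuration; station names and
// parameters are illustrative, not B.I.L.L.'s real schema.
type StationId = "scan" | "clean" | "dry" | "patch";

interface StationTask {
  station: StationId;
  // Optional per-task overrides, e.g. a longer drying time
  // for a delicate material.
  params?: Record<string, number | string>;
}

interface ShoeJob {
  shoeId: string;
  // The journey is just an ordered, editable list of tasks, so
  // "clean only, no patching" is simply a shorter list.
  tasks: StationTask[];
}

// Example: a sneaker that only needs a scan, a clean, and a gentle dry.
const job: ShoeJob = {
  shoeId: "shoe-0042",
  tasks: [
    { station: "scan" },
    { station: "clean" },
    { station: "dry", params: { durationSec: 900 } },
  ],
};
```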

How did you reconcile your more modern software engineering background with the more traditional automation challenge?

CK | In software development these days, we’re used to very flexible development environments that come from decades of collaborative iteration: front-end development, back-end development, video games. But for automation systems like B.I.L.L., PLC programming is usually the norm, which in most cases means bespoke state machines. There are often these standard approaches, and it seems like for every automation project people basically digitally recreate the standard automation circuit and adapt it to their needs. It’s like implementing electronic circuits in software by saying, ‘Okay, if I press this button, this thing happens; then the conveyor turns, and when this thing arrives, this other thing happens.’ It’s very literal, very linear, and because of that, not reusable.

What was interesting for me was deciding how much of the logic should still stay on this industrial computer, on the PLC, and how much I could move into this modern software environment. My initial instinct was to move as much as possible to this modern world. But, then, of course, there are good reasons why these systems are a little more elaborate or clunky… they have different constraints. Automation systems are developed for reliability, reproducibility, real-time behavior, and robustness rather than for developer experience and speed of prototyping.

My goal with the architecture was to create a hybrid responsive state machine which is able to tell the PLC, the robots and the stations what to do very dynamically. For example, we have multiple stations in the machine, they all have their task lists, and I wanted to be able to just distribute these tasks and the machine figures out the priority and what to do next.

“My goal with the architecture was to create a hybrid responsive state machine which is able to tell the PLC, the robots and the stations what to do very dynamically.”

This had a lot of advantages: it allowed us to dynamically change how the shoe goes through the system, and it also helped a lot with iterations. Every time we wanted to make a change, we didn’t have to re-architect the system; we just changed the tasks given to the different stations in the machine. Then, later, I could give the customer or operator control over this journey.

So, in the end, we managed to architect this system in a way that is flexible and dynamic while still keeping true to its industrial reliability focus. And the system is also general enough to be reused for future projects, because programming a machine with lots of moving parts that perform tasks in sequence will happen again. So that’s nice.
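As a rough flavor of that general pattern, here is a hypothetical simplification (our sketch, not B.I.L.L.’s production code): stations report whether they are free and which tasks they accept, and a small loop hands out work dynamically instead of hard-coding one fixed sequence.

```typescript
// Sketch of a dynamic task dispatcher; a hypothetical simplification,
// not B.I.L.L.'s production code.
interface Task {
  station: string; // e.g. "clean", "dry", "patch"
}

interface Station {
  id: string;
  busy: boolean;
  accepts(task: Task): boolean;
  run(task: Task): Promise<void>;
}

async function dispatch(queue: Task[], stations: Station[]): Promise<void> {
  while (queue.length > 0) {
    for (const station of stations.filter((s) => !s.busy)) {
      const i = queue.findIndex((t) => station.accepts(t));
      if (i === -1) continue;
      const [task] = queue.splice(i, 1);
      station.busy = true;
      // Fire and forget: the station frees itself when done, so
      // other stations keep working in parallel.
      void station.run(task).finally(() => (station.busy = false));
    }
    // Yield briefly so running tasks can complete and free stations.
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
}
```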

What software did you end up using for this modern automation hybrid?

CK | So, we opted for a fairly typical modern web stack, which in our case meant a Node.js backend with Vue.js on the front end that, in the background, communicates with a Beckhoff PLC using the awesome open-source ads-client package (https://github.com/jisotalo/ads-client). The common industrial automation environments come with an HMI [Human Machine Interface] solution, but the provided tooling often lacks flexibility and was not really capable of doing what we wanted to do, both from an aesthetic and a responsiveness point of view. Also, in terms of development speed, and being able to tap into the web development community to see what plugin libraries you can use, or how others are solving some of the communication challenges we addressed, having a modern web stack was super useful…
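For a flavor of what that bridge can look like, here is a minimal ads-client sketch; the AMS Net ID and all PLC symbol names are made-up placeholders, not B.I.L.L.’s real configuration.

```typescript
import { Client } from "ads-client";

// Connect to the Beckhoff PLC's TwinCAT 3 runtime (ADS port 851).
// The AMS Net ID and the GVL_Bill symbol names are illustrative
// placeholders, not the real B.I.L.L. setup.
const client = new Client({
  targetAmsNetId: "192.168.1.120.1.1",
  targetAdsPort: 851,
});

async function main(): Promise<void> {
  await client.connect();

  // Read a PLC variable by its symbol name...
  const door = await client.readSymbol("GVL_Bill.bDoorClosed");
  console.log("Door closed:", door.value);

  // ...write one to kick off a station...
  await client.writeSymbol("GVL_Bill.bStartCleaning", true);

  // ...and subscribe to changes (checked every 200 ms) so the
  // web UI can stay in sync with the machine state.
  await client.subscribe("GVL_Bill.eMachineState", (data) => {
    console.log("Machine state is now:", data.value);
  }, 200);
}

main().catch(console.error);
```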

If we go into the process of the sneaker-cleaning journey, the first step is the scan of the shoe. Can you talk us through why it was integral to the process?

CK | From a software and a UX perspective, this was actually the first technical risk to address. Before we even thought about robotic arms, we were evaluating whether we could scan a shoe to a sufficient quality, because this scan of the shoe is important for multiple reasons.

Firstly, we needed to understand the shape of the shoe in order to guide the robotic motions, for, say, cleaning, which requires a certain precision and specific orientations to deliver the best cleaning outcome.

Secondly, we needed to have an accurate representation of the shoe’s color and material so that the customer is able to put patches on the sneaker and see, digitally, if it matches their shoe in the real world.

Then, thirdly, we had to consider the overall quality of the final 3D model, because, at the end of the B.I.L.L. experience, the customer also receives their scanned sneaker as a 3D asset to take home: to share, to 3D print as a keychain, or maybe even at some point to put into Fortnite and have their character wear their sneakers.

On top of this is the speed factor, because this scan happens inside a customer journey and they don’t want to stand there for three hours, waiting for the shoe to get reconstructed.

How did you end up implementing it?

CK | In the end, the combination of constraints made photogrammetry the most reasonable choice. We initially tested it with a consumer DSLR camera and a self-made turntable setup to take pictures automatically from all angles of the shoe. Then we fed the images into Reality Capture [photogrammetry software], and the result looked good enough that someone could recognize their shoe.

Scan output using photogrammetry with a set of cleaning points for the robotic arm to follow.
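The turntable pass itself is conceptually simple: rotate the shoe in fixed increments and trigger the camera at each stop. A minimal sketch, with both hardware helpers as hypothetical stand-ins for the real motor and shutter control:

```typescript
// Automated turntable capture pass: rotate in fixed steps and take a
// photo at each stop. Both helpers below are hypothetical stubs for
// the real stepper-motor and DSLR shutter control.
async function rotateTurntableTo(degrees: number): Promise<void> {
  console.log(`(stub) rotating turntable to ${degrees} degrees`);
}

async function triggerCameraShot(outPath: string): Promise<void> {
  console.log(`(stub) capturing frame to ${outPath}`);
}

async function captureTurn(shots = 36): Promise<string[]> {
  const paths: string[] = [];
  for (let i = 0; i < shots; i++) {
    const angle = (360 / shots) * i;
    await rotateTurntableTo(angle);
    // Short settle delay so vibration doesn't blur the frame.
    await new Promise((resolve) => setTimeout(resolve, 300));
    const path = `capture/angle_${angle.toFixed(0)}.jpg`;
    await triggerCameraShot(path);
    paths.push(path);
  }
  return paths; // Image set handed to the photogrammetry software.
}
```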

And so, after confirming that this approach could work, we started programming robot motions, but, instead of a turntable setup, we would get a robotic arm to actually hold the shoe in front of the camera. We switched to an industrial camera that gave us more options for triggering it, to have these two things happening in sync, and after that, it was all about tweaking and optimizing the setup to reliably scan different shoe types.

Pitch-black shoes, white shoes, reflective shoes… that was also a big challenge and source of uncertainty. So, to illuminate key areas of the shoe and eliminate shadows, we installed spotlights at strategic points and dynamically adapted the exposure of the camera depending on the color of the shoe. Luckily, the shoe models we needed to support for this installation had enough non-reflective features to work without having to resort to reflection-elimination solutions, like scanning sprays or covering the reflective areas of the shoe.
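The exposure adaptation can be read as a small metering loop: grab a preview frame, measure its mean brightness, and nudge the exposure time until the shoe sits in a usable band. A sketch of that idea, where the frame-grabbing and exposure-setting callbacks stand in for whatever the industrial camera’s SDK actually provides:

```typescript
// Metering-loop sketch for dynamic exposure: dark shoes meter low and
// get a longer exposure; bright or reflective shoes get a shorter one.
// The two callbacks are stand-ins for the real camera SDK.
async function autoExpose(
  grabPreview: () => Promise<Uint8Array>, // 8-bit grayscale pixels
  setExposureUs: (us: number) => Promise<void>,
  initialUs = 10_000
): Promise<number> {
  let exposure = initialUs;
  for (let attempt = 0; attempt < 8; attempt++) {
    await setExposureUs(exposure);
    const pixels = await grabPreview();
    let sum = 0;
    for (const p of pixels) sum += p;
    const mean = sum / pixels.length; // 0 = pitch black, 255 = blown out

    if (mean >= 90 && mean <= 160) return exposure; // usable band
    // Scale the exposure toward a mid-gray target of ~125.
    exposure = Math.round(exposure * (125 / Math.max(mean, 1)));
  }
  return exposure; // best effort after max attempts
}
```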

In a field where everything’s changing all the time, does the sudden emergence of new technology derail the process, or do you at some point just commit to the methods you started with?

CK | We obviously work with, and on, exploratory tech, so it’s always important for us to keep up with the latest developments. There were times when we thought about replacing the photogrammetry approach, for example, as there is all this new capturing tech coming out; Apple came out with a new LiDAR sensor integrated into the iPad. But the alternative solutions we explored were either worse in quality or didn’t meet one or other of our key requirements.

There are always a lot of unknown variables, but yeah, throughout the whole process of this development, everyone in the team was looking out for, like, new scanning solutions. One of those we actually ended up using to create a video for the Niketown London storefront (see below).

Video created using Nvidia’s NeRF technology

This interview will be continued in Part II.

Looking to explore new ways of applying technology for circular experiences and manufacturing? Get in touch!

Interview conducted and edited by Gabriella Seemann and Dev Mishra.
PCH Innovations is a Berlin-based, creative engineering studio for exploratory technology.
