CrystalBot: Building my Crystallography Robot

Henry Hollis
Aug 4

When I recently began working in a lab, I became aware of a problem biochemists encounter all too often: hunting under a microscope for crystallized proteins in trays, each with hundreds of wells. This rote task seemed to me like a perfect application for automation and machine learning, two of my favorite things! I thought to myself: what if I could build a contraption to take a picture of every well in the tray and have the computer simply report back which wells have crystals? This is my attempt to do just that (note: it is a work in progress):

A typical crystallography tray where each circle is a well with a potential crystal.

I want to emphasize that there is real utility to this idea. These trays are used universally in scientific research and industry, and each one takes a researcher 10+ minutes to analyze. Reducing the overhead of such a tedious procedure could let our best and brightest get back to focusing on the actual science, not a game of I Spy.

My initial idea was to create some sort of frame that would attach universally to microscopes and move back and forth, using a standard microscope camera to snap pictures. From previous experience with Raspberry Pi controllers, I knew I could use one to drive the motors.

My first attempt at controlling stepper motors with the Raspberry Pi. I also thought I could use the Raspberry Pi camera to take pictures instead of relying on the user's microscope-camera setup.

I quickly discovered the smaller motor was tragically underpowered, so I switched to two larger NEMA motors:

Look at how much of a rat’s nest the wiring is :P
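The motor-driving logic itself is simple once the wiring is sorted out. Here is a minimal sketch of full-step sequencing for a four-wire stepper; the coil pattern is standard, but the pin assignments and steps-per-millimeter figure are hypothetical placeholders, not measurements from my build. The sequence logic is kept pure Python so it works with any GPIO library.

```python
# Full-step coil sequence for a bipolar stepper: each tuple is the
# energized state of the four coil lines (A+, A-, B+, B-).
FULL_STEP_SEQUENCE = [
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 0, 0, 1),
]

def step_pattern(step_index):
    """Return the coil states to write for a given step number."""
    return FULL_STEP_SEQUENCE[step_index % len(FULL_STEP_SEQUENCE)]

def steps_for_distance(mm, steps_per_mm=80):
    """Convert linear travel to motor steps (80 steps/mm is an assumed
    value -- calibrate for your own leadscrew or belt)."""
    return round(mm * steps_per_mm)
```

In practice, a loop walks through `step_pattern(i)` for `steps_for_distance(distance)` iterations, writing each tuple to the driver pins with a short delay between steps.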

Fortunately, I am also a 3D-printing hobbyist, so I was able to fabricate my own parts to create the frame of CrystalBot:

I designed the first prototype in Fusion 360:

It was at this point that I decided not to rely on the user's microscope at all. Having a dedicated camera greatly simplifies the engineering. After some searching on Amazon, I found a handheld microscope camera perfect for this application:

After all, the wells are small enough that you don’t need a super powerful microscope to look at them.

Now I had my first prototype. The robot moves the tray and takes 288 individual pictures, one per well:
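The scan itself is just a matter of visiting every well in a fixed grid. Here is a sketch of how the stage coordinates might be generated; the 12×24 layout (which gives 288 wells) and 4.5 mm well pitch are assumptions for illustration, not the actual tray dimensions. A serpentine path (reversing direction on alternate rows) minimizes travel between shots.

```python
def well_positions(rows=12, cols=24, pitch_mm=4.5):
    """Return (x, y) stage coordinates for every well, in serpentine
    order so the stage never has to sweep back across a full row."""
    positions = []
    for r in range(rows):
        # Even rows scan left-to-right, odd rows right-to-left.
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            positions.append((c * pitch_mm, r * pitch_mm))
    return positions
```

The capture loop then moves to each position and triggers the camera, saving one image per well.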


So how do I get a computer to do all the hard work for me? This is where I get to use machine learning!

As all machine learning practitioners know, the basis of a good model is data. Fortunately, the MARCO (MAchine Recognition of Crystallization Outcomes) organization has already gathered the perfect dataset! (40,000+ training images)

Here is an example of a protein crystal under the microscope. Giving the computer thousands of these images, and thousands of images of empty wells, we can get the computer to recognize the differences! Image courtesy of the MARCO organization.

In fact, MARCO has already trained a model and published their methods in this brilliant article.

The MARCO model is a convolutional neural network based on the popular Inception-v3 architecture. Using it, I can correctly identify crystals in my wells about 95% of the time.
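With a trained model in hand, the glue between CrystalBot's images and the final report is thin. Here is a sketch of that step; `predict` stands in for whatever inference call the model exposes (a function from one well image to a crystal probability), and the 0.5 threshold is an illustrative default, not a tuned value.

```python
def wells_with_crystals(images, predict, threshold=0.5):
    """Return the indices of wells whose predicted crystal
    probability meets the threshold.

    `predict` maps a single well image to P(crystal); any model
    (MARCO's or a custom CNN) can be plugged in here.
    """
    return [i for i, image in enumerate(images)
            if predict(image) >= threshold]
```

Keeping the model behind a plain function like this makes it easy to swap MARCO's network for an experimental one without touching the scanning code.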

But can I do better? I have been playing around with making my own CNN, with limited success so far. Currently I'm trying to build a model based on the NASNet architecture, but my results don't come close to the accuracy of MARCO's model.
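One common way to build on NASNet is transfer learning with the pretrained variants that ship with Keras. The sketch below uses `NASNetMobile` with a frozen backbone and a single sigmoid output for crystal/no-crystal; this is my guess at a reasonable setup under those assumptions, not the exact approach used in this project.

```python
import tensorflow as tf

def build_classifier(input_shape=(224, 224, 3), weights="imagenet"):
    """Binary crystal classifier on a NASNetMobile backbone.

    The backbone is frozen so only the new output layer trains at
    first; the ImageNet weights are a common starting point, not a
    crystallography-specific choice.
    """
    base = tf.keras.applications.NASNetMobile(
        include_top=False,
        weights=weights,
        input_shape=input_shape,
        pooling="avg",
    )
    base.trainable = False  # freeze pretrained features
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(crystal)
    ])
```

After the head converges, unfreezing the top of the backbone and fine-tuning with a small learning rate is the usual next step.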

Even though I haven't been able to match MARCO's model with my own, I have reduced the time to analyze an entire crystal tray to just over 2 minutes!


Now that I have a working prototype, I am continuing to experiment with new neural network models for image classification. With the rapid advancement of CNNs, I'm excited to keep learning and improving my machine learning techniques!

Additionally, the current prototype is just that: a prototype. I am now looking into making CrystalBot a more durable machine, potentially using milled aluminum instead of plastic and wood. I also want to build a better user interface and solder a permanent circuit board to replace the breadboard.

My intention is to release the designs and code and keep this project open source, so that any researcher is free to replicate or improve my design. The current code, written in Python, can be found in my GitHub repository.

Written by Henry Hollis

Computer Science Student at Wake Forest University
