Project North Star Calibration: Pixel Hacking Trial and Error
Project North Star is an open-source augmented and mixed reality headset designed by Leap Motion. I was extremely excited when Project North Star was released in the middle of 2018; I had been looking for something exactly like it for my research. For the past two and a half years, I’ve been working on prototyping AR for space suit helmets with Prof. Dava Newman at the Human Systems Lab. It started with the HoloSEXTANT project, where we built HoloLens navigation and exploration tools for use in the field on NASA analog missions. The feedback was resoundingly positive, and we decided to build more. After hitting hardware limitations on the HoloLens, my goal became building a more open, easier-to-experiment-with AR toolkit for these mission analogs. Finding Project North Star made that goal much easier.
The reference design for North Star provides a large field of view, a lightweight and flexible design, and easy-to-fabricate parts. Having built mine over January, my next milestone was calibration. While the hardware design has gone through a few iterations, there’s next to no software for the North Star: a simple Unity project from Leap Motion for early calibration is all that’s available. The difficulty is further aggravated by the lack of documentation.
Throughout the process I referenced two sources to understand calibration better. Psychic-VR-Lab wrote a blog post on how they performed calibration. It was an excellent source, but a little sparse on descriptions. After seeing numerous people in the community struggling with calibration, Tasuku Takashi made a great video where he walks through his calibration.
But after going through calibration myself, I figured I’d write up an account of my own process.
Setup
The Unity project from Leap Motion can be found here:
After cloning the ProjectNorthStar repo, open a new Unity project and drag in the North Star .unitypackage, which is in the Software folder.
The NorthStar scene comes set up and ready; it just needs a few tweaks before calibration. The reference design from Leap Motion uses BOE displays, so if you’re using different displays, you will need to make some modifications.
After plugging in the displays, ensure that your display settings are set to the appropriate orientation and resolution. In my case, I used the Raspberry Pi displays and set the resolutions to 1080x1920. With the Raspberry Pi displays, one display is mounted in the reverse orientation of the other; this ensures that both sets of wires connect on the outsides of the headset. In the end, the headset must show an image like the one below. One of the displays will need its orientation flipped in the display settings so that the desktop appears inverted when viewed from outside.
Now for the Unity modifications. The resolution and size of the displays you use are the critical parameters to tweak. Click on the ARCameraRig prefab and change the X offset of the WindowOffsetManager; I set mine to 1680 for the RasPi displays. In play mode, clicking the “Move Game View to Headset” button will move the game view over by that many pixels, onto the displays in the headset. For me, this didn’t work perfectly: my game view would move less than 1680 pixels regardless of the number, so I ended up manually dragging the game view over to fit into the two displays.
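If the button misbehaves for you too, it helps to know roughly what it is doing. Here is a minimal sketch of the underlying idea, assuming a Windows standalone Editor; this is not the package’s actual WindowOffsetManager code, just the same kind of Win32 call that window-moving utilities rely on:

```csharp
using System;
using System.Runtime.InteropServices;

// Sketch only: moves the active window to a pixel offset on Windows.
// A hypothetical stand-in for what "Move Game View to Headset" does;
// the real WindowOffsetManager in the North Star package is more involved.
public static class WindowMover
{
    [DllImport("user32.dll")]
    private static extern IntPtr GetActiveWindow();

    [DllImport("user32.dll")]
    private static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint flags);

    private const uint SWP_NOSIZE = 0x0001;   // keep the window's current size
    private const uint SWP_NOZORDER = 0x0004; // keep its current z-order

    // e.g. MoveTo(1680, 0) for my RasPi setup
    public static void MoveTo(int xOffset, int yOffset)
    {
        SetWindowPos(GetActiveWindow(), IntPtr.Zero, xOffset, yOffset, 0, 0,
            SWP_NOSIZE | SWP_NOZORDER);
    }
}
```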
Next, the resolution must be adjusted if the BOE displays are not used. There are three locations where it must be changed:
The numbers represent (2 x resolution height, resolution width).
The Raspberry Pi displays have a resolution of 1080x1920 instead of the BOE’s 1440x1600, so the values should be replaced by (2160, 1920) in all three locations.
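To make the arithmetic concrete, here is the substitution spelled out (the field names here are hypothetical; the package’s actual variable names differ):

```csharp
using UnityEngine;

public static class DisplayResolutions
{
    // Convention: (2 x panel height, panel width). Names are hypothetical.
    public static readonly Vector2Int Boe   = new Vector2Int(2 * 1440, 1600); // (2880, 1600)
    public static readonly Vector2Int RasPi = new Vector2Int(2 * 1080, 1920); // (2160, 1920)
}
```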
The last step is to change the parameters of the GameObjects that represent the displays.
After finding the LeftScreen and RightScreen GameObjects, change their scales to match the resolution ratios of the displays that you’re using. For the Raspberry Pi displays, this is a scale of (1.1, 0.833333, 1), which can also be applied from a script as sketched below.
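If you prefer doing this from code rather than the Inspector, a minimal sketch (the GameObject names match the NorthStar scene; the scale values are simply the ones I landed on for the RasPi panels):

```csharp
using UnityEngine;

// Sketch: apply the RasPi screen scale from a script instead of the Inspector.
public class ScreenScaleSetup : MonoBehaviour
{
    void Start()
    {
        Vector3 rasPiScale = new Vector3(1.1f, 0.833333f, 1f);
        GameObject.Find("LeftScreen").transform.localScale = rasPiScale;
        GameObject.Find("RightScreen").transform.localScale = rasPiScale;
    }
}
```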
Calibration
Now we can actually start manual calibration. Don the headset, move the game view to the headset, and make sure your cursor has been moved into the game view. This can be tricky because clicking on the game view can place a calibration control point wherever you click. A trick I use is to drag my cursor into the game view and right-click, which has no unwanted side effects.
Pressing C will bring up the calibration bars, a set of vertical and horizontal bars that can be used to line up both eyes. In my usage, I’ve found that these bars are not that useful. Similar to Tasuku, I worked only with the skeletal view of my hands. The arrow keys and bracket keys allowed me to line up the skeletal hands close to my real hands in size, angle, and position. It wasn’t perfect, but it was close enough.
The final step is to add calibration points. My approach is to add one calibration point at the center of each eye’s view. Once clicked into the game view, the cursor should now be a green dot a few pixels in size. Close one eye and place the dot in the middle of the other eye’s field of view. With your free hand held up, move the mouse cursor while holding the button down. I work one screen at a time by closing one eye: I line up the skeleton with my hand, then work on the other eye. After doing this, I open and close my eyes one at a time to make sure both virtual hands are aligned. In his video, Tasuku recommended also adding points at the edges of the field of view to account for edge distortion. I found that I didn’t notice it much, and for simplicity’s sake kept to only the two calibration points. However, you might want to experiment with adding calibration points around the outside edges of the view as well.
After this manual calibration process, make sure you save your files. While in play mode, calibration files can be stored with the “Serialized Calibration” component under the ARCameraRig GameObject. Make sure the correct folder is selected in the Calibration Folders box; files can only be saved to and read from the StreamingAssets folder. Pressing S while the cursor is in the game view will save your calibration settings. This is useful for future sessions, although I’ve found that minor variations can require re-calibration.
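As background on why StreamingAssets matters: it is the one folder Unity copies verbatim into a build, so files written there can be read back the same way in the Editor and in a standalone player. A minimal sketch of reading a saved calibration back (the file name is a hypothetical placeholder, not the component’s actual output name):

```csharp
using System.IO;
using UnityEngine;

// Sketch: read a saved calibration file back from StreamingAssets,
// the folder the North Star project loads calibrations from.
public static class CalibrationLoader
{
    public static string LoadRaw(string fileName = "calibration.json") // placeholder name
    {
        string path = Path.Combine(Application.streamingAssetsPath, fileName);
        return File.Exists(path) ? File.ReadAllText(path) : null;
    }
}
```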
One final note: mechanical failures can be an issue with calibration. Through experimentation, I found that pushing the bottoms of the frame together made some of the distortion and other artifacts disappear. My guess is that the frame warped when printed. A simple fix, suggested by @noahzark, is to use zip ties to hold the frame together. A stronger frame design could help mitigate this, as could pulling the back sides of the frame together along the sides of the face. Experiment with pushing on your frame while wearing the headset to see if you have this issue; placing a small cube a few feet in front of you helps you notice any differences as you push (a quick way to spawn one is sketched below).
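Any static object at a known depth will do; this throwaway script is one way to spawn one:

```csharp
using UnityEngine;

// Throwaway test object: a 10 cm cube roughly two feet (~0.6 m) in front
// of the camera, for spotting distortion changes while flexing the frame.
public class CalibrationTestCube : MonoBehaviour
{
    void Start()
    {
        Transform cam = Camera.main.transform;
        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.localScale = Vector3.one * 0.1f;
        cube.transform.position = cam.position + cam.forward * 0.6f;
    }
}
```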
This calibration process is long and a bit tedious, but it is definitely rewarding to see the final result: the x-ray-vision skeletal hands tracking your real hands at varying depths and positions. Hopefully we can develop better, more automated calibration methods as the community and the project develop further.
Lots of thanks to the North Star community and the several awesome folks who help people along the way.
Shoot me a note @EswArVr on Twitter if you have any thoughts, corrections, or comments. Follow along with the project via the #projectNorthStar hashtag.