Stargazing and self-driving cars

Lately I have been thinking a lot about what kind of camera sensors my self-driving car should have, and about camera placement. A lot of it revolves around imagining the car as the Earth, and what is going on around it as a bunch of stars we are trying to make sense of.

What comes below is just a brain dump for me to look at later:

1. The space around the car can be thought of as a sphere: everything we can capture with our sensors lives inside a sphere with a 200-meter radius.
2. I should look into what image format people who count stars use: for example, FITS. That way I can encode metadata like per-pixel semantic segmentation, direction of movement, magnitude of movement, and distance.
3. Using Waterman's P5 projection, with a square fixed right in front of the car and the 4 pentagon pairs around it: this projection has been used successfully to unwrap the Earth in a way that flattens it while preserving a lot of the information.
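As a back-of-the-envelope check on the sphere in point 1, here is a quick sketch of how many pixels full-sphere coverage would take. The target size and pixels-on-target numbers are my assumptions for illustration, not figures from the list:

```python
import math

RADIUS_M = 200.0        # sensing-sphere radius from point 1
TARGET_SIZE_M = 0.5     # assumed smallest object we care about at the boundary
PIXELS_ON_TARGET = 2    # assumed minimum pixels across that object

# Angular size of the target at the sphere boundary (small-angle approximation).
target_deg = math.degrees(TARGET_SIZE_M / RADIUS_M)

# Required angular resolution per pixel.
pixel_deg = target_deg / PIXELS_ON_TARGET

# A full sphere covers 4*pi steradians, about 41,253 square degrees.
sphere_sq_deg = 4 * math.pi * (180 / math.pi) ** 2

pixels_needed = sphere_sq_deg / pixel_deg ** 2
print(f"{pixel_deg:.4f} deg/pixel -> {pixels_needed / 1e6:.0f} megapixels for the full sphere")
```

With these assumptions, resolving a half-meter object at the 200-meter boundary works out to roughly 8 megapixels over the whole sphere, which is why splitting the sphere across many cheap sensors looks feasible at all.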
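To get a feel for the FITS idea in point 2, here is a minimal sketch of the FITS header layout that star-counting software expects: 80-character keyword cards packed into 2880-byte blocks. It is pure Python with no astropy, the fixed-format alignment is simplified, and the DISTMAX and SEGCLASS keywords are names I made up to illustrate driving metadata, not part of the standard:

```python
def fits_card(keyword, value, comment=""):
    """Format one 80-character FITS header card (alignment simplified)."""
    if isinstance(value, str):
        body = f"'{value}'"
    elif isinstance(value, bool):
        body = "T" if value else "F"
    else:
        body = repr(value)
    card = f"{keyword:<8}= {body:>20} / {comment}"
    return card[:80].ljust(80)

def fits_header(cards):
    """Join cards, append the END card, pad to a multiple of 2880 bytes."""
    header = "".join(cards) + "END".ljust(80)
    pad = (-len(header)) % 2880
    return header + " " * pad

# Hypothetical per-frame metadata for a driving scene.
cards = [
    fits_card("SIMPLE", True, "conforms to FITS standard"),
    fits_card("BITPIX", 16, "bits per pixel"),
    fits_card("NAXIS", 2, "image dimensions"),
    fits_card("NAXIS1", 1280, "width"),
    fits_card("NAXIS2", 720, "height"),
    fits_card("DISTMAX", 200.0, "sensing radius in meters (made-up keyword)"),
    fits_card("SEGCLASS", "vehicle", "dominant class (made-up keyword)"),
]
header = fits_header(cards)
print(len(header))  # prints 2880
```

In practice the per-pixel layers (segmentation, motion, distance) would go in extra image HDUs rather than header cards, but the card format above is the part worth borrowing: every bit of metadata is self-describing.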

4. I can use an FPGA that is designed to be used as a 100-gigabit router to be the central hub for all the cameras and output a single coherent view of the world (see point 5). Different parts of the world the car sees could be encoded at different resolutions.

5. I should probably use the same type of lens GoPro uses (M12, also known as S-mount); there are several decent-quality lenses I can get for around a dozen dollars.

6. I should learn to buy CMOS sensors and create the board myself; this way I can buy dozens of sensors for dozens of dollars each. The one I am very interested in is:

7. For the front and back I should pair high-fps cameras (wide, 720p at 300 fps) in stereo mode with a high-resolution camera (narrow, above 4K, 4:3 aspect ratio, 12-bit) in the middle.
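The stereo pair above can be sanity-checked with the usual rectified-stereo relation Z = f·B/d. The focal length and baseline below are my assumptions for a wide 720p camera, not measured values, but they show why disparity at the 200-meter boundary gets uncomfortably small:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed: ~1000 px focal length for a wide 720p camera, 0.5 m baseline.
FOCAL_PX = 1000.0
BASELINE_M = 0.5

# Disparity at the 200 m sensing boundary is tiny:
d = FOCAL_PX * BASELINE_M / 200.0
print(f"disparity at 200 m: {d:.2f} px")  # prints: disparity at 200 m: 2.50 px

# A half-pixel disparity error at that range swings the depth estimate a lot:
near = stereo_depth_m(FOCAL_PX, BASELINE_M, d + 0.5)
far = stereo_depth_m(FOCAL_PX, BASELINE_M, d - 0.5)
print(f"depth for +/-0.5 px error: {near:.0f} m .. {far:.0f} m")
```

With these numbers a half-pixel matching error turns a 200-meter reading into anything from 167 to 250 meters, which is one argument for putting the high-resolution narrow camera in the middle of the pair.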

8. I could create a couple of roof bars that come with a lot of cameras on the front and sides, replace the radio antenna hole with a GMSL output (bird's-eye view or the Waterman butterfly), and replace the regular backup camera on many cars (backwards compatible).

9. The IMX298 sensor used in the OnePlus 3 can be bought for only 6 dollars, and a single one of them is being used by CommaAI in their openpilot solution. Xilinx provides a reference 9-resistor MIPI/CSI interface, so there is nothing stopping a person from connecting hundreds of those cameras to a single FPGA.
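The "hundreds of cameras" claim, together with the 100-gigabit hub from point 4, can be checked with raw bandwidth arithmetic. The streaming mode below (1080p30, RAW10) is my assumption for an IMX298-class sensor, not a figure from its datasheet:

```python
def camera_gbps(width, height, bits_per_px, fps):
    """Raw uncompressed pixel bandwidth in gigabits per second."""
    return width * height * bits_per_px * fps / 1e9

# Assumed 1080p30 RAW10 streaming mode for an IMX298-class sensor.
per_cam = camera_gbps(1920, 1080, 10, 30)

HUB_GBPS = 100.0  # the 100-gigabit FPGA hub from point 4
print(f"{per_cam:.2f} Gbps/camera -> {int(HUB_GBPS // per_cam)} cameras per hub")
# prints: 0.62 Gbps/camera -> 160 cameras per hub
```

At full 16-megapixel resolution the count drops sharply, which is another reason to encode different parts of the scene at different resolutions, as point 4 suggests.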

10. I could start generating a baseline with GoPros and either an omni rig or drones, to benchmark future solutions against.

Thanks to the people at the ND013 and OSDCC Slack channels for their super interesting conversations and reality checks. A lot of what I think are my ideas could just be rehashes of what others say in passing. God bless open source and free information sharing.
