Here at Supervisely we spend a lot of time developing annotation tools for machine learning. While 2D labeling (i.e. images or videos) is still the most convenient and well-known source of data for machine learning, recent advances in areas like robotics, self-driving vehicles, augmented reality and urban planning require another type of training data: labeled point clouds.
The problem is that even though LIDAR and radar sensors have become more available than ever, there are surprisingly few 3D labeling tools on the internet. The most advanced tools are kept from public use by large workforce providers, and the rest are mostly offline and not really designed for machine learning.
So today we are happy to announce the launch of our point cloud labeling tool, available to everyone from day one!
This is a beta version and more cool features are yet to come, but you can already use it now at 3d.supervise.ly.
Let’s check out what’s inside.
Dashboard & formats
Log in to the dashboard here. Use the same login and password as on supervise.ly (create an account if you haven't already, it's free!).
As you can see, the dashboard does not yet support the full range of Supervisely features: for now you can manage existing sets of point clouds and related data (images, cuboids, …), called Projects, and upload new ones.
But as in Supervisely, you can define your own types of objects for annotation (called Classes) and object keywords (called Tags).
On the Import page you can upload your 3D point clouds. At the moment we support the KITTI format, as well as .pcd files. That means that you can import:
- Point cloud series (.pcd or .bin files)
- Photo context (.png or .jpg files)
- Photo context calibration
- Cuboids (KITTI xml tracklets)
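Supervisely's importer parses these files for you, but if you want to inspect your data before uploading, KITTI velodyne scans are simply flat binary arrays of float32 records. A minimal sketch of a reader (the function name `load_kitti_bin` is ours, not part of any KITTI or Supervisely API):

```python
import numpy as np

def load_kitti_bin(path):
    """Read a KITTI-style velodyne scan: a flat float32 buffer of
    (x, y, z, reflectance) records, returned as an (N, 4) array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```

Each row is one point in the LIDAR's coordinate frame, with the fourth column holding the reflectance value.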
Wanna try it yourself? Download “synced+rectified data”, “calibration” and “tracklets” from the KITTI Vision Benchmark Suite and unpack them into a single folder.
Then select “KITTI Vision Benchmark”, drag and drop the folder content, provide a name for the future project and click “Upload”. Don’t worry: if there is something wrong with the format, we will detect it.
And — boom! — now you have your first Project.
Now for the most important part: the workspace, where you will spend most of your time during annotation.
We have split the workspace into several areas: the scene on the left (with different views) and information panels on the right (you can hide any panel, or the whole sidebar, completely).
Let’s go through every area.
Use the perspective view to observe the point cloud of the current frame. Moving the camera freely is essential for understanding the objects in your scene.
That’s why we’ve implemented three ways of navigation:
- With mouse (hold left button to rotate camera and right button to pan)
- With keyboard (WASD to move the camera, Q and E to lift and lower, and arrow keys to rotate)
- “Video game” mode (combination of two methods above — use WASD to move around the scene and mouse to look around)
To enable navigation mode, select the first tool in the left panel.
One of the most challenging tasks in 3D labeling with cuboids is making accurate annotations in a very sparse point cloud. Obviously, it’s nearly impossible to move points in 3D perspective using a 2D screen. Because of that, you edit objects in projections.
Check out the gif above. First, we choose the second tool in the left panel, called “Selection tool”, and click the object we want to edit (you can also select objects in the “Objects” panel on the right). Now we can rotate the cuboid or change its shape.
Because you see the selected object in three different projections, you can easily give it a precise shape.
An important option you may want to change during editing is “Orthographic frustum far plane”. Check the example below.
Unfortunately, in some cases it’s not enough to have only the point cloud: you may also need photo context to better understand your scene, especially if your point cloud is sparse.
You can import additional images (one or more) along with your point cloud frames (just name your image files the same as your frames, i.e. “frame0.pcd” and “frame0.jpg”) and we will show them in the top right corner. Having such an image is really helpful, but it becomes a killer feature if you also have a calibration for your photo (information about the camera’s position and orientation in space). In that case we can also project your objects onto the image.
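The projection itself is standard pinhole camera math: points in the camera frame are multiplied by a 3x4 projection matrix (such as KITTI’s `P2`) and divided by depth. A rough sketch of the idea, assuming points are already transformed into camera coordinates (the helper name `project_to_image` is illustrative, not part of the tool):

```python
import numpy as np

def project_to_image(points_xyz, P):
    """Project (N, 3) points in camera coordinates onto the image
    plane using a 3x4 projection matrix P; returns (N, 2) pixels."""
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # homogeneous (N, 4)
    uvw = homo @ P.T                                 # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective divide
```

With a cuboid’s eight corners run through such a projection, the box can be drawn on top of the photo context.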
Now we know how to edit existing objects, but how to add a new one?
Select the third tool (cuboid). Select a class for your future object from the dropdown above (you can also use the assigned hotkey to quickly pick the class you need). Place the cursor in the perspective view where you want to create a new cuboid. You can hold the mouse button and drag the cursor to quickly set the cuboid size. Then use the side views just like in the editing section above.
Don’t forget to check the Settings panel on the right! The default values should cover most cases, but you may want to customise your experience. For example, you can change the number of points we display (in case your PC cannot handle the whole point cloud), enable the grid, or change how points are colored.
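Capping the number of displayed points is a common trick in point cloud viewers. One simple way it can work (a sketch of the general technique, not of Supervisely’s actual implementation) is uniform random subsampling:

```python
import numpy as np

def subsample(points, max_points, seed=0):
    """Keep at most max_points rows of an (N, 4) cloud, chosen
    uniformly at random, so the renderer has less work to do."""
    if len(points) <= max_points:
        return points
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=max_points, replace=False)
    return points[idx]
```

For a dense scan this keeps the overall shape of the scene visible while drastically reducing the rendering load.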
We hope you will find it a convenient tool for your labeling tasks, but there are a lot of things we want to release soon:
- Object tracking
- Semantic segmentation
- New types of objects (i.e. “polyline”)
- Support of other import formats
- Integration with the existing Supervisely ecosystem (i.e. “labeling jobs”)
Your feedback would really help us prioritise our tasks, so send us your ideas and suggestions.
Ready to try? You don’t even need a point cloud to start. Sign in to 3d.supervise.ly — we already have off-the-shelf examples on the Import page.
You can also learn more about our 3D Labeling Tool, or Supervisely in general, here.
You are free to use the tool for any purpose, including commercial and educational use. And we are always very pleased to get user feedback, so don’t hesitate to contact us here, in Slack, or drop us an email at email@example.com.