Announcing FiftyOne 0.17 with Grouped Datasets, 3D, Geolocation, and Custom Plugins
Voxel51, in conjunction with the FiftyOne community, is excited to announce the release of FiftyOne 0.17!
Wait, What’s FiftyOne?
FiftyOne is an open source machine learning toolset that enables data science teams to improve the performance of their computer vision models by helping them curate high quality datasets, evaluate models, find mistakes, visualize embeddings, and get to production faster.
- If you like what you see on GitHub, give the project a star.
- Get started! We’ve made it easy to get up and running in a few minutes.
- Join the FiftyOne Slack community; we’re always happy to help.
Ok, let’s dive into the release!
What’s New in 0.17?
This release includes enhancements and fixes to the FiftyOne App, core library, and annotation integrations; three new datasets/models in the FiftyOne Zoo; and updated documentation. In total, there are 16 new features and 6 bug fixes. You can check out all the details in the official release notes.
Here’s a quick tl;dr to highlight some of the new features in this release.
Grouped Datasets: Working with Multiview Data, Including 3D!
FiftyOne now supports the creation of grouped datasets, which contain multiple slices of samples of possibly different modalities (image, video, or point cloud) that are organized into groups. Grouped datasets can be used to represent multiview scenes, where data for multiple perspectives of the same scene can be stored, visualized, and queried in ways that respect the relationships between the slices of data.
Grouped datasets may contain 3D samples, including point cloud data stored in .pcd format and associated 3D annotations (detections and polylines).
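For example, here’s a minimal sketch of building a grouped dataset in Python (the file paths and slice names below are placeholders):

import fiftyone as fo

# Create a dataset and declare a group field with a default slice
dataset = fo.Dataset("multiview-example")
dataset.add_group_field("group", default="center")

# Samples that share a Group instance belong to the same group,
# with each sample occupying a named slice
group = fo.Group()
samples = [
    fo.Sample(filepath="/path/to/left.png", group=group.element("left")),
    fo.Sample(filepath="/path/to/center.png", group=group.element("center")),
    fo.Sample(filepath="/path/to/lidar.pcd", group=group.element("pcd")),
]
dataset.add_samples(samples)

print(dataset.group_slices)  # e.g. ['left', 'center', 'pcd']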
As expected, you can also work with grouped datasets in the FiftyOne App, where you can perform a variety of operations:
- View all samples in the current group in the modal
  - Samples can include image, video, and point cloud slices
- Browse images and videos in a scrollable carousel and maximize them in the visualizer
  - For point cloud slices, you can make use of a new interactive 3D visualizer
- View statistics across all slices
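Back in Python, you can point a grouped dataset at a particular slice, or flatten slices into a regular view. A quick sketch, assuming the slice names from the example above:

# The active slice determines what queries and aggregations target
dataset.group_slice = "left"
print(dataset.count())  # number of samples in the "left" slice

# Or flatten specific slices into an ordinary (ungrouped) view
view = dataset.select_group_slices(["left", "center"])
print(len(view))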
Check out the docs to learn how to get started with grouped datasets and interact with them in the FiftyOne App.
Visualize and Interact with Geolocation Data
The FiftyOne App has a new Map tab that appears whenever your dataset has a GeoLocation field with point data populated.
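For reference, here’s a minimal sketch of populating such a field on a sample (the filepath and coordinates are placeholders; GeoJSON points are stored in [longitude, latitude] order):

import fiftyone as fo

sample = fo.Sample(
    filepath="/path/to/image.png",
    location=fo.GeoLocation(point=[-73.9855, 40.7580]),  # [longitude, latitude]
)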
You can use the Map tab to see a scatterplot of your dataset’s location data:
import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart-geo")
session = fo.launch_app(dataset)
What can you do with the Map tab? You can:
- Lasso points in the map to show the corresponding samples in the grid
- Choose between available map types (dark, light, satellite, road, etc.)
- Configure your own custom default settings for the Map tab
Check out the Map tab docs to learn how to get started with visualizing and interacting with geolocation data.
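As one example of custom defaults, a Mapbox access token and other Map tab settings can live in your App config at ~/.fiftyone/app_config.json. A hedged sketch (verify the exact keys against the Map tab docs):

{
    "plugins": {
        "map": {
            "mapboxAccessToken": "XXXXXXXX"
        }
    }
}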
Custom App Plugins
FiftyOne now supports a plugin system that you can use to customize and extend the App’s behavior! For example, if you need a unique way to visualize individual samples, plot entire datasets, or fetch FiftyOne data, a custom plugin just might be the ticket!
Check out this tutorial on GitHub that walks you through how to develop and publish a custom plugin.
Training Detectron2 Models in FiftyOne
Detectron2 is Facebook AI Research’s next generation library that provides state-of-the-art detection and segmentation algorithms. It supports a number of computer vision research projects and production applications in Facebook. New in this release is a tutorial that shows how, with two simple functions, you can integrate FiftyOne into your Detectron2 model training and inference pipelines.
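To give a flavor of what that conversion can look like, here’s a rough, illustrative sketch (not the tutorial’s exact code) of turning Detectron2 predictions into FiftyOne labels, assuming Detectron2’s standard output format:

import fiftyone as fo

def detectron2_to_fiftyone(outputs, img_w, img_h, classes):
    # Convert Detectron2 "Instances" predictions to FiftyOne Detections.
    # FiftyOne uses relative [x, y, width, height] bounding boxes
    instances = outputs["instances"].to("cpu")
    detections = []
    for box, score, label in zip(
        instances.pred_boxes, instances.scores, instances.pred_classes
    ):
        x1, y1, x2, y2 = box.numpy()
        detections.append(
            fo.Detection(
                label=classes[int(label)],
                bounding_box=[
                    x1 / img_w,
                    y1 / img_h,
                    (x2 - x1) / img_w,
                    (y2 - y1) / img_h,
                ],
                confidence=float(score),
            )
        )
    return fo.Detections(detections=detections)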
Community Contributions
We’d like to take a moment to give a few shout-outs to the FiftyOne community members who contributed to this release.
OpenAI’s CLIP Model Now in the FiftyOne Model Zoo
Rustem Galiullin contributed PRs #1691 and #2072, which added a CLIP ViT-Base-32 model to the FiftyOne Model Zoo for zero-shot classification and embedding generation. The CLIP model was announced by researchers at OpenAI in 2021 and is a breakthrough in efficiently learning visual concepts from natural language supervision. The model can be used in FiftyOne, for example, to classify images according to an arbitrary set of classes in a zero-shot manner:
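A minimal sketch, using the quickstart dataset from the Dataset Zoo and the new "clip-vit-base32-torch" zoo model (the candidate classes are placeholders):

import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# Zero-shot classification: provide your own candidate classes
model = foz.load_zoo_model(
    "clip-vit-base32-torch",
    text_prompt="A photo of a",
    classes=["dog", "cat", "bird", "car"],
)

dataset.apply_model(model, label_field="clip_predictions")
session = fo.launch_app(dataset)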
Additional Community Contributions
Shoutout to the community members who contributed the following PRs to the FiftyOne project over the past few weeks!
- George Pearse contributed #2068 — Update install.rst
- Odd Eirik Igland contributed #2066 — Bugfix: task_map expected string got int
- Victor1cea contributed #2016 — Fix Issue #1903 path variable and #1884 — Eliminate non-XML or non-TXT files from CVAT, KITTI, CVAT Video
- Geoffrey Keating contributed #1973 — CVAT Annotate attribute documentation update
- Idow09 contributed #1909 — Fix custom_parser implementation in recipe
FiftyOne Community Updates
The FiftyOne community continues to grow!
- 1,000+ FiftyOne Slack members
- 1,900+ stars on GitHub
- 1,000+ Meetup members
- Used by 166 repositories
- 36 contributors
See FiftyOne 0.17 in Action!
Join me (Voxel51 Co-Founder and CTO) for a live webinar, where I’ll give an interactive demo of FiftyOne 0.17 and introduce our new FiftyOne Teams offering. Sign up here!
What’s Next?
- If you like what you see on GitHub, give the project a star!
- Get started! We’ve made it easy to get up and running in a few minutes.
- Join the FiftyOne Slack community; we’re always happy to help!