Creating a new benchmarking framework for robotic pick-and-place systems

The Ocado Technology benchmarking framework

Ocado Technology · Aug 12, 2021

Intro to benchmarking

Robotic pick-and-place operations constitute the majority of today’s industrial robotic applications. Companies like Ocado, Amazon, Alibaba, DHL and Costco face the challenge of automating the picking of items from containers in order to manage huge numbers of orders in the shortest possible time.

Despite robotic manipulation being a very popular field of research over the last decade, the comparability and reproducibility of results have remained an issue that delays further advances in the field.

There have been several initiatives to address this, with a number of benchmarks and competitions for manipulation and grasping that focus on different levels of the system. However, in real-world scenarios it is the performance of the entire robotic system that really matters.

Evaluation at the system level considers the entire grasping pipeline, from perception and planning to execution monitoring and control, and is typically centred around a given task. Examples of system-level evaluation are competitions such as the Amazon Picking Challenge (APC) or the recent IROS Robotic Grasping and Manipulation competitions, but their low frequency and limited number of participants remain a drawback.

An easily reproducible protocol and a systematic benchmark could engage a wider audience in the robotic pick-and-place community and enable them to evaluate and/or improve previously published approaches.

The Ocado Technology benchmarking framework

In [1], we proposed a new benchmarking framework defined generically enough to accommodate as wide a range of systems as possible, while ensuring comparability of results. In this framework, the system must autonomously pick, one by one, all the objects placed in a non-mixed storage container, then transport and place them in a delivery container in the minimum possible time.
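
To make the protocol concrete, here is a minimal sketch of how a single benchmark run could be recorded and scored. The class and field names below are illustrative assumptions rather than the framework’s actual data model; the formal protocol and metrics are defined in [1].

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PickAttempt:
    object_class: str     # e.g. "limes", "mango", "salad", "cucumber", "blueberries"
    grasped: bool         # object lifted clear of the storage container
    delivered: bool       # object released above the delivery container
    duration_s: float     # wall-clock time spent on this attempt

@dataclass
class TrialResult:
    scenario_id: int      # one of the 15 predefined scenarios
    attempts: List[PickAttempt] = field(default_factory=list)

    def success_rate(self) -> float:
        """Fraction of attempts that ended with a delivered object."""
        return sum(a.delivered for a in self.attempts) / max(len(self.attempts), 1)

    def total_time_s(self) -> float:
        """Total time taken to empty the storage container."""
        return sum(a.duration_s for a in self.attempts)
```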

To test candidate picking systems, we have identified five classes of objects as representative examples of most grocers’ fruit and vegetable product ranges.

This object set comprises a net bag of limes, a mango, a loose leaf salad bag, a cucumber, and a punnet of blueberries. Due to their varying packaging, shape and weight, these objects pose various challenges for both perception and manipulation. The mango, cucumber and punnet were chosen as representatives of general classes of objects that resemble basic geometrical shapes (sphere, cylinder and box).

The net bag of limes behaves as an articulated body, shifting its centre of mass when manipulated, while its accurate segmentation in a cluttered scene is quite difficult even for humans. Moreover, salads and blueberries have transparent, low-friction and highly deformable packaging that poses challenges when it comes to robotic manipulation.

In order to maximize the reproducibility of the benchmark and the comparability of the results, we introduce a set of standard mock-up objects and 15 predefined scenarios that specify the objects’ initial poses within the storage container.

The scenarios span different levels of clutter and test various conditions of inter-object and object-environment positioning. The high-clutter scenarios are designed to mimic the initial placement of objects corresponding to optimal packing, as commonly encountered in warehouses.

Fig 1: Benchmark scenarios and real scenes
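
As a rough illustration of what such a scenario specification might look like, the snippet below encodes one hypothetical scenario as a Python dictionary. Everything here (field names, coordinates, frame convention) is an assumption for illustration only; the actual object models and instructions are published in the GitHub repository mentioned below.

```python
# Hypothetical encoding of one predefined scenario; illustrative only.
scenario = {
    "id": 7,                              # the benchmark defines 15 scenarios
    "object_class": "net_bag_of_limes",   # storage containers are non-mixed
    "clutter_level": "high",              # scenarios span low to high clutter
    "initial_poses": [
        # (x, y, z) in metres and yaw in radians, in the storage-container frame
        {"position": (0.10, 0.05, 0.02), "yaw": 0.00},
        {"position": (0.22, 0.05, 0.02), "yaw": 1.57},
        {"position": (0.16, 0.18, 0.02), "yaw": 0.78},
    ],
}
```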

The proposed standard object set includes mock-up objects with size and weight close to the real ones, as illustrated in the picture below. The objects are either 3D printed or built from widely available materials (for models and instructions see our GitHub repository), guaranteeing their worldwide availability in the future.

Fig 2: Real and mock-up objects

To facilitate adoption of the benchmark by the wider community, we commit to maintaining a travelling object set that will be lent to interested research groups.

Brief experiments and results

Through our involvement in the SOMA project, we gained access to robotic end-effectors designed by the project partners in an effort to explore different dimensions of the design space for compliant end-effectors. We also actively participated in the design and development of a pick-and-place system prototype that consists of a robot arm, a vision system and a planning pipeline. We therefore decided to apply the proposed benchmark to four different configurations of this system, which share the same arm, vision system and planning components but have different end-effectors.

The end-effectors we selected are: a modified version of the Pisa/IIT SoftHand [2], the Pisa SoftGripper, the DLR CLASH Hand [3] and the TUB RBO Gripper [4].

Fig 3: From left to right — Pisa/IIT SoftHand, the Pisa SoftGripper, the DLR CLASH Hand and the RBO Gripper

The results of our benchmark concern both system evaluation and introspection.

As far as system evaluation is concerned, the aim of the benchmark is to inform the user which system performs best for each class of objects and overall. Depending on the results, one might end up using different systems for different object classes, or settle on a system that provides a good performance compromise across a large variety of products.

Specifically for the systems we evaluated, the Pisa SoftGripper performed best for the mango, the net bag of limes and the salad bag, closely followed by the RBO Gripper, whereas the DLR CLASH Hand performed best for the cucumber. Overall, it was apparent that the grippers performed better than the Pisa/IIT SoftHand for this task.

Another interesting aspect of our benchmark is system introspection. Because we evaluate how the systems perform in each task phase, the benchmark offers insights into which system component(s) a developer should focus on to increase overall performance. By running the benchmark before and after a change to a given component, with all other components remaining the same, one can see the effect on both overall task success and per-phase success.
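
Here is a minimal sketch of that introspection idea, assuming each failed attempt is logged with the phase in which it failed; the phase names are illustrative assumptions rather than the benchmark’s official phase definitions (see [1] for those).

```python
from collections import Counter
from typing import List, Optional

# Illustrative phase names; the benchmark's actual task phases are defined in [1].
PHASES = ["perception", "planning", "grasping", "transport", "placement"]

def phase_failure_rates(failure_log: List[Optional[str]]) -> dict:
    """failure_log holds, per attempt, the phase it failed in (None = success)."""
    counts = Counter(p for p in failure_log if p is not None)
    total = len(failure_log) or 1
    return {phase: counts.get(phase, 0) / total for phase in PHASES}

# Run the benchmark before and after changing a single component (toy data below),
# then compare the two breakdowns to see which phase actually improved.
before = phase_failure_rates(["grasping", None, "grasping", "perception", None, None])
after  = phase_failure_rates([None, None, "grasping", None, None, None])
```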

Conclusion

In this work, we compared the overall performance of four prototype systems developed within the SOMA project across five object categories, and demonstrated the use of the framework as a tool for system introspection and redesign.

Future iterations of the proposed framework will focus mainly on i) revisiting placement requirements, and ii) including a damage metric.

As far as placement is concerned, our current requirements (i.e. being above the delivery container with the object in hand) constitute the absolute minimum if a system is to achieve more complex placement objectives (e.g. place the objects at specific locations in the delivery container). We intend to look for objectives that balance generality with realism/complexity.

Concerning damage, delivering products intact is a necessity for a pick-and-place system (especially in the case of fruits and vegetables). We have already completed preliminary work in this direction during the SOMA project.

However, equipping end-effectors with sensors that offer repeatability and comparability of results is still an open problem that hinders adoption of such metrics.

References

  1. H. Mnyusiwalla et al., “A Bin-Picking Benchmark for Systematic Evaluation of Robotic Pick-and-Place Systems,” in IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1389–1396, April 2020.
  2. M. Catalano et al., “Adaptive synergies for the design and control of the Pisa/IIT softhand,” in Int. J. Robotics Research, vol. 33, no. 5, pp. 768–782, 2014.
  3. W. Friedl et al., “CLASH: Compliant low cost antagonistic servo hands,” in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2018, pp. 6469–6476.
  4. R. Deimel and O. Brock, “A novel type of compliant and underactuated robotic hand for dexterous grasping,” in Int. J. Robotics Research, vol. 35, no. 1–3, pp. 161–185, 2016.

Amazon is a registered trademark of Amazon Technologies, Inc.
Alibaba is a registered trademark of Alibaba Group Holding Limited.
DHL is a registered trademark of Deutsche Post AG.
Costco is a registered trademark of Costco Wholesale Membership, Inc.

Originally published at https://www.ocadogroup.com.
