UR10 Robot - Human Data Repository

This repository contains images and videos collected while a human operator interacts with a UR10 robot platform. It includes both real datasets from the UR10 and synthetic datasets created with Digital Twin software based on Unreal Engine [2].

1. Real data

The real dataset was acquired with a Kinect V2 sensor in the Sheffield Robotics Laboratory. The sensor is mounted horizontally on the ceiling, looking down over the workspace. The data was collected under various environmental conditions, varying the illumination level and the motions of the human operators working with the UR10 robot setup. Images were collected at 4 different illumination levels, with 2700 images recorded at each level. A further 1653 images were recorded with two different operators. In total, this dataset contains 12453 images from the UR10 robot setup.

In addition to the real data acquired under various environmental conditions, the initial experimental condition is recorded. The subfolder ‘initial experimental condition’ contains, alongside the red, green and blue (RGB) images, example depth images showing the checkerboard and ArUco markers used for camera calibration. For completeness, the corresponding videos are provided separately in the subfolder ‘video’. These videos are recorded in two formats, ‘.mp4’ and ‘.avi’, which can be opened by applications such as VLC media player and Windows Media Player.

2. Synthetic data

The synthetic dataset is composed of robot images and human operator images. Robot images are generated with a Digital Twin framework based on Unreal Engine [2], while the human operator data is gathered from operator images in the Microsoft COCO database [1].
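As an illustration of how a robot render and a human operator crop can be combined into one merged image, the sketch below pastes a patch onto a background represented as nested lists of (R, G, B) tuples. This is a minimal, hypothetical example only; the repository's actual merging pipeline (Unreal Engine renders combined with COCO crops) is not reproduced here.

```python
# Illustrative compositing sketch (NOT the repository's merge code):
# images are modelled as nested lists of (R, G, B) tuples.

def paste(background, patch, top, left):
    """Return a copy of `background` with `patch` pasted at (top, left)."""
    out = [row[:] for row in background]  # copy so the input is untouched
    for i, patch_row in enumerate(patch):
        for j, pixel in enumerate(patch_row):
            out[top + i][left + j] = pixel
    return out

# Toy example: a 4x4 grey "robot render" and a 2x2 "human crop".
robot = [[(128, 128, 128)] * 4 for _ in range(4)]
human = [[(200, 150, 120)] * 2 for _ in range(2)]
merged = paste(robot, human, top=1, left=1)
print(merged[1][1])  # (200, 150, 120) -- pasted pixel
print(merged[0][0])  # (128, 128, 128) -- untouched background
```

In practice a real pipeline would also handle segmentation masks and bounding-box annotations for the pasted region; this sketch shows only the pixel-level paste.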
In detail, Unreal Engine 4 [2] (whose creator license is free for students and educators) is used to simulate the physical robot and build a digital platform, from which RGB, depth and annotation information can be captured. To support detecting humans in Human-Robot Interaction, we import human images from the Microsoft COCO dataset [1] and merge them with the generated robot images to produce the merged synthetic images, which saves the time and effort of collecting human data. Note that the Microsoft COCO database [1] is openly available to the public under the CC BY 4.0 license [3], and information on the original creators can be found in [5]. Images in the COCO database were originally uploaded to Flickr [6] and are included in this dataset under Copyright Exception Section 29A: Copies for text and data analysis for non-commercial research (Copyright, Designs and Patents Act 1988 [7]).

The synthetic dataset is randomly split into two parts: 20823 images for training and 5206 images for validation. Our annotation code is also provided for ground-truth generation and model training.

This data repository can be used for object detection, classification, segmentation and other computer vision tasks. We have received permission from the human participants in the ‘real’ data files to share their images and videos publicly. The dataset collected with the UR10 robot is used to report results for the Body of Knowledge work published on the CSI:Cobot website [4].

References

[1] “COCO - Common Objects in Context.” [Online]. Available: https://cocodataset.org/#home. [Accessed: 08-Jan-2022].
[2] “Unreal Engine.” [Online]. Available: https://www.unrealengine.com. [Accessed: 08-Jan-2022].
[3] “Superb AI.” [Online]. Available: https://www.superb-ai.com/datasets/coco. [Accessed: 08-Jan-2022].
[4] “2.6.1 – Monitoring RAS operation.” [Online]. Available: https://www.york.ac.uk/assuring-autonomy/guidance/body-of-knowledge/implementation/2-6/2-6-1/cobots/. [Accessed: 08-Jan-2022].
[5] “Microsoft COCO: Common Objects in Context.” [Online]. Available: https://arxiv.org/pdf/1405.0312.pdf. [Accessed: 24-Jan-2022].
[6] “Flickr Terms of Use.” [Online]. Available: https://www.flickr.com/creativecommons/. [Accessed: 03-Feb-2022].
[7] “The Copyright, Designs and Patents Act 1988.” [Online]. Available: https://www.sheffield.ac.uk/library/copyright/exceptionstab. [Accessed: 03-Feb-2022].
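For reference, the random training/validation split of the synthetic data described in Section 2 (20823 training and 5206 validation images, roughly an 80/20 split) could be reproduced with a helper like the one below. The `split_dataset` function, the fixed seed and the 0.8 fraction are illustrative assumptions, not the repository's actual split code.

```python
import random

def split_dataset(filenames, train_fraction=0.8, seed=0):
    """Randomly split a list of image filenames into training and
    validation subsets (hypothetical helper for illustration)."""
    rng = random.Random(seed)        # fixed seed -> reproducible split
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Placeholder names; the synthetic dataset has 26029 images in total.
names = [f"synthetic_{i:05d}.png" for i in range(26029)]
train, val = split_dataset(names)
print(len(train), len(val))  # 20823 5206
```

An 80/20 split of 26029 images yields exactly the 20823/5206 counts stated above; fixing the random seed keeps the split reproducible across runs.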