Table 1 Comparison of the proposed dataset with existing dataset on hand interaction.

From: Enhancing robotic skill acquisition with multimodal sensory data: A novel dataset for kitchen tasks

Data

Modality

Egocentric

Hand

Multi-objects

Label

hand-object

action-segment

YALE8

RGB

×

×

×

×

×

ActionSense34

RGB, Depth, Tactile, body

×

×

GRAB13

RGB, Depth, Body

×

×

ATTACH15

RGB, Body

×

×

×

ContactPose10

RGB, Depth

×

×

×

ARCTIC16

RGB, Depth

×

×

Hasson et al.12

Body

×

×

×

HOI4D32

RGB, Depth

×

×

Hernando et al.23

RGB, Depth

×

×

×

ObMan12

RGB

×

×

×

×

Dreher et al.11

RGB

×

×

Sun et al.7

RGB

×

×

×

×

Elangovan et al.28

RGB

×

×

×

KIT31

RGB, Tactile, Body

×

×

×

DexYCB25

RGB, Depth

×

×

×

HOnnotate24

RGB, Depth

×

×

×

×

HANDdata29

Radar, Tof, IMU

×

×

×

×

×

ALOHA30

RGB, Depth

×

×

×

Kaiwu Kitchen

RGB, Depth, Tactile, Body, IMU, Eye-tracking, EMG

  1. It involves different modalities, individual viewpoints, presence of both hands, presence of multiple objects grasps, and labeling types.