Abstract

[Figure: A participant controls a robot arm to pick up a marshmallow.]

We present HARMONIC, a large multimodal dataset of human interactions in a shared autonomy setting. The dataset provides human, robot, and environment data streams from twenty-four people engaged in an assistive eating task with a 6 degree-of-freedom (DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6 DOF robot arm. Also included are several data streams derived directly from these recordings, namely eye gaze fixations in the egocentric camera frame and body position skeletons. This dataset may be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats, such as video and human-readable csv or yaml files.

Full details are available in the accompanying paper on arXiv.
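
Most of the non-video streams are plain csv or yaml files, so they can be read directly with standard Python tooling. A minimal sketch is below; run_info.yaml appears in the changelog, but the per-run directory layout and the stream name joystick.csv are illustrative assumptions rather than confirmed names.

import csv
import yaml  # PyYAML

# Hypothetical per-run directory; the actual layout may differ.
run_dir = "harmonic-sample/run_001"

# Load the run metadata (yaml).
with open(run_dir + "/run_info.yaml") as f:
    run_info = yaml.safe_load(f)
print(run_info)

# Iterate over a csv data stream (here an assumed joystick command log).
with open(run_dir + "/joystick.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)  # one recorded sample per row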

Accessing the dataset

The dataset can be downloaded here. The most recent version is harmonic-0.4.0.

The files available are as follows:

  • harmonic-data.tar.gz: All raw data. Does not include additional processed data, such as Pupil Labs directories or visualization data.
  • harmonic-sample.tar.gz: Contains all data from a single run.
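
The archives are standard gzipped tarballs; for example, the single-run sample can be unpacked with Python's built-in tarfile module:

import tarfile

# Extract the single-run sample archive into the current directory.
with tarfile.open("harmonic-sample.tar.gz", "r:gz") as archive:
    archive.extractall(path=".")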

To visualize the data, you may use the code available at https://github.com/HARPLab/harmonic_playback. Code for accessing the data is coming soon.
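
Until that code is released, the FFV1-encoded .avi videos (see the 0.4.0 changelog) can generally be decoded with OpenCV's FFmpeg backend. A minimal sketch follows; the video file name is an illustrative assumption.

import cv2  # opencv-python

# Hypothetical file name; actual video names in each run directory may differ.
cap = cv2.VideoCapture("run_001/egocentric.avi")
frames = 0
while True:
    ok, frame = cap.read()  # frame is a BGR image as a numpy array
    if not ok:
        break
    frames += 1
cap.release()
print("decoded", frames, "frames")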

To receive updates about this dataset (including notifications when a new version is posted), join the mailing list.

Citation

The dataset can be cited as follows:
Newman, Benjamin A., Aronson, Reuben M., Srinivasa, Siddhartha S., Kitani, Kris, and Admoni, Henny. “HARMONIC: A Multimodal Dataset of Assistive Human-Robot Collaboration.” arXiv:1807.11154 [cs.RO], July 2018.

BibTeX:


@ARTICLE{NewmanHARMONIC2018,
  author        = {{Newman}, B.~A. and {Aronson}, R.~M. and {Srinivasa}, S.~S. and {Kitani}, K. and {Admoni}, H.},
  title         = {{HARMONIC: A Multimodal Dataset of Assistive Human-Robot Collaboration}},
  journal       = {ArXiv e-prints},
  archivePrefix = {arXiv},
  eprint        = {1807.11154},
  primaryClass  = {cs.RO},
  keywords      = {Computer Science - Robotics, Computer Science - Human-Computer Interaction},
  year          = {2018},
  month         = jul,
  adsurl        = {http://adsabs.harvard.edu/abs/2018arXiv180711154N},
  adsnote       = {Provided by the SAO/NASA Astrophysics Data System}
}

Changelog

0.4.0

  • Remove gaze directory
  • Remove playback info in processed directory
  • Re-encode all videos as .avi with the FFV1 codec for consistency
  • Add an additional field to run_info.yaml detailing when the trial starts
  • Add many previously missing morsel info files

0.3.0

  • Add skeleton tracking info in keypoints/*
  • Edit names in morsel.yaml to be pure ASCII instead of Python-specific Unicode

0.2.0

  • Correct the morsel.yaml file export.
  • Add additional visualization data in processed/playback

0.1.0

  • Initial dataset release.