Augmentative and Alternative Communication

A close-up picture of a computer screen showing AAC pictogram images describing different words such as Up, Down, Me, You.

Speech-generating augmentative and alternative communication (AAC) devices can be used to express thoughts, needs, wants, and ideas when someone cannot rely on their own speech to communicate. Examples of these devices include specialized keyboards, adapted controllers, and text-to-speech interfaces. We study conversational agency in AAC: how people who use AAC devices, known as augmented communicators, advance their goals in conversation under social and device constraints. We study how people use AAC devices with different types of conversation partners to inform new designs that reduce user burden and support augmented communicators' expression of conversational agency.

For more information, please contact Stephanie.

Check out our CHI 2020 video presentation to learn more.

Relevant publications

Co-designing Socially Assistive Sidekicks for Motion-based AAC.
Stephanie Valencia, Michal Luria, Amy Pavel, Jeffrey P. Bigham, Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2021. pdf


Eye-gaze for Intelligent Driving Assistance

A video showing a driver's POV in a simulator with their gaze overlaid.

Using Programmable Light Curtains, an active sensing technology, we can obtain 3D information about the world more precisely, densely, and frequently than with LiDAR devices. We study how driver eye gaze can inform policies for placing and shaping light curtains in autonomous and assisted driving.

Please contact Abhijat or Gustavo for more information.


Evaluation tools for Social Robot Navigation

A video showing a social navigation simulator with different navigation algorithms all trying to complete the same navigation scenario.

We are actively maintaining SocNavBench, a benchmark for evaluating social navigation algorithms against each other in a realistic, consistent, and scalable way. Check out the GitHub page!

Please contact Abhijat or Gustavo for more information.

Relevant publications

SocNavBench: A Grounded Simulation Testing Framework for Social Navigation.
Abhijat Biswas, Allan Wang, Gustavo Silvera, Aaron Steinfeld, Henny Admoni. ACM Transactions on Human-Robot Interaction (THRI) 2021. pdf


Assistive Manipulation Through Intent Recognition


An upper body mobility limitation can severely impact a person's quality of life. Such limitations can prevent people from performing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau has indicated that more than 8.2% of the U.S. population, or 19.9 million Americans, suffer from upper body limitations. Assistive robots offer a way for people with severe mobility impairment to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which requires significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence to take an active role in helping their users. Drawing from our understanding of human verbal and nonverbal behaviors (like speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. We aim to develop technology to decrease operator fatigue and task duration when using assistive robots by employing human-sensitive shared autonomy.
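As a toy illustration of the intent-prediction-and-assistance idea (not the project's actual method), a robot can maintain a Bayesian belief over candidate goals, update it from gaze observations, and blend the user's command with assistance toward the most likely goal. The goal positions, Gaussian gaze-likelihood model, and blending gain below are all hypothetical:

```python
import numpy as np

# Hypothetical goal positions in a 2D workspace (e.g., a cup and a door handle).
GOALS = {"cup": np.array([0.6, 0.2]), "door": np.array([0.1, 0.9])}

def update_belief(belief, gaze_point, sigma=0.15):
    """Bayesian update: a gaze fixation near a goal raises that goal's probability."""
    for name, pos in GOALS.items():
        dist = np.linalg.norm(gaze_point - pos)
        belief[name] *= np.exp(-dist**2 / (2 * sigma**2))  # Gaussian likelihood
    total = sum(belief.values())
    return {k: v / total for k, v in belief.items()}

def blend_command(user_cmd, robot_pos, belief, gain=0.5):
    """Shared autonomy: blend the user's velocity command with an assistance
    vector toward the most likely goal, weighted by the robot's confidence."""
    goal_name = max(belief, key=belief.get)
    confidence = belief[goal_name]
    assist = GOALS[goal_name] - robot_pos
    assist = assist / (np.linalg.norm(assist) + 1e-9)
    alpha = gain * confidence  # assist more strongly as confidence grows
    return (1 - alpha) * user_cmd + alpha * assist

# Two gaze fixations near the cup shift the belief toward it.
belief = {"cup": 0.5, "door": 0.5}
for gaze in [np.array([0.55, 0.25]), np.array([0.62, 0.18])]:
    belief = update_belief(belief, gaze)
cmd = blend_command(np.array([1.0, 0.0]), np.array([0.0, 0.0]), belief)
```

In this sketch the operator keeps control when the robot is unsure (alpha near zero) and receives more help as the belief sharpens, which is the burden-reduction intuition described above.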

Reuben, Ben, and Maggie are the contacts on this project.

Relevant publications

HARMONIC: A Multimodal Dataset of Assistive Human–Robot Collaboration.
Benjamin A. Newman, Reuben M. Aronson, Siddhartha S. Srinivasa, Kris Kitani, Henny Admoni. The International Journal of Robotics Research (IJRR) 2021. pdf

Inferring Goals with Gaze during Teleoperated Manipulation.
Reuben M. Aronson, Nadia AlMutlak, and Henny Admoni. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021. pdf

Eye Gaze for Assistive Manipulation.
Reuben M. Aronson, Henny Admoni. HRI Pioneers workshop 2020. pdf

Semantic Gaze Labeling for Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA) 2019. pdf

Gaze for Error Detection During Human-Robot Shared Manipulation.
Reuben M. Aronson and Henny Admoni. Towards a Framework for Joint Action Workshop at RSS 2018. pdf

Eye-Hand Behavior in Human-Robot Shared Manipulation.
Reuben M. Aronson, Thiago Santini, Thomas C. Kübler, Enkelejda Kasneci, Siddhartha Srinivasa, and Henny Admoni. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2018. pdf


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses its gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Autonomous agents need to learn increasingly competent and complex behaviors. One effective way to learn these behaviors is to include people in the learning process. Therefore, we are investigating human-in-the-loop learning strategies that are both user-friendly and efficient. Please direct any questions to Pallavi Koppol.

When a robot is uncertain about how it should complete a task, it should ask a human teacher for help. Doing so, however, requires the robot to locate the source of its uncertainty and to identify the most effective way of querying the teacher to resolve it. We are developing methods that address both problems by modeling the robot's expected and actual knowledge while it completes a task or interacts with a teacher. Please contact Tesca Fitzgerald for more information.
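One simple trigger for such teacher queries, sketched here under the assumption of a discrete belief over task hypotheses (a toy model, not this project's method), is an entropy threshold on the robot's belief:

```python
import math

def entropy(belief):
    """Shannon entropy (in nats) of a discrete belief over task hypotheses."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def should_query(belief, threshold=0.5):
    """Ask the human teacher for help when the robot's belief about how to
    complete the task is too uncertain (entropy above the threshold)."""
    return entropy(belief) > threshold

# Confident belief: no need to interrupt the teacher.
confident = {"grasp_handle": 0.95, "grasp_rim": 0.05}
# Uncertain belief: time to ask.
uncertain = {"grasp_handle": 0.5, "grasp_rim": 0.5}
```

Localizing *which* hypothesis set is uncertain (grasp choice, goal object, trajectory shape) is what lets the robot choose an effective query rather than asking about everything at once.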

Finally, even after autonomous agents have learned complex behaviors, we are unlikely to rely on them until we can reliably predict their behavior in new situations. Thus, we are designing agents that can teach humans their learned behavior through demonstrations. We specifically leverage inverse reinforcement learning and human learning strategies (e.g., scaffolding) to select demonstrations that are both informative and easily understood by humans. Please contact Michael Lee for more information.
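To make the demonstration-selection idea concrete, here is a toy sketch of machine teaching with rewards linear in task features: the teacher shows the demonstration, optimal under the true reward, that rules out the most competing reward hypotheses. The feature counts, hypothesis set, and greedy criterion are illustrative assumptions, not the method from the paper below:

```python
import numpy as np

# Hypothetical linear-reward setup: reward = weights . feature_counts.
TRUE_W = np.array([0.8, 0.2])
HYPOTHESES = [np.array([0.8, 0.2]), np.array([0.2, 0.8]), np.array([0.5, 0.5])]

# Each candidate demonstration is summarized by its feature counts.
DEMOS = {
    "demo_a": np.array([3.0, 1.0]),
    "demo_b": np.array([1.0, 3.0]),
    "demo_c": np.array([2.0, 2.0]),
}

def rules_out(demo_name, w):
    """A shown demo rules out hypothesis w if, under w, some other demo
    scores strictly higher (a learner believing w wouldn't have chosen it)."""
    shown = DEMOS[demo_name] @ w
    return any(DEMOS[d] @ w > shown + 1e-9 for d in DEMOS if d != demo_name)

def select_teaching_demo():
    """Greedy machine teaching: among demos optimal under TRUE_W, pick the
    one that rules out the most competing hypotheses."""
    best = max(DEMOS, key=lambda d: DEMOS[d] @ TRUE_W)
    candidates = [d for d in DEMOS
                  if np.isclose(DEMOS[d] @ TRUE_W, DEMOS[best] @ TRUE_W)]
    wrong = [w for w in HYPOTHESES if not np.allclose(w, TRUE_W)]
    return max(candidates, key=lambda d: sum(rules_out(d, w) for w in wrong))
```

Scaffolding would extend this by ordering the selected demonstrations from easy to hard for the human learner, rather than only maximizing informativeness.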

Relevant publications

Machine Teaching for Human Inverse Reinforcement Learning.
Michael S. Lee, Henny Admoni, Reid Simmons. Frontiers in Robotics and AI 2021. pdf


Recognizing and Reacting to Human Needs Determined by Social Signals

A video from a cooking show with pose and gaze points overlaid.

Being able to identify which humans need help, and when, will enable robots to spontaneously offer assistance and to triage how their help is best distributed. Performing this kind of assessment requires an understanding of how humans naturally communicate their needs to others, as well as a model of individuals and their needs over time. To achieve and demonstrate these goals, this project seeks to build a waiter robot that can anticipate customer needs and respond both when actively hailed and when help is implicitly needed. This environment also showcases the challenge of detecting these signals while humans are engaged in human-human group interactions and are not solely focused on their robot collaborator. Successfully implementing such a system could improve restaurant efficiency and provide insight into how to model human thinking.

Questions can be directed to Ada.


Adaptive Coaching Via Machine Theory of Mind

To develop AI partners that can become effective teammates, we must endow them with basic social understanding, particularly the ability to understand humans' intentions, knowledge, and mental states. The ability to infer other people's mental models and use those models to perform actions, make predictions, and evaluate outcomes is called Theory of Mind (ToM). The goal of this project is to develop an artificial agent that can adaptively assist human teams by (i) correcting suboptimal strategies, (ii) sharing knowledge across teammates when needed, and (iii) instructing human players to better align their goals in a dynamic environment. Using integrated machine learning approaches, including reinforcement learning and behavioral cloning (imitation learning), we aim to build effective robot and AI teammates that can both understand their partner's strategic preferences and communicate their own.

Questions can be directed to Michelle.

Relevant publications

Adapting Language Complexity for AI-Based Assistance.
Michelle Zhao, Reid Simmons, Henny Admoni. Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction at HRI 2021.


Uncertainty Estimation and Resolution in Task Transfer


Adaptability is an essential skill in human cognition, enabling us to draw on extensive, life-long experience with various objects and tasks to address novel problems. To date, robots lack this kind of adaptability; yet, as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do.

We explore how different types of interaction enable a robot to address novel task variations. Prior work has shown how different types of transfer problems can be addressed via continued interaction between the teacher and the robot. Using a variety of interaction types allows a robot to obtain different task information and thereby address transfer problems of varying complexity, such as identifying object replacements and creative tool use. Our current work involves assessing the robot's proficiency at a task: before the robot attempts a novel task variation, it must assess what knowledge it lacks and which interaction type is most likely to provide it.

Questions can be directed to Tesca.