Augmentative and Alternative Communication

A close-up picture of a computer screen showing AAC pictogram images describing different words such as Up, Down, Me, You.

Speech-generating augmentative and alternative communication (AAC) devices can express thoughts, needs, wants, and ideas for someone who cannot rely on their own speech to communicate. Examples include specialized keyboards, adapted controllers, and text-to-speech interfaces. We study conversational agency in AAC: how people who use AAC devices, known as augmented communicators, advance their goals in conversation under social and device constraints. We also study how augmented communicators use their devices with different types of conversation partners, to inform new designs that reduce user burden and support the expression of conversational agency.

For more information, please contact Stephanie.

Check out our CHI 2020 video presentation to learn more.

Active Sensor Policies from Human Eye-gaze for Assisted and Autonomous Driving

Programmable light curtains, a type of active sensor, can obtain 3D information about the world more precisely, densely, and frequently than LiDAR devices. We study how driver eye gaze can inform policies for placing and shaping light curtains in autonomous and assisted driving.
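
For intuition, the Python sketch below shows one way a gaze estimate could parameterize a curtain placement: rays near the gaze direction image the gazed-at depth, while the rest of the field of view stays at a default far range. It is a minimal illustration, not the project's actual pipeline; the name curtain_profile, the attention-kernel width, and the default range are assumptions.

    import numpy as np

    def curtain_profile(gaze_xy, n_rays=64, fov_deg=70.0, far_m=40.0):
        """Map a driver gaze target (x right, y forward, meters) to a light-
        curtain placement: one imaging depth per camera ray across the FOV."""
        angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_rays))
        gaze_angle = np.arctan2(gaze_xy[0], gaze_xy[1])  # bearing of the gaze target
        gaze_depth = float(np.hypot(*gaze_xy))           # distance to the gaze target
        # Gaussian attention weight: rays near the gaze bearing bend toward the
        # gazed-at depth; a wide-ish kernel absorbs gaze-estimation noise.
        w = np.exp(-0.5 * ((angles - gaze_angle) / np.deg2rad(8.0)) ** 2)
        depths = (1.0 - w) * far_m + w * gaze_depth
        # Return curtain points in the ground plane (x, y) for the projector.
        return np.stack([depths * np.sin(angles), depths * np.cos(angles)], axis=1)

    placement = curtain_profile(gaze_xy=(2.0, 15.0))  # looking ~15 m ahead, 2 m right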

Please contact Abhijat for more information.


Assistive Manipulation Through Intent Recognition


An upper-body mobility limitation can severely impact a person's quality of life, preventing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau estimates that more than 8.2% of the U.S. population, or 19.9 million Americans, have upper-body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which demands significant cognitive and physical effort from the user. We explore how these robots can be improved with artificial intelligence to take an active role in helping their users. Drawing on our understanding of human verbal and nonverbal behaviors (such as speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward its completion. We aim to decrease operator fatigue and task duration by employing human-sensitive shared autonomy.
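
A common formulation of such assistance is Bayesian goal inference followed by blended control. The sketch below is a minimal illustration of that formulation, not our deployed system; the Boltzmann rationality parameter beta and the 0.8 cap on robot authority are illustrative choices.

    import numpy as np

    def update_goal_belief(belief, cost_before, cost_after, beta=1.0):
        """Boltzmann-rational observation model: a user action that reduces the
        cost-to-go of a goal raises that goal's posterior probability."""
        progress = cost_before - cost_after            # per-goal progress arrays
        posterior = belief * np.exp(beta * progress)
        return posterior / posterior.sum()

    def blend_commands(user_cmd, robot_cmd, confidence):
        """Arbitration: the robot takes more control as intent confidence grows,
        but the user always retains some direct authority."""
        alpha = min(float(confidence), 0.8)
        return alpha * robot_cmd + (1.0 - alpha) * user_cmd

    belief = np.array([0.5, 0.5])                      # two candidate goals
    belief = update_goal_belief(belief, np.array([3.0, 3.0]), np.array([2.2, 3.1]))
    assist = blend_commands(user_cmd=np.array([0.1, 0.0]),
                            robot_cmd=np.array([0.3, 0.1]),
                            confidence=belief.max())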

Reuben, Ben, and Maggie are the contacts for this project.


Robot Self-Assessment (MURI)

A Baxter humanoid robot uses the gripper on its arm to stack a Jenga block on top of the tower that a human is constructing.

Autonomous agents need to learn increasingly competent and complex behaviors, and one effective way to learn them is to include people in the learning process. We are therefore investigating human-in-the-loop strategies that are both user-friendly and efficient for learning (see the sketch below). Please direct any questions to Pallavi Koppol.
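
As one example of a human-in-the-loop feedback channel, the sketch below fits a linear reward model to pairwise preference queries ("which of these two trajectories was better?") using a Bradley-Terry likelihood. It assumes precomputed trajectory feature vectors and is an illustrative sketch rather than a specific system from this project.

    import numpy as np

    def fit_reward_from_preferences(phi_a, phi_b, prefs, lr=0.1, steps=500):
        """Gradient ascent on the Bradley-Terry log-likelihood of preferences.
        phi_a, phi_b: (n, d) feature vectors for each trajectory pair;
        prefs: (n,) array with 1 where the human preferred trajectory a."""
        w = np.zeros(phi_a.shape[1])
        for _ in range(steps):
            p_a = 1.0 / (1.0 + np.exp((phi_b - phi_a) @ w))   # P(a preferred)
            grad = ((prefs - p_a)[:, None] * (phi_a - phi_b)).sum(axis=0)
            w += lr * grad
        return w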

Furthermore, even when autonomous agents can learn complex behaviors, people are unlikely to use them until they can reliably predict the agents' behavior in new situations. We are therefore designing agents that communicate generalizable models of their behavior to humans through demonstrations, specifically leveraging human models and learning strategies (e.g., scaffolding) to select demonstrations that are not only informative but also easily understood (see the sketch below). Please contact Michael Lee for more information.
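
A scaffolding-style selection rule might look like the sketch below: keep the candidate demonstrations the human's current model can follow, then pick the most informative one. The helpers info_gain and difficulty are hypothetical stand-ins for a learned human model, not functions from this project.

    def select_demo(candidates, human_model, info_gain, difficulty, max_difficulty=1.0):
        """Scaffolding: filter out demonstrations too far beyond the human's
        current understanding, then maximize expected information gain."""
        feasible = [d for d in candidates if difficulty(d, human_model) <= max_difficulty]
        if not feasible:  # nothing teachable yet: fall back to the easiest demo
            feasible = [min(candidates, key=lambda d: difficulty(d, human_model))]
        return max(feasible, key=lambda d: info_gain(d, human_model))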


Recognizing and Reacting to Human Needs Determined by Social Signals

A video from a cooking show with pose and gaze points overlaid.

Identifying which people need help, and when, will let robots offer assistance spontaneously and triage how best to distribute their help. This kind of assessment requires an understanding of how humans naturally communicate their needs to others, as well as a model of individuals and their needs over time. To achieve and demonstrate these goals, this project is building a waiter robot that anticipates customer needs and responds to them, whether it is actively hailed or the need is only implicit. The restaurant setting also showcases the challenge of detecting these signals while people are engaged in human-human group interactions and are not focused solely on their robot collaborator. A successful system could improve restaurant efficiency and provide insight into how to model human thinking.
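
For intuition, a toy version of this assessment could combine explicit hails and implicit cues into a single score, as in the sketch below; the features, weights, and bias are illustrative assumptions, not the project's trained model.

    import numpy as np

    def needs_help_score(gaze_at_robot, hand_raised, idle_time_s,
                         w=(2.0, 3.0, 0.05), bias=-3.0):
        """Toy logistic model over social signals: gaze toward the robot and a
        raised hand are explicit hails; long idle time is an implicit cue."""
        x = np.array([float(gaze_at_robot), float(hand_raised), idle_time_s])
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + bias)))

    # A customer who has waited 40 s and glances at the robot, without hailing:
    print(needs_help_score(gaze_at_robot=True, hand_raised=False, idle_time_s=40.0))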

Questions can be directed to Ada.

Adaptive Coaching Via Machine Theory of Mind

To develop AI partners that can become effective teammates, we must endow them with basic social understanding, particularly the ability to understand humans' intentions, knowledge, and mental states. The ability to infer other people's mental models and use those models to perform actions, make predictions, and evaluate outcomes is called Theory of Mind (ToM). The goal of this project is to develop an artificial agent that adaptively assists human teams by (i) correcting suboptimal strategies, (ii) sharing knowledge across teammates when needed, and (iii) instructing human players so they can realign their goals as the environment changes. By learning the goals, dynamics, and mental states of its human partners, an AI agent can generate context-aware, personalized assistance.
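
One way to ground this idea: represent the inferred teammate's mental model as a value table estimated from their behavior, and intervene only when the action that model favors sacrifices meaningful true value. The sketch below is a minimal illustration; the Q-tables and regret threshold are assumptions, not this project's architecture.

    import numpy as np

    def coaching_action(q_true, q_human, state, tolerance=0.5):
        """Compare the action the inferred human mental model favors (q_human)
        against the truly optimal action (q_true); hint only when the regret,
        measured in true value, is meaningful."""
        human_action = int(np.argmax(q_human[state]))
        best_action = int(np.argmax(q_true[state]))
        regret = q_true[state, best_action] - q_true[state, human_action]
        return ("hint", best_action) if regret > tolerance else ("no_op", None)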

Questions can be directed to Michelle.

Uncertainty Estimation and Resolution in Task Transfer


Adaptability is an essential skill in human cognition, enabling us to draw on extensive, life-long experience with various objects and tasks to address novel problems. Robots do not yet have this kind of adaptability, and as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do.

We explore how different types of interaction enable a robot to address novel task variations. Prior work has shown that transfer problems of varying complexity, such as identifying object replacements or finding creative tool uses, can be addressed through continued interaction between the teacher and the robot, with each interaction type yielding different task information. Our current work assesses the robot's proficiency at a task: before attempting a novel task variation, the robot must determine what knowledge it lacks and which interaction type is most likely to provide it.
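
A minimal version of that assessment loop might score each interaction type by how much of the robot's current uncertainty it could resolve, as in the sketch below; the knowledge gaps, interaction types, and scores are illustrative assumptions.

    def choose_interaction(uncertainties, resolves):
        """Pick the interaction type expected to resolve the most uncertainty.
        uncertainties: knowledge gap -> uncertainty score;
        resolves: interaction type -> the gaps it can address."""
        def gain(itype):
            return sum(uncertainties[g] for g in resolves[itype] if g in uncertainties)
        return max(resolves, key=gain)

    uncertainties = {"object_replacement": 0.7, "tool_use": 0.2, "goal_location": 0.4}
    resolves = {"demonstration": ["tool_use", "goal_location"],
                "label_query": ["object_replacement"],
                "feature_query": ["object_replacement", "tool_use"]}
    print(choose_interaction(uncertainties, resolves))  # -> "feature_query"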