Augmentative and Alternative Communication
Persons with complex communication needs (CCNs) use augmentative and alternative communication (AAC) devices to express thoughts, needs, wants, and ideas when they cannot rely on their own speech to communicate. Examples of these devices include specialized keyboards, adapted controllers, and text-to-speech interfaces. We are exploring the different ways people use AAC devices in dyadic interactions to inform new designs that reduce the user burden on persons with CCNs and motor impairments and enable more personal AAC interfaces.
For more information, please contact Stephanie.
Markerless 3D Human Pose Forecasting
It is widely agreed that effective interaction with humans is a challenging problem that first requires robots to be able to perceive the people around them. At HARP Lab we believe that enabling robots to model human intent and predict human behavior would allow for more natural human-robot interaction. To this end, we are also working on human pose forecasting methods with low computational cost.
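To make the idea of a low-cost forecaster concrete, here is a minimal sketch (not the lab's actual method): a constant-velocity baseline that extrapolates each 3D joint from its last observed displacement. The function `forecast_poses` and the pose layout are hypothetical names chosen for this example.

```python
# Hypothetical constant-velocity baseline for 3D pose forecasting.
# A "pose" is a list of joints, each an [x, y, z] position.
from typing import List

Pose = List[List[float]]


def forecast_poses(prev: Pose, curr: Pose, horizon: int) -> List[Pose]:
    """Extrapolate each joint linearly from its last per-frame displacement.

    Cheap (no learned model): joint velocity is estimated from the two most
    recent observed poses and held constant over the forecast horizon.
    """
    future = []
    for step in range(1, horizon + 1):
        pose = [
            [c + step * (c - p) for p, c in zip(joint_prev, joint_curr)]
            for joint_prev, joint_curr in zip(prev, curr)
        ]
        future.append(pose)
    return future
```

Learned forecasters generally beat this baseline on longer horizons, but a sketch like this is a common reference point precisely because its computational cost is negligible.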
Abhijat would be happy to provide more information!
Assistive Manipulation Through Intent Recognition
An upper body mobility limitation can severely impact a person's quality of life. Such limitations can prevent people from performing everyday tasks such as picking up a cup or opening a door. The U.S. Census Bureau has indicated that more than 8.2% of the U.S. population, or 19.9 million Americans, live with upper body limitations. Assistive robots offer a way for people with severe mobility impairments to complete daily tasks. However, current assistive robots primarily operate through teleoperation, which requires significant cognitive and physical effort from the user. We explore how these assistive robots can be improved with artificial intelligence to take an active role in helping their users. Drawing from our understanding of human verbal and nonverbal behaviors (like speech and eye gaze) during robot teleoperation, we study how intelligent robots can predict human intent during a task and assist toward task completion. We aim to develop technology that decreases operator fatigue and task duration when using assistive robots by employing human-sensitive shared autonomy.
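One common way to frame this pipeline, sketched below under simplifying assumptions (it is not necessarily the lab's implementation), is confidence-based shared autonomy: the robot scores candidate goals by how well the user's motion points at them, then blends its own command with the user's in proportion to its confidence. Both function names here are hypothetical.

```python
import math


def intent_confidence(position, velocity, goals):
    """Heuristic intent recognition: score each candidate goal by how well
    the user's commanded motion direction points at it, then normalize the
    cosine-alignment scores with a softmax into goal probabilities."""
    scores = []
    for goal in goals:
        to_goal = [g - p for g, p in zip(goal, position)]
        dot = sum(v * t for v, t in zip(velocity, to_goal))
        norm = (math.sqrt(sum(v * v for v in velocity))
                * math.sqrt(sum(t * t for t in to_goal))) or 1.0
        scores.append(dot / norm)  # cosine alignment in [-1, 1]
    exps = [math.exp(4.0 * s) for s in scores]  # 4.0: arbitrary sharpness
    total = sum(exps)
    return [e / total for e in exps]


def blend_command(user_cmd, robot_cmd, confidence):
    """Shared autonomy arbitration: linearly blend the user's command with
    the robot's assistive command, weighted by intent-prediction confidence."""
    return [confidence * r + (1.0 - confidence) * u
            for u, r in zip(user_cmd, robot_cmd)]
```

For example, if the end effector is at the origin and the user pushes the joystick toward the rightmost of two goals, `intent_confidence` assigns that goal the higher probability, and `blend_command` shifts control toward the robot's reaching motion as that probability grows.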
Robot Self-Assessment (MURI)
Effective task delegation within a team requires each member to (1) understand task objectives, (2) estimate task feasibility, and (3) convey a reason for the acceptance or refusal of a task. Such self-assessment by robots will be critical to successful human-robot collaboration that maximizes the potential of each member while being robust to unforeseen failures. We are accordingly developing methods that will allow robots to quantify and convey their proficiency at a particular task during (in situ), after (post hoc), and before (a priori) execution.
Our current research directions include developing effective human-in-the-loop training (Pallavi Koppol), estimating the similarity between scenarios (Sarthak Ahuja), and conveying confidence in proficiency through interval estimation (Michael Lee).
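As a hedged illustration of reporting proficiency with uncertainty through interval estimation, the sketch below computes a Wilson score interval over a robot's past task outcomes. This is a standard statistical construction chosen for the example, not necessarily the project's method.

```python
import math


def proficiency_interval(successes, trials, z=1.96):
    """Wilson score interval for a task's success probability.

    Returns (lower, upper) bounds at roughly 95% confidence (z = 1.96).
    With no data, the robot can only report total uncertainty: (0, 1).
    """
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1.0 + z * z / trials
    center = (p + z * z / (2.0 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1.0 - p) / trials + z * z / (4.0 * trials * trials)
    )
    return (center - half, center + half)
```

A robot that has succeeded at a task 9 times out of 10 would report an interval roughly from 0.60 to 0.98 rather than a bare "90%", conveying both its a priori proficiency estimate and how much evidence backs it.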
Recognizing and Reacting to Human Needs Determined by Social Signals
Being able to identify which humans need help, and when they need it, will enable robots to spontaneously offer assistance as well as triage how their help can best be distributed. Performing this kind of assessment requires an understanding of how humans naturally communicate their needs to others, as well as a model of individuals and their needs over time. To achieve and demonstrate these goals, this project seeks to build a waiter robot that can anticipate customer needs and respond to them both when actively hailed and when help is implicitly needed. This environment also showcases the challenge of detecting these signals while humans are engaged in human-human group interactions and are not solely focused on their robot collaborator. Successfully implementing this system could improve restaurant efficiency and provide insight into how to model human thinking.
Questions can be directed to Ada.