Research Interests

Goosebumps – Texture-changing Robotic Skin

We developed a soft robotic skin that changes its texture to express a robot’s internal state, such as its emotions. The prototype skin can animate a combination of goosebumps and spikes. This is in contrast to most socially expressive robots, which use either gestures or facial expressions to communicate. In a first application, we map these skin changes to the robot’s emotional state to create a social robotic companion that communicates more effectively.

Robotic Drumming Prosthesis

In collaboration with the Georgia Tech Center for Music Technology (GTCMT), we are working on a series of robotic drumming prostheses. The first, which includes two separately actuated drumsticks, was built for drummer Jason Barnes, who lost his hand in an accident. We are now developing a “third arm” version for able-bodied drummers, which will serve as a research platform for shared-autonomy control.

Human-Robot Theater

Theater acting and other performing arts can serve HRI in two ways: as testbeds for the development and evaluation of action coordination in human-robot interaction, and as a source of insights for action-perception frameworks that can serve the AI of social robots. This research looks at the connection between theater acting and HRI, and at the development of autonomous robotic theater actors.

Social Psychology Outcomes of HRI

We study social psychological outcomes of human-robot interaction in order to design robots that enhance, rather than threaten, people’s everyday lives. For example, we look at how a robot’s behavior affects people’s sense that the robot supports their needs, with applications for robots in caregiving, therapy, and interrogation roles. Another project looks at the effects of robot presence, design, and behavior on human honesty.

Robotic Companions for Behavior Change

We are interested in how robotic companions can lead to behavior change in humans. One approach is for them to act as “Empathy Objects”. When people interact, they are often unaware, or only partially aware, of the effect their behavior has on others. Empathy Objects, by reacting emotionally to people’s behavior through subtle physical gestures, could increase people’s awareness of the emotional states of others, leading to behavior change.

Human-Robot Collaborative Fluency

Collaborative fluency is the coordinated meshing of joint activities between members of a well-synchronized team. Two people repeatedly performing an activity together naturally converge to a high level of coordination, resulting in a fluent meshing of their actions. In contrast, human-robot interaction is often structured in a rigid stop-and-go fashion. We aim to build robotic agents that can work side by side with humans, displaying the kind of fluency that humans are accustomed to from each other.

Generating Expressive Motion for Toy Robots

With the increasing accessibility and popularity of robot kits, microcontroller platforms, cheap actuators, and online hardware tutorials, more people are building or buying simple low-cost robots than ever before. How can we make these low-degree-of-freedom robots expressive and entertaining? This research looks at developing systems that let lay users design movements and gestures for low-cost robots.
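
One common building block for such tools is keyframe animation: the user poses the robot at a few points in time, and the system interpolates in between. The sketch below is a minimal, hypothetical illustration of this idea for a two-servo robot; the data format and function names are assumptions made for illustration, not a specific system we have built.

    # Illustrative sketch: keyframe-based gesture playback for a
    # low-degree-of-freedom robot. A gesture is a list of
    # (time_in_seconds, servo_angles) keyframes; sampling linearly
    # interpolates servo angles between neighboring keyframes.

    def sample(gesture, t):
        """Return interpolated servo angles at time t."""
        for (t0, a0), (t1, a1) in zip(gesture, gesture[1:]):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0)
                return [x + frac * (y - x) for x, y in zip(a0, a1)]
        return list(gesture[-1][1])  # hold the last pose after the end

    # A "nod" gesture for a robot with two servos (pan, tilt):
    nod = [(0.0, [90, 90]), (0.5, [90, 60]), (1.0, [90, 90])]
    print(sample(nod, 0.25))  # -> [90.0, 75.0]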

Quality Metrics for Human-Robot Collaboration

As part of our research in human-robot collaborative fluency, we are developing a number of metrics to evaluate the quality and fluency of human-robot and human-agent shared-location teamwork. We propose both subjective and objective metrics to measure fluency between a human and a robot. To validate these metrics, we conduct human-subject studies linking the objective and subjective measures. We are particularly interested in an actor-observer bias with respect to human-robot interaction quality.
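
To illustrate the objective side, metrics such as each agent’s idle time and the fraction of time both agents act concurrently can be computed directly from timestamped activity logs. The following is a minimal sketch, assuming each agent’s activity is recorded as a list of (start, end) intervals; the representation and the metric set are simplified for illustration.

    # Minimal sketch: objective fluency metrics from activity intervals.

    def total_time(intervals):
        """Sum of durations of (start, end) activity intervals."""
        return sum(end - start for start, end in intervals)

    def overlap_time(a, b):
        """Total time during which two interval lists overlap."""
        return sum(max(0.0, min(e1, e2) - max(s1, s2))
                   for s1, e1 in a for s2, e2 in b)

    def fluency_metrics(human_acts, robot_acts, task_start, task_end):
        """Report idle-time and concurrent-activity fractions of task time."""
        duration = task_end - task_start
        return {
            "human_idle": 1.0 - total_time(human_acts) / duration,
            "robot_idle": 1.0 - total_time(robot_acts) / duration,
            "concurrent_activity": overlap_time(human_acts, robot_acts) / duration,
        }

    # Example: a 10-second task with partially overlapping activity.
    print(fluency_metrics([(0, 4), (6, 9)], [(2, 7)], 0.0, 10.0))
    # human idle ~0.3, robot idle 0.5, concurrent activity 0.3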

Robotic Experience Companionship

Robotic companions are expected to play both a functional and a social role in human environments, such as homes, schools, offices, nursing homes, and cars. There, robots will be designed to respond to events and stimuli around them, in part to fulfill a task, but also to evoke a social response in their human counterparts. This research experimentally evaluates the effects that a robot’s response to an event has on a human’s perception of the same event.

Human-Robot Joint Jazz Improvisation

How can humans and robots perform live music together in a fluent and synchronized way, and what can we learn from robotic musicianship for other human-robot collaborative tasks? Music is one of the most time-critical kinds of human-robot interaction. We have developed algorithms for Shimon, a robotic marimba player that listens to a human pianist and plays its part in a jazz duo.
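
A core ingredient of such systems is call-and-response: the robot segments the human’s phrase and answers with a variation of it. The sketch below is a deliberately simplified, hypothetical illustration of that idea, not Shimon’s actual improvisation algorithm.

    # Illustrative sketch: a toy call-and-response improviser. It hears a
    # phrase of MIDI pitches and answers with a transposed variation whose
    # contour is reversed. (Not Shimon's actual algorithm.)

    def respond(phrase, transpose=3):
        """Answer a heard phrase with a transposed, reversed variation."""
        return [pitch + transpose for pitch in reversed(phrase)]

    heard = [60, 62, 64, 67]   # C4, D4, E4, G4 from the human pianist
    print(respond(heard))      # -> [70, 67, 65, 63]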

Location-based Storytelling

CityCast is an urban media experience that connects people with the stories of their city and neighborhood. It is a content-delivery system that lets users listen to professionally produced stories based on their location. Listeners can experience stories situated in their original setting while walking through the streets, integrating audio content with visual attention to the surroundings. A city is made up of millions of stories, and its oral history is embedded in its streets, parks, and buildings. CityCast creates an urban radio experience connecting people with the lives of those who lived in the city before them.
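
At its core, location-based delivery means checking the listener’s position against a set of geotagged stories. The snippet below is a minimal, hypothetical sketch of that lookup; the story format, coordinates, and trigger radius are illustrative assumptions rather than CityCast’s actual implementation.

    # Illustrative sketch: trigger geotagged stories near the listener.
    import math

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance in meters."""
        r = 6_371_000
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    stories = [  # hypothetical geotagged story catalog
        {"title": "The Old Market", "lat": 42.4430, "lon": -76.5019},
    ]

    def stories_nearby(lat, lon, radius_m=75):
        """Return stories whose setting is within the trigger radius."""
        return [s for s in stories
                if distance_m(lat, lon, s["lat"], s["lon"]) <= radius_m]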

Shared Workspace MDP

To frame the problem of human-robot collaboration and fluency, we have developed a computational model for designing and evaluating algorithms for robots acting together with people. This modified Markov Decision Process (MDP) models a cost- (or distance-) based, shared-location, two-agent collaborative system. The model was used to develop an anticipatory action system for human-robot collaboration, and can be used to compare human-robot and human-human collaborative activities.
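
To make the setup concrete, the sketch below shows one way such a two-agent, cost-based model could be encoded: the joint state tracks both agents’ positions in a shared workspace, and the cost combines task distance with an interference penalty when the agents crowd the same location. The state space, action set, and cost function here are illustrative assumptions, not the exact published formulation.

    # Illustrative sketch of a cost-based, shared-location, two-agent MDP.
    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class State:
        human_loc: int  # human's cell in a discretized 1-D workspace
        robot_loc: int  # robot's cell

    CELLS = range(5)       # five workspace cells
    MOVES = (-1, 0, 1)     # each agent moves left, stays, or moves right

    def step(state, robot_move, human_move):
        """Deterministic joint transition, clamped to workspace bounds."""
        clamp = lambda x: min(max(x, CELLS[0]), CELLS[-1])
        return State(clamp(state.human_loc + human_move),
                     clamp(state.robot_loc + robot_move))

    def cost(state, robot_goal):
        """Distance to the robot's goal plus a shared-space penalty."""
        interference = 2.0 if state.robot_loc == state.human_loc else 0.0
        return abs(state.robot_loc - robot_goal) + interference

    # The joint state space, e.g. for value iteration over robot policies:
    states = [State(h, r) for h, r in product(CELLS, CELLS)]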

Task Collaboration in Human-Robot Teams

We developed a task planning and execution framework that enables a human and an autonomous robot to work together shoulder to shoulder, sharing the workspace and the objects required to complete a task. The framework uses a model that affords social interaction by being goal-oriented on multiple nested levels. The collaborative framework built on this model includes self-assessment for mutual support, communication to support joint activity, dynamic meshing of sub-plans, and negotiation of labor division via turn-taking.
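
As a rough illustration of “goal-oriented on multiple nested levels”, a task can be represented as a tree of goals whose leaves are atomic actions, with labor divided over those leaves, for instance by alternating turns. The sketch below is a hypothetical toy example, not the framework’s actual representation or negotiation mechanism.

    # Illustrative sketch: a nested goal tree with turn-taking allocation.
    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        subgoals: list = field(default_factory=list)

        def leaves(self):
            """Yield atomic actions (leaf goals) in order."""
            if not self.subgoals:
                yield self
            else:
                for sub in self.subgoals:
                    yield from sub.leaves()

    def allocate_by_turns(task):
        """Alternate atomic actions between the human and the robot."""
        agents = ("human", "robot")
        return [(agents[i % 2], leaf.name)
                for i, leaf in enumerate(task.leaves())]

    cart = Goal("build cart", [
        Goal("attach wheels", [Goal("wheel 1"), Goal("wheel 2")]),
        Goal("attach handle"),
    ])
    print(allocate_by_turns(cart))
    # -> [('human', 'wheel 1'), ('robot', 'wheel 2'), ('human', 'attach handle')]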