As of 2017, this list is no longer kept current. Please see my Google Scholar page and group website for an up-to-date list of publications.
Vatsal, V. and Hoffman, G. (2017). Wearing your arm on your sleeve – Studying usage contexts for a wearable robotic forearm [RO-MAN'17] pdf →
Wearing your arm on your sleeve – Studying usage contexts for a wearable robotic forearm
26th IEEE International Symposium on Robot and Human Interactive Communication
Alves-Oliveira, P., et al. (2017). YOLO, a Robot for Creativity: A Co-Design Study with Children [IDC'17] →
YOLO, a Robot for Creativity: A Co-Design Study with Children
2017 Conference on Interaction Design and Children
Abstract
This paper describes the design and development of YOLO, a social robot aimed at boosting creativity in children. Creativity is one of the most sought-after competencies as we move from industrialized economies, in which standardized knowledge was imperative, to creative economies, where the ability to innovate is crucial for the workforce. YOLO is a robot to be used by children as a tool to boost new ideas and stimulate their creativity. This paper describes how established educational strategies that enhance creativity were combined with co-designing with children as informants to reach the prototype design of the robot.
Luria, M., Hoffman, G., & Zuckerman, O. (2017). Comparing Social Robot, Screen and Voice Interfaces for Smart-Home Control [CHI'17] →
Comparing Social Robot, Screen and Voice Interfaces for Smart-Home Control
2017 CHI Conference on Human Factors in Computing Systems
Abstract
With domestic technology on the rise, the quantity and complexity of smart-home devices are becoming an important interaction design challenge. We present a novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures. We experimentally compare the robot to three common smart-home interfaces: a voice-control loudspeaker; a wall-mounted touch-screen; and a mobile application. Our findings suggest that interfaces that rate higher on flow rate lower on usability, and vice versa. Participants’ sense of control is highest using familiar interfaces, and lowest using voice control. Situation awareness is highest using the robot, and also lowest using voice control. These findings raise questions about voice control as a smart-home interface, and suggest that embodied social robots could provide for an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.
Megidish, B., Zuckerman, O., & Hoffman, G. (2017). Animating Mechanisms: A Pipeline for Authoring Robot Gestures [HRI'17 LBR] →
Animating Mechanisms: A Pipeline for Authoring Robot Gestures
Companion Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Designing and authoring gestures for socially expressive robots has been an increasingly important problem in recent years. In this demo we present a new pipeline that enables animators to create gestures for robots in a 3D animation authoring environment, without knowledge of computer programming. The pipeline consists of an exporter for 3D animation software and an interpreter, running on a System-on-Module, that translates the exported animation into motor control commands.
Görür, O. C., et al. (2017). Toward integrating Theory of Mind into Adaptive Decision-Making of Social Robots to Understand Human Intention [HRI'17 Workshop] →
Toward integrating Theory of Mind into Adaptive Decision-Making of Social Robots to Understand Human Intention
HRI 2017 Workshop on the Role of Intentions in Human-Robot Interaction
Abstract
We propose an architecture that integrates Theory of Mind into a robot’s decision-making to infer a human’s intention and adapt to it. The architecture implements human-robot collaborative decision-making for a robot incorporating human variability in their emotional and intentional states. This research first implements a mechanism for stochastically estimating a human’s belief over the state of the actions that the human could possibly be executing. Then, we integrate this information into a novel stochastic human-robot shared planner that models the human’s preferred plan. Our contribution lies in the ability of our model to handle the conditions: 1) when the human’s intention is estimated incorrectly and the true intention may be unknown to the robot, and 2) when the human’s intention is estimated correctly but the human doesn’t want the robot’s assistance in the given context. A robot integrating this model into its decision-making process would better understand a human’s need for assistance and therefore adapt to behave less intrusively and more reasonably in assisting its human companion.
Thomaz, A.L., Hoffman, G., & Cakmak, M. (2016). Computational Human-Robot Interaction [FnT-ROB] pdf →
Computational Human-Robot Interaction
Foundations and Trends® in Robotics 4(2-3)
Abstract
We present a systematic survey of computational research in human-robot interaction (HRI) over the past decade. Computational HRI is the subset of the field that is specifically concerned with the algorithms, techniques, models, and frameworks necessary to build robotic systems that engage in social interactions with humans. Within the field of robotics, HRI poses distinct computational challenges in each of the traditional core research areas: perception, manipulation, planning, task execution, navigation, and learning. These challenges are addressed by the research literature surveyed here. We surveyed twelve publication venues and include work that tackles computational HRI challenges, categorized into eight topics: (a) perceiving humans and their activities; (b) generating and understanding verbal expression; (c) generating and understanding nonverbal behaviors; (d) modeling, expressing, and understanding emotional states; (e) recognizing and conveying intentional action; (f) collaborating with humans; (g) navigating with and around humans; and (h) learning from humans in a social manner. For each topic, we suggest promising future research areas.
Birnbaum, G. E., et al. (2016). What robots can teach us about intimacy [CHB'16] →
What robots can teach us about intimacy: The reassuring effects of robot responsiveness to human disclosure
Computers in Human Behavior 63
Abstract
Perceiving another person as responsive to one’s needs is inherent to the formation of attachment bonds and is the foundation for safe-haven and secure-base processes. Two studies examined whether such processes also apply to interactions with robots. In both studies, participants had one-at-a-time sessions, in which they disclosed a personal event to a non-humanoid robot that responded either responsively or unresponsively across two modalities (gestures, text). Study 1 showed that a robot’s responsiveness increased perceptions of its appealing traits, approach behaviors towards the robot, and the willingness to use it as a companion in stressful situations. Study 2 found that in addition to producing similar reactions in a different context, interacting with a responsive robot improved self-perceptions during a subsequent stress-generating task. These findings suggest that humans not only utilize responsiveness cues to ascribe social intentions to robots, but can actually use them as a source of consolation and security.
Forlizzi, J., et al. (2016). Let’s Be Honest – A Controlled Field Study of Ethical Behavior in the Presence of a Robot [RO-MAN'16] →
Let’s Be Honest – A Controlled Field Study of Ethical Behavior in the Presence of a Robot
Proceedings of the 25th International Symposium on Robot and Human Interactive Communication
Abstract
Human-robot collaboration will increasingly take place in human social settings, including contexts where ethical and honest behavior is paramount. How might these robots affect human honesty? In this paper, we present first evidence of how a robot’s presence affects people’s ethical behavior in a controlled field study. We observed people passing by a food plate marked as “reserved”, comparing three conditions: no observer, a human observer, and a robot observer. We found that a human observer elicits less attention than a robot, but evokes more of a socially normative presence causing people to act honestly. Conversely, we found that a robot observer elicits more attention, engagement, and a monitoring presence. But even though people were suspicious that they were being monitored, they still behaved dishonestly in the robot observer condition.
Luria, M., et al. (2016). Designing Vyo, a Robotic Smart Home Assistant [RO-MAN'16] →
Designing Vyo, a Robotic Smart Home Assistant: Bridging the Gap Between Device and Social Agent
Proceedings of the 25th International Symposium on Robot and Human Interactive Communication
Abstract
We describe the design process of “Vyo”, a personal assistant serving as a centralized interface for smart home devices. Building on the concepts of ubiquitous and engaging computing in the domestic environment, we identified five design goals for the home robot: engaging, unobtrusive, device-like, respectful, and reassuring. These goals led our design process, which included simultaneous iterative development of the robot’s morphology, nonverbal behavior and interaction schemas. We continued with user-centered design research using puppet prototypes of the robot to assess and refine our design choices. The resulting robot, Vyo, straddles the boundary between a monitoring device and a socially expressive agent, and presents a number of novel design outcomes: The combination of TUI “phicons” with social robotics; gesture-related screen exposure; and a non-anthropomorphic monocular expressive face. We discuss how our design goals are expressed in the elements of the robot’s final design.
Hoffman, G. (2016). OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction pdf →
OpenWoZ: A Runtime-Configurable Wizard-of-Oz Framework for Human-Robot Interaction
AAAI Spring Symposium on Enabling Computing Research in Socially Intelligent Human-Robot Interaction
Abstract
Wizard-of-Oz (WoZ) is a common technique enabling HRI researchers to explore aspects of interaction not yet backed by autonomous systems. A standardized, open, and flexible WoZ framework could therefore serve the community and accelerate research both for the design of robotic systems and for their evaluation.
This paper presents the definition of OpenWoZ, a Wizard-of-Oz framework for HRI, designed to be updated during operation by the researcher controlling the robot. OpenWoZ is implemented as a thin HTTP server running on the robot, and a cloud-backed multi-platform client schema. The WoZ server accepts representational state transfer (REST) requests from a number and variety of clients simultaneously. This “separation of concerns” in OpenWoZ allows addition of commands, new sequencing of behaviors, and adjustment of parameters, all during run-time.
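To give a concrete flavor of the architecture described above, here is a minimal illustrative sketch of a Wizard-of-Oz client issuing a REST request to a robot-side HTTP server, in the spirit of OpenWoZ. The host address, endpoint path, and payload fields are assumptions for illustration only, not the actual OpenWoZ API.

```python
# Hypothetical sketch of a WoZ client sending a REST request to an on-robot
# HTTP server, in the spirit of the OpenWoZ architecture. Endpoint and payload
# are illustrative assumptions, not the real OpenWoZ interface.
import requests

ROBOT_URL = "http://robot.local:8080"  # assumed address of the robot-side server

def trigger_behavior(name: str, **params) -> bool:
    """Ask the robot to run a named behavior with runtime-adjustable parameters."""
    response = requests.post(
        f"{ROBOT_URL}/behaviors/{name}",  # hypothetical endpoint
        json={"parameters": params},      # e.g., speed or amplitude tweaks
        timeout=2.0,
    )
    return response.ok

if __name__ == "__main__":
    # A wizard console, tablet, or speech pipeline could all issue the same call.
    trigger_behavior("nod", speed=0.5, repetitions=2)
```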
Roizman, M., et al. (2016). Studying the Opposing Effects of Robot Presence on Human Corruption [HRI'16 LBR] →
Studying the Opposing Effects of Robot Presence on Human Corruption
11th ACM/IEEE International Conference on Human-Robot Interaction (HRI'16) Late Breaking Reports
Abstract
Social presence has two opposing effects on human corruption: the collaborative and contagious nature of another person’s presence can cause people to behave in a more corrupt manner. In contrast, the monitoring nature of another person’s presence can decrease corruption. We hypothesize that a robot’s presence can provide the best of both worlds: Decreasing corruption by providing a monitoring presence, without increasing it by collusion. We describe an experimental study currently underway that examines this hypothesis, and report on initial findings from pilot runs of our experimental protocol.
Birnbaum, G. E., et al. (2016). Machines as a source of consolation [HRI'16] →
Machines as a source of consolation: Robot responsiveness increases approach behavior and desire for companionship
Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Responsiveness to one’s bids for proximity in times of need is a linchpin of human interaction. As such, the ability to be perceived as responsive has design implications for socially assistive robotics. We report on a large-scale experimental laboratory study (n=102) examining robot responsiveness and its effects on human attitudes and behaviors. In one-on-one sessions, participants disclosed a personal event to a non-humanoid robot. The robot responded either responsively or unresponsively across two modalities: Simple gestures and written text. We replicated previous findings that the robot’s responsiveness increased perceptions of its appealing traits. In addition, we found that robot responsiveness increased nonverbal approach behaviors (physical proximity, leaning toward the robot, eye contact, smiling) and participants’ willingness to be accompanied by the robot during stressful events. These findings suggest that humans not only utilize responsiveness cues to ascribe social intentions to personal robots, but actually change their behavior towards responsive robots and may want to use such robots as a source of consolation.
Alves-Oliveira, P., et al. (2016). Boosting Children’s Creativity through Creative Interactions with Social Robots [HRI'16 LBR] →
Boosting Children’s Creativity through Creative Interactions with Social Robots
11th ACM/IEEE International Conference on Human-Robot Interaction (HRI'16) Late Breaking Reports
Abstract
Creativity is one of the most important and pervasive of all human abilities. However, it seems to decline during the school-age years, in a phenomenon known as the "creative crisis". As developed societies are shifting from an industrialized economy to a creative economy, there is a need to support creative abilities throughout life. With this work, we aim to use social robots as boosters for creativity-driven behaviors in children.
Zuckerman, O., et al. (2016). KIP3: Robotic Companion as an External Cue to Students with ADHD [TEI'16] →
KIP3: Robotic Companion as an External Cue to Students with ADHD
Extended Abstracts of the 10th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
We present the design and initial evaluation of Kip3, a social robotic device for students with ADHD that provides immediate feedback for inattention or impulsivity events. We designed a research platform comprising a tablet-based Continuous Performance Test (CPT), used to assess inattention and impulsivity, and a socially expressive robotic device (Kip3) that provides feedback. We evaluated our platform with 10 students with ADHD in a within-subject user study; 9 out of 10 participants felt that Kip3 helped them regain focus, but wondered whether it would be effective over time and how it would identify inattention in more complex situations outside the lab.
Hoffman, G., Bauman, S, & Vanunu, K. (2016). Robotic Experience Companionship in Music Listening and Video Watching [PUC'16] →
Robotic Experience Companionship in Music Listening and Video Watching
Personal and Ubiquitous Computing, 20(1), 51–63
Abstract
We propose the notion of Robotic Experience Companionship (REC): a person’s sense of sharing an experience with a robot. Does a robot’s presence and response to a situation affect a human’s understanding of the situation and of the robot, even without direct human-robot interaction? We present the first experimental assessment of REC, studying people’s experience of entertainment media as they share it with a robot. Both studies use an autonomous custom-designed desktop robot capable of performing gestures synchronized to the media. Study I (n=67), examining music listening companionship, finds that the robot’s dance-like response to music causes participants to feel that the robot is co-listening with them, and increases their liking of songs. The robot’s response also increases its perceived human character traits. We find REC to be moderated by music listening habits, such that social listeners were more affected by the robot’s response. Study II (n=91), examining video watching companionship, supports these findings, demonstrating that social video viewers enjoy the experience more with the robot present, while habitually solitary viewers do not. Also in line with Study I, the robot’s response to a video clip causes people to attribute more positive human character traits to the robot. This has implications for robots as companions for digital media consumption, but also suggests design implications based on REC for other shared experiences with personal robots.
Bretan, M., Hoffman, G., & Weinberg, G. (2015). Emotionally Expressive Dynamic Physical Behaviors in Robots [J-HCS'15] →
Emotionally Expressive Dynamic Physical Behaviors in Robots
International Journal of Human-Computer Studies, Volume 78
Abstract
For social robots to respond to humans in an appropriate manner, they need to use apt affect displays, revealing underlying emotional intelligence. We present an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect. On the generative side, we explore the expressive capabilities of an abstract, faceless, creature-like robot, with very few degrees of freedom, lacking both facial expressions and the complex humanoid design often found in emotionally expressive robots. We validate our system in a series of experiments: in one study, we find an advantage in classification for animated vs static affect expressions and advantages in valence and arousal estimation and personal preference ratings for both animated vs static and physical vs on-screen expressions. In a second experiment, we show that our parametrically generated expression variables correlate with the intended user affect perception. On the perceptual side, we present a new corpus of sentiment-tagged social media posts for training the robot to perceive affect in natural language. In a third experiment we estimate how well the corpus generalizes to an independent data set through cross-validation using a perceptron and demonstrate that the predictive model is comparable to other sentiment-tagged corpora and classifiers. Combining the perceptual and generative systems, we show in a fourth experiment that our automatically generated affect responses cause participants to show signs of increased engagement and enjoyment compared with arbitrarily chosen comparable motion parameters.
Mizrahi, M., et al. (2015). Robotic Attachment [SPSP'15 Poster] →
Robotic Attachment: The Effects of a Robot’s Responsiveness on its Appeal as a Source of Consolation
Poster at the 16th Annual Meeting of the Society for Personality and Social Psychology
Hoffman, G., et al. (2015). Design and Evaluation of a Peripheral Robotic Conversation Companion [HRI'15] Best Paper pdf →
Design and Evaluation of a Peripheral Robotic Conversation Companion
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We present the design, implementation, and evaluation of a peripheral empathy-evoking robotic conversation companion, Kip1. The robot’s function is to increase people’s awareness of the effect of their verbal behavior towards others, potentially leading to behavior change. Specifically, Kip1 is designed to promote non-aggressive conversation between people. It monitors the conversation’s nonverbal aspects and maintains an emotional model of its reaction to the conversation. If the conversation seems calm, Kip1 responds with a gesture designed to communicate curious interest. If the conversation seems aggressive, Kip1 responds with a gesture designed to communicate fear. We describe the design process of Kip1, guided by the principles of “peripheral” and “evocative”. We detail its hardware and software systems, and a study evaluating the effects of the robot’s autonomous behavior on couples’ conversations. Participant couples were guided to conduct a conflict conversation with the robot present, in one of two conditions: the robot reacting or not reacting to their conversation. We find support for our design goals. A responsive conversation companion leads to more gaze attention, but not more verbal distraction. This suggests that robotic devices could be designed as companions to human-human interaction without compromising the natural communication flow between people. Participants also rated the reacting robot as having significantly more social human character traits and as being significantly more similar to them. This points to the robot’s potential to elicit people’s empathy.
Slyper, R., Hoffman, G., & Shamir, A. (2015). Mirror Puppeteering: Animating Toy Robots in Front of a Webcam [TEI'15] pdf →
Mirror Puppeteering: Animating Toy Robots in Front of a Webcam
Proceedings of the 9th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
Mirror Puppeteering is a system for easily creating gestures (“animations”) for robotic toys, custom robots, and virtual characters. Lay users can record animations by simply moving a robot’s limbs in front of a webcam. Makers and hobbyists can use the system to easily set up their custom-built robots for animation. Gamers and amateur animators can control virtual characters in real time or save animations for them. Our system works by tracking circular markers on the robot’s surface and translating these into motor commands, using a calibration map between marker locations in camera space and motor angles. New robots can be quickly set up for Mirror Puppeteering without knowledge of the robot’s 3D structure, as we demonstrate on several robots. In a user study, participants found our method more enjoyable, usable, easy to learn, and successful than traditional animation methods.
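As an illustration of the calibration-map idea mentioned in the abstract, the sketch below maps a tracked marker position in camera space to a motor angle by interpolating over recorded calibration samples. The sample data and the interpolation scheme are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of a camera-space-to-motor-angle
# lookup: interpolate over recorded (marker position, motor angle) samples.
import math

# Hypothetical calibration samples: (marker_x, marker_y) -> motor angle (degrees)
CALIBRATION = [
    ((120.0, 240.0), 0.0),
    ((180.0, 230.0), 30.0),
    ((240.0, 210.0), 60.0),
    ((300.0, 180.0), 90.0),
]

def marker_to_angle(x: float, y: float, power: float = 2.0) -> float:
    """Inverse-distance-weighted interpolation over the calibration map."""
    num, den = 0.0, 0.0
    for (cx, cy), angle in CALIBRATION:
        d = math.hypot(x - cx, y - cy)
        if d < 1e-6:
            return angle  # exact calibration point
        w = 1.0 / d ** power
        num += w * angle
        den += w
    return num / den

print(marker_to_angle(200.0, 225.0))  # angle command for the tracked marker
```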
Zuckerman, O., Hoffman, G., & Gal-Oz, A. (2015). In-car game design for children [J-CCI'15] →
In-car game design for children: Promoting interactions inside and outside the car
Journal of Child-Computer Interaction
Abstract
Long car rides can become a source of boredom for children, consequently causing tension inside the car. Common solutions to boredom include entertainment devices suitable for in-car use. Such devices often disengage children from other family members inside the car, as well as from the outside world. We set out to create a novel in-car game that connects children with their family and their environment, instead of only their entertainment devices. The game, called Mileys, integrates location-based information, augmented reality and virtual characters. We developed Mileys in an iterative process – findings from the first round of prototyping and evaluation guided the design of a second-generation prototype and led to additional evaluations. In this paper we discuss lessons learned during the development and evaluation of Mileys, present current challenges for location-based in-car game design, and suggest potential solutions for promoting interactions inside and outside the car.
Hoffman, G., et al. (2015). Robot Presence and Human Honesty: Experimental Evidence [HRI'15] pdf →
Robot Presence and Human Honesty: Experimental Evidence
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Robots are predicted to serve in environments in which human honesty is important, such as the workplace, schools, and public institutions. Can the presence of a robot facilitate honest behavior? In this paper, we describe an experimental study evaluating the effects of robot social presence on people’s honesty. Participants complete a perceptual task, which is structured so as to allow them to earn more money by not complying with the experiment instructions. We compare three conditions between subjects: Completing the task alone in a room; completing it with a non-monitoring human present; and completing it with a non-monitoring robot present. The robot is a new expressive social head capable of 4-DoF head movement and screen-based eye animation, specifically designed and built for this research. It was designed to convey social presence, but not monitoring. We find that people cheat in all three conditions, but cheat less, and to a similar degree, when there is a human or a robot in the room, compared to when they are alone. We did not find differences in the perceived authority of the human and the robot, but did find that people felt significantly less guilty after cheating in the presence of a robot as compared to a human. This has implications for the use of robots in monitoring and supervising tasks in environments in which honesty is key.
Zuckerman, O., & Hoffman, G. (2015). Empathy Objects: Robotic Devices as Conversation Companions [TEI'15] pdf →
Empathy Objects: Robotic Devices as Conversation Companions
Extended Abstracts of the 9th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
We present the notion of Empathy Objects, ambient robotic devices accompanying human-human interaction. Empathy Objects respond to human behavior using physical gestures as nonverbal expressions of their “emotional states”. The goal is to increase people’s self-awareness to the emotional state of others, leading to behavior change. We demonstrate an Empathy Object prototype, Kip1, a conversation companion designed to promote non-aggressive conversation between people.
Hoffman, G., & Ju, W. (2014). Designing Robots with Movement in Mind [J-HRI'14] pdf →
Designing Robots with Movement in Mind
Journal of Human-Robot Interaction, 3(1), 89–122
Abstract
This paper makes the case for designing interactive robots with their expressive movement in mind. As people are highly sensitive to physical movement and spatiotemporal affordances, well-designed robot motion can communicate, engage, and offer dynamic possibilities beyond the machines’ surface appearance or pragmatic motion paths. We present techniques for movement centric design, including character animation sketches, video prototyping, interactive movement explorations, Wizard of Oz studies, and skeletal prototypes. To illustrate our design approach, we discuss four case studies: a social head for a robotic musician, a robotic speaker dock listening companion, a desktop telepresence robot, and a service robot performing assistive and communicative tasks. We then relate our approach to the design of non-anthropomorphic robots and robotic objects, a design strategy that could facilitate the feasibility of real-world human-robot interaction.
Hoffman, G., et al. (2014). Robot responsiveness to human disclosure affects social impression and appeal [HRI'14] pdf →
Robot responsiveness to human disclosure affects social impression and appeal
Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
In human relationships, responsiveness—behaving in a sensitive manner that is supportive of another person’s needs—plays a major role in any interaction that involves effective communication, caregiving, and social support. Perceiving one’s partner as responsive has been tied to both personal and relationship well-being. In this work, we examine whether and how a robot’s behavior can instill a sense of responsiveness, and the effects of a robot’s perceived responsiveness on the human’s perception of the robot. In an experimental between-subject study (n=34), a desktop non-humanoid robot performed either positive or negative responsiveness behaviors across two modalities (simple gestures and written text) in response to the participant’s disclosure of a negative event. We found that perceived partner responsiveness, positive human-like traits, and robot attractiveness were higher in the positively responsive condition. This has design implications for interactive robots, in particular for robots in caregiving roles.
Hoffman, G., Cakmak, M, &, Chao, C. (2014). Timing in human-robot interaction [HRI'14 Workshop Summary] →
Timing in human-robot interaction
Workshop at the 9th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Timing plays a role in a range of human-robot interaction scenarios, as humans are highly sensitive to timing and interaction fluency. It is central to spoken dialogue, with turn-taking, interruptions, and hesitation influencing both task efficiency and user affect. Timing is also an important factor in the interpretation and generation of gestures, gaze, facial expressions, and other nonverbal behavior. Beyond communication, temporal synchronization is functionally necessary for sharing resources and physical space, as well as coordinating multi-agent actions. Timing is thus crucial to the success of a broad spectrum of HRI applications, including but not limited to situated dialogue; collaborative manipulation; performance, musical, and entertainment robots; and expressive robot companions. Recent years have seen a growing interest in the HRI community in the various research topics related to human-robot timing. The purpose of this workshop is to explore and discuss theories, computational models, systems, empirical studies, and interdisciplinary insights related to the notion of timing, fluency, and rhythm in human-robot interaction.
Hoffman, G., et al. (2013). In-car game design for children: child vs. parent perspective [IDC'13] →
In-car game design for children: child vs. parent perspective
Proceedings of the 12th International Conference on Interaction Design and Children
Abstract
Family car rides can become a source of boredom for child passengers, and consequently cause tension inside the car. In an attempt to overcome this problem, we developed Mileys—a novel in-car game that integrates location-based information, augmented reality and virtual characters. It aims to make car rides more interesting for child passengers, strengthen the bond between family members, encourage safe and ecological driving, and connect children with their environment instead of their entertainment devices. We evaluated Mileys with a six-week-long field study, which revealed differences between children and parents regarding their desired in-car experience. Children wish to play enjoyable games, whereas parents view car rides as an opportunity for strengthening the bond between family members and for educating their children. Based on our findings, we identify five key challenges for in-car game design for children: different expectations by parents and children, undesired detachment, short interaction span, poor GPS reception, and motion sickness.
Hoffman, G. (2013). Evaluating Fluency in Human-Robot Collaboration [RSS'13 Workshop] Best Workshop Paper pdf →
Evaluating Fluency in Human-Robot Collaboration
Robotics: Science and Systems Workshop on Human-Robot Collaboration
Abstract
Please refer to the new journal version of this paper.
Collaborative fluency is the coordinated meshing of joint activities between members of a well-synchronized team. We aim to build robotic team members that can work side by side with humans by displaying the kind of fluency that humans are accustomed to from each other. As part of this effort, we have developed a number of metrics to evaluate the level of fluency in human-robot shared-location teamwork. In this paper we discuss issues in measuring fluency, present both subjective and objective metrics that have been used to measure fluency between a human and robot, and report on findings along the proposed metrics.
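For a concrete sense of the objective side of such metrics, the sketch below computes two measures commonly associated with this line of work, the fraction of concurrent activity and each agent's idle time, from assumed timestamped activity intervals. The interval data and function names are illustrative assumptions, not material from the paper.

```python
# A minimal sketch (not the paper's code) of two objective fluency measures:
# percentage of concurrent activity and each agent's idle time, computed from
# assumed (start, end) activity intervals for a shared-location task.

def active_time(intervals):
    """Total time an agent is active, assuming non-overlapping intervals."""
    return sum(end - start for start, end in intervals)

def overlap(a, b):
    """Total time during which both agents are active simultaneously."""
    total = 0.0
    for s1, e1 in a:
        for s2, e2 in b:
            total += max(0.0, min(e1, e2) - max(s1, s2))
    return total

# Hypothetical task trace, in seconds from task start.
human = [(0.0, 4.0), (6.0, 10.0)]
robot = [(3.0, 7.0), (9.0, 12.0)]
task_duration = 12.0

print("concurrent activity %:", overlap(human, robot) / task_duration * 100)
print("human idle time:", task_duration - active_time(human))
print("robot idle time:", task_duration - active_time(robot))
```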
Hoffman, G., & Vanunu, K. (2013). Effects of Robotic Companionship on Music Enjoyment and Agent Perception [HRI'13] pdf →
Effects of Robotic Companionship on Music Enjoyment and Agent Perception
Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We evaluate the effects of robotic listening companionship on people’s enjoyment of music, and on their perception of the robot. We present a robotic speaker device designed for joint listening and embodied performance of the music played on it. The robot generates smoothed real-time beat-synchronized dance moves, uses nonverbal gestures for common ground, and can make and maintain eye-contact.
In an experimental between-subject study (n=67), participants listened to songs played on the speaker device, with the robot either moving in sync with the beat, moving off-beat, or not moving at all. We found that while the robot’s beat precision was not consciously detected by participants, an on-beat robot positively affected song liking. There was no effect on overall experience enjoyment. In addition, the robot’s response caused participants to attribute more positive human-like traits to the robot, as well as rate the robot as more similar to themselves. Notably, personal listening habits (solitary vs. social) affected agent attributions.
This work points to a larger question, namely how a robot’s perceived response to an event might affect a human’s perception of the same event.
Hoffman, G., et al. (2012). Evaluating Music Listening with a Robotic Companion [IROS-iHAI'12] →
Evaluating Music Listening with a Robotic Companion
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), International Workshop on Human-Agent Interaction
Abstract
Music listening is a central activity in human culture, and throughout history the introduction of new audio reproduction technologies has influenced the way music is consumed and perceived.
In this work, we discuss a robotic speaker, designed to behave as both a reproduction device and as a music listening companion. The robot is intended to enhance a human’s listening experience by providing social presence and embodied musical performance. In a sample application, it generates segment-specific, beat-synchronized gestures based on the song’s genre, and maintains eye-contact with the user.
We describe an experimental human-subject study (n=67), evaluating the effect of the robot’s behavior on people’s enjoyment of the songs played, as well as on their sense of the robot’s social presence and their impression of the robot as an autonomous agent.
Hoffman, G. (2012). Dumb Robots, Smart Phones: a Case Study of Music Listening Companionship [RO-MAN'12] Best Paper Nomination pdf →
Dumb Robots, Smart Phones: a Case Study of Music Listening Companionship
Proceedings of the 21st International Symposium on Robot and Human Interactive Communication
Abstract
Combining high-performance, sensor-rich mobile devices with simple, low-cost robotic platforms could accelerate the adoption of personal robotics in real-world environments. We present a case study of this “dumb robot, smart phone” paradigm: a robotic speaker dock and music listening companion. The robot is designed to enhance a human’s listening experience by providing social presence and embodied musical performance. In its initial application, it generates segment-specific, beat-synchronized gestures based on the song’s genre, and maintains eye-contact with the user. All of the robot’s computation, sensing, and high-level motion control is performed on a smartphone, with the rest of the robot’s parts handling mechanics and actuator bridging.
Hoffman, G. (2012). Embodied Cognition for Autonomous Interactive Robots [TopiCS'12] pdf →
Embodied Cognition for Autonomous Interactive Robots
Topics in Cognitive Science, 4(4), 759–772
Abstract
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior.
This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human–robot interaction based on recent psychological and neurological findings.
Hoffman, G. (2011). On Stage: Robots as Performers [RSS'11 Workshop] pdf →
On Stage: Robots as Performers
Robotics: Science and Systems Workshop on Human-Robot Interaction
Abstract
This paper suggests turning to the performing arts for insights that may help the fluent coordination and joint-action timing of human-robot interaction (HRI). We argue that theater acting and musical performance robotics could serve as useful testbeds for the development and evaluation of action coordination in robotics. We also offer two insights from the theater acting literature for HRI: the maintenance of continuous sub-surface processes that manifest in motor action, and an emphasis on fast, inaccurate responsiveness using partial information and priming in action selection.
Hoffman, G., & Weinberg, G. (2011). Interactive Improvisation with a Robotic Marimba Player [AU-RO'11] pdf →
Interactive Improvisation with a Robotic Marimba Player
Autonomous Robots, 31(2-3), 133-153
Abstract
Shimon is an interactive robotic marimba player, developed as part of our ongoing research in Robotic Musicianship. The robot listens to a human musician and continuously adapts its improvisation and choreography, while playing simultaneously with the human. We discuss the robot’s mechanism and motion-control, which uses physics simulation and animation principles to achieve both expressivity and safety. We then present an interactive improvisation system based on the notion of physical gestures for both musical and visual expression. The system also uses anticipatory action to enable real-time improvised synchronization with the human player.
We describe a study evaluating the effect of embodiment on one of our improvisation modules: antiphony, a call-and-response musical synchronization task. We conducted a 3×2 within-subject study manipulating the level of embodiment, and the accuracy of the robot’s response. Our findings indicate that synchronization is aided by visual contact when uncertainty is high, but that pianists can resort to internal rhythmic coordination in more predictable settings. We find that visual coordination is more effective for synchronization in slow sequences; and that occluded physical presence may be less effective than audio-only note generation.
Finally, we test the effects of visual contact and embodiment on audience appreciation. We find that visual contact in joint Jazz improvisation makes for a performance in which audiences rate the robot as playing better, more like a human, as more responsive, and as more inspired by the human. They also rate the duo as better synchronized, more coherent, communicating, and coordinated; and the human as more inspired and more responsive.
Hoffman, G. and Weinberg, G. (2011). Interactive Improvisation with a Robotic Marimba Player [Book Chapter] →
Interactive Improvisation with a Robotic Marimba Player
Chapter in J. Solis & K. Ng (Eds.), Musical Robots and Interactive Multimodal Systems.
Hoffman, G., & Weinberg, G. (2010). Synchronization in Human-Robot Musicianship [RO-MAN'10] →
Synchronization in Human-Robot Musicianship
Proceedings of the 19th International Symposium on Robot and Human Interactive Communication
Abstract
Shimon is an interactive robotic marimba player, developed as part of our ongoing research in Robotic Musicianship (RM). One of the potential benefits of RM is that it provides human players with embodied information that relates spatial movement to tone generation. This can aid in anticipation and coordination of synchronous playing.
As part of a human-robot Jazz improvisation system, we present an anticipatory system enabling beat-matched real-time synchronization. Our system enables flexible, yet coordinated call-and-response, a standard type of musical interaction. It was used in a live public human-robot joint Jazz performance.
We also describe a preliminary study evaluating the effect of embodiment on this call-and-response musical synchronization task. We conducted a 3×2 within-subject study manipulating the level of embodiment (visual co-presence, physical presence but visual occlusion, and synthesized sound), and the accuracy of the robot’s response.
Our findings indicate that synchronization is aided by visual contact when uncertainty is high, but that pianists can resort to internal rhythmic coordination in more predictable settings. We find that visual coordination is more effective for synchronization for slow sequences compared to faster sequences; and that occluded physical presence may be less effective than audio-only note generation.
Hoffman, G., & Weinberg, G. (2010). Gesture-based Human-Robot Jazz Improvisation [ICRA'10] Best Paper →
Gesture-based Human-Robot Jazz Improvisation
Proceedings of the IEEE International Conference on Robotics and Automation
Abstract
We present Shimon, an interactive improvisational robotic marimba player, developed for research in Robotic Musicianship. The robot listens to a human musician and continuously adapts its improvisation and choreography, while playing simultaneously with the human. We discuss the robot’s mechanism and motion-control, which uses physics simulation and animation principles to achieve both expressivity and safety. We then present a novel interactive improvisation system based on the notion of gestures for both musical and visual expression. The system also uses anticipatory beat-matched action to enable real-time synchronization with the human player.
Our system was implemented on a full-length human-robot Jazz duet, displaying highly coordinated melodic and rhythmic human-robot joint improvisation. We have performed with the system in front of a live public audience.
Hoffman, G., Weinberg, G. (2010). Shimon: An Interactive Improvisational Robotic Marimba Player [CHI'10 Extended Abstract] →
Shimon: An Interactive Improvisational Robotic Marimba Player
Extended Abstracts Proceedings of the ACM International Conference on Human Factors in Computing Systems
Abstract
Shimon is an autonomous marimba-playing robot designed to create interactions with human players that lead to novel musical outcomes. The robot combines music perception, interaction, and improvisation with the capacity to produce melodic and harmonic acoustic responses through choreographic gestures. We developed an anticipatory action framework, and a gesture-based behavior system, allowing the robot to play improvised Jazz with humans in synchrony, fluently, and without delay. In addition, we built an expressive non-humanoid head for musical social communication. This paper describes our system, used in a performance and demonstration at the CHI 2010 Media Showcase.
Hoffman, G. (2010). Anticipation in Human-Robot Interaction [AAAI'10 Spring Symposium] →
Anticipation in Human-Robot Interaction
AAAI 2010 Spring Symposium: It’s All in the Timing
Hoffman, G., & Breazeal, C. (2010). Effects of Anticipatory Perceptual Simulation on Practiced Human-Robot Tasks [AU-RO'10] pdf →
Effects of Anticipatory Perceptual Simulation on Practiced Human-Robot Tasks
Autonomous Robots, 28(4), 403-423
Abstract
With the aim of attaining increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of anticipation and perceptual simulation through top-down biasing. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp, performing a repetitive human-robot collaborative task.
In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human improve their relative contributions at a similar rate, possibly playing a part in the human’s “like-me” perception of the robot.
In self-report, we find significant differences between the two conditions in the sense of team fluency, the team’s improvement over time, the robot’s contribution to the efficiency and fluency, the robot’s intelligence, and in the robot’s adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to the robot, such as gender and intelligence, as well as credit for success, but we also find increased self-blame and self-deprecation in these subjects’ responses.
We believe that this work lays the foundation towards modeling and evaluating artificial practice for robots working in collaboration with humans.
Gray, J., et al. (2010). Expressive, Interactive Robots [HRI'10 Workshop] →
Expressive, Interactive Robots: Tools, Techniques, and Insights Based on Collaborations
HRI 2010 Workshop: What do collaborations with the arts have to say about HRI?
Abstract
In our experience, a robot designer, behavior architect, and animator must work closely together to create an interactive robot with expressive, dynamic behavior. This paper describes lessons learned from these collaborations, as well as a set of tools and techniques developed to help facilitate the collaboration. The guiding principles of these tools and techniques are to allow each collaborator maximum flexibility with their role and shield them from distracting complexities, while facilitating the integration of their efforts, propagating important constraints to all parties, and minimizing redundant or automatable tasks. We focus on three areas: (1) how the animator shares their creations with the behavior architect, (2) how the behavior architect integrates artistic content into dynamic behavior, and (3) how that behavior is performed on the physical robot.
Hoffman, G., Kubat, R., & Breazeal, C. (2008). A Hybrid Control System for Puppeteering a Live Robotic Stage Actor [RO-MAN'08] Best Paper →
A Hybrid Control System for Puppeteering a Live Robotic Stage Actor
Proceedings of the 17th International Symposium on Robot and Human Interactive Communication
Abstract
This paper describes a robotic puppeteering system used in a theatrical production involving one robot and two human performers on stage. We draw from acting theory and human-robot interaction to develop a hybrid-control puppeteering interface which combines reactive expressive gestures and parametric behaviors with a point-of-view eye contact module. Our design addresses two core considerations: allowing a single operator to puppeteer the robot’s full range of behaviors, and allowing for gradual replacement of human-controlled modules by autonomous subsystems.
We wrote a play specifically for a performance between two humans and one of our research robots, a robotic lamp which embodied a lead role in the play. We staged three performances with the robot as part of a local festival of new plays. Though we have yet to perform a formal statistical evaluation of the system, we interviewed the actors and director and present their feedback about working with the system.
Hoffman, G., & Breazeal, C. (2008). Anticipatory Perceptual Simulation for Human-Robot Joint Practice [AAAI'08] →
Anticipatory Perceptual Simulation for Human-Robot Joint Practice: Theory and Application Study
Proceedings of the 23rd AAAI Conference on Artificial Intelligence
Abstract
We have developed a cognitive architecture for a robotic teammate based on the neuro-psychological principles of anticipation and perceptual simulation, with the aim of attaining increased fluency and efficiency in human-robot teams. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp, performing in a human-robot collaborative task.
In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human increasingly contribute at a similar rate, and we find significant differences in a number of self-report measures and verbal attitudes towards the robot. Notably, we identify increased self-deprecation in human subject responses vis-à-vis the anticipatory robot.
Hoffman, G., & Breazeal, C. (2008). Achieving fluency through perceptual-symbol practice in human-robot collaboration [HRI'08] →
Achieving fluency through perceptual-symbol practice in human-robot collaboration
Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of perceptual symbols and simulation, with the aim of attaining increased fluency in human-robot teams. An instantiation of this architecture was implemented on a robotic desk lamp, performing in a human-robot collaborative task. This paper describes initial results from a human-subject study measuring team efficiency and team fluency, in which the robot works on a joint task with untrained subjects. We find significant differences in a number of efficiency and fluency metrics, when comparing our architecture to a purely reactive robot with similar capabilities.
Hoffman, G., & Breazeal, C. (2007). Cost-based anticipatory action selection for human–robot fluency [T-RO'07] pdf →
Cost-based anticipatory action selection for human–robot fluency
IEEE Transactions on Robotics, 23(5), 952-961
Abstract
A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts.
In this work we describe a model for human-robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process.
We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.
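To illustrate the kind of confidence-and-risk trade-off the abstract describes, here is a minimal sketch of a cost-based anticipatory choice: act early on a predicted human action only when the expected cost of acting, weighted by confidence in the prediction, beats waiting reactively. The cost model and numbers are illustrative assumptions, not the paper's model.

```python
# A minimal sketch (not the paper's model) of cost-based anticipatory action
# selection: compare the expected cost of acting on an anticipated human action
# against the cost of waiting and reacting.
from dataclasses import dataclass

@dataclass
class Option:
    confidence: float     # estimated probability the anticipated action is valid
    cost_if_right: float  # time cost if the anticipation was correct
    cost_if_wrong: float  # recovery cost if it was wrong (the "risk")

def should_anticipate(opt: Option, reactive_cost: float) -> bool:
    expected_anticipatory_cost = (
        opt.confidence * opt.cost_if_right
        + (1.0 - opt.confidence) * opt.cost_if_wrong
    )
    return expected_anticipatory_cost < reactive_cost

# Example: fetching the predicted next tool early vs. waiting for the request.
print(should_anticipate(Option(confidence=0.8, cost_if_right=1.0, cost_if_wrong=6.0),
                        reactive_cost=3.0))  # True: 2.0 expected cost beats 3.0
```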
Hoffman, G., & Breazeal, C. (2007). Effects of Anticipatory Action on Human-robot Teamwork [HRI'07] Best Student Paper →
Effects of Anticipatory Action on Human-robot Teamwork: Efficiency, Fluency, and Perception of Team
Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction
Abstract
A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process.
We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups.
Hoffman, G. (2006). Time Bracketing [DIME'06] pdf →
Time Bracketing
Proceedings of the 1st International Conference on Digital Interactive Media in Entertainment
Abstract
Time Bracketing is a novel technique to photographically depict a time-varying space or object in a single image without spatial distortion or fragmentation. The approach attempts to strike a balance between procedural generation and manual composition as well as between faithful depiction and digital manipulation. It also enables the clear representation of both dynamic and static scene elements. As a result, Time Bracketing puts the subject, rather than the technique, in the center of its artistic creation. This paper introduces the method, presents custom-built authoring software for the creation of Time Bracketing images, and shows architectural studies created using the described technique and software.
Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Reinforcement Learning with Human Teachers [RO-MAN'06] →
Reinforcement Learning with Human Teachers: Understanding How People Want to Teach Robots
Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication
Abstract
While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task.
We report three main observations on how people administer feedback when teaching a robot a task through Reinforcement Learning: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback — possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. In conclusion, we discuss future extensions to RL to accommodate these lessons.
Hoffman, G. (2006). Acting Lessons for Artificial Intelligence [AI'50 Summit] →
Acting Lessons for Artificial Intelligence
50th Anniversary Summit of Artificial Intelligence
Abstract
Theater actors have been staging artificial intelligence for centuries. If one shares the view that intelligence manifests in behavior, one must wonder what lessons the AI community can draw from a practice that is historically concerned with the infusion of artificial behavior into such vessels as body and text. Like researchers in AI, actors construct minds by systematic investigation of intentions, actions, and motor processes with the proclaimed goal of artificially recreating human-like behavior. Therefore, acting methodology may hold valuable directives for designers of artificially intelligent systems. Indeed, a review of acting method literature reveals a number of insights that may be of interest to the AI community.
Hoffman, G., & Breazeal, C. (2006). Robotic Partners’ Bodies and Minds [CogRob'06] →
Robotic Partners’ Bodies and Minds: An Embodied Approach to Fluid Human-Robot Collaboration
AAAI'06 Fifth International Workshop on Cognitive Robotics
Abstract
A mounting body of evidence in psychology and neuroscience points towards an embodied model of cognition, in which the mechanisms governing perception and action are strongly interconnected, and also play a central role in higher cognitive functions, traditionally modeled as amodal symbol systems.
We argue that robots designed to interact fluidly with humans must adopt a similar approach, and shed traditional distinctions between cognition, perception, and action. In particular, embodiment is crucial to fluid joint action, in which the robot’s performance must tightly integrate with that of a human counterpart, taking advantage of rapid sub-cognitive processes.
We thus propose a model for embodied robotic cognition that is built upon three propositions: (a) modal, perceptual models of knowledge; (b) integration of perception and action; (c) top-down bias in perceptual processing. We then discuss implications and derivatives of our approach.
Hoffman, G., & Breazeal, C. (2006). What Lies Ahead? Expectation Management in Human-Robot Collaboration [AAAI'06 Spring Symposium] →
What Lies Ahead? Expectation Management in Human-Robot Collaboration
AAAI 2006 Spring Symposium: To Boldly Go Where No Human-Robot Team Has Gone Before
Abstract
We aim to build robots that go beyond command-and-response and can engage in fluent collaborative behavior with their human counterparts. This paper discusses one aspect of collaboration fluency: expectation management – predicting what a human collaborator will do next and how to act on that prediction. We propose a formal time-based collaborative framework that can be used to evaluate this and other aspects of collocated human-robot teamwork, and show how expectation management can enable a higher level of fluency and improved efficiency in this framework. We also present an implementation of the proposed theoretical framework in a simulated human-robot collaborative task.
Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Experiments in Socially Guided Machine Learning [HRI'06] Best Student Short Paper →
Experiments in Socially Guided Machine Learning: Understanding Human Intent of Reward/Punishment
Proceedings of the 1st ACM/IEEE International Conference on Human-Robot Interaction
Abstract
In Socially Guided Machine Learning we explore the ways in which machine learning can more fully take advantage of natural human interaction. In this paper we are studying the role real-time human interaction plays in training assistive robots to perform new tasks. We describe an experimental platform, Sophie’s World, and present descriptive analysis of human teaching behavior found in a user study. We report three important observations of how people administer reward and punishment to teach a simulated robot a new task through Reinforcement Learning. People adjust their behavior as they develop a model of the learner, they use the reward channel for guidance as well as feedback, and they may also use it as a motivational channel.
Breazeal, C., et al. (2004). Tutelage and collaboration for humanoid robots [J-Humanoids'04] →
Tutelage and collaboration for humanoid robots
International Journal of Humanoid Robotics, 1(2), 315-348
Abstract
This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot’s ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people’s daily lives.
Brooks, A. G., et al. (2004). Robot’s play: interactive games with sociable machines [CIE'04] →
Robot’s play: interactive games with sociable machines
Computers in Entertainment, 2(3)
Hoffman, G., & Breazeal, C. (2004). Collaboration in Human-Robot Teams [AIAA'04] Best Session Paper pdf →
Collaboration in Human-Robot Teams
Proceedings of the 1st AIAA'04 Intelligent Systems Conference
Abstract
Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include—in the long term—robots for homes, hospitals, and offices, but already exist in more advanced settings, such as space exploration. The work reported in this paper is part of an ongoing collaboration with NASA JSC to develop Robonaut, a humanoid robot envisioned to work with human astronauts on maintenance operations for space missions. To date, work with Robonaut has mainly investigated performing a joint task with a human in which the robot is being teleoperated. However, perceptive disorientation, sensory noise, and control delays make teleoperation cognitively exhausting even for a highly skilled operator. Control delays in long range teleoperation also make shoulder-to-shoulder teamwork difficult. These issues motivate our work to make robots collaborating with people more autonomous.
Our work focuses on a scenario of a human and an autonomous humanoid robot working together shoulder-to-shoulder, sharing the workspace and the objects required to complete a task. A robotic member of such a team must be able to work towards a shared goal, and be in agreement with the human as to the sequence of actions that will be required to reach that goal, as well as dynamically adjust its plan according to the human’s actions. Human-robot collaboration of this nature is an important yet relatively unexplored kind of human-robot interaction.
This paper describes our work towards building a dynamic collaborative framework enabling such an interaction. We discuss our architecture and its implementation for controlling a humanoid robot, working on a task with a human partner. Our approach stems from Joint Intention Theory, which shows that for joint action to emerge, teammates must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. In addition, they must demonstrate commitment to doing their own part, to the others doing theirs, to providing mutual support, and finally—to a mutual belief as to the state of the task.
We argue that to this end, the concept of task and action goals is central. We therefore present a goal-driven hierarchical task representation, and a resulting collaborative turn-taking system, implementing many of the above-mentioned requirements of a robotic teammate. Additionally, we show the implementation of relevant social skills supporting our collaborative framework.
Finally, we present a demonstration of our system for collaborative execution of a hierarchical object manipulation task by a robot-human team. Our humanoid robot is able to divide the task between the participants while taking into consideration the collaborator’s actions when deciding what to do next. It is capable of asking for mutual support in the cases where it is unable to perform a certain action. To facilitate this interaction, the robot actively maintains a clear and intuitive channel of communication to synchronize goals, task states, and actions, resulting in a fluid, efficient collaboration.