As of 2017, this list is no longer maintained. Please see my Google Scholar page and group website for an up-to-date list of publications.
Vatsal, V., & Hoffman, G. (2017). Wearing your arm on your sleeve – Studying usage contexts for a wearable robotic forearm [RO-MAN'17] pdf
26th IEEE International Symposium on Robot and Human Interactive Communication
Alves-Oliveira, P., et al. (2017). YOLO, a Robot for Creativity: A Co-Design Study with Children [IDC'17]
2017 Conference on Interaction Design and Children
Abstract
This paper describes the design and development of YOLO, a social robot aimed at boosting creativity in children. Creativity is one of the most sought-after competencies as we move from industrialized economies, in which standardized knowledge was imperative, to creative economies, where the ability to innovate is crucial for the workforce. YOLO is a robot to be used by children as a tool to boost new ideas and stimulate their creativity. This paper describes how established educational strategies for enhancing creativity were combined with co-design sessions, with children as informants, to reach the prototype design of the robot.
Luria, M., Hoffman, G., & Zuckerman, O. (2017). Comparing Social Robot, Screen and Voice Interfaces for Smart-Home Control [CHI'17]
2017 CHI Conference on Human Factors in Computing Systems
Abstract
With domestic technology on the rise, the quantity and complexity of smart-home devices are becoming an important interaction design challenge. We present a novel design for a home control interface in the form of a social robot, commanded via tangible icons and giving feedback through expressive gestures. We experimentally compare the robot to three common smart-home interfaces: a voice-control loudspeaker; a wall-mounted touch-screen; and a mobile application. Our findings suggest that interfaces that rate higher on flow rate lower on usability, and vice versa. Participants’ sense of control is highest using familiar interfaces, and lowest using voice control. Situation awareness is highest using the robot, and also lowest using voice control. These findings raise questions about voice control as a smart-home interface, and suggest that embodied social robots could provide an engaging interface with high situation awareness, but also that their usability remains a considerable design challenge.
Forlizzi, J., et al. (2016). Let’s Be Honest – A Controlled Field Study of Ethical Behavior in the Presence of a Robot [RO-MAN'16]
Proceedings of the 25th International Symposium on Robot and Human Interactive Communication
Abstract
Human-robot collaboration will increasingly take place in human social settings, including contexts where ethical and honest behavior is paramount. How might these robots affect human honesty? In this paper, we present first evidence of how a robot’s presence affects people’s ethical behavior in a controlled field study. We observed people passing by a food plate marked as “reserved”, comparing three conditions: no observer, a human observer, and a robot observer. We found that a human observer elicits less attention than a robot, but evokes more of a socially normative presence, causing people to act honestly. Conversely, we found that a robot observer elicits more attention, engagement, and a monitoring presence. But even though people were suspicious that they were being monitored, they still behaved dishonestly in the robot observer condition.
Luria, M., et al. (2016). Designing Vyo, a Robotic Smart Home Assistant: Bridging the Gap Between Device and Social Agent [RO-MAN'16]
Proceedings of the 25th International Symposium on Robot and Human Interactive Communication
Abstract
We describe the design process of “Vyo”, a personal assistant serving as a centralized interface for smart home devices. Building on the concepts of ubiquitous and engaging computing in the domestic environment, we identified five design goals for the home robot: engaging, unobtrusive, device-like, respectful, and reassuring. These goals led our design process, which included simultaneous iterative development of the robot’s morphology, nonverbal behavior and interaction schemas. We continued with user-centered design research using puppet prototypes of the robot to assess and refine our design choices. The resulting robot, Vyo, straddles the boundary between a monitoring device and a socially expressive agent, and presents a number of novel design outcomes: The combination of TUI “phicons” with social robotics; gesture-related screen exposure; and a non-anthropomorphic monocular expressive face. We discuss how our design goals are expressed in the elements of the robot’s final design.
Birnbaum, G. E., et al. (2016). Machines as a source of consolation: Robot responsiveness increases approach behavior and desire for companionship [HRI'16]
Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Responsiveness to one’s bids for proximity in times of need is a linchpin of human interaction. As such, the ability to be perceived as responsive has design implications for socially assistive robotics. We report on a large-scale experimental laboratory study (n=102) examining robot responsiveness and its effects on human attitudes and behaviors. In one-on-one sessions, participants disclosed a personal event to a non-humanoid robot. The robot responded either responsively or unresponsively across two modalities: Simple gestures and written text. We replicated previous findings that the robot’s responsiveness increased perceptions of its appealing traits. In addition, we found that robot responsiveness increased nonverbal approach behaviors (physical proximity, leaning toward the robot, eye contact, smiling) and participants’ willingness to be accompanied by the robot during stressful events. These findings suggest that humans not only utilize responsiveness cues to ascribe social intentions to personal robots, but actually change their behavior towards responsive robots and may want to use such robots as a source of consolation.
Zuckerman, O., et al. (2016). KIP3: Robotic Companion as an External Cue to Students with ADHD [TEI'16]
Extended Abstracts of the 10th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
We present the design and initial evaluation of Kip3, a social robotic device for students with ADHD that provides immediate feedback for inattention or impulsivity events. We designed a research platform comprised of a tablet-based Continuous Performance Test (CPT) that is used to assess inattention and impulsivity, and a socially expressive robotic device (Kip3) as feedback. We evaluated our platform with 10 students with ADHD in a within-subject user study, and report that 9 out of 10 participants felt that Kip3 helped them regain focus, but wondered whether it would be effective over time and how it would identify inattention in more complex situations outside the lab.
Hoffman, G., et al. (2015). Design and Evaluation of a Peripheral Robotic Conversation Companion [HRI'15] Best Paper pdf
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We present the design, implementation, and evaluation of a peripheral empathy-evoking robotic conversation companion, Kip1. The robot’s function is to increase people’s awareness of the effect of their verbal behavior towards others, potentially leading to behavior change. Specifically, Kip1 is designed to promote non-aggressive conversation between people. It monitors the conversation’s nonverbal aspects and maintains an emotional model of its reaction to the conversation. If the conversation seems calm, Kip1 responds with a gesture designed to communicate curious interest. If the conversation seems aggressive, Kip1 responds with a gesture designed to communicate fear. We describe the design process of Kip1, guided by the design principles of being “peripheral” and “evocative”. We detail its hardware and software systems, and a study evaluating the effects of the robot’s autonomous behavior on couples’ conversations. Participant couples were guided to conduct a conflict conversation with the robot present, in one of two conditions: the robot reacting or not reacting to their conversation. We find support for our design goals. A responsive conversation companion leads to more gaze attention, but not more verbal distraction. This suggests that robotic devices could be designed as companions to human-human interaction without compromising the natural communication flow between people. Participants also rated the reacting robot as having significantly more social human character traits and as being significantly more similar to them. This points to the robot’s potential to elicit people’s empathy.
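As a minimal sketch of the kind of reactive emotional model described above (illustrative only, not the paper's implementation): a smoothed aggression estimate drives the robot toward a "fear" or "curious interest" gesture.

```python
# Illustrative Kip1-style reactive emotional model. The feature source,
# thresholds, and gesture names are assumptions, not the published system.

class ConversationCompanion:
    def __init__(self, alpha=0.9, fear_threshold=0.6, calm_threshold=0.3):
        self.alpha = alpha              # smoothing factor for the emotional state
        self.arousal = 0.0              # 0 = calm, 1 = frightened
        self.fear_threshold = fear_threshold
        self.calm_threshold = calm_threshold

    def update(self, aggression_level):
        """Blend a new nonverbal aggression estimate (0..1) into the state."""
        self.arousal = self.alpha * self.arousal + (1 - self.alpha) * aggression_level
        if self.arousal > self.fear_threshold:
            return "cower"        # gesture communicating fear
        if self.arousal < self.calm_threshold:
            return "perk_up"      # gesture communicating curious interest
        return "idle"

companion = ConversationCompanion()
for level in [0.1, 0.2, 0.9, 0.95, 0.9, 0.2]:   # e.g., loudness-based estimates per window
    print(companion.update(level))
```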
Slyper, R., Hoffman, G., & Shamir, A. (2015). Mirror Puppeteering: Animating Toy Robots in Front of a Webcam [TEI'15] pdf
Proceedings of the 9th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
Mirror Puppeteering is a system for easily creating gestures (“animations”) for robotic toys, custom robots, and virtual characters. Lay users can record animations by simply moving a robot’s limbs in front of a webcam. Makers and hobbyists can use the system to easily set up their custom-built robots for animation. Gamers and amateur animators can control virtual characters in real time or save animations for them. Our system works by tracking circular markers on the robot’s surface and translating these into motor commands, using a calibration map between marker locations in camera space and motor angles. New robots can be quickly set up for Mirror Puppeteering without knowledge of the robot’s 3D structure, as we demonstrate on several robots. In a user study, participants found our method more enjoyable, usable, easy to learn, and successful than traditional animation methods.
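The calibration-map idea lends itself to a short sketch: given recorded pairs of marker positions (camera space) and motor angles, a new marker position is translated into a motor command by interpolation. Names and values below are hypothetical, not the paper's code.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical calibration samples, collected by sweeping one motor while
# tracking its circular marker: (x, y) in pixels -> motor angle in degrees.
marker_px = np.array([[120, 310], [160, 295], [205, 288], [250, 296], [290, 315]], float)
angles_deg = np.array([-40.0, -20.0, 0.0, 20.0, 40.0])

def angle_for_marker(x, y):
    """Translate a tracked marker position into a motor command."""
    q = np.array([[x, y]], float)
    a = griddata(marker_px, angles_deg, q, method="linear")[0]
    if np.isnan(a):  # outside the calibrated region: snap to the nearest sample
        a = griddata(marker_px, angles_deg, q, method="nearest")[0]
    return float(a)

# Each webcam frame: track markers, map them to angles, and record or send
# the resulting motor commands.
print(angle_for_marker(180, 300))
```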
Hoffman, G., et al. (2015). Robot Presence and Human Honesty: Experimental Evidence [HRI'15] pdf
Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
Robots are predicted to serve in environments in which human honesty is important, such as the workplace, schools, and public institutions. Can the presence of a robot facilitate honest behavior? In this paper, we describe an experimental study evaluating the effects of robot social presence on people’s honesty. Participants complete a perceptual task, which is structured so as to allow them to earn more money by not complying with the experiment instructions. We compare three conditions between subjects: Completing the task alone in a room; completing it with a non-monitoring human present; and completing it with a non-monitoring robot present. The robot is a new expressive social head capable of 4-DoF head movement and screen-based eye animation, specifically designed and built for this research. It was designed to convey social presence, but not monitoring. We find that people cheat in all three conditions, but cheat less, to a similar extent, when either a human or a robot is in the room, compared to when they are alone. We did not find differences in the perceived authority of the human and the robot, but did find that people felt significantly less guilty after cheating in the presence of a robot as compared to a human. This has implications for the use of robots in monitoring and supervising tasks in environments in which honesty is key.
Zuckerman, O., & Hoffman, G. (2015). Empathy Objects: Robotic Devices as Conversation Companions [TEI'15] pdf
Extended Abstracts of the 9th International Conference on Tangible, Embedded and Embodied Interaction
Abstract
We present the notion of Empathy Objects, ambient robotic devices accompanying human-human interaction. Empathy Objects respond to human behavior using physical gestures as nonverbal expressions of their “emotional states”. The goal is to increase people’s awareness of the emotional state of others, leading to behavior change. We demonstrate an Empathy Object prototype, Kip1, a conversation companion designed to promote non-aggressive conversation between people.
Hoffman, G., et al. (2014). Robot responsiveness to human disclosure affects social impression and appeal [HRI'14] pdf
Proceedings of the 9th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
In human relationships, responsiveness—behaving in a sensitive manner that is supportive of another person’s needs—plays a major role in any interaction that involves effective communication, caregiving, and social support. Perceiving one’s partner as responsive has been tied to both personal and relationship well-being. In this work, we examine whether and how a robot’s behavior can instill a sense of responsiveness, and the effects of a robot’s perceived responsiveness on the human’s perception of the robot. In an experimental between-subject study (n=34), a desktop non-humanoid robot performed either positive or negative responsiveness behaviors across two modalities (simple gestures and written text) in response to the participant’s disclosure of a negative event. We found that perceived partner responsiveness, positive human-like traits, and robot attractiveness were higher in the positively responsive condition. This has design implications for interactive robots, in particular for robots in caregiving roles.
Hoffman, G., et al. (2013). In-car game design for children: child vs. parent perspective [IDC'13]
Proceedings of the 12th International Conference on Interaction Design and Children
Abstract
Family car rides can become a source of boredom for child passengers, and consequently cause tension inside the car. In an attempt to overcome this problem, we developed Mileys—a novel in-car game that integrates location-based information, augmented reality and virtual characters. It is aimed to make car rides more interesting for child passengers, strengthen the bond between family members, encourage safe and ecological driving, and connect children with their environment instead of their entertainment devices. We evaluated Mileys with a six-week long field study, which revealed differences between children and parents regarding their desired in-car experience. Children wish to play enjoyable games, whereas parents view car rides as an opportunity for strengthening the bond between family members and for educating their children. Based on our findings, we identify five key challenges for in-car game design for children: different expectations by parents and children, undesired detachment, short interaction span, poor GPS reception, and motion sickness.
Hoffman, G., & Vanunu, K. (2013). Effects of Robotic Companionship on Music Enjoyment and Agent Perception [HRI'13] pdf
Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We evaluate the effects of robotic listening companionship on people’s enjoyment of music, and on their perception of the robot. We present a robotic speaker device designed for joint listening and embodied performance of the music played on it. The robot generates smoothed real-time beat-synchronized dance moves, uses nonverbal gestures for common ground, and can make and maintain eye contact.
In an experimental between-subject study (n=67), participants listened to songs played on the speaker device, with the robot either moving in sync with the beat, moving off-beat, or not moving at all. We found that while participants did not consciously detect the robot’s beat precision, an on-beat robot positively affected song liking. There was no effect on overall experience enjoyment. In addition, the robot’s response caused participants to attribute more positive human-like traits to the robot, as well as rate the robot as more similar to themselves. Notably, personal listening habits (solitary vs. social) affected agent attributions.
This work points to a larger question, namely how a robot’s perceived response to an event might affect a human’s perception of the same event.
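As a sketch of what "smoothed beat-synchronized" can mean in practice, the snippet below drives a bob motion that peaks on each beat, with an exponential filter on the commanded pose. The BPM, gains, and pose model are illustrative assumptions, not the system's code.

```python
import math

BPM = 120.0
PERIOD = 60.0 / BPM          # seconds per beat (assumed tempo)
SMOOTHING = 0.25             # 0..1, higher = snappier motion

def target_pose(t):
    """Raised-cosine bob that peaks exactly on each beat."""
    phase = (t % PERIOD) / PERIOD            # 0..1 within the beat
    return 0.5 * (1.0 + math.cos(2.0 * math.pi * phase))

pose = 0.0
for step in range(40):                        # 50 ms control ticks
    t = step * 0.05
    pose += SMOOTHING * (target_pose(t) - pose)   # smoothed motor command
    print(f"t={t:.2f}s pose={pose:.2f}")
```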
Hoffman, G. (2012). Dumb Robots, Smart Phones: a Case Study of Music Listening Companionship [RO-MAN'12] Best Paper Nomination pdf
Proceedings of the 21st International Symposium on Robot and Human Interactive Communication
Abstract
Combining high-performance, sensor-rich mobile devices with simple, low-cost robotic platforms could accelerate the adoption of personal robotics in real-world environments. We present a case study of this “dumb robot, smart phone” paradigm: a robotic speaker dock and music listening companion. The robot is designed to enhance a human’s listening experience by providing social presence and embodied musical performance. In its initial application, it generates segment-specific, beat-synchronized gestures based on the song’s genre, and maintains eye-contact with the user. All of the robot’s computation, sensing, and high-level motion control is performed on a smartphone, with the rest of the robot’s parts handling mechanics and actuator bridging.
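One way to picture the "dumb robot, smart phone" split: the phone runs all sensing and motion planning and streams low-level motor targets to a simple bridge board. The serial port, baud rate, and message format below are hypothetical illustrations, not the robot's actual interface.

```python
# Hypothetical phone-side loop: all computation happens here; the robot's
# electronics only relay positions to the servos. Requires pyserial and a
# connected bridge board; names and format are assumptions.
import serial

link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

def send_pose(pan_deg, tilt_deg):
    """Stream one motor target; the bridge board just writes it to the servos."""
    link.write(f"P{pan_deg:.1f},T{tilt_deg:.1f}\n".encode())

# e.g., called from the beat tracker / gesture generator at each control tick:
send_pose(12.0, -5.0)
```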
Hoffman, G., & Weinberg, G. (2010). Synchronization in Human-Robot Musicianship [RO-MAN'10]
Proceedings of the 19th International Symposium on Robot and Human Interactive Communication
Abstract
Shimon is an interactive robotic marimba player, developed as part of our ongoing research in Robotic Musicianship (RM). One of the potential benefits of RM is that it provides human players with embodied information that relates spatial movement to tone generation. This can aid in anticipation and coordination of synchronous playing.
As part of a human-robot Jazz improvisation system, we present an anticipatory system enabling beat-matched real-time synchronization. Our system enables flexible, yet coordinated call-and-response, a standard type of musical interaction. It was used in a live public human-robot joint Jazz performance.
We also describe a preliminary study evaluating the effect of embodiment on this call-and-response musical synchronization task. We conducted a 3×2 within-subject study manipulating the level of embodiment (visual co-presence, physical presence but visual occlusion, and synthesized sound), and the accuracy of the robot’s response.
Our findings indicate that synchronization is aided by visual contact when uncertainty is high, but that pianists can resort to internal rhythmic coordination in more predictable settings. We find that visual coordination is more effective for synchronization in slow sequences than in faster ones, and that occluded physical presence may be less effective than audio-only note generation.
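A minimal sketch of the anticipation idea: estimate the beat period from recent onset times and begin the stroke one actuation latency ahead of the predicted beat. The latency figure and function names are illustrative assumptions, not Shimon's implementation.

```python
# Illustrative anticipatory beat matching: predict the next beat from recent
# inter-onset intervals and start acting early to absorb actuation latency.

ACTUATION_LATENCY = 0.12   # assumed seconds the arm needs to complete a stroke

def predict_next_beat(onsets):
    """Average the recent inter-onset intervals to extrapolate the next beat."""
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    period = sum(intervals) / len(intervals)
    return onsets[-1] + period

onsets = [0.00, 0.51, 1.00, 1.52, 2.01]       # detected note onsets (s)
next_beat = predict_next_beat(onsets)
start_stroke_at = next_beat - ACTUATION_LATENCY
print(f"next beat ~{next_beat:.2f}s, begin stroke at {start_stroke_at:.2f}s")
```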
Hoffman, G., & Weinberg, G. (2010). Gesture-based Human-Robot Jazz Improvisation [ICRA'10] Best Paper
Proceedings of the IEEE International Conference on Robotics and Automation
Abstract
We present Shimon, an interactive improvisational robotic marimba player, developed for research in Robotic Musicianship. The robot listens to a human musician and continuously adapts its improvisation and choreography, while playing simultaneously with the human. We discuss the robot’s mechanism and motion-control, which uses physics simulation and animation principles to achieve both expressivity and safety. We then present a novel interactive improvisation system based on the notion of gestures for both musical and visual expression. The system also uses anticipatory beat-matched action to enable real-time synchronization with the human player.
Our system was implemented in a full-length human-robot Jazz duet, displaying highly coordinated melodic and rhythmic human-robot joint improvisation. We have performed with the system in front of a live public audience.
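The "animation principles" point can be illustrated with ease-in/ease-out trajectory shaping, which keeps strokes expressive while bounding accelerations. The profile below is a generic sketch under assumed durations, not the robot's controller.

```python
# Generic ease-in/ease-out (smoothstep) trajectory between two arm positions,
# in the spirit of animation-principles motion control.

def smoothstep(u):
    """Cubic ease-in/ease-out: zero velocity at both endpoints."""
    return u * u * (3.0 - 2.0 * u)

def trajectory(start, goal, duration, dt=0.02):
    steps = int(duration / dt)
    return [start + (goal - start) * smoothstep(i / steps) for i in range(steps + 1)]

# Move a mallet axis from 0.10 to 0.45 rad over 0.3 s, with gentle endpoints:
for q in trajectory(0.10, 0.45, 0.3):
    pass  # send q to the motor controller each tick
```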
Hoffman, G., Kubat, R., & Breazeal, C. (2008). A Hybrid Control System for Puppeteering a Live Robotic Stage Actor [RO-MAN'08] Best Paper
Proceedings of the 17th International Symposium on Robot and Human Interactive Communication
Abstract
This paper describes a robotic puppeteering system used in a theatrical production involving one robot and two human performers on stage. We draw from acting theory and human-robot interaction to develop a hybrid-control puppeteering interface which combines reactive expressive gestures and parametric behaviors with a point-of-view eye contact module. Our design addresses two core considerations: allowing a single operator to puppeteer the robot’s full range of behaviors, and allowing for gradual replacement of human-controlled modules by autonomous subsystems.
We wrote a play specifically for a performance between two humans and one of our research robots, a robotic lamp which embodied a lead role in the play. We staged three performances with the robot as part of a local festival of new plays. Though we have yet to perform a formal statistical evaluation of the system, we interviewed the actors and director and present their feedback about working with the system.
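A rough sketch of the hybrid-control idea: an autonomous module (here, eye contact) runs every tick, operator-triggered gestures override it, and a parametric layer modulates intensity. The structure, gesture names, and scaling rule are hypothetical, not the production system.

```python
class Puppeteer:
    """Hypothetical hybrid controller: operator gestures override an
    autonomous eye-contact module; a parametric 'energy' level scales motion."""

    GESTURES = {"nod": (0.0, -15.0), "recoil": (20.0, 10.0)}  # pan, tilt (deg)

    def __init__(self):
        self.pending_gesture = None

    def trigger(self, name):
        self.pending_gesture = self.GESTURES[name]   # pressed on the operator console

    def tick(self, actor_pan, actor_tilt, energy=0.5):
        if self.pending_gesture is not None:          # reactive expressive gesture
            pan, tilt = self.pending_gesture
            self.pending_gesture = None
        else:                                         # point-of-view eye-contact module
            pan, tilt = actor_pan, actor_tilt         # aim the lamp at the actor
        return pan * energy, tilt * energy            # parametric intensity scaling

p = Puppeteer()
print(p.tick(30.0, -5.0))      # autonomous eye contact
p.trigger("nod")
print(p.tick(30.0, -5.0))      # operator-triggered gesture takes over
```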
Hoffman, G., & Breazeal, C. (2008). Anticipatory Perceptual Simulation for Human-Robot Joint Practice: Theory and Application Study [AAAI'08]
Proceedings of the 23rd AAAI Conference on Artificial Intelligence
Abstract
We have developed a cognitive architecture for a robotic teammate based on the neuro-psychological principles of anticipation and perceptual simulation, with the aim of attaining increased fluency and efficiency in human-robot teams. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp, performing in a human-robot collaborative task.
In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human increasingly contribute at a similar rate, and we find significant differences in a number of self-report measures and verbal attitudes towards the robot. Notably, we identify increased self-deprecation in human subject responses vis-à-vis the anticipatory robot.
Hoffman, G., & Breazeal, C. (2008). Achieving fluency through perceptual-symbol practice in human-robot collaboration [HRI'08]
Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
Abstract
We have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of perceptual symbols and simulation, with the aim of attaining increased fluency in human-robot teams. An instantiation of this architecture was implemented on a robotic desk lamp, performing in a human-robot collaborative task. This paper describes initial results from a human-subject study measuring team efficiency and team fluency, in which the robot works on a joint task with untrained subjects. We find significant differences in a number of efficiency and fluency metrics, when comparing our architecture to a purely reactive robot with similar capabilities.
Hoffman, G., & Breazeal, C. (2007). Effects of Anticipatory Action on Human-Robot Teamwork: Efficiency, Fluency, and Perception of Team [HRI'07] Best Student Paper
Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction
Abstract
A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts. In this work, we propose an adaptive action selection mechanism for a robotic teammate, making anticipatory decisions based on the confidence of their validity and their relative risk. We predict an improvement in task efficiency and fluency compared to a purely reactive process.
We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we propose a number of fluency metrics that differ significantly between the two study groups.
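The selection rule described above can be sketched as an expected-cost comparison: act early when confidence in the predicted next step outweighs the risk of being wrong. The cost terms and threshold form are illustrative assumptions, not the paper's exact mechanism.

```python
# Illustrative anticipatory action selection: weigh the time saved by acting
# early against the cost of acting on a wrong prediction.

def should_anticipate(confidence, time_saved, redo_cost):
    """Act early iff the expected gain beats the expected loss."""
    expected_gain = confidence * time_saved
    expected_loss = (1.0 - confidence) * redo_cost
    return expected_gain > expected_loss

# e.g., 80% confident the human will need the red tool next; fetching early
# saves 3 s, but a wrong fetch costs 5 s to undo:
print(should_anticipate(confidence=0.8, time_saved=3.0, redo_cost=5.0))  # True
```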
Hoffman, G. (2006). Time Bracketing [DIME'06] pdf
Proceedings of the 1st International Conference on Digital Interactive Media in Entertainment
Abstract
Time Bracketing is a novel technique to photographically depict a time-varying space or object in a single image without spatial distortion or fragmentation. The approach attempts to strike a balance between procedural generation and manual composition as well as between faithful depiction and digital manipulation. It also enables the clear representation of both dynamic and static scene elements. As a result, Time Bracketing puts the subject, rather than the technique, in the center of its artistic creation. This paper introduces the method, presents custom-built authoring software for the creation of Time Bracketing images, and shows architectural studies created using the described technique and software.
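One way to read the technique computationally: a single output image is assembled from regions of a time-lapse stack, each region drawn from a chosen moment in time. The sketch below assumes NumPy arrays and rectangular regions; the actual authoring tool and region model are the paper's own.

```python
import numpy as np

# Illustrative compositing core: copy chosen rectangular regions from
# different frames of a time-lapse stack into one output image.

def time_bracket(frames, regions):
    """frames: list of HxWx3 arrays (one per moment in time).
    regions: list of (frame_index, x0, y0, x1, y1) assignments."""
    out = frames[0].copy()                       # static backdrop from frame 0
    for idx, x0, y0, x1, y1 in regions:
        out[y0:y1, x0:x1] = frames[idx][y0:y1, x0:x1]
    return out

stack = [np.full((240, 320, 3), v, np.uint8) for v in (40, 120, 200)]
composite = time_bracket(stack, [(1, 0, 0, 160, 240), (2, 160, 0, 320, 240)])
print(composite.shape)
```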
Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Reinforcement Learning with Human Teachers: Understanding How People Want to Teach Robots [RO-MAN'06]
Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication
Abstract
While Reinforcement Learning (RL) is not traditionally designed for interactive supervisory input from a human teacher, several works in both robot and software agents have adapted it for human input by letting a human trainer control the reward signal. In this work, we experimentally examine the assumption underlying these works, namely that the human-given reward is compatible with the traditional RL reward signal. We describe an experimental platform with a simulated RL robot and present an analysis of real-time human teaching behavior found in a study in which untrained subjects taught the robot to perform a new task.
We report three main observations on how people administer feedback when teaching a robot a task through Reinforcement Learning: (a) they use the reward channel not only for feedback, but also for future-directed guidance; (b) they have a positive bias to their feedback — possibly using the signal as a motivational channel; and (c) they change their behavior as they develop a mental model of the robotic learner. In conclusion, we discuss future extensions to RL to accommodate these lessons.
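For concreteness, here is a minimal sketch of the interactive-RL setup these observations concern: a tabular Q-learner whose reward signal is supplied by a human trainer at each step. States, actions, and parameters are placeholders, not the paper's simulated-robot platform.

```python
import random

# Minimal tabular Q-learning loop with a human-supplied reward signal.

ACTIONS = ["left", "right", "pick_up", "put_down"]
Q = {}   # (state, action) -> value
ALPHA, GAMMA, EPSILON = 0.3, 0.9, 0.1

def choose(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def learn(state, action, human_reward, next_state):
    """Standard Q update, except the reward is entered by the human trainer."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (human_reward + GAMMA * best_next - old)

# One interaction step: the robot acts, the trainer scores it (-1..1):
s = "at_shelf"
a = choose(s)
learn(s, a, human_reward=0.8, next_state="holding_item")
print(a, Q)
```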
Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Experiments in Socially Guided Machine Learning: Understanding Human Intent of Reward/Punishment [HRI'06] Best Student Short Paper
Proceedings of the 1st ACM/IEEE International Conference on Human-Robot Interaction
Abstract
In Socially Guided Machine Learning we explore the ways in which machine learning can more fully take advantage of natural human interaction. In this paper we study the role that real-time human interaction plays in training assistive robots to perform new tasks. We describe an experimental platform, Sophie’s World, and present descriptive analysis of human teaching behavior found in a user study. We report three important observations of how people administer reward and punishment to teach a simulated robot a new task through Reinforcement Learning. People adjust their behavior as they develop a model of the learner, they use the reward channel for guidance as well as feedback, and they may also use it as a motivational channel.
Hoffman, G., & Breazeal, C. (2004). Collaboration in Human-Robot Teams [AIAA'04] Best Session Paper pdf
Proceedings of the 1st AIAA Intelligent Systems Conference
Abstract
Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include—in the long term—robots for homes, hospitals, and offices, but already exist in more advanced settings, such as space exploration. The work reported in this paper is part of an ongoing collaboration with NASA JSC to develop Robonaut, a humanoid robot envisioned to work with human astronauts on maintenance operations for space missions. To date, work with Robonaut has mainly investigated performing a joint task with a human in which the robot is being teleoperated. However, perceptual disorientation, sensory noise, and control delays make teleoperation cognitively exhausting even for a highly skilled operator. Control delays in long range teleoperation also make shoulder-to-shoulder teamwork difficult. These issues motivate our work to make robots collaborating with people more autonomous.
Our work focuses on a scenario of a human and an autonomous humanoid robot working together shoulder-to-shoulder, sharing the workspace and the objects required to complete a task. A robotic member of such a team must be able to work towards a shared goal, and be in agreement with the human as to the sequence of actions that will be required to reach that goal, as well as dynamically adjust its plan according to the human’s actions. Human-robot collaboration of this nature is an important yet relatively unexplored kind of human-robot interaction.
This paper describes our work towards building a dynamic collaborative framework enabling such an interaction. We discuss our architecture and its implementation for controlling a humanoid robot, working on a task with a human partner. Our approach stems from Joint Intention Theory, which shows that for joint action to emerge, teammates must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. In addition, they must demonstrate commitment to doing their own part, to the others doing theirs, to providing mutual support, and finally—to a mutual belief as to the state of the task.
We argue that to this end, the concept of task and action goals is central. We therefore present a goal-driven hierarchical task representation, and a resulting collaborative turn-taking system, implementing many of the above-mentioned requirements of a robotic teammate. Additionally, we show the implementation of relevant social skills supporting our collaborative framework.
Finally, we present a demonstration of our system for collaborative execution of a hierarchical object manipulation task by a robot-human team. Our humanoid robot is able to divide the task between the participants while taking into consideration the collaborator’s actions when deciding what to do next. It is capable of asking for mutual support in the cases where it is unable to perform a certain action. To facilitate this interaction, the robot actively maintains a clear and intuitive channel of communication to synchronize goals, task states, and actions, resulting in a fluid, efficient collaboration.
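The goal-driven hierarchical task representation mentioned above can be pictured as a tree of goal nodes with assignees, traversed to pick each teammate's next action. The sketch below is a hypothetical illustration of that idea; field names and the turn-taking policy are assumptions, not the Robonaut system's code.

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    goal: str                         # the world state this node achieves
    subtasks: list = field(default_factory=list)
    assignee: str = "either"          # "robot", "human", or "either"
    done: bool = False

    def next_action(self, actor):
        """Depth-first: return the first unfinished leaf this actor may take."""
        if self.done:
            return None
        if not self.subtasks:
            return self if self.assignee in (actor, "either") else None
        for sub in self.subtasks:
            act = sub.next_action(actor)
            if act:
                return act
        return None

task = TaskNode("panel assembled", [
    TaskNode("beam fetched", assignee="human"),
    TaskNode("beam bolted", assignee="robot"),
])
print(task.next_action("robot").goal)   # -> "beam bolted"
```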