Papers in Peer-Reviewed Journals

This list has not been updated since 2017. Please see my Google Scholar page and group website for a current list of publications.

2016

Thomaz, A.L., Hoffman, G., & Cakmak, M. (2016). Computational Human-Robot Interaction. Foundations and Trends® in Robotics, 4(2-3). [FnT-ROB]

Abstract

We present a systematic survey of computational research in human-robot interaction (HRI) over the past decade. Computational HRI is the subset of the field that is specifically concerned with the algorithms, techniques, models, and frameworks necessary to build robotic systems that engage in social interactions with humans. Within the field of robotics, HRI poses distinct computational challenges in each of the traditional core research areas: perception, manipulation, planning, task execution, navigation, and learning. These challenges are addressed by the research literature surveyed here. We surveyed twelve publication venues and include work that tackles computational HRI challenges, categorized into eight topics: (a) perceiving humans and their activities; (b) generating and understanding verbal expression; (c) generating and understanding nonverbal behaviors; (d) modeling, expressing, and understanding emotional states; (e) recognizing and conveying intentional action; (f) collaborating with humans; (g) navigating with and around humans; and (h) learning from humans in a social manner. For each topic, we suggest promising future research areas.

Birnbaum, G. E., Mizrahi, M., Hoffman, G., Reis, H. T., Finkel, E. J., & Sass, O. (2016). What robots can teach us about intimacy: The reassuring effects of robot responsiveness to human disclosure. Computers in Human Behavior, 63. [CHB'16]

Abstract

Perceiving another person as responsive to one’s needs is inherent to the formation of attachment bonds and is the foundation for safe-haven and secure-base processes. Two studies examined whether such processes also apply to interactions with robots. In both studies, participants had one-at-a-time sessions, in which they disclosed a personal event to a non-humanoid robot that responded either responsively or unresponsively across two modalities (gestures, text). Study 1 showed that a robot’s responsiveness increased perceptions of its appealing traits, approach behaviors towards the robot, and the willingness to use it as a companion in stressful situations. Study 2 found that in addition to producing similar reactions in a different context, interacting with a responsive robot improved self-perceptions during a subsequent stress-generating task. These findings suggest that humans not only utilize responsiveness cues to ascribe social intentions to robots, but can actually use them as a source of consolation and security.

Hoffman, G., Bauman, S., & Vanunu, K. (2016). Robotic Experience Companionship in Music Listening and Video Watching. Personal and Ubiquitous Computing, 20(1), 51–63. [PUC'16]

Abstract

We propose the notion of Robotic Experience Companionship (REC): a person’s sense of sharing an experience with a robot. Does a robot’s presence and response to a situation affect a human’s understanding of the situation and of the robot, even without direct human-robot interaction? We present the first experimental assessment of REC, studying people’s experience of entertainment media as they share it with a robot. Both studies use an autonomous custom-designed desktop robot capable of performing gestures synchronized to the media. Study I (n=67), examining music listening companionship, finds that the robot’s dance-like response to music causes participants to feel that the robot is co-listening with them, and increases their liking of songs. The robot’s response also increases its perceived human character traits. We find REC to be moderated by music listening habits, such that social listeners were more affected by the robot’s response. Study II (n=91), examining video watching companionship, supports these findings, demonstrating that social video viewers enjoy the experience more with the robot present, while habitually solitary viewers do not. Also in line with Study I, the robot’s response to a video clip causes people to attribute more positive human character traits to the robot. This has implications for robots as companions for digital media consumption, but also suggests design implications based on REC for other shared experiences with personal robots.

2015

Bretan, M., Hoffman, G., & Weinberg, G. (2015). Emotionally Expressive Dynamic Physical Behaviors in Robots. International Journal of Human-Computer Studies, 78. [J-HCS'15]

Abstract

For social robots to respond to humans in an appropriate manner, they need to use apt affect displays, revealing underlying emotional intelligence. We present an artificial emotional intelligence system for robots, with both a generative and a perceptual aspect. On the generative side, we explore the expressive capabilities of an abstract, faceless, creature-like robot with very few degrees of freedom, lacking both facial expressions and the complex humanoid design often found in emotionally expressive robots. We validate our system in a series of experiments: in one study, we find an advantage in classification for animated vs. static affect expressions, and advantages in valence and arousal estimation and personal preference ratings for both animated vs. static and physical vs. on-screen expressions. In a second experiment, we show that our parametrically generated expression variables correlate with the intended user affect perception. On the perceptual side, we present a new corpus of sentiment-tagged social media posts for training the robot to perceive affect in natural language. In a third experiment, we estimate how well the corpus generalizes to an independent data set through cross-validation using a perceptron, and demonstrate that the predictive model is comparable to other sentiment-tagged corpora and classifiers. Combining the perceptual and generative systems, we show in a fourth experiment that our automatically generated affect responses cause participants to show signs of increased engagement and enjoyment compared with arbitrarily chosen comparable motion parameters.
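
As an illustration of the generative side, the sketch below shows one way a two-dimensional affect state (valence, arousal) could parameterize a robot's motion. The mapping, names, and coefficients are hypothetical, chosen only to make the idea concrete; they are not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class GestureParams:
    amplitude: float   # radians of joint travel
    speed: float       # gesture cycles per second
    smoothness: float  # 0 = jerky, 1 = fully eased motion

def affect_to_gesture(valence: float, arousal: float) -> GestureParams:
    """Map an affect state (valence, arousal, each in [-1, 1]) to motion
    parameters. Hypothetical mapping: higher arousal produces larger,
    faster movement; higher valence produces smoother, more eased motion."""
    amplitude = 0.2 + 0.6 * (arousal + 1) / 2    # 0.2 .. 0.8 rad
    speed = 0.5 + 1.5 * (arousal + 1) / 2        # 0.5 .. 2.0 Hz
    smoothness = 0.3 + 0.7 * (valence + 1) / 2   # 0.3 .. 1.0
    return GestureParams(amplitude, speed, smoothness)

print(affect_to_gesture(valence=-0.8, arousal=0.9))  # agitated: big, fast, tense
print(affect_to_gesture(valence=0.7, arousal=-0.5))  # content: small, slow, eased
```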

Zuckerman, O., Hoffman, G., & Gal-Oz, A. (2015). In-car game design for children: Promoting interactions inside and outside the car. International Journal of Child-Computer Interaction, in press. [J-CCI'15]

Abstract

Long car rides can become a source of boredom for children, consequently causing tension inside the car. Common solutions to boredom include entertainment devices suitable for in-car use. Such devices often disengage children from other family members inside the car, as well as from the outside world. We set out to create a novel in-car game that connects children with their family and their environment, instead of only their entertainment devices. The game, called Mileys, integrates location-based information, augmented reality, and virtual characters. We developed Mileys in an iterative process: findings from the first round of prototyping and evaluation guided the design of a second-generation prototype and led to additional evaluations. In this paper we discuss lessons learned during the development and evaluation of Mileys, present current challenges for location-based in-car game design, and suggest potential solutions for promoting interactions inside and outside the car.

2014

Hoffman, G., & Ju, W. (2014). Designing Robots with Movement in Mind. Journal of Human-Robot Interaction, 3(1), 89–122. [J-HRI'14]

Abstract

This paper makes the case for designing interactive robots with their expressive movement in mind. As people are highly sensitive to physical movement and spatiotemporal affordances, well-designed robot motion can communicate, engage, and offer dynamic possibilities beyond the machines’ surface appearance or pragmatic motion paths. We present techniques for movement-centric design, including character animation sketches, video prototyping, interactive movement explorations, Wizard of Oz studies, and skeletal prototypes. To illustrate our design approach, we discuss four case studies: a social head for a robotic musician, a robotic speaker dock listening companion, a desktop telepresence robot, and a service robot performing assistive and communicative tasks. We then relate our approach to the design of non-anthropomorphic robots and robotic objects, a design strategy that could increase the feasibility of real-world human-robot interaction.

2012

Hoffman, G. (2012). Embodied Cognition for Autonomous Interactive Robots. Topics in Cognitive Science, 4(4), 759–772. [TopiCS'12]

Abstract

In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior.

This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human–robot interaction based on recent psychological and neurological findings.
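
As a rough sketch of principle (c), the snippet below biases bottom-up perceptual scores with expectations produced by an internal simulation. The linear blend and all names are illustrative assumptions, not the framework's actual implementation.

```python
def biased_perception(likelihoods, expectations, bias_strength=0.5):
    """Blend bottom-up evidence with top-down expectations.

    likelihoods:   dict label -> bottom-up score from the perceptual system
    expectations:  dict label -> weight produced by internal simulation
    bias_strength: 0 = purely bottom-up, 1 = expectations dominate
    Illustrative stand-in, not the paper's implementation.
    """
    labels = set(likelihoods) | set(expectations)
    combined = {
        label: (1 - bias_strength) * likelihoods.get(label, 0.0)
               + bias_strength * expectations.get(label, 0.0)
        for label in labels
    }
    total = sum(combined.values()) or 1.0  # guard against all-zero input
    return {label: score / total for label, score in combined.items()}

# The simulation anticipates a handover, so ambiguous evidence resolves toward it.
print(biased_perception(
    likelihoods={"handover": 0.4, "wave": 0.4, "idle": 0.2},
    expectations={"handover": 0.8, "wave": 0.1, "idle": 0.1},
))
```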

2011

Hoffman, G., & Weinberg, G. (2011). Interactive Improvisation with a Robotic Marimba Player. Autonomous Robots, 31(2-3), 133–153. [AU-RO'11]

Abstract

Shimon is an interactive robotic marimba player, developed as part of our ongoing research in Robotic Musicianship. The robot listens to a human musician and continuously adapts its improvisation and choreography while playing simultaneously with the human. We discuss the robot’s mechanism and motion control, which uses physics simulation and animation principles to achieve both expressivity and safety. We then present an interactive improvisation system based on the notion of physical gestures for both musical and visual expression. The system also uses anticipatory action to enable real-time improvised synchronization with the human player.

We describe a study evaluating the effect of embodiment on one of our improvisation modules: antiphony, a call-and-response musical synchronization task. We conducted a 3×2 within-subject study manipulating the level of embodiment, and the accuracy of the robot’s response. Our findings indicate that synchronization is aided by visual contact when uncertainty is high, but that pianists can resort to internal rhythmic coordination in more predictable settings. We find that visual coordination is more effective for synchronization in slow sequences; and that occluded physical presence may be less effective than audio-only note generation.

Finally, we test the effects of visual contact and embodiment on audience appreciation. We find that visual contact in joint jazz improvisation makes for a performance in which audiences rate the robot as playing better, as more human-like, more responsive, and more inspired by the human. They also rate the duo as better synchronized, more coherent, communicating, and coordinated; and the human as more inspired and more responsive.
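
The anticipatory action mentioned in the first paragraph can be sketched as onset prediction: instead of reacting to a note after it sounds, the robot extrapolates the human's tempo and starts its stroke early enough to cover actuation delay. The constant-tempo assumption, names, and delay value below are illustrative, not Shimon's actual scheduler.

```python
def predict_next_onset(onset_times, actuation_delay=0.15):
    """Predict the next note onset and when the robot should start moving.

    onset_times:     recent note-onset timestamps in seconds, oldest first
    actuation_delay: assumed time the arm needs to complete its stroke
    Assumes locally constant tempo and extrapolates the mean interval.
    Returns (predicted_onset, movement_start_time).
    """
    if len(onset_times) < 2:
        raise ValueError("need at least two onsets to estimate tempo")
    intervals = [b - a for a, b in zip(onset_times, onset_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    predicted_onset = onset_times[-1] + mean_interval
    return predicted_onset, predicted_onset - actuation_delay

# Human plays at 120 BPM: onsets every 0.5 s.
print(predict_next_onset([0.0, 0.5, 1.0, 1.5]))  # (2.0, 1.85): start stroke at t=1.85
```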

2010

Hoffman, G., & Breazeal, C. (2010). Effects of Anticipatory Perceptual Simulation on Practiced Human-Robot Tasks. Autonomous Robots, 28(4), 403–423. [AU-RO'10]

Abstract

With the aim of attaining increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of anticipation and perceptual simulation through top-down biasing. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp, performing a repetitive human-robot collaborative task.

In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show the robot and the human to improve their relative contribution at a similar rate, possibly playing a part in the human’s “like-me” perception of the robot.

In self-report, we find significant differences between the two conditions in the sense of team fluency, the team’s improvement over time, the robot’s contribution to the efficiency and fluency, the robot’s intelligence, and in the robot’s adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to the robot, such as gender and intelligence, as well as credit for success, but we also find increased self-blame and self-deprecation in these subjects’ responses.

We believe that this work lays the foundation for modeling and evaluating artificial practice for robots working in collaboration with humans.

2007

Hoffman, G., & Breazeal, C. (2007). Cost-based anticipatory action selection for human–robot fluency. IEEE Transactions on Robotics, 23(5), 952–961. [T-RO'07]

Abstract

A crucial skill for fluent action meshing in human team activity is a learned and calculated selection of anticipatory actions. We believe that the same holds for robotic teammates, if they are to perform in a similarly fluent manner with their human counterparts.

In this work we describe a model for human–robot joint action, and propose an adaptive action selection mechanism for a robotic teammate, which makes anticipatory decisions based on the confidence of their validity and their relative risk. We conduct an analysis of our method, predicting an improvement in task efficiency compared to a purely reactive process.

We then present results from a study involving untrained human subjects working with a simulated version of a robot using our system. We show a significant improvement in best-case task efficiency when compared to a group of users working with a reactive agent, as well as a significant difference in the perceived commitment of the robot to the team and its contribution to the team’s fluency and success. By way of explanation, we raise a number of fluency metric hypotheses, and evaluate their significance between the two study conditions.
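
The decision rule at the heart of this work can be illustrated as an expected-cost comparison: act early when the confidence-weighted saving outweighs the confidence-weighted penalty of a wrong guess. The specific cost terms and names below are placeholders, not the paper's formulation.

```python
def choose_action(confidence, time_saved, recovery_cost):
    """Decide between acting anticipatorily and waiting for certainty.

    confidence:    estimated probability the anticipated action is correct
    time_saved:    seconds gained if the anticipation turns out to be right
    recovery_cost: seconds lost undoing a wrong anticipatory action
    Illustrative expected-value rule, not the paper's exact model.
    """
    expected_gain = confidence * time_saved - (1 - confidence) * recovery_cost
    return "anticipate" if expected_gain > 0 else "wait"

print(choose_action(confidence=0.8, time_saved=2.0, recovery_cost=4.0))  # anticipate
print(choose_action(confidence=0.5, time_saved=2.0, recovery_cost=4.0))  # wait
```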

2004

Breazeal, C., et al. (2004). Tutelage and collaboration for humanoid robots. International Journal of Humanoid Robotics, 1(2), 315–348. [J-Humanoids'04]

Abstract

This paper presents an overview of our work towards building socially intelligent, cooperative humanoid robots that can work and learn in partnership with people. People understand each other in social terms, allowing them to engage others in a variety of complex social interactions including communication, social learning, and cooperation. We present our theoretical framework that is a novel combination of Joint Intention Theory and Situated Learning Theory and demonstrate how this framework can be applied to develop our sociable humanoid robot, Leonardo. We demonstrate the robot’s ability to learn quickly and effectively from natural human instruction using gesture and dialog, and then cooperate to perform a learned task jointly with a person. Such issues must be addressed to enable many new and exciting applications for robots that require them to play a long-term role in people’s daily lives.

Brooks, A. G., Gray, J., Hoffman, G., Lockerd, A., Lee, H., & Breazeal, C. (2004). Robot’s play: interactive games with sociable machines. Computers in Entertainment, 2(3). [CIE'04]

Abstract

Personal robots for human entertainment form a new class of computer-based entertainment that is beginning to become commercially and computationally practical. We expect a principal manifestation of their entertainment capabilities will be socially interactive game playing. We describe this form of gaming and summarize our current efforts in this direction on our lifelike, expressive, autonomous humanoid robot. Our focus is on teaching the robot via playful interaction using natural social gesture and language. We detail this in terms of two broad categories: teaching as play and teaching with play.