Shared Workspace MDP

To frame the problem of human-robot collaboration and fluency, we developed a computational model for designing and evaluating algorithms for robots acting together with people. This modified Markov Decision Process (MDP) models a cost-based (or distance-based), shared-location, two-agent collaborative system. We used this model to develop an anticipatory action system for human-robot collaboration; it can also be used to compare human-robot and human-human collaborative activities.

We model the team-fluency problem as a discrete-time, deterministic decision process with two agents, a robot and a human, working together on a shared task. In our work, we instantiate the system in a joint assembly setting, in which a robot and a human build a car together, and propose several strategies for the robotic agent to act in this system.
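As a minimal sketch of such a model, the state can pair both agents' locations with the set of task steps remaining, with deterministic transitions and a distance-based cost. All names here (`State`, `step`, `greedy_robot`, a 1-D workspace of integer locations) are illustrative assumptions, not the formulation from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    # Illustrative state: both agents' positions on a 1-D workspace,
    # plus the locations of parts still to be assembled.
    robot_loc: int
    human_loc: int
    parts_left: frozenset

def step(state, robot_move, human_move):
    """Deterministic transition: each agent moves to a chosen location;
    a part is consumed when an agent reaches it. The step cost is the
    summed travel distance of the two agents."""
    cost = abs(robot_move - state.robot_loc) + abs(human_move - state.human_loc)
    parts = state.parts_left - {robot_move, human_move}
    return State(robot_move, human_move, frozenset(parts)), cost

def greedy_robot(state):
    """One possible robot strategy (illustrative only): move to the
    nearest remaining part; stay put when no parts remain."""
    return min(state.parts_left,
               key=lambda p: abs(p - state.robot_loc),
               default=state.robot_loc)
```

An anticipatory strategy would replace `greedy_robot` with one that predicts the human's next move and chooses the robot's action to minimize the team's expected remaining cost, rather than the robot's own travel alone.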

Related paper: Hoffman & Breazeal (2007).