Lively: Enabling Multimodal, Lifelike, and Extensible Real-time Robot Motion

Published in HRI '23: ACM/IEEE International Conference on Human-Robot Interaction, 2023

Recommended citation: Schoen, A., Sullivan, D., Zhang, Z., Rakita, D., & Mutlu, B. 2023. "Lively: Enabling Multimodal, Lifelike, and Extensible Real-time Robot Motion." In Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI '23). Association for Computing Machinery, New York, NY, USA, 594–602.

Download Paper Here

Abstract: Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with both their task and communication goals. However, combining these goals naïvely can yield mutually exclusive solutions, or infeasible or problematic states and actions. In this paper, we present Lively, a framework that supports configurable, real-time, task-based, and communicative or socially expressive motion for collaborative and social robotics across multiple levels of programmatic accessibility. Lively supports a wide range of control methods (i.e., position, orientation, and joint-space goals) and balances them with complex procedural behaviors that produce natural, lifelike motion effective in collaborative and social contexts. We discuss the design of three levels of programmatic accessibility in Lively: LivelyStudio, a graphical user interface for visual design; the core Lively library, which gives developers full access to its capabilities; and an extensible architecture for greater customizability and capability.