This project presents the results of my three-month study visit to the Intelligent Robotics group at Umeå University in Sweden: we built a comprehensive and configurable interface for SoftBank's Pepper robot. We initially presented this work at the 2020 Workshop on Affective Shared Perception and ultimately published it as an article in the journal Frontiers in Robotics and AI. As with most of my work, it is available on GitHub.
Study visit in Umeå
I initially started working on the Pepper robot as part of my student job at the Knowledge Technology group at Uni Hamburg. There were already a number of collaborations between the KT group in Hamburg and the Intelligent Robotics group in Umeå, so when I started looking for interesting groups to visit, the Intelligent Robotics group stood out because a) they also work with multiple Pepper robots and b) they were already associated with the KT group. And so it happened that I visited the Intelligent Robotics group from September to December 2021 to work on an interface for the Pepper robot, financed through a generous three-month stipend. The study visit was great: I enjoyed working on this project a lot, and everyone at the Intelligent Robotics group was incredibly welcoming, helpful, and friendly. Plus, there were plenty of opportunities to go camping and fishing on the weekends :)
The most eye-opening experience in this study visit was, by far, going through the lengthy, sometimes tedious process of publishing in a peer-reviewed journal.
The idea and requirements behind the interface were formulated by Professor Hellström and Professor Bensch at the Intelligent Robotics group. The main goal was to make Pepper more approachable as a research platform for Human-Robot Interaction (HRI) experiments. This is motivated by the observation that much of Pepper's functionality is gated behind the robot's API, which has a steep learning curve and presupposes solid programming knowledge. Since Human-Robot Interaction is inherently social, HRI studies carried out by social-science researchers are both valid and needed. While these research groups have excellent skills in experiment design, they don't necessarily have the technical expertise to implement the robot control software required to conduct HRI experiments. Thus, we set out to create an easily approachable, comprehensive, and configurable interface for the Pepper robot, to lower the barrier to concrete HRI research. We named the interface
WoZ4U, after the Wizard-of-Oz HRI experiment methodology; you can find it on GitHub.
To make the interface as widely accessible as possible, we support all major operating systems: Debian-based Linux, macOS, and Windows. Furthermore, we provide a Docker image hosting the interface, which should eliminate most if not all dependency conflicts. The Docker image also removes the need to follow the lengthy setup guide, as the interface is accessible as soon as the container is running on the network. In the backend, the interface is implemented as a web server. This conveniently makes the interface accessible via web browser from any machine on the network (including smartphones), which removes any OS requirements, since every modern OS ships with a web browser.
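To illustrate the backend-as-a-web-server idea, here is a hedged sketch (not WoZ4U's actual code) using only the Python standard library: a browser request such as `GET /say?text=Hello` is translated into a robot command. The route name `/say` and the `ISSUED` list are ours; the real backend would call Pepper's text-to-speech API instead of appending to a list.

```python
# Hedged sketch of a wizard backend: one HTTP route that would trigger a
# robot action. ISSUED records commands in place of real robot API calls.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

ISSUED = []  # stand-in for calls into the robot's API


class WizardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/say":
            text = parse_qs(url.query).get("text", [""])[0]
            ISSUED.append(("say", text))  # a real backend would call TTS here
            body, status = b"ok", 200
        else:
            body, status = b"unknown command", 404
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass


# Serve a single request from a background thread, then issue it ourselves.
server = HTTPServer(("127.0.0.1", 0), WizardHandler)  # port 0: pick a free port
thread = threading.Thread(target=server.handle_request)
thread.start()
port = server.server_address[1]
reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/say?text=Hello").read()
thread.join()
server.server_close()
print(reply, ISSUED)  # b'ok' [('say', 'Hello')]
```

Because any HTTP client works, the same request could just as well come from a phone's browser on the lab Wi-Fi, which is exactly what makes the web-server design OS-agnostic.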
We had the non-functional requirement that the interface be easily (re)configurable for different experiments, because a tool specialized for a single experiment has little general value for the community. To that end, we feature a configuration file in
YAML syntax. Every non-general part of the UI is configurable through that file, so no programming is required to set up the interface for a new experiment; instead, one simply edits a few content-specific lines. For example, to investigate which gestures are perceived as particularly friendly, one edits which gestures are accessible from the interface, based on the following snippet:
```yaml
gestures: # Buttons will be created for every item in the list
  - title: "Yes" # This will be shown in the GUI
    gesture: "animations/Stand/Gestures/Yes_1" # Gesture to execute
    tooltip: "Yes_1 gesture" # Tooltip for button
    key_comb: ["shift", "1"]
  - title: "No"
    gesture: "animations/Stand/Gestures/No_1"
    tooltip: "No_1 gesture"
    key_comb: ["shift", "2"]
```
This procedure is the same for all elements in the interface so that the entire interface can easily be configured for different experiments or occasions.
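To make the config-driven idea concrete, here is a hedged sketch of how such a `gestures` section could be turned into button specs, keyed by their keyboard shortcuts. The field names mirror the snippet above; the function name `load_gesture_buttons` is ours, not WoZ4U's, and we assume PyYAML for parsing.

```python
# Hedged sketch: parse a WoZ4U-style gestures section into a lookup of
# button specs, one per list item, keyed by its keyboard shortcut.
import yaml  # PyYAML, assumed available

CONFIG = """
gestures:
  - title: "Yes"
    gesture: "animations/Stand/Gestures/Yes_1"
    tooltip: "Yes_1 gesture"
    key_comb: ["shift", "1"]
  - title: "No"
    gesture: "animations/Stand/Gestures/No_1"
    tooltip: "No_1 gesture"
    key_comb: ["shift", "2"]
"""


def load_gesture_buttons(text):
    """Return {shortcut: item} for every entry in the gestures list."""
    config = yaml.safe_load(text)
    return {"+".join(item["key_comb"]): item for item in config["gestures"]}


buttons = load_gesture_buttons(CONFIG)
print(buttons["shift+1"]["gesture"])  # animations/Stand/Gestures/Yes_1
```

Swapping in a different experiment then really is just a matter of editing the list items; the UI-building code never changes.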
The interface comprises the following robot functionalities:
- Autonomous life management: Provides control over the general behavior the robot exhibits autonomously
- Tablet control: Provides controls over which items (pictures, videos, websites) are displayed on Pepper’s tablet
- Speech control: Provides controls over text-to-speech messages
- Animated speech: Provides controls over speech + gesturing messages
- LED control: Provides control over Pepper’s LEDs
- Motion control: Provides a simple motion controller for Pepper’s omnidirectional wheels
- Gesture control: Provides control over Pepper’s gestures
These features cover almost everything Pepper’s API has to offer. In some cases, we even extend the API with custom functionality it does not provide out of the box (audio and touch-event live streams).
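Under the hood, each of these controls maps to a call into one of NAOqi's services. Here is a hedged sketch of that mapping, with a recording stub standing in for `naoqi.ALProxy` (the real SDK is Python 2-only and ships with the robot): the service and method names (`ALTextToSpeech.say`, `ALAnimationPlayer.run`) are genuine NAOqi API, while the stub class and example values are ours.

```python
# Hedged sketch of how UI controls could dispatch to NAOqi services.
# FakeALProxy records calls instead of contacting a robot; on the robot,
# naoqi.ALProxy would be used with the same service names.


class FakeALProxy:
    """Stub for naoqi.ALProxy that records every method call."""

    def __init__(self, service, ip="pepper.local", port=9559):
        # 9559 is NAOqi's default port; ip/service mirror ALProxy's signature
        self.service = service
        self.calls = []

    def __getattr__(self, method):
        def record(*args):
            self.calls.append((method, args))
        return record


# ALTextToSpeech.say and ALAnimationPlayer.run are real NAOqi calls.
tts = FakeALProxy("ALTextToSpeech")
tts.say("Hello, I am Pepper!")

gestures = FakeALProxy("ALAnimationPlayer")
gestures.run("animations/Stand/Gestures/Yes_1")  # path from the config file

print(tts.calls, gestures.calls)
```

The gesture path passed to `run` is exactly the string stored under `gesture:` in the YAML config, which is what ties the configuration file to the robot's behavior.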
Feel free to read our paper for more details.
We hope that this tool is useful for researchers in the HRI field.