Mobile Robot Teleoperation through Eye-Gaze (TeleGaze)
Abstract
This research investigates the use of eye-gaze tracking to control the navigation of mobile robots remotely through a purpose-built interface called TeleGaze. Controlling a mobile robot from a remote location requires the user to continuously monitor the status of the robot through some form of feedback system. Assuming a vision-based feedback system is used, such as video cameras mounted onboard the robot, the eyes of the user are engaged in the monitoring process for the whole duration of the operation. Meanwhile, the hands of the user are engaged, either partially or fully, in the driving task using some input device. The aim of this research, therefore, is to build a vision-based interface that enables the user to both monitor and control the navigation of the robot using only his/her eyes as input, since the eyes are engaged in the monitoring task anyway. This frees the hands of the user for other tasks while navigation is controlled through the TeleGaze interface.
Introduction
As part of the effort to develop more natural human-robot interaction, and towards obtaining a PhD degree in the field of computing, I started my research in October 2006 at the School of Science and Technology at Nottingham Trent University (NTU) in the UK. The broad scope of my research was Vision-Based Human-Robot Interaction. The aim of the research was to develop vision-based natural interactions between humans, as operators, and robots, as assistive agents.
Although many researchers in the field believe that fully autonomous robots are the future of robotics, I believe, as do some other researchers, that we should start with less autonomy and more control over the functionality of any robot. This will enable us, as humans interacting with robots, to learn more about the level of knowledge and autonomy that we need to build into our future robots. From this point of view, the aim of my research was not to build more autonomous robots, but rather to enable more control over their functionality in a natural and easy way.
Phase One of the Research
In this phase of the research, a native TeleGaze interface was developed to enable driving through looking. In addition to the robot and the onboard cameras, some functionalities of the interface itself were controlled through inputs from the eyes of the operator alone; hence the term native.
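To make the idea of driving through looking concrete, the following Python sketch shows one common way such an interface can work. This is a minimal illustration under my own assumptions, not the actual TeleGaze implementation: it assumes the screen is divided into action regions overlaid on the video feed, and that a command fires only after the gaze dwells in a region for a threshold time, so that merely glancing around does not move the robot. All names (GazeRegion, DwellSelector, dwell_threshold) are hypothetical.

```python
import time
from dataclasses import dataclass

# Illustrative sketch only, not the TeleGaze source code.
# The interface is divided into rectangular action regions overlaid on the
# video feed; a command fires only after the gaze dwells in a region long
# enough, to avoid acting on every passing glance.

@dataclass
class GazeRegion:
    name: str   # e.g. "forward", "turn_left", "camera_up"
    x: int      # region bounds in screen pixels
    y: int
    w: int
    h: int

    def contains(self, gx: int, gy: int) -> bool:
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

class DwellSelector:
    """Turns a stream of gaze points into discrete commands via dwell time."""

    def __init__(self, regions, dwell_threshold=0.5):
        self.regions = regions
        self.dwell_threshold = dwell_threshold  # seconds of sustained fixation
        self._current = None
        self._entered_at = 0.0

    def update(self, gx, gy, now=None):
        now = time.monotonic() if now is None else now
        region = next((r for r in self.regions if r.contains(gx, gy)), None)
        if region is not self._current:
            # Gaze moved to a different region: restart the dwell timer.
            self._current, self._entered_at = region, now
            return None
        if region is not None and now - self._entered_at >= self.dwell_threshold:
            self._entered_at = now  # re-arm so a held fixation can repeat
            return region.name      # hand this to the robot driver
        return None
```

In use, each sample from the eye tracker would be passed to update(); whenever it returns a region name, the corresponding motion or camera command is sent to the robot.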
The following video demonstrates one scenario of driving the mobile robot using the native TeleGaze interface:
TeleGaze (Phase One, Scenario One)
For a second scenario using the same version of the interface with the same robotic platform, see the following video:
TeleGaze (Phase One, Scenario Two)
Phase Two of the Research
In this phase of the research, in addition to redesigning the layout of the interface, a multimodal approach was adopted. To overcome some of the limitations of the eye-tracking technology and equipment used in TeleGaze, an accelerator pedal was added to the system; hence the term multimodal is used to refer to this phase of the research. The robotic platform was also upgraded to a more standard mobile robot.
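One plausible division of labour between the two modalities can be sketched as follows. Again, this is an illustrative sketch under assumptions and not the actual TeleGaze code: it assumes gaze selects the direction of motion while the pedal acts both as a throttle and as a clutch, so that no motion command is issued unless the pedal is pressed. The function name, velocity mapping, and parameter values are all hypothetical.

```python
def fuse_gaze_and_pedal(gaze_x_norm, pedal, max_linear=0.5, max_angular=1.0,
                        deadband=0.05):
    """Fuse gaze and pedal inputs into a (linear, angular) velocity command.

    gaze_x_norm : horizontal gaze position in [-1, 1] across the video feed
                  (-1 = far left, 0 = centre, +1 = far right).
    pedal       : accelerator position in [0, 1]; it doubles as a clutch,
                  so looking around with the pedal released moves nothing.
    """
    if pedal <= deadband:                 # pedal released: monitoring only
        return 0.0, 0.0
    linear = pedal * max_linear           # pedal sets forward speed
    angular = -gaze_x_norm * max_angular  # look right -> turn right (negative yaw)
    return linear, angular

# Example: operator looks slightly left with the pedal half pressed.
print(fuse_gaze_and_pedal(-0.3, 0.5))    # -> (0.25, 0.3)
```

The pedal-as-clutch arrangement is one way a second modality can mitigate the problem of unintended gaze-triggered commands, since the eyes are freed for pure monitoring whenever the pedal is released.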
The following video demonstrates the new design of the interface:
TeleGaze (Phase Two, Scenario One)
Research Outcomes
The outcomes of the research include a novel human-robot interface, TeleGaze, that a human operator can use to fully control the navigation of a mobile robot while keeping both hands free from the navigation task. The outcomes also include the results of extensive usability studies in which the TeleGaze interface is compared to conventional interfaces such as a computer mouse and a joystick.