Human-robot interaction

Future robots will be expected to work safely side-by-side with people in a variety of situations. To facilitate these interactions, robots must be intuitive, easy to use or understand, and built to be responsive to the needs of people while understanding their differences [USR18]. As robots gain more utility, and thereby more influence in society, it will become increasingly important to develop Australian systems that interact with Australians in a way that suits Australians [AAS18]. Research in human-robot interaction is becoming increasingly important, particularly in social robotics, which spans the relevant aspects of robot action, perception and cognition, as well as modelling humans well enough for robots to interact with them in preferred ways [AAS18].

Human-robot interaction (HRI) requires a better understanding of human intent than is currently available. Good design for HRI increases utility and the likelihood that a system will succeed in any application, since ultimately all systems are used, deployed and maintained by people. Strong capacity in HRI provides a competitive advantage to those deploying robotics. HRI may be either physical or social, and is challenging because humans are unpredictable. Robots in the future will make decisions that determine the tenor of these interactions. They will need to interact with humans in a predictable, reliable, ethical and fundamentally safe manner. Social interaction requires building and maintaining complex models of people, including their knowledge, beliefs, goals, desires, emotions and cultural/social norms. Just like humans, robots need to simplify their language depending on context, coordinate their actions to match human preferences, and be able to interpret the actions of others as representative of their inner goals [SM18].
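The components of such a person model — knowledge, beliefs, goals, emotions and norms — can be pictured as a simple data structure. The sketch below is purely illustrative: the class name, fields and thresholds are assumptions for this example, not part of any established HRI framework.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a minimal "person model" a social robot might
# maintain. All field names are assumptions made for this example.
@dataclass
class PersonModel:
    name: str
    beliefs: dict = field(default_factory=dict)   # propositions the person holds true
    goals: list = field(default_factory=list)     # inferred goals, most likely first
    emotion: str = "neutral"                      # coarse affective state estimate
    norms: set = field(default_factory=set)       # cultural/social norms to respect

    def update_emotion(self, observed: str) -> None:
        """Overwrite the coarse emotion estimate from an observed cue."""
        self.emotion = observed

# Usage: the robot revises its estimate as it observes the person.
alice = PersonModel("Alice", goals=["make tea"])
alice.update_emotion("frustrated")
```

A real system would of course estimate these fields probabilistically from perception rather than set them directly; the point is only that social interaction obliges the robot to keep such state per person.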

Natural language is the main interface convention for social robotics, as people expect humanoid robots (like people) to understand what they say and follow simple verbal commands. Natural language processing (NLP) gives robots a way to accept instructions from humans without formal programming. Once robots can apply NLP, they can also learn more about the human world from information that humans develop for other humans (e.g., books). The next frontier will be in enabling robots to understand non-verbal communication – facial expressions, gestures, and body language – thought to be responsible for 80–90 per cent of the meaning found in human interactions. Biometrics, particularly applied to recognition of micro-expressions, will be increasingly important in human-robot interactions.
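At its simplest, "accepting instructions without formal programming" means mapping an utterance to an action the robot can execute. The toy grammar below is a deliberately naive sketch — real NLP pipelines use trained language models, not hand-written patterns — and the commands and action names are invented for illustration.

```python
import re

# Toy command grammar: each pattern maps a simple verbal instruction to a
# robot action. Purely illustrative; not how production NLP systems work.
COMMAND_PATTERNS = [
    (re.compile(r"^(?:please )?(pick up|fetch|bring me) the (\w+)"), "fetch"),
    (re.compile(r"^(?:please )?(go to|move to) the (\w+)"), "goto"),
    (re.compile(r"^(?:please )?stop"), "stop"),
]

def parse_command(utterance: str):
    """Return (action, target) for a recognised command, else (None, None)."""
    text = utterance.lower().strip()
    for pattern, action in COMMAND_PATTERNS:
        match = pattern.match(text)
        if match:
            # Some commands (e.g. "stop") take no target object.
            target = match.group(2) if match.lastindex and match.lastindex >= 2 else None
            return action, target
    return None, None
```

Even this sketch shows why the non-verbal frontier is harder: "fetch the mug" decomposes into discrete symbols, while a frustrated facial expression does not.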

The three most significant challenges in building robots that interact socially with people are modelling social dynamics, learning social and moral norms, and building a robotic theory of mind [SM18]. Our understanding of human social behaviour is not nearly as advanced as our knowledge of Newtonian mechanics or even human visual perception [SM18]. Understanding social interaction requires an understanding of social cues, social signals (which may be context-dependent and culturally determined), appropriate social and moral norms, and an understanding of empathy, ownership, and the need to keep a promise [SM18]. Displaying this understanding will be essential to building the long-term trust and relationships necessary for robots to operate side-by-side with people.

Robots currently lack comprehensive, integrated and usable models of human mental states, and are not designed for long-term interactions. Robots need to integrate models of episodic memory, hierarchical models of tasks and goals, and robust models of emotion to create the detailed cognitive models that capture the psychology humans effortlessly apply to understanding other humans [SM18]. Current social robots are designed for brief interactions rather than ones lasting months, years, or even decades. This expansion will require adaptable models of robot behaviour that personalise responses based on how long the robot has ‘known’ the people it interacts with.
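The idea of personalising behaviour by relationship length can be sketched with a minimal episodic memory: a log of timestamped events from which the robot derives how long it has known a person. The class, method names and the day thresholds below are arbitrary illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Illustrative sketch: episodic memory whose responses grow more familiar
# as the relationship lengthens. Thresholds are arbitrary assumptions.
@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)  # (timestamp, event) pairs

    def record(self, when: datetime, event: str) -> None:
        self.episodes.append((when, event))

    def relationship_length(self, now: datetime) -> timedelta:
        """Time elapsed since the earliest recorded episode with this person."""
        if not self.episodes:
            return timedelta(0)
        return now - min(t for t, _ in self.episodes)

    def greeting(self, now: datetime) -> str:
        """Personalise the greeting by how long the robot has 'known' the person."""
        length = self.relationship_length(now)
        if length > timedelta(days=365):
            return "Good to see you again, old friend."
        if length > timedelta(days=30):
            return "Welcome back!"
        return "Hello, nice to meet you."
```

A deployed system would need far richer episode content (what happened, with whom, how it went) and would decay or consolidate memories, but even this skeleton shows how interaction history changes behaviour over months and years.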

Robots need to be able to predict and understand what the humans around them are intending to do. A robot will need the sensory and processing capacity to feed a cognitive system that can develop a robust awareness of its environment: its own position, and everything else within that environment, whether people, animals, other robots, or inanimate objects. This requires a range of basic skills such as recognising human poses and activities, awareness of human attention, and understanding speech and non-verbal behaviours such as gestures and body language, together with high-level competencies that use such information to predict human intent and select appropriate actions. In the future, significant innovation is expected to occur when control is shared between robot and human, with ‘human-in-the-loop’ stimulus and motion mapping enabling robots to learn how to predict and adapt to real-world conditions and operate safely [AAS18]. More research into developing effective mental models, conducted in an integrated manner by roboticists, psychologists, cognitive scientists and social scientists, is critical to making this work generally applicable. Social cues differ between countries and cultures, and culturally appropriate models for Australia must be considered.
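One common formalisation of intent prediction is Bayesian goal inference: the robot maintains a probability distribution over a person's possible goals and updates it as actions are observed. The goals, actions and likelihood values below are invented for illustration; real systems would learn these from data.

```python
# Illustrative sketch of Bayesian goal inference. P(action | goal) values
# are made up for this example, not taken from any real dataset.
LIKELIHOOD = {
    "make_tea": {"walk_to_kitchen": 0.8, "pick_up_kettle": 0.9, "sit_on_sofa": 0.1},
    "watch_tv": {"walk_to_kitchen": 0.2, "pick_up_kettle": 0.05, "sit_on_sofa": 0.9},
}

def update_posterior(prior: dict, action: str) -> dict:
    """One Bayesian update: posterior(goal) is proportional to prior(goal) * P(action | goal)."""
    unnorm = {g: prior[g] * LIKELIHOOD[g].get(action, 0.01) for g in prior}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

# Start uncertain, then observe two actions consistent with tea-making.
belief = {"make_tea": 0.5, "watch_tv": 0.5}
for observed in ["walk_to_kitchen", "pick_up_kettle"]:
    belief = update_posterior(belief, observed)
# After both observations, "make_tea" dominates the posterior.
```

The hard research problems named above live in the inputs to this loop: recognising the action in the first place from poses, gaze and gestures, and learning likelihoods that respect context and culture.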

Research in social robotics, including human-robot interaction, is still in its infancy and needs to keep pace with the rapid evolution of robotics technology. There are huge opportunities for applying the social sciences, and interdisciplinary research will become imperative. Current research activity is not proportional to the potential impact of the human-robot interaction problem: Australia has an existing lead in parts of the social robotics field that would be desirable to maintain [AAS18].