Trust in Robots - Trusting Robots

1 Scientific Claims of the Doctoral College (DC)

With the advance of technical systems, more and more intelligent and autonomous machines will populate our human-made living space and challenge fundamentals of our society. Mobile devices, for instance, have not only been changing how we interact with each other, but also - and more fundamentally - how we relate to each other, how we collaborate, and how we distribute work among human and machine agents. While autonomous cars are waiting to drive us around, robots will come to relieve us from chores and dangerous, boring or dirty work. Although we feel in control of the mobile device held in our hand, this feeling may diminish with machines that are larger, more complex, more powerful and no longer hand-held. Autonomous actions of cars or robots can be scary. The thought of robots going astray puzzles Europeans, while Asians are rather excited by the thought of things being animated and acting by themselves. Thus, designing technology for people requires that people are - at all times - either fully in control of the technology or able to rely on the benevolent intentions and the security of autonomous systems that they cannot control. Therefore, the idea is to build trust into (autonomous) robot systems.

Trust has been recognized as an important issue in automation and technology research at least since the 1980s. Taking stock of research on interpersonal trust, it has been argued that trust as an attitude transforms into reliance and therefore plays a crucial role with regard to technology acceptance and the appropriate use of automation. Furthermore, according to the CASA paradigm (Computers Are Social Actors, Nass et al. 1994), the same social heuristics used in human-human interactions apply to human-computer interactions, because computers call to mind similar social attributes as humans. With the anthropomorphizing of technology, particularly in the field of (service) robotics, as well as the advancements in natural language processing and AI, a new dimension has been added to CASA. While earlier work clearly distinguishes between trust in and reliance on technology as opposed to trust between people, this difference may blur when (service) robots mimic human interaction patterns and exhibit anthropomorphic appearance and behaviour.

The title of this DC reflects this challenge: "Trust in robots - Trusting robots" carries different notions and unifies different research areas. While "Trust in robots" addresses questions that pertain to the idea of how to develop technology that users are willing to rely on, "Trusting Robots" focuses on the process of establishing a trusting relationship with robots from a human perspective and thus extends the CASA paradigm. It considers that humans may develop a personal relationship with a robot that resembles a human-to-human relationship. However, there is also an alternative interpretation of "trusting robots". It is more futuristic and looks at the human-robot relationship from the other side: it poses the question of how to develop AI and robotic technology so as to allow a robot to exhibit "trusting skills" when interacting with humans. Trust in this sense does not mean a one-directional confidence of the "user" in the service robot. Rather, it signifies the user's assurance that the technical system has notions of his or her meanings of objects, interactions and social biases as they occur in everyday living spaces - and will apply them in its robotic functioning accordingly.

1.1 Scientific Challenges

On the one hand, building trust into robot systems boils down to endowing robots with the abilities and skills to perceive and understand human communication and behaviour (for example through natural language processing, or by recognizing facial expressions, voice, gestures, and emotions), to recognize and ideally predict human intentions, and to respond adequately to all of these stimuli. Furthermore, any robot reaction needs to assure users that they are safe at all times.

On the other hand, it is not only important that robots actually are safe for people, but also that they are perceived as safe and reliable by their (human) interaction partners. This role of trust in robots is an exciting and still open research question. Since it is not possible to foresee or enumerate all possible situations, autonomous (social) robots will need to react safely to unexpected and unforeseen encounters. They must be able to learn and adapt, as they will make entrusted autonomous decisions that go far beyond pre-programmed safety rules and algorithms. In such a context, robots are ascribed - and will have - (social) agency. The fundamental question therefore is how human actors create a trusting relationship with autonomous machines. When is trust in an autonomous system appropriate? When should we trust, e.g., a care robot to make the right decisions? When would it be wise not to trust? And even if we initially trusted a robot, which experiences would and should change our trust in it?

Tightly associated with the agency of autonomous machines - often referred to as "machine agency" (e.g. Engen et al. 2016) - is the question of (legal) accountability and control in the context of tasks distributed among humans and autonomous machines. Here again, trust as a functional equivalent to control (see e.g. Luhmann, 1989) may play an important role.

As sketched above, the challenges associated with AI, robotics and the technological advances that accelerate innovation in these areas require that researchers from different disciplines come together to unite their competences, methodologies and know-how. The opportunities of (industrial and service) robotics can only be realized responsibly when the challenges that come along with them are considered appropriately. We need to discuss and understand possible future scenarios from different perspectives: technology-centred, human-centred, and from a social/societal perspective taking legal, ethical, political and socio-cultural aspects into account.

According to the recently proposed paradigm of "socio-materiality" (Orlikowski and Scott 2008), technology is neither independent of socio-cultural factors, nor can socio-cultural processes be understood independently of technology and its application. Figure 1 attempts to depict the idea of this DC TrustRobots:

Figure 1: TrustRobots - answering fundamental challenges of technical and application research, responding to social interaction needs and the societal implications of placing robots close to humans.

Within the DC, we span an arc from the fundamental question of how humans construct "the significant other" in interaction with machines to the analysis of the consequences that the anthropomorphizing of machines has for society and its social fabric. Along this arc, we address both fundamental and application-oriented research questions, all around the common research interest of trust in robots - trusting robots.

Thus, the aim and scope of this research project go far beyond traditional technology acceptance studies and require a multi-disciplinary research approach. Taking stock of the TU's mission statement "Technology for People", the research in this DC is approached from the responsible robotics paradigm: industrial and service robots are intelligent, autonomous machines bereft of any moral capacity or true social capabilities. Therefore, accountability for the ethical developments that come along with their introduction into society has to be assumed early on by scientists and developers. Thus, within this DC we also target the ethical concerns raised in scientific and public debates that robots may create profound challenges to societal values like safety, security, well-being, privacy, etc. Instead of mitigating the adversarial effects and harm of new technologies after their introduction, responsible robotics begins in the research and development phases. Therefore, we need to empower as many doctoral candidates with different research backgrounds as possible to be part of this shaping process.

In summary, the main objectives of this DC are (i) to analyse the phenomenon of trust in robots in a transdisciplinary research project from different perspectives and (ii) to develop technologies that increase the social and technical skills of intelligent, autonomous robots. If we succeed in this mission, the overall acceptance of technology among users will increase as well. In the following section we briefly review related international research. Section 1.3 shows how TrustRobots is positioned and sets out to advance the state of the art.

1.2 Connection to International Research Excellence

According to the research policies of large industrial nations, robots will become essential parts of our lifeworld. Pragmatic reasons often named by policy makers and developers of robots, in particular humanoid robots, are that these machines can move through spaces designed for humans without much adaptation, and that they will be accepted better and faster than other assistive technologies even in the most private spaces (Duffy, 2003). In its urgency to govern this emergent robot development, the European Parliament's Committee on Legal Affairs published a draft report on "Civil Law Rules on Robotics" (Delvaux 2016), in which privacy, general well-being, and job loss through automation are the main issues raised. Surprisingly, there is next to nothing in this report on the changes that the introduction of robots will bring to public, semi-public, private, and intimate spaces - that is, to spaces of living. Also surprisingly, the report sets out to discuss legal issues, while social aspects, societal implications, and how to cope with the presence of a multitude of robots are not even discussed. While this side is rather neglected, the policy agenda pushes the social aspects of assistive robots for care (elderly, autism, dementia) to the forefront of its research agendas, where social aspects and trust in robots will be of critical importance. Leaving out the issues of trust and space, their production, transformation and reproduction, means ignoring complex topics that are core elements in shaping the future of our society.

Relevant to the issue of trust and space is the debate around whether or not machines can have "agency" and, if they do, how it differs from human agency (Engen, Pickering, & Walland, 2016; Rammert, 2012). Many theoretical concepts have been developed to understand aspects of what is called machine agency, such as socio-materiality theory (Orlikowski & Scott, 2008), actor-network theory (Latour, 2005) and the double dance of agency (Rose & Jones, 2005). Together, these studies indicate that machine agency is not something external and pre-made that can be embodied in a technological system; rather, it emerges from people's situated interaction with technologies (Orlikowski, 2000) and is described as the extent to which machines are perceived as having autonomy by humans (Rose & Truex, 2000). It is therefore of fundamental interest to understand the construction of machines as "significant others" from a socio-psychological perspective, as it is - amongst other things - the foundation for building and maintaining trust.

Human trust in automated/robotic technology has been of interest to robotics researchers since the early 1980s (Muir, 1987). Research has indicated that "perfect" calibration of human-robot trust is a challenging endeavour (Hancock et al., 2016). HRI studies reveal cases of over- as well as under-trust in automated systems (Robinette et al., 2016). Extensive research has already been performed to explore proper trust-reliability calibration, examining the factors influencing human operators' trust (Hoff & Bashir, 2015). However, most of this research considers trust a stable state and, with only a few exceptions (Yang et al., 2016; Desai et al., 2013), does not consider variations over time. Issues of trust, reliability, and (mis-)calibration continue to be active areas of research related to human-robot teaming in its various forms (Robinette et al., 2016; Wang et al., 2016; Lohani et al., 2016). Challenging open research topics on which researchers of the proposed DC are already working are trust dynamics (Lorenz et al., 2015), factors influencing trust (Huber et al., 2016; Stadler et al., 2014), as well as how to reliably measure trust in HRI (Buchner et al., 2013).

Envisioning our transdisciplinary study of trust and robots, several technical capabilities are necessary to bring robots close to humans. Of particular relevance is the ability to perceive and understand what the user is doing and intends to do. Natural body language, as expressed by body posture and movement, plays an important role in HRI. Inconsistencies between different communication modalities that convey affective information - such as facial expression, speech and body language - can decrease trust in bidirectional human-robot communication. McColl et al. (2016) give a thorough survey of different methods and sensor types for automated affect recognition in human-robot interaction. McColl and Nejat (2014) point out that most related research on human emotion recognition from non-verbal communication has focused on facial expressions and vocal intonation, while the impact of body language has been investigated in only a few studies so far. Very recent work by McColl et al. (2017) explores the usage of the Kinect depth sensor to interpret upper body postures in terms of a person's openness in communication. As opposed to recognizing emotional states in humans, Van de Perre et al. (2017) demonstrate the generation of affective gestures for robots with different degrees of freedom in motion.

The TrustRobots DC is not only well positioned in international research but also addresses fundamental challenges in service and industrial robotics. Regarding service robotics, we have experienced the entry of small cleaning robots, lawn mowers, and robotic pool cleaners. Beyond a few studies with Roomba, however, little is known about how people react to the longer-term presence and possible assistance of robots. Furthermore, studies are typically carried out in professional care facilities or under controlled conditions in user homes and only for a few days (Leite et al., 2013). Similarly, a recent study concluded that the novelty effect does not wear off even a few weeks after introducing robots into the direct everyday office environment of humans (Hawes et al., 2016). The longest study of a mobile robot with an arm in user homes, carried out in 18 homes across Europe including seven users in Vienna (Vincze et al., 2016), confirms this finding and shows that users are concerned about whether they can trust a robot moving around in their home. The study shows that trust materialises in knowing what the robot is doing and in the robot being able to explain what it is doing. Besides indicating several open technical problems, the study also clearly highlights the issue of trust. In one case, the robot had to be removed from the user's home because it did something unexpected and unexplained and thus ended the trusted relationship that had been built up over the first days. The meta-study of Hancock (2011) shows that robot performance and robot behaviour close to humans are key to establishing trust. This view has recently been challenged by two studies indicating that humans are readily willing to trust a robot (Salem, 2015), even if the robot has demonstrably failed in a previous, similar task (Robinette, 2016).
While these studies have been conducted over short durations of interaction, they nevertheless indicate the willingness of people to accept robots and their assistance, a prerequisite for introducing robots in larger numbers into our environment. Open research questions remain, such as how these relations may form and be maintained over longer durations, and how the change from special settings to the environments of our daily living influences trust in robots.

With respect to industrial robotics, automation, and - more generally - digitalisation, one key question dominates public discussion, especially in high-wage countries: how many jobs will be lost through technological advances? Thus the question of if and to what extent jobs will be replaced by computerization and automation is a fundamental one that needs to be addressed adequately. Frey and Osborne (2013) assume that whole professions rather than specific tasks are to be substituted by technology. According to a recent study by Nagl et al. (2017) in Austria, around 9% of job profiles have a high potential of being replaced by automation. Others have argued that automation and digitalization do not necessarily mean direct replacement but rather a shift and transformation of job profiles (Schlund et al. 2017). Thus it is necessary to analyze which tasks lend themselves to automation and which tasks should still be performed by humans, i.e. the division of labour between human and machine agents (Bauer et al. 2016). Furthermore, Autor (2015) argues that although technology enables us to replace labour, the application of new technologies is also a driver of increased demand for more highly qualified labour. Hence, aspects of qualification are a core issue to be tackled in order to give future employees the needed skills and/or to requalify current employees (Spath et al., 2013). In summary, automation and digitalization are not necessarily a question of competition between robots and people but pose many questions of how to collaborate most fruitfully. The effects of technological development on daily work life should not be dictated by technology but have to be actively shaped by taking the interrelation between technology design and the organization of labour into account (Bösch, V. et al., 2017). If done properly, this will substantially lower the anxieties surrounding automation and digitalization and contribute to a climate of trust in robots.

1.3 Scientific Positioning

The complexity of the forthcoming challenges for the spaces we live and work in demands transdisciplinary research. In our previous experience with robotics research we have learned that the complexity of robot systems requires hybrid competencies of researchers. The timeliness of the topic becomes obvious through initiatives such as the second Dagstuhl seminar on "Ethics and Trust: Principles, Verification and Validation" (Dr. Weiss is one of the co-organizers) as well as through Informatics Europe and ACM Europe's joint report on "Automated Decision Making", which is expected for publication in February or March 2018. To develop these transdisciplinary competencies, we also want to encourage doctoral students with complementary prior qualifications to apply for the doctoral program. The DC proposed here will combine the different methodological approaches of the various domains involved, to allow for the synergetic development of the various dissertations/research topics. The format of the DC will make the constant review of each discipline's approach and work by all the others a standard procedure. The departments in this DC have developed a new understanding of humanoid robotics that puts the user at the centre of research approaches.

With the proposed DC TrustRobots we contribute to achieving scientific excellence by exploiting the broad expertise of the partners in a trans-disciplinary collaboration. Thus we aim at establishing a truly transdisciplinary education in robotics guided by the topic of trust. Such education is crucial: despite considerable technical progress over the past decade, still too few robotic products address actual social needs, and societal implications are not sufficiently addressed in their development process. A transdisciplinary training in both technological innovation and user-oriented research is thus both timely and important. The transdisciplinary nature of the research problems addressed in the thesis topics calls for an education that is supported by researchers with complementary (method) skills in several disciplines; correspondingly, the training envisioned is intersectoral and multidisciplinary, comprising partners who are experts in robotics, computer science, human-robot interaction and technology assessment, social and management sciences, and even architecture.

On the one hand, the training will provide students with all the skills expected in their fields of expertise; on the other hand, they will acquire transdisciplinary thinking, as well as complementary skills. In particular, students will be trained in the relevant areas in robotics, in methods for studying human-robot interaction and for incorporating social needs and societal implications into robot design from the start and pervading all levels of robotics engineering.

The topic is highly timely given the recent advances in safe robotic hardware. It implements the TU Wien mission statement "Technology for People" by uniting forces in response to the various challenges ahead of us, among them demographic change. The DC will give us the unique opportunity to create scientific excellence by training ten PhD students in a novel field. The doctoral studies comprise both scientific advances and additions to our teaching competence to excel in a novel transdisciplinary domain.

This broad coverage of the objectives is possible due to the collaboration of institutes from four faculties of TU Wien as well as a service department, the public library of TU Wien, which will provide parts of its facilities to serve as a public living laboratory for the members of the DC to run live demonstrations and experiments. The partners are renowned and recognised in their respective fields and in robotics, have proven to be active both nationally and internationally, and have successfully collaborated in several smaller endeavours. Collaborations range from workshops and seminars, joint contributions to projects, and events such as the "Lange Nacht der Roboter" or "Lange Nacht der Forschung", to jointly acquiring funding of half a million Euros for modern robot hardware.

A critical point is to offer future students the option to work with the best possible robot hardware. Relevant industrial and service robots range from modern compliant robot arms that are safe to operate near humans, through mobile manipulators and humanoid platforms such as Pepper, to full-sized humanoid robots (see Figure 2).

Figure 2: Robots safely operating next to humans (top left to bottom right): KUKA LWR robot arm safe to be handled by children; the HOBBIT robot with an older adult in their home during a three-week trial period; HOBBIT commanded by an older adult to pick up an object during these trials; Pepper interacting with children as part of the Crazy Robot project (FWF); a boy controlling Romeo to grasp an object using VR (Virtual Reality) glasses; collaborative worker assistance systems, Fraunhofer IAO, BMW Spartanburg; and windshield installation, Daimler AG.

The state-of-the-art robotic platforms needed to conduct the research in TrustRobots are available to the partners, who now want to take the chance to add fundamental research resources to currently running projects. The robots are safe to use near humans and provide different embodiments to study different aspects of social interaction, the robot's relation to humans, the space humans and robots live in, the relationships that form, including the eminent aspect of trust, and, beyond this, potential societal implications. Finally, and this is an important factor to consider, these state-of-the-art robotic systems provide us with a great opportunity to attract excellent students.