Trust in Robots - Trusting Robots
(TrustRobots)


3 Thesis Topics

T1: Robots as Significant Others - Understanding Social Agency of Robots
T2: Trusting Robots need Joint Attention
T3: Body Language in Human-Robot Interaction
T4: Scene Understanding for Knowing about Objects and their Use
T5: Safe Human Robot Collaboration
T6: Skepticism to Overconfidence - Trust in Autonomous Robot Decision Making and Operation in Human-Robot Collaboration
T7: Division of Labour in Human-Robot-Interaction in Hybrid Manufacturing Settings
T8: Confidence in Decisions and Actions - Space as a Dimension of Trust in Social Robots
T9: The Interrelation of Safety Assessment and Trust in Uncertain and Dynamic Human-Robot Cooperation
T10: Robot Anthropomorphism, Trust, and Machine Transparency

3.1 Overview

Figure 4 gives an overview of the 10 PhD topics, which integrate all partners. All topics start from the human, who sets the needs and requirements (T1) for the fundamental technical research topics T2, T3, T4 and T5 as well as for the application-oriented topics T6, T7, T8 and T9, and they all culminate in topic T10, which addresses socio-cultural and societal consequences. The individual topics are also strongly interconnected, as can be seen from the green arrows.

The next section introduces the 10 proposed thesis topics; for each topic it describes the research problem, the aim of the dissertation, a first proposal for the methods to solve it, and, finally, the connections to other topics.



Figure 4: The ten PhD topics in relation to the research aspects of realising fundamental technical and application research for trust in robots and trusting robots (Figure 1). All topics respond to the need for safe social interaction with robots and lead to questions regarding the societal implications of robots close to humans.


 

3.2 The Doctoral Thesis Topics

 




T1: Robots as Significant Others - Understanding Social Agency of Robots (Köszegi, Gelautz)

Problem description

How does a machine such as a robot achieve social agency? That is, how do humans construct a "significant other" in their interaction with a robot? How will increased social interaction with robots impact the social order in interaction with this technology? With recent advancements in artificial intelligence and social robotics, which enable the design of technologies that engage in social interaction, these questions become more and more important.

Previous research in this area has focused predominantly on a robot's (anthropomorphic) appearance, its movements (such as gaze and gestures), and its perception of its interlocutors' communication (content, prosody, visual orientation, facial expression) (Otterbacher & Talias, 2017; Thunberg, Thellman & Ziemke, 2017; Davenport & Kirby, 2016; Gray & Wegner, 2012). This dissertation aims at a better understanding of the socio-cultural implications of human-robot interaction on the human side. Several studies have already demonstrated that humans are willing to attribute socio-emotional states to robots (e.g. Sung, Guo, Grinter & Christensen, 2007; Duffy, 2003; Bumby & Dautenhahn, 1999). Furthermore, they are also willing to show empathy and emotions towards (social) robots (e.g. Kahn and colleagues; Scopelliti, Giuliani, & Fornara, 2005). Yet, to date, we have only a rudimentary understanding of how humans construct social order in these hybrid interactions (social interactions between human and robot) and how fundamental aspects of social encounters, such as relationship, trust, face-work or the identity of humans, are affected by these hybrid encounters.

Aim of dissertation (technical & social relevance)

The dissertation investigates the interaction order in encounters between humans and social robots by applying Goffman's frame and interaction analysis. It is aimed at understanding whether and how social robots are constructed by humans as "significant others" (Goffman 1967). It will give central attention to accorded and ritual forms of interaction such as face-work and performances. This will allow us to learn and understand the implicit interaction rules and orders humans apply in human-robot interaction. On the one hand, this will contribute to the analysis of the socio-cultural implications of the pervasiveness of social robots in society. On the other hand, if robots are to be perceived as trusted and reliable companions, collaborators and interaction partners, they have to perform social interaction in a manner that follows explicit and implicit interaction rules and orders. Thus the dissertation will also provide valuable insights to improve the design of social robots.

Method

The research question will be examined in controlled laboratory experiments using the Wizard of Oz methodology (Dahlbäck, Jönsson, & Ahrenberg, 1993); alternatively, we may use robots with sufficient AI in very controlled social settings. Subjects will be exposed to different laboratory conditions in which they experience, e.g., a standard team situation "idea generation with brainstorming" with robots and other humans (confederates). Within the different experimental settings, the interaction order is broken by either the robot or a confederate (e.g. the robot or the confederate comments negatively on the ideas of others). This allows a systematic and in-depth analysis of subjects' repair behaviours. Furthermore, it can be analysed how exactly breaks of the interaction order impact subsequent human-robot interaction and subjects' experienced trust in the robot.

Link to other topics

This thesis is closely related to the work of T2 (joint attention) and T3 (body language), but also to T7 (division of labour: teamwork settings that require social interaction) and T10 (robot anthropomorphism and social interaction).

 






 




T2: Trusting Robots need Joint Attention (Vincze, Weiss/Fitzgerald)

Problem description

Communication relies on a mutual understanding of intentions. One of the key abilities of humans is to obtain intentional information from motion and gesture. This capability to exploit attention for rapidly obtaining the relevant task (intention) information is reasonably well understood. For example, when communicating, gaze is essential to create joint attention and understanding. Furthermore, humans can deploy the learned abstracted model and generalise to a new setting with new people and objects. In this way we learn what others intend to do, which is a large part of mutual trust. This thesis will study how these mechanisms can be implemented on robots. The primary goal is that the robot understands the situation and the active players. The second goal is to fluently present the robot as part of this situation. This supports other aspects of human-robot interaction, such as fluently grasping the context and actors of an ongoing discourse.

Aim of work (technical & social relevance)

In robotics, these attention tasks are pre-programmed or circumvented altogether with fixed setups, for example by placing the user in a fixed position relative to the robot. In this project we want to fill this gap and equip a robot with the cognitive ability to learn task-related gaze and action schemas. Hence, we set out to study the interplay between perception and task knowledge for task-driven gaze and localisation. While attention mechanisms have been closely studied, so far there has been little work on linking them with task descriptions or on deploying them in a flexible manner to encompass a set of tasks and task sequences. Furthermore, we abstract essential object characteristics from multimodal sensory data to obtain abstracted models of objects that then generalise to new settings. We will show the feasibility of this approach by using the contextual knowledge description to succeed with different tasks (filling a cup with milk or making tea) in different settings (the two kitchens of the two proposing partners).

Experience for this topic comes from work with the Haus der Barmherzigkeit on robots that assist elderly persons to stay longer at home. In a trial with 49 users, an interface in which the robot's looking direction indicated what it is doing and intends to do next greatly increased joint understanding and created a feeling of safety for the user far beyond the purely physical safety provided by following robotic norms such as ISO 13482.

Method

Joint attention will exploit neuro-physiological findings in humans to implement attention mechanisms, using continuous perception methods to decide where to look given a task as well as bottom-up salient percepts. Learning from humans, who have a strong mental model of tasks (as shown, for example, in Michael Land's seminal study of tea making), will enable the creation of episodic models of attention skills. These skills will be the building blocks for the three main phases of joint attention: initiating joint attention, responding to joint attention, and ensuring continuous joint attention. The goal is to build up a general model such that the elements of these phases can be reused in different scenarios. This will allow novel interactions to be created more rapidly and will help in the application scenarios.
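The combination of task-driven (top-down) gaze with bottom-up saliency described above can be sketched as follows; the locations, scores and weighting are illustrative assumptions for this sketch, not project code.

```python
# Hypothetical sketch: pick a gaze target by weighting a bottom-up
# saliency map with a top-down task prior over known object locations.

def gaze_target(saliency, task_prior, alpha=0.6):
    """Return the location with the highest combined attention score.

    saliency   -- dict mapping location -> bottom-up salience in [0, 1]
    task_prior -- dict mapping location -> task relevance in [0, 1]
    alpha      -- weight of the task-driven (top-down) component
    """
    def score(loc):
        return alpha * task_prior.get(loc, 0.0) + (1 - alpha) * saliency.get(loc, 0.0)
    return max(saliency, key=score)

# "Making tea": the kettle is task-relevant even if visually unremarkable.
saliency = {"kettle": 0.2, "window": 0.9, "cup": 0.4}
task_prior = {"kettle": 1.0, "cup": 0.7}
print(gaze_target(saliency, task_prior))  # -> kettle, despite its low salience
```

With a strong task prior the robot fixates the task-relevant object (the kettle) rather than the visually most salient one (the window), which mirrors the task-driven gaze behaviour observed in humans.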

Link to other topics

T1 (human robot interaction settings and factors), T3 (human motion, gesture and eye gaze), T5 (awareness of human, safety), T7 and T9 (teamwork proactive attention, safety), T10 (robot anthropomorphism, human/robot gaze and interaction)

 






 




T3: Body Language in Human-Robot Interaction (Gelautz, Schürer)

Problem description

Nonverbal communication based on natural body language is a powerful means of conveying and perceiving human emotions. In human-robot interaction, coherence between different communication channels such as voice, gaze and body gestures is an important requirement for mutual understanding and trustful collaboration. On the one hand, robots with specific gestures and motion patterns that embody human feelings - such as emotions, moods and attitudes - increase trust in human-robot interaction. On the other hand, robotic agents that are able to interpret the intentions and emotions of humans in their surroundings can provide improved situation-aware reactions. The task of capturing and analysing human posture and motion patterns with computer vision methods for subsequent interpretation in terms of emotional cues poses challenging research questions that have been addressed in only a few studies so far.

Aim of work (technical & social relevance)

The goal of this thesis is to develop computer vision and machine learning methods that are able to recognize a person's affective state (including emotions such as fear, anger or happiness) based on body posture and movement in 3D space. Technically, our research will cover topics of 3D reconstruction from different types of RGB-D sensors (stereo cameras, Kinect, etc.) along with the detection of motion patterns and segmentation of body segments in dynamic scenes. The work will face technical challenges due to subtleties of human pose and motion patterns that can be associated with emotional expression. The social relevance of the work is motivated by the importance of natural body language in trustworthy and reliable bi-directional human-robot communication. The domain transfer between technical features such as video segmentation results and motion trajectories on the one hand, and various categories of affective states that are relevant for social human-robot interaction on the other hand, opens up a variety of transdisciplinary research questions that will benefit from the multidisciplinary background of the Doctoral College's consortium.

Method

Our work will draw upon research from diverse fields, including computer vision and machine learning algorithms for the segmentation and labelling of body segments, previous work on the analysis and annotation of human dance (where motions and emotions are oftentimes coupled), and the specific requirements of human-robot interaction. A suitable test data set will be compiled to serve as ground truth for the evaluation of the developed computer vision and classification algorithms. The quality of the correlation between body movement and emotion expression will be assessed by domain experts (from social/movement sciences) and user studies. We plan to transfer selected movement patterns onto robotic hardware in the Living Lab for demonstration and further experimentation.
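As a toy illustration of the classification step only (not the actual pipeline, which will rely on the compiled ground-truth data set and learned models), a nearest-centroid rule over hand-picked posture features might look like this; all feature dimensions and centroid values are invented for illustration.

```python
# Illustrative sketch: classify an affective state from simple
# body-posture features with a nearest-centroid rule.
import math

# Hypothetical class centroids over (body expansion, head pitch, motion energy):
CENTROIDS = {
    "happiness": (0.8, 0.1, 0.7),   # open posture, raised head, lively motion
    "fear":      (0.2, -0.3, 0.6),  # contracted posture, lowered head
    "anger":     (0.6, -0.1, 0.9),  # expanded posture, fast movements
}

def classify_posture(features):
    """Return the affective label whose centroid is closest (Euclidean)."""
    def dist(label):
        return math.dist(features, CENTROIDS[label])
    return min(CENTROIDS, key=dist)

print(classify_posture((0.75, 0.05, 0.65)))  # -> happiness
```

In the actual thesis such hand-crafted centroids would be replaced by features and classifiers learned from the annotated test data set, validated by the domain experts mentioned above.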

Link to other topics

T1 (empathy and emotion in human-robot communication), T2 (robot motion, gesture and eye gaze), T4 (relation to scene), T8 (confidence in decisions and actions), T10 (embodiment and robot anthropomorphism).

 






 




T4: Scene Understanding for Knowing about Objects and their Use (Vincze, Schlund)

Problem description

Humans will have much less control over a robot than over a handheld device. Although robots get safer and more capable every day, it seems rather terrifying to hand over control of daily aspects of life to a robot. For example, a robot keeps track of all items and belongings in the home where it is placed. Will humans accept that robots do not forget? Alternatively, will humans forget, since they can ask the robot at any time, as happened with telephone numbers?

Aim of work (technical & social relevance)

Technically, there is a need to study the classification of all objects in a given setting, starting from the room structure over items of furniture and larger objects, which then act as support structures for placing the multitude of items and objects we host at home. This comprises several challenges for which first methods have only recently been found. Learning from large databases aids in learning many object classes, though it needs to be extended to everyday settings, where these methods have so far performed poorly. This is largely attributed to methods working on single detections without exploiting any context the environment presents, e.g., the aforementioned support structures. The social aspect is how humans react to an omnipresent and omniscient robot. Actually, in the near future it cannot be expected that robots will perfectly observe all items in a household. They will observe continuously, but a handheld slipping into a pocket will be gone for the robot as well. Hence, an interesting aspect to study is the interplay between degrees of knowing about things and the lack of a response to requests. This is unlike handhelds, where information is always available and the search is performed by humans, though it may be misleading. The related research question is whether these behaviours translate to robots in our daily homes.

Method

The use of context and affordances has recently been identified as a key enabler for progress towards better object recognition in realistic settings. Since learned networks are not yet able to incorporate such structural information, we propose a joint hypothesise-and-verify approach instead. This has proven to be significantly stronger in 3D object dataset challenges for both laser and RGB-D camera data (Aldoma, 2016). We link the perceived structure of objects to potential actions a robot (or human) can execute with them. The strength of neural networks is exploited to learn a large number of objects and classes and to create hypotheses. A typical example is grasping an object, where the shape affords certain grasp types that vary depending on the object category and the task (e.g., different grasps for cup transfer vs. drinking from a cup). Furthermore, so-called second-order affordances specify what the environment affords to do with an object, e.g., placing the object on or in another object given constraints on object size, orientation, and other properties. The goal is to study how affordance-based object classes can be formulated such that the perceived object structure is related to the robot's actions and task. We will exploit recent RGB-D (colour and depth) cameras to obtain object appearance and shape information that allows us to build clusters of affordance-related features that will lead to object classes.

The motivation for selecting this approach is the capability to generalise the obtained model to novel situations, where the robot encounters objects not seen before. Based on the perceived object structure, a matching with the affordance-related features becomes feasible, which allows us, first, to categorise the object and, second, to perform an action on the object such as grasping it. The advantage is that in case our model fails to generalise successfully to a new object, the system can generate actions that provide relevant information about the new object, e.g., pushing the hypothesised objects to disambiguate conflicting visual hypotheses.
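The hypothesise-and-verify loop described above can be roughly sketched as follows; the detector stub, the object classes and the affordance checks are all illustrative assumptions, standing in for the learned network and the RGB-D-based structure analysis.

```python
# Minimal hypothesise-and-verify sketch: a (stubbed) learned detector
# proposes class hypotheses from appearance; each hypothesis is then
# verified against simple affordance constraints from perceived 3D shape.

def propose(appearance):
    """Stub for a learned network: returns (class, score) hypotheses."""
    return appearance["hypotheses"]

def verify(hypothesis, shape):
    """Check that the perceived shape affords what the class requires."""
    cls, _ = hypothesis
    if cls == "cup":        # a cup must be graspable and concave
        return shape["graspable"] and shape["concave"]
    if cls == "table":      # a table must offer a flat support surface
        return shape["flat_top"]
    return False

def recognise(appearance, shape):
    verified = [h for h in propose(appearance) if verify(h, shape)]
    if not verified:
        return None         # trigger an exploratory action (e.g. pushing) instead
    return max(verified, key=lambda h: h[1])[0]

obs_appearance = {"hypotheses": [("cup", 0.7), ("table", 0.9)]}
obs_shape = {"graspable": True, "concave": True, "flat_top": False}
print(recognise(obs_appearance, obs_shape))  # -> cup ("table" fails verification)
```

Note how the higher-scoring "table" hypothesis is rejected because the perceived shape does not afford support, and how an empty verified set maps to the disambiguating action mentioned above.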

Link to other topics

T2 and T3 (human motion, gesture and eye gaze in relation to objects/scene), T5 (self-awareness to protect from collision), T6 and T7 (target objects and relation to scene for human robot collaboration)

 






 




T5: Safe Human Robot Collaboration (Kugi, Schlund)

Problem description

In recent years, the barriers between robots and humans have been coming down and a new generation of robots is designed to closely collaborate with humans. Typical applications of collaborative manipulation in industry involve assembly, grasping, material shaping, and load sharing tasks. The ultimate goal is to provide humans with a highly flexible tool in the form of a cognitive collaborative robot. In this context, a safe human-machine interaction is a crucial prerequisite for this technology. Thus, there is a need to safely coordinate and orchestrate the motion of the robots involved based on an understanding and anticipation of the human's intention in the shared workspace and the tasks to be performed. A showcase task is the collaboration of a human and a robot in wrapping a textile or thin paper sheet over different cylindrical surfaces, where the right amount of tension forces has to be applied (high enough to avoid sagging or wrinkles and low enough to avoid tearing).

Aim of work (technical & social relevance)

Human-robot coordination requires closely integrating the detection and classification of objects, perceiving the human and her intention and actions, and then creating adequate robot actions. Moreover, robot actions need to be force- and moment-sensitive and achieve the requested positions while planning paths and avoiding collisions and movements in restricted areas. Robust and safe robot motion planning has to rely on an adequate fusion of all available sensor information (RGB-D cameras, 6D force sensors, generalized robot coordinates and velocities) and on the cognitive computing layer, which evaluates gestures and motions of the human, recognizes body posture and movement, and gives an affordance-based understanding of the objects to be handled. In particular, special attention must be paid to the real-time capability of the underlying motion planning and control algorithms. Clearly, the acceptance of robots collaborating with humans can only be attained if the robot performs only actions that are evident and intuitively comprehensible to the human.

Method

The motion control approach for the problem at hand has to be able to control positions and contact forces simultaneously and to react to changes in the environment and human actions. In the subordinate control layer, we will pursue a combination of manifold stabilization and force/compliance control. Manifold stabilization aims at stabilizing submanifolds defined in the output space of a dynamical system without any a priori time parametrization, and it allows us to exactly linearize and separate the dynamics in the tangential and orthogonal directions to a manifold, which can be a one-dimensional path or a two-dimensional surface in robotic applications (Bischof et al., 2017). Hence, position, compliance, and force control approaches can be applied separately to the dynamics in the tangential and orthogonal directions. With the references in the orthogonal direction, a time-variant manifold can be generated. This makes it possible to properly react to changes in the environment and human actions and to prevent collisions.
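The separation into tangential and orthogonal dynamics can be illustrated in a planar toy setting; this sketch only shows the geometric decomposition of a position error relative to a path, not the actual controller from (Bischof et al., 2017).

```python
# Toy illustration of the tangential/orthogonal split behind manifold
# stabilisation: the position error is decomposed into a component along
# the path tangent (handled by tangential motion control) and a component
# orthogonal to the path (driven to zero by compliance/force control).
import math

def split_error(error, tangent):
    """Decompose a 2-D error vector into tangential and orthogonal parts."""
    norm = math.hypot(*tangent)
    t = (tangent[0] / norm, tangent[1] / norm)      # unit tangent
    along = error[0] * t[0] + error[1] * t[1]       # scalar projection
    tangential = (along * t[0], along * t[1])
    orthogonal = (error[0] - tangential[0], error[1] - tangential[1])
    return tangential, orthogonal

# Error relative to a path running along the x-axis:
tang, orth = split_error(error=(0.3, 0.1), tangent=(1.0, 0.0))
print(tang, orth)  # (0.3, 0.0) along the path; (0.0, 0.1) pushed to zero
```

Because the two components are decoupled, a position controller can act on the tangential part while a compliance or force controller independently regulates the orthogonal part, which is the property the manifold stabilization approach exploits.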

Moreover, if it is possible to perceive and at least partially anticipate the human's intentions and actions, this information and additional environmental constraints can be systematically taken into account in the robot motion planning strategy, which consists of the time-variant manifold and the motion along the manifold. Here, we plan to employ a receding horizon control approach, see, e.g., (Du Toit and Burdick, 2012), in combination with so-called anytime algorithms to compute an estimate of the future safety tube around the trajectory. These algorithms can be interrupted at any time and are guaranteed to provide useful, though not necessarily optimal, results. In this way, real-time capability can be ensured, and we provide a very flexible structure for the superordinate cognitive decision layer.

Link to other topics

T2 and T3 (human motion, gesture and gaze in relation to objects/scene), T4 (scene understanding and affordance-based object classification), T6 and T7 (target objects and relation to scene for human robot collaboration)

 






 




T6: Skepticism to Overconfidence - Trust in Autonomous Robot Decision Making and Operation in Human-Robot Collaboration (Filzmoser, Purgathofer)

Problem description

Trust is a necessity for efficient and effective collaboration between humans. A lack of trust can result in unnecessary and therefore inefficient control steps, rework or caution. On the other hand, exaggerated trust can lead to poor and therefore ineffective results due to a lack of control or groupthink. The industrial use of robots is at the moment - with some exceptions - basically restricted to a clear separation of human and robot tasks, where the latter are strictly programmed and conducted in an isolated workspace. However, with the increasing level of automation and digitalization, collaboration scenarios between humans and autonomous robots gain importance and are on the verge of realization.

Aim of the Work (technical & social relevance)

Currently, we know little about trust in the decision making and operation of autonomous robots collaborating with humans. Confidence or even overconfidence, due to robots' computational capacity or their unbiased execution of predefined functionality, is possible. So is scepticism, due to the impossibility of plausibility checks on autonomous robot decision making and operation, the mere distinctness of humans and robots, the lack of access to the underlying big databases used for decision making, or the lack of understanding of computational mechanisms like data mining (e.g. in automated selection), artificial neural networks, etc.

This lack of insight into trust in autonomous robots' decision making and operation hinders the design of effective and efficient human-robot collaboration. The research question of this topic therefore is: "What influences the level of trust in autonomous robot decision making and operation?"

Method

A viable method to address this research question is social experiments on human-robot interaction, comparable to the conformity experiments conducted by Asch, with different tasks and settings adapted to the research question. Interaction scenarios with humans can be used as a control condition, which is compared with settings where human participants are replaced by robots. The answers of the robots could be scripted/programmed or - as in Wizard of Oz settings - the robots could be tele-operated to facilitate the realization of the experiments (in the beginning, with the aim of introducing the technical developments into the cycle starting from the mid-term of the DC). The goal is to operate with fully autonomous robots towards the end of the DC.

Links to other topics

Due to the focus on collaboration in joint decision-making tasks, there is a close relation to Topic 1 on social agency. Furthermore, there are close links to Topic 8, which follows similar research interests in the particular area of social spaces. Given the joint interest in perceived robot reasoning, there are also synergies with Topic 9.

 






 




T7: Division of Labour in Human-Robot-Interaction in Hybrid Manufacturing Settings (Schlund, Köszegi)

Problem description

Technological advances, recent price drops and positive experiences with the industrial implementation of collaborative robotics have led to a widespread interest in lightweight robot arms in manufacturing processes. Whereas some processes tend to be completely substituted by advances in robotics (loading, unloading, simple logistics processes), manufacturing trends such as challenging lead-time requirements, demand volatility and decreasing lot sizes drive the need for true collaboration between workers and robots (working at the same time within the same workspace on the same workpiece). Current research and industrial practice largely focus on the implementation of first demonstrators and on (physical) safety issues. As the state of the art and the experience in safe human-robot collaboration advance, future application scenarios will focus on the division of labour and the optimal way of harnessing the specific advantages of human-robot interaction.

Aim of work (technical & social relevance)

While trust in safe, helpful and productive technology is essential for the first successful implementations of collaborative robotics in manufacturing, a new division of labour determines the long-term impact. The proposed research aims at the optimal division of labour between humans and robots in manufacturing processes, which is highly dependent on the specific application scenario. For a successful future of human-robot collaboration in manufacturing it is essential that the relevant stakeholders in manufacturing companies and society trust and accept the outcome of the specific configuration of the division of labour.

Method

An in-depth analysis of current collaborative robots and their technological skills, opportunities and risks in a collaborative work environment serves as a starting point for a comparison of robotic and human skills in manufacturing. Based on the concept of 'engineering bottlenecks' (skill sets where human skills outperform technology), future manufacturing scenarios are compared in terms of the best fit of tasks to the capabilities of robots and humans. Besides physical tasks (such as assembly tasks), a focus is placed on cognitive tasks and decision-making, as collaborative systems need to adapt quickly to changing environments. A subsequent multi-perspective evaluation of the case-specific configuration of the division of labour between humans and robots combines economic criteria with ergonomics and social aspects such as motivation, task identity, qualification and de-qualification, problem-solving skills, relevance of work, impact on employment, and self-confidence in random high-risk situations.

Link to other topics

There is a strong interrelation with T4, since the division of labour is an extension of the classification and the perceived structure of objects that is used to determine the potential actions robots and humans can execute. There is also a strong connection to T5, since the limiting factors in the division of labour are often not technological or social but determined by safety concerns and potential liability issues.

 






 




T8: Confidence in Decisions and Actions - Space as a Dimension of Trust in Social Robots (Köszegi)

Problem description

At present, humanoid robots are penetrating areas of everyday life and thus also social spaces. Robotic systems that are supposed to operate in the human environment are characterized by the automatic execution of decisions and actions by means of behaviours in social space. This topic examines living space as a dimension of trust in life with social robots within Ambient Assisted Living (AAL) environments of various technological configurations. Confidence in the decisions and actions of technical assistance systems is essential for their acceptance. Space is understood here mainly as a holistic substrate of meanings and their relative positioning. The introduction of robotic technology into this substrate is about to transform human behaviour concerning the creation and use of space. Like technology, spaces are never neutral, but always co-determined by all kinds of hegemonies. Hence, it is important to determine the properties of this mutual relation that build up human trust towards humanoid robots in assistive life situations. In addition to technical challenges, residential environments place specific demands on support systems. Private living spaces raise in particular the aspects of spatial proximity, temporal exposure, privacy and intimacy. Which spatial structures, atmospheres and technological configurations support or hinder building and maintaining trust in social robots? What can designers of humanoid robots as well as architects do to allow trust to unfold, in terms of hardware, software, shape, form and behaviour? What are the boundary conditions under which habitats and humanoid robots will adapt to each other? Is it possible to develop aspects of robot ethics bottom-up and, if so, how can biases be avoided?

Aim of work (technical & social relevance)

Through their autonomy in behaviour and decision making, robots possess an agency of action which in turn affects everything and everyone that makes up the specific social space of an assistive situation. The aim of the project is to provide guidelines and prototypes for the design of both the technical and the social aspects of humanoid robotics. People, technical objects and space form a socio-technical ensemble whose mutual relationships require differentiated analysis. Research will be done towards a framework of design parameters unfolding the interrelations between social space and social robots to foster trust. Further aims are the development of approaches to, and prototypes for, technical solutions that allow for a variety of meaningful social contexts of socio-technical assistance, and a contribution to the discourse on techno-ethical challenges by means of applications and prototypes.

Method

Using the methods of discourse and spatial analysis as well as technology assessment and prototyping (hard and/or soft), spatial preconditions and structures are examined with regard to their influence on the development and maintenance of trust in social robots. Starting from research on trust in technological systems, pilot projects with robots in the field of AAL will be analysed with regard to social space as a dimension of trust. Based on this analysis, concepts and prototypical applications for the integration of social robots in living environments will be developed on an architectural and spatial level. The applications will be implemented and experimentally examined in the PhD programme's Living Lab.

Link to other topics

T1 (achieve social agency), T2 (mutual understanding of intentions), T3 (Nonverbal communication, provide improved situation-aware reactions), T4 (robots do not forget, study the classification of all objects), T5 (cognitive collaborative robot).

 






 




T9: The Interrelation of Safety Assessment and Trust in Uncertain and Dynamic Human-Robot Cooperation (Weiss/Fitzgerald, Kugi)

Problem description

Industrial robots, designed to increase productivity in automated tasks, have a long history in the manufacturing industry. In this use context, maintenance engineers perform the pre-programming of the line and intervene in case of emergency, and there is no cooperation between the human and the system. In other words, traditional industrial robots are designed and programmed for static environments, and the configuration of the robot is invisible to the operator, who works beside, but not with, the robot. Due to this type of deployment and the primitive sensory capabilities of existing robotic platforms, robots were run in "work cells" free from interference by the environment and the operator. One of the major challenges in Industry 4.0 scenarios is to enable a more powerful human-robot relationship, which, however, needs to ensure safe and reliable cooperation between robots and operators. Especially in short-term interactions such as turn taking (an operator performs some work, then the robot continues, then the human takes over again, etc.), it is still an open issue what the ideal trusting and safe relationship between the human and the intelligent (autonomous) system should look like.

Aim of work (technical & social relevance)

This thesis should investigate novel safety assessment and motion planning methods for human-robot cooperation in dynamic and uncertain tasks. The safety assessment approaches integrated into a motion planning algorithm should then be evaluated for their impact on end-user trust in cooperation. Technically, this thesis will cover major aspects of the safety assessment of robot trajectories in collaboration. Socially, studying the interrelation of machine safety and user-centred perceived trust in robotic systems is crucial for successful future Industry 4.0 scenarios. Only if operators trust robotic systems will sustainable "shoulder-to-shoulder" cooperation in factories be possible in the future.

Method

Given the current state of the robot, a representation of its environment and an intended trajectory, the goal is to compute the risk that the robot will collide with any object (including the operator) in the environment when executing this trajectory to contribute to the shared task. This will require a robot perception approach and motion prediction. The safety assessment must take into account the robot's dynamics and the future behaviour of objects in the environment, and must reason over an infinite time horizon. For trust assessment, a user-centred and exploratory design approach will be used to define quality criteria for trustworthy cooperation. Different cooperation scenarios will be story-boarded and prototyped. A series of quantitative and qualitative evaluation studies will be performed in order to identify factors impacting trustworthiness over time from an operator perspective.
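To make the risk computation concrete, the following is a minimal sketch of one possible formulation: a Monte Carlo estimate of collision probability along a finite set of trajectory waypoints. All names, the constant-velocity obstacle model, the Gaussian position noise, and the 2D simplification are illustrative assumptions for this sketch only; the thesis's actual method would have to incorporate the full robot dynamics, learned motion prediction, and reasoning beyond a truncated horizon.

```python
import math
import random

def predict_obstacle(pos, vel, t):
    """Constant-velocity prediction of an obstacle's 2D position at time t
    (an assumption of this sketch; real motion prediction is a research topic)."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def collision_risk(trajectory, obstacles, safety_radius=0.5,
                   pos_noise=0.1, samples=1000, seed=42):
    """Monte Carlo estimate of the probability that a robot following
    `trajectory` (a list of (t, x, y) waypoints) comes within
    `safety_radius` of any predicted obstacle.

    `obstacles` is a list of ((x, y), (vx, vy)) tuples; perception
    uncertainty is modelled crudely as Gaussian noise on each predicted
    obstacle position."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        collided = False
        for (t, x, y) in trajectory:
            for (pos, vel) in obstacles:
                ox, oy = predict_obstacle(pos, vel, t)
                # Sample one realisation of the obstacle-position uncertainty.
                ox += rng.gauss(0.0, pos_noise)
                oy += rng.gauss(0.0, pos_noise)
                if math.hypot(x - ox, y - oy) < safety_radius:
                    collided = True
                    break
            if collided:
                break
        if collided:
            hits += 1
    return hits / samples
```

A planner could use such an estimate to reject or re-plan trajectories whose risk exceeds a safety threshold; for example, a waypoint path passing far from all predicted obstacle positions yields a risk near 0, while one passing through an obstacle's predicted position yields a risk near 1.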

Link to other topics

T1 (achieve social agency), T2 (mutual understanding of intentions), T3 (Nonverbal communication, provide improved situation-aware reactions), T10 (creating a useful mental model of robot reasoning)

 






 




T10: Robot Anthropomorphism, Trust, and Machine Transparency (Purgathofer, Vincze)

Problem description

Siri, Alexa and the other AI assistants are known to make little puns and jokes when you interact with them. Humanoid robots such as Pepper or Nao rely on a high degree of anthropomorphism in their appearance and behaviour. However, does a design focussing on human-likeness come at a cost? Since the appearance and behaviour design of embodied and non-embodied agents can have an impact on the human-technology relationship, it is important to study their effect. On the one hand, initial studies have indicated that highly human-like robots are perceived as less trustworthy and empathetic than more machine-like robots. On the other hand, faulty behaviour of more machine-like robots reduces their trustworthiness. There seems to be an interaction triangle between anthropomorphism (in appearance and behaviour), machine transparency (end-user understanding of actual machine capacities), and trustworthiness that needs further exploration. Both trust and empathy can affect human-agent interaction and are not completely independent of each other. The science fiction movie Ex Machina makes a strong statement about human susceptibility to manipulation through emotion. A critical reflection and discourse with respect to ethical considerations is therefore essential for this PhD topic.

Aim of work (technical & social relevance)

Human-like agents and robots have great potential to become human companions. Advancements in Human-Robot Interaction research enable us to build adaptive systems that are capable of interacting with humans in their daily lives and assisting them in various tasks. However, it is crucial to study the role of the appearance and behaviour of these systems in repeated interactions. The aim of the proposed thesis is to investigate, through practical research, the "anthropomorphism, transparency, trust" triangle. The following two main research questions should be addressed: How do a system's behaviour and level of human-like appearance affect trustworthiness and perceived empathy? Does the effect of a system's appearance and behaviour on its trustworthiness and perceived empathy change over time (i.e. across repeated interactions)? Furthermore, ethical implications for the design of human-like systems should be derived.

Method

A series of exploratory design studies will be conducted in order to determine the effect of various types of human-like designed systems, such as AI assistants and humanoid service robots, on trustworthiness, perceived empathy and machine transparency. Subsequently, follow-up studies will explore the potential change over time. The expected outcome is to use the gathered empirical data for a thick description of the hypothesised "anthropomorphism, transparency, trust" triangle and to derive ethical design guidelines for future adaptive systems using human-like cues. The research should be performed from a user-centred perspective, taking into account the personality traits of end users as well as the task/application context of the technology.

Link to other topics

T1 (understanding social agency), T3 (body language), T8 (space as dimension of trust)


