2017 Guest Speaker: Professor Katia Sycara
Carnegie Mellon University
Robotics Institute, School of Computer Science
Pittsburgh, Pennsylvania, US
Trust in Human Interaction with Robot Systems
As robotic platforms become cheaper and more reliable, they will increasingly interact autonomously with people across tasks ranging from service robots in the home or workplace to environmental exploration, search and rescue, and crisis response. In all these interactions, human trust in the autonomy is a crucial ingredient. To engender trust, robots must be social, in the sense of considering social norms in their domain decision making. Additionally, as these agents become more sophisticated and independent through learning and interaction, it is critical for their human counterparts to understand their behaviors, the reasoning process behind those behaviors, and the expected outcomes, so as to properly calibrate their trust in the systems and make appropriate decisions. In other words, for an agent's decisions to be intelligible to humans, the agent needs to be transparent. Developing effective ways for autonomous systems to be socially aware, trustworthy, and transparent faces multiple challenges, foremost that the notions of trust and transparency have no unique definitions in the literature, and that the role of social norms and their relation to trust is not well understood. Moreover, human cognitive limitations, algorithmic scalability, and the opacity of sophisticated algorithms pose additional serious technical difficulties regarding the amount and type of information the autonomous system should provide to the human for trust-based interaction.
In this talk, I will present some of our recent work on trust, transparency, and social norms. In particular, I will present our trust and transparency framework in the context of human interaction with autonomously coordinating robotic swarms, our first attempts at transparency in deep neural networks for reinforcement learning, and our work on social-norm-aware engineered systems.