Trust-Driven Human-Robot Interaction

In this project, we study how trust evolves as humans repeatedly interact with a robot recommendation system. We leverage quantitative models of trust to predict human behavior and to design interaction strategies that promote trust in the robot's recommendations.
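As an illustration of what such a quantitative trust model might look like, here is a minimal Python sketch assuming a scalar trust state that is updated linearly after each robot success or failure, with a logistic link from trust to the probability that the human accepts a recommendation. The gains alpha and beta and the logistic parameters are hypothetical, not values from this project.

```python
import math

def update_trust(trust, robot_succeeded, alpha=0.1, beta=0.2):
    """Linear gain/loss trust update, clipped to [0, 1].

    Assumed form: trust rises by alpha after a correct
    recommendation and falls by beta after an incorrect one.
    """
    delta = alpha if robot_succeeded else -beta
    return min(1.0, max(0.0, trust + delta))

def p_accept(trust, k=6.0, t0=0.5):
    """Logistic map from trust to the probability that the
    human follows the robot's recommendation (hypothetical)."""
    return 1.0 / (1.0 + math.exp(-k * (trust - t0)))

# Example: trust and acceptance probability over a short interaction.
trust = 0.5
for outcome in [True, True, False, True]:
    trust = update_trust(trust, outcome)
    print(f"trust={trust:.2f}, P(accept)={p_accept(trust):.2f}")
```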

We model the interaction as a trust-aware Markov decision process (trust-aware MDP) consisting of states, actions, a transition function, a reward function, and a human behavior model. Our main focus is to study how the choice of reward function affects trust and to develop trust-based models of human decision-making.
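A sketch of how these components might fit together, assuming a discretized trust level folded into the state and the reward function passed in as a parameter so that alternative reward designs can be compared; the names and conventions here are illustrative, not the project's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[int, int]   # (task phase, discretized trust level)
Action = str              # a robot recommendation

@dataclass
class TrustAwareMDP:
    states: List[State]
    actions: List[Action]
    # P(next_state | state, action), with trust dynamics
    # absorbed into the state transition
    transition: Callable[[State, Action], Dict[State, float]]
    # Swappable reward function, the design variable under study
    reward: Callable[[State, Action], float]
    # Human behavior model: P(human follows recommendation | state)
    human_accepts: Callable[[State], float]

    def expected_reward(self, s: State, a: Action) -> float:
        """Immediate reward weighted by acceptance (illustrative
        convention: a rejected recommendation yields zero reward)."""
        return self.human_accepts(s) * self.reward(s, a)
```

Folding the trust level into the state lets standard MDP planning reason about how each recommendation shapes future trust, which is what makes the effect of the reward design on trust amenable to study.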

Using Unreal Engine, we have developed a high-fidelity 3D environment that simulates a human-robot team performing an intelligence, surveillance, and reconnaissance (ISR) mission.

This project has led to one journal publication, one peer-reviewed conference paper, and one book chapter.

Selected Publications