Publications

You can also find my articles on my Google Scholar profile.

Journal Articles


Clustering Trust Dynamics in a Human-Robot Sequential Decision-Making Task

Published in IEEE Robotics and Automation Letters, 2022

In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team wherein the human agent’s trust in the robotic agent depends on the reward obtained by the team. We model the problem as a finite-horizon Markov Decision Process with the human’s trust in the robot as a state variable. We develop a reward-based performance metric to drive the trust update model, allowing the robotic agent to make trust-aware recommendations… Read more

Recommended citation: S. Bhat, J. B. Lyons, C. Shi and X. J. Yang, "Clustering Trust Dynamics in a Human-Robot Sequential Decision-Making Task," in IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 8815-8822, Oct. 2022, doi: 10.1109/LRA.2022.3188902.
Download Paper | Download Slides
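
To make the formulation in the abstract above concrete, here is a minimal sketch, not the paper’s actual model: a finite-horizon MDP solved by backward induction in which a discretized trust level is part of the state. The trust levels, actions, transition probabilities, and rewards below are invented placeholders.

```python
# Minimal sketch (illustration only, not the paper's model): a finite-horizon MDP
# whose state is a discretized human-trust level, solved by backward induction.
# Trust levels, actions, dynamics, and rewards are placeholder assumptions.
HORIZON = 5
TRUST_LEVELS = range(0, 4)          # 0 = low trust ... 3 = high trust
ACTIONS = ("recommend_safe", "recommend_risky")

def step(trust, action):
    """Return (probability, next_trust, reward) tuples under assumed dynamics."""
    if action == "recommend_safe":
        # Safe recommendation: modest reward, trust drifts up slightly.
        return [(1.0, min(trust + 1, 3), 1.0)]
    # Risky recommendation: succeeds more often when trust is already high.
    p_success = 0.4 + 0.15 * trust
    return [
        (p_success, min(trust + 1, 3), 3.0),       # success raises trust
        (1 - p_success, max(trust - 1, 0), -2.0),  # failure lowers trust
    ]

# Backward induction: V[t][s] is the best expected return from time t onward.
V = [{s: 0.0 for s in TRUST_LEVELS} for _ in range(HORIZON + 1)]
policy = [{} for _ in range(HORIZON)]
for t in reversed(range(HORIZON)):
    for s in TRUST_LEVELS:
        q = {a: sum(p * (r + V[t + 1][s2]) for p, s2, r in step(s, a))
             for a in ACTIONS}
        policy[t][s] = max(q, key=q.get)
        V[t][s] = q[policy[t][s]]

print(policy[0])   # recommendation at the first decision point, per trust level
```

The point the sketch illustrates is only that the optimal recommendation can differ across trust levels, which is what makes such a policy “trust-aware”.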

Conference Papers


Identifying Worker Motion Through a Manufacturing Plant: A Finite Automaton Model

Published in the 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2024

Autonomous Guided Vehicles (AGVs) are becoming increasingly common in industrial environments to transport heavy equipment around warehouses. Within the idea of Industry 5.0, these AGVs are expected to work alongside humans in the same shared workspace. To enable smooth and trustworthy interaction between workers and AGVs, the AGVs must be able to model the workers’ behavior and plan their trajectories… Read more

Recommended citation: S. Yang et al., "Identifying Worker Motion Through a Manufacturing Plant: A Finite Automaton Model," 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pasadena, CA, USA, 2024, pp. 1970-1977, doi: 10.1109/RO-MAN60168.2024.10731360.
Download Paper | Download Slides
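
As a toy illustration of the finite automaton idea in the abstract above (not the model learned in the paper), the sketch below encodes worker motion as a deterministic automaton whose states are plant zones and whose inputs are observed worker events; all zone and event names are hypothetical.

```python
# Toy sketch (illustration only, not the paper's learned model): a deterministic
# finite automaton over plant zones, driven by observed worker events.
# Zone names, events, and transitions below are invented placeholders.
TRANSITIONS = {
    ("storage", "pick_part"): "assembly",
    ("assembly", "finish_job"): "inspection",
    ("assembly", "fetch_tool"): "storage",
    ("inspection", "pass"): "shipping",
    ("inspection", "fail"): "assembly",
}

def run_automaton(start, observations):
    """Replay observed events; None means the sequence is inconsistent with the model."""
    state = start
    trace = [state]
    for obs in observations:
        state = TRANSITIONS.get((state, obs))
        if state is None:
            return None, trace
        trace.append(state)
    return state, trace

final, trace = run_automaton("storage", ["pick_part", "finish_job", "pass"])
print(final, trace)
```

An AGV planner could, for instance, replay recent observations through such a model to anticipate which zone a worker is heading toward and plan its trajectory around it.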

Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes

Published in ACM/IEEE International Conference on Human-Robot Interaction, 2024

This paper examines the effect of real-time, personalized alignment of a robot’s reward function to the human’s values on trust and team performance. We present and compare three distinct robot interaction strategies: a non-learner strategy where the robot presumes the human’s reward function mirrors its own; a non-adaptive-learner strategy in which the robot learns the human’s reward function for trust estimation and human behavior modeling, but still optimizes… Read more

Recommended citation: Shreyas Bhat, Joseph B. Lyons, Cong Shi, and X. Jessie Yang. 2024. Evaluating the Impact of Personalized Value Alignment in Human-Robot Interaction: Insights into Trust and Team Performance Outcomes. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24). Association for Computing Machinery, New York, NY, USA, 32–41. https://doi.org/10.1145/3610977.3634921
Download Paper | Download Slides
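
As a rough sketch of the kind of reward-function learning mentioned in the abstract above (not the paper’s algorithm), the snippet below performs a Bayesian update over a single trade-off weight in the human’s assumed reward function, using a Boltzmann-rational choice model; the candidate weights, options, and rationality constant are placeholder assumptions.

```python
# Minimal sketch (illustration only, not the paper's algorithm): Bayesian update of a
# belief over one human reward weight w (trade-off between objectives A and B),
# assuming Boltzmann-rational choices. All numbers below are placeholder assumptions.
import math

CANDIDATE_W = [0.1, 0.3, 0.5, 0.7, 0.9]           # possible weights on objective A
belief = {w: 1 / len(CANDIDATE_W) for w in CANDIDATE_W}
BETA = 5.0                                         # assumed rationality constant

def utility(w, option):
    """Human utility: weighted sum of the option's two objective scores."""
    return w * option["A"] + (1 - w) * option["B"]

def update(belief, chosen, rejected):
    """One Bayesian step given the human chose `chosen` over `rejected`."""
    posterior = {}
    for w, prior in belief.items():
        u_c, u_r = utility(w, chosen), utility(w, rejected)
        likelihood = math.exp(BETA * u_c) / (math.exp(BETA * u_c) + math.exp(BETA * u_r))
        posterior[w] = prior * likelihood
    z = sum(posterior.values())
    return {w: p / z for w, p in posterior.items()}

# Example: the human picks the safer option (high B) over the faster one (high A).
belief = update(belief, chosen={"A": 0.2, "B": 0.9}, rejected={"A": 0.8, "B": 0.3})
print(max(belief, key=belief.get))                 # current best guess for w
```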

Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming

Published in AAAI Symposium Series, 2024

We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model… Read more

Recommended citation: Bhat, S., Lyons, J. B., Shi, C., & Yang, X. J. (2024). Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming. Proceedings of the AAAI Symposium Series, 2(1), 5-10. https://doi.org/10.1609/aaaiss.v2i1.27642
Download Paper | Download Slides

Book Chapters


Value alignment and trust in human-robot interaction: Insights from simulation and user study

Published in Discovering the Frontiers of Human-Robot Interaction, Springer, Cham, 2024

With the advent of AI technologies, humans and robots are increasingly teaming up to perform collaborative tasks. To enable smooth and effective collaboration, the topic of value alignment (operationalized herein as the degree of dynamic goal alignment within a task) between the robot and the human is gaining increasing research attention. Prior literature on value alignment makes an inherent assumption that aligning the values of the robot with those of the human benefits the team. This assumption, however… Read more

Recommended citation: Bhat, S., Lyons, J.B., Shi, C., Yang, X.J. (2024). Value Alignment and Trust in Human-Robot Interaction: Insights from Simulation and User Study. In: Vinjamuri, R. (eds) Discovering the Frontiers of Human-Robot Interaction. Springer, Cham. https://doi.org/10.1007/978-3-031-66656-8_3
Download Paper

Workshops


Clustering Trust Dynamics in a Human-Robot Sequential Decision-Making Task

Published in IEEE International Conference on Robotics and Automation (ICRA), 2022

In this paper, we present a framework for trust-aware sequential decision-making in a human-robot team. We model the problem as a finite-horizon Markov Decision Process with a reward-based performance metric, allowing the robotic agent to make trust-aware recommendations. Results of a human-subject experiment show that the proposed trust update model is able to accurately capture the human agent’s moment-to-moment trust changes. Moreover… Read more

Download Paper | Download Slides