New "classification models" sense how well humans trust the intelligent machines they collaborate with, a step toward improving the quality of interactions and teamwork. The long-term goal of the field is to design intelligent machines that can adapt their behavior to enhance human trust in them. Aircraft pilots and industrial workers, for example, routinely interact with automated systems, and humans will sometimes override these machines unnecessarily if they believe the system is faltering.

To address this, the researchers developed two types of "classifier-based empirical trust sensor models." The models draw on two measurement techniques: electroencephalography (EEG), which records brainwave patterns, and galvanic skin response (GSR), which monitors changes in the electrical characteristics of the skin. Together, these provide psychophysiological "feature sets" that correlate with trust.
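The core idea — a classifier that maps psychophysiological features to a trust/distrust label — can be sketched in a few lines. The following is a minimal illustration only: the two features, their ranges, and the synthetic training data are hypothetical stand-ins, not the actual Purdue feature sets, and a simple logistic regression is used in place of whatever classifier the researchers employed.

```python
import math
import random

def sigmoid(z):
    """Logistic function, mapping a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.1, epochs=500):
    """Fit a two-feature logistic regression with plain stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y          # gradient of the log-loss w.r.t. the score
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b    -= lr * err
    return w, b

def predict(w, b, features):
    """Return 1 ("trust") or 0 ("distrust") for a feature pair."""
    x1, x2 = features
    return 1 if sigmoid(w[0] * x1 + w[1] * x2 + b) >= 0.5 else 0

# Synthetic, hypothetical data: feature 1 stands in for an EEG-derived score,
# feature 2 for a GSR-derived arousal score. Here "trusting" samples are drawn
# with high feature 1 and low feature 2, and vice versa for "distrusting" ones.
random.seed(0)
trusting    = [(random.uniform(0.6, 1.0), random.uniform(0.0, 0.4)) for _ in range(50)]
distrusting = [(random.uniform(0.0, 0.4), random.uniform(0.6, 1.0)) for _ in range(50)]
X = trusting + distrusting
y = [1] * 50 + [0] * 50

w, b = train(X, y)
print(predict(w, b, (0.9, 0.1)))   # a clearly "trusting" feature pair
```

In a real system, the feature vectors would come from windowed EEG and GSR signal processing rather than random draws, but the training/prediction loop has the same shape.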
Image credit: Purdue University/Marshall Farthing