One of the studies I did here at Human-Computer Studies on people’s interaction with adaptive and autonomous systems investigated user interaction with spam filters. While spam filters might not appear to be the most exciting subject, exploring users’ interaction with them actually offers some interesting insights for developers of adaptive and autonomous systems. Spam filters are one of the few types of systems that make semi-autonomous decisions on the user’s behalf AND are actually used in a real-life context by many, many people. They can often also be trained, and they sometimes operate on somewhat nontransparent criteria.
In this study, I investigated interaction with both adaptive (trainable) and non-adaptive, rule-based filters. It turns out that while many of our participants who used an adaptive filter invested a lot of effort in training, this didn’t increase their trust, nor the level of autonomy they granted their filters; investment doesn’t always translate into acceptance. Additionally, small, sub-optimal interface design features, such as filter icons, led many participants to misunderstand interface items, induced ‘incorrect’ training behaviour, and created uncertainty about filter activity. It’s interesting that while research on developing adaptive and autonomous systems is on the rise, we as a community haven’t solved some of the seemingly ‘mundane’ interface design issues in less complex systems such as spam filters.
The paper will be available as: Henriette Cramer, Vanessa Evers, Maarten van Someren, Bob Wielinga, Awareness, Training and Trust in Interaction with Adaptive Spam Filters, CHI’09. I’ll post a link to the paper as soon as it’s available.
Not only has the note been accepted, it’s also been nominated for a best note award!