This week we’ll be at MobileHCI’09 in Bonn, presenting a poster on our research within the Diadem project. One of the main goals of the Diadem project is to detect potentially hazardous airborne pollutants in urban-industrial areas, using input both from a distributed sensor network and from people via their mobile phones. In the proposed interaction model, a semi-autonomous system uses sensor data to detect abnormal situations, while people in the affected area are asked by a mobile service to report additional observations, such as chemical smells (which may not be the easiest thing to describe).
This raises some interesting interaction design issues.
One of the studies I did here at Human-Computer Studies on people’s interaction with adaptive and autonomous systems investigated user interaction with spam filters. While spam filters might not appear to be the most exciting subject, exploring users’ interaction with them actually offers some valuable insights for developers of adaptive and autonomous systems. Spam filters are one of the few types of systems that take semi-autonomous decisions on the user’s behalf AND are actually used in a real-life context by very many people. They can often be trained, and sometimes operate on somewhat nontransparent criteria.
In this study, I investigated interaction with both adaptive (trainable) and non-adaptive, rule-based filters. It turns out that while many of our participants who used an adaptive filter invested a lot of effort in training it, this didn’t increase their trust, nor the level of autonomy they granted their filters; investment doesn’t always translate into acceptance. Additionally, small, sub-optimal interface design features such as filter icons caused many participants to misunderstand interface items, and induced ‘incorrect’ training behaviour and uncertainty about filter activity. It’s interesting that while research on developing adaptive and autonomous systems is on the rise, we as a community still haven’t solved some of the seemingly ‘mundane’ interface design issues in less complex systems such as spam filters.
The paper will be available as: Henriette Cramer, Vanessa Evers, Maarten van Someren, Bob Wielinga, ‘Awareness, Training and Trust in Interaction with Adaptive Spam Filters’, CHI’09. I’ll post a link to the paper as soon as it’s available.
Not only has the note been accepted, it’s also been nominated for a best note award!
My journal article on the effects of transparency on interaction with user-adaptive systems has (finally) been published in User Modeling and User-Adapted Interaction. We used transparent and non-transparent versions of a content-based art recommender to see whether user understanding affects trust and acceptance. It turns out that offering explanations did influence acceptance of the system’s decisions, but not trust in the system overall.
Cramer, H., Evers, V., van Someren, M., Ramlal, S., Rutledge, L., Stash, N., Aroyo, L., Wielinga, B. (2008) ‘The effects of transparency on trust and acceptance in interaction with a content-based art recommender’, User Modeling and User-Adapted Interaction. ‘Online first’ pdf
Our more recent research focuses on concepts such as dealing with autonomy and perceptions of user control, as well as on social and affective aspects of interaction, such as empathy.