From now on, fun will be mandatory at the Mobile Life Centre!
This January we’re starting a new project, all about playfulness – and the little quirks that make our lives better. Beyond our intense exploratory research (aka playing Wii Band Hero), we (that is: me, Helena Mentis and Ylva Fernaeus) have also produced a Very Serious Position Paper outlining some challenges in designing for playfulness and fun in the workplace. People do appear to have the strange tendency not to enjoy enforced fun, and they don’t even all enjoy the same things, so what is ‘designing’ for playful experiences really about?
The paper has resulted in a nice trip to Savannah, GA for CSCW 2010, where we’ll participate in the ‘Fun, seriously?’ workshop. All workshop papers can be found here. Do check out the paper by Marleigh Norton and Philip Tan of the Singapore-MIT GAMBIT Game Lab on their work as game developers and on how Bill & Ted’s excellent adventures can guide your professional life. It’s both awesome and excellent really – and they’re bravely trashing the conference paper formatting guidelines too, chapeau!
One of the studies I did here at Human-Computer Studies on people’s interaction with adaptive and autonomous systems investigated user interaction with spam filters. While spam filters might not appear to be the most exciting subject, exploring users’ interaction with them actually offers some quite interesting insights for developers of adaptive and autonomous systems. Spam filters are one of the few types of systems that take semi-autonomous decisions on the user’s behalf AND are actually used in a real-life context by many, many people. They can often also be trained, and they sometimes operate on rather nontransparent criteria.
In this study, I investigated interaction with both adaptive (trainable) and non-adaptive, rule-based filters. Turns out that while many of our participants who used an adaptive filter invested a lot of effort in training, this didn’t increase their trust, nor the level of autonomy they granted their filters; investment doesn’t always translate into acceptance. Additionally, small, sub-optimal interface design features such as filter icons left many participants not understanding interface items, induced ‘incorrect’ training behaviour and created uncertainty about filter activity. It’s interesting that while research on developing adaptive and autonomous systems is on the rise, as a community we haven’t solved some of the seemingly ‘mundane’ interface design issues in less complex systems such as spam filters.
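For those curious about the difference between the two filter types, here’s a minimal sketch in Python; the rules, tokens and messages are made up for illustration, and real filters are of course far more sophisticated than this:

```python
import math
import re
from collections import Counter

# --- Rule-based filter: fixed, hand-written criteria ---
# Hypothetical rules, purely for illustration.
RULES = [r"viagra", r"winner", r"100% free"]

def rule_based_is_spam(message: str) -> bool:
    """Flag a message as spam if any fixed rule matches."""
    return any(re.search(rule, message, re.IGNORECASE) for rule in RULES)

# --- Adaptive filter: a tiny naive-Bayes-style classifier ---
class AdaptiveFilter:
    """Learns word statistics from user-labelled spam/ham messages."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, message: str, label: str) -> None:
        # A user's 'mark as spam / not spam' actions feed this step.
        for word in message.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def is_spam(self, message: str) -> bool:
        # Compare summed log-likelihoods under each class,
        # with add-one smoothing for unseen words.
        scores = {}
        for label in ("spam", "ham"):
            total = self.totals[label] + len(self.counts[label]) + 1
            scores[label] = sum(
                math.log((self.counts[label][word] + 1) / total)
                for word in message.lower().split()
            )
        return scores["spam"] > scores["ham"]

# Training shifts the adaptive filter's behaviour;
# the rule-based filter stays fixed no matter what the user does.
f = AdaptiveFilter()
f.train("win a free prize now", "spam")
f.train("meeting notes attached", "ham")
print(rule_based_is_spam("You are a WINNER"))  # True (matches a fixed rule)
print(f.is_spam("free prize inside"))          # True (learned from training)
```

Even this toy version shows the asymmetry the study builds on: the rule-based filter behaves identically for everyone, while the adaptive one depends entirely on whatever training the user happens to give it – which is exactly where awareness, training effort and trust start to interact.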
The paper will be available as: Cramer, H., Evers, V., van Someren, M., Wielinga, B. (2009) ‘Awareness, Training and Trust in Interaction with Adaptive Spam Filters’, CHI’09. I’ll post a link to the paper as soon as it’s available.
Not only has the note been accepted, it’s also been nominated for a best note award!
As part of my PhD studies into user interaction with semi-autonomous systems, we conducted a small survey-based, experimental study comparing participant reactions to different interactions between an in-vehicle agent and a driver. I’ll be presenting the first part of our in-vehicle agent studies at the Workshop on Human Aspects of Ambient Intelligence (HAI) at the International Conference on Intelligent Agent Technology in Sydney in early December.
In-vehicle agents can potentially avert dangerous driving situations by adapting to the driver, context and traffic conditions. However, perceptions of system autonomy, the way an agent offers assistance, driving contexts and users’ personality traits can all affect acceptance and trust. This paper reports on a survey-based experiment (N=100) that further investigates how these factors affect attitudes. The 2×2, between-subject, video-based design varied driving context (high vs. low density traffic) and type of agent (providing information vs. providing instructions). Both type of agent and traffic context affected attitudes towards the agent, with attitudes being most positive towards the instructive agent in a light-traffic context. Participants scoring high on locus of control reported a higher intent to follow up on the agent’s instructions. Driving-related anxiety and aggression increased the perceived urgency of the video scenario.
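As a side note for the methodologically curious: effects like ‘both type of agent and traffic context affected attitudes’ in a 2×2 between-subject design are typically tested with a two-way ANOVA. A minimal sketch, with random numbers standing in for the actual survey responses and made-up column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# One row per participant: the two manipulated factors plus an
# attitude score (e.g., a questionnaire scale mean). The values
# here are random placeholders, not the study's data.
df = pd.DataFrame({
    "agent":    ["information", "instruction"] * 50,
    "traffic":  ["low"] * 50 + ["high"] * 50,
    "attitude": rng.normal(4.0, 1.0, size=100),
})

# Two-way ANOVA: main effects of agent type and traffic context,
# plus their interaction.
model = smf.ols("attitude ~ C(agent) * C(traffic)", data=df).fit()
print(anova_lm(model, typ=2))
```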
As soon as the online proceedings are available, I’ll post the link to the full paper.
My journal article on the effects of transparency on interaction with user-adaptive systems has (finally) been published in User Modeling and User-Adapted Interaction. We used transparent and non-transparent versions of a content-based art recommender to see whether user understanding affects trust and acceptance. Turns out that offering explanations did influence acceptance of the system’s decisions, but not trust in the system overall.
Cramer, H., Evers, V., van Someren, M., Ramlal, S., Rutledge, L., Stash, N., Aroyo, L., Wielinga, B. (2008) ‘The effects of transparency on trust and acceptance in interaction with a content-based art recommender’, User Modeling and User-Adapted Interaction. ‘Online first’ pdf
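To make the transparent/non-transparent distinction a bit more concrete, here’s a minimal sketch of a content-based recommender that can surface an explanation alongside each recommendation. The artworks, features and profile below are invented for illustration; this is not the actual system from the study (which used annotated museum collections):

```python
import math

# Hypothetical artworks described by weighted content features.
ARTWORKS = {
    "Water Lilies":    {"impressionism": 1.0, "landscape": 0.8, "flowers": 0.9},
    "The Night Watch": {"baroque": 1.0, "portrait": 0.9},
    "Irises":          {"post-impressionism": 1.0, "flowers": 1.0},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse feature vectors."""
    dot = sum(a[f] * b[f] for f in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile: dict):
    """Rank artworks by similarity to the user's profile. The shared
    features returned per artwork are what a *transparent* interface
    can show as an explanation ('because you liked flowers...')."""
    ranked = []
    for title, feats in ARTWORKS.items():
        shared = sorted(set(feats) & set(profile),
                        key=lambda f: feats[f] * profile[f], reverse=True)
        ranked.append((cosine(profile, feats), title, shared))
    return sorted(ranked, reverse=True)

# A profile aggregated from artworks the user previously liked.
profile = {"impressionism": 0.9, "flowers": 0.7, "landscape": 0.4}
for score, title, why in recommend(profile):
    print(f"{title}: {score:.2f} (shared: {', '.join(why) or 'none'})")
```

The transparent condition boils down to showing the user the shared features behind each recommendation; the non-transparent condition shows only the ranked list.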
Our more recent research focuses on concepts such as dealing with autonomy and perceptions of user control, but also on social and affective aspects of interaction, such as empathy.