71% of office workers stopped in the London Underground seemed willing to give up their password in exchange for a chocolate bar, but we don’t know whether those passwords were real. MailFrontier ran an online phishing IQ test, but its results are not externally valid because the test-taker has the wrong primary task: real users are trying to get work done, not hunting for phish.
Rob Miller highlighted three challenges that we face:
- In the real world, security is a secondary task for most users, so we can’t make it a primary task in a controlled user study.
- In the real world, users are protecting real personal data, but security may seem less important in an artificial scenario.
- In the real world, users’ rights really are violated by successful attacks. In a study, we can’t ethically violate them.
I’ll add one more, which was suggested to me by Doug Tygar:
- In a typical controlled study, experimenters devise a fixed attack. But in the real world, attackers adapt their attacks in response to security measures.
How can we conduct studies that test how users behave under attack, while achieving a proper balance of ethical considerations and validity of the results?
Usability folks evaluate their work by running controlled user studies, because they recognize that the only way to understand human behaviour is to observe real humans. Security folks evaluate their work by trying to crack it, because they recognize that attackers adapt to whatever vulnerabilities remain. So, how about running a competitive user study in which grey-hat teams compete to attack the subjects? Could such a study be conducted ethically?