From time to time, I’ll highlight some of the special challenges faced by designers of usable security. Let’s start with a fairly obvious problem that’s often exploited in security attacks on people:
The “Obedience to Authority” problem occurs when the safe course of action requires the user to reject or contravene an apparently authoritative command.
“Obedience to Authority” refers to Stanley Milgram’s famous experiment, in which a surprising number of people were persuaded to inflict what they believed were painful or even lethal electric shocks on a stranger.
At the beginning of the experiment, each subject met a stranger who appeared to be just another experimental subject but was actually an actor. The actor was strapped into a chair in another room, and the experimenter then instructed the subject to administer a simple word test. With each wrong answer given, the subject was to press buttons delivering electric shocks to the man in the other room, starting at 15 volts and increasing in 15-volt increments. At 120 volts the man could be heard complaining loudly; at 150 volts he demanded to be released from the experiment; at 285 volts he screamed. Whenever the subject hesitated, the experimenter would ask them to continue. In the first run of the experiment, 65% of participants, despite showing great discomfort, proceeded to give shocks right up to the maximum level of 450 volts, a level marked “XXX” on the instrument panel.
It’s hard to say no to a direct command. But a deeper insight from Milgram’s experiment is that it’s especially hard to change your mind and say no after you’ve already said yes. I’d bet that far fewer participants would have been willing to give the very first shock at the maximum level. Starting at 15 volts and gradually increasing the voltage gives the subject time to get used to following orders before the ethical dilemma becomes serious. And it’s not just a matter of obedience; it’s obedience to someone who has come to be regarded as an authority. Having started down a path on the assumption that the instructions come from an authority makes it that much harder to go back and reject them.
The lessons of Milgram’s experiment apply to defenses against attacks on the human-computer system. Suppose Uncle George downloads a fancy new security tool that’s supposed to protect him from spoofing. One day he comes across a message that appears to be from his bank, telling him to log in right away because there’s a problem with his account. The security tool detects something suspicious and interjects, warning him not to log in. At this point the technical security expert may think the problem is solved: just heed the warning, and all will be well.
But look at it from George’s perspective. Now he has conflicting instructions from two different sources. As far as he’s concerned, they’re both just things that were downloaded to his computer. Why should he trust either one of them over the other? In fact, the security tool is at a disadvantage, because George has already read the message. He has already accepted that it’s a message from his bank, and he is already in the process of following its instructions. It’s going to take a lot to make him go back and reject that authority.
Furthermore, keep in mind that many people see computers as just things you buy to help you get work done. To them, computers and computer programs are appliances, like a dishwasher or a toaster. Which has the greater authority, your toaster or your bank?
This is a really tough problem, and I don’t claim to have a good answer. I speculate that one avenue to pursue would be to find ways for trustworthy software (in security terms, the TCB, or trusted computing base) to assert authority more recognizably: perhaps by using images of familiar faces that the user has selected, showing the user’s own face, giving instructions in the user’s own words, or speaking in the user’s prerecorded voice. To develop a stronger long-term trust relationship with one’s computer, maybe the computer could use its own permanent, uniquely generated face or voice.
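To make that speculation a little more concrete, here’s a minimal sketch of what a personalized warning might look like. Everything in it is an assumption made for illustration: the idea that the trusted software recorded a photo the user chose and a phrase in the user’s own words at setup time, the file path, the wording, and the choice of Python with Tkinter. It’s not the interface of any real tool.

```python
import os
import tkinter as tk

# Hypothetical artifacts captured at setup time: a photo the user chose
# and a sentence the user wrote. Both path and wording are illustrative.
PERSONAL_IMAGE = os.path.expanduser("~/.trusted/chosen_face.png")
PERSONAL_PHRASE = "George, you picked this picture and wrote this sentence yourself."

def show_trusted_warning(message: str) -> None:
    """Show a warning carrying the user's own personalization,
    so it looks unlike anything a spoofed message could produce."""
    root = tk.Tk()
    root.title("Security warning")
    try:
        photo = tk.PhotoImage(file=PERSONAL_IMAGE)  # Tk 8.6+ reads PNG directly
        tk.Label(root, image=photo).pack(padx=10, pady=(10, 0))
    except tk.TclError:
        pass  # degrade to a text-only warning if the image is missing
    tk.Label(root, text=PERSONAL_PHRASE, wraplength=320,
             font=("Helvetica", 12, "bold")).pack(padx=10, pady=10)
    tk.Label(root, text=message, wraplength=320).pack(padx=10)
    tk.Button(root, text="OK", command=root.destroy).pack(pady=10)
    root.mainloop()

show_trusted_warning("This message does not appear to come from your bank. "
                     "Don't follow its login link.")
```

The toolkit is beside the point; the design is what matters. An attacker who never saw the setup step can’t reproduce the photo or the phrase, so any “warning” that lacks them should ring false, and the genuine one gains a bit of the personal authority it needs to compete with the spoofed message.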
Let’s hear your ideas.