Challenges: Obedience to Authority

July 19, 2005 by Ping

From time to time, I’ll highlight some of the special challenges faced by designers of usable security.  Let’s start with a fairly obvious problem that’s often exploited in security attacks on people:

The “Obedience to Authority” problem occurs when the safe course of action requires the user to reject or contravene an apparently authoritative command.

“Obedience to Authority” refers to Stanley Milgram’s famous experiment in which surprisingly many people were successfully persuaded to inflict what they believed were painful or even lethal electric shocks on a stranger.

At the beginning of the experiment, each subject met a stranger who appeared to be just another experimental subject, but was actually an actor.  The actor was strapped into a chair in another room, and the experimenter then instructed the subject to administer a simple word test.  With each wrong answer given, the subject was to press buttons delivering electric shocks to the man in the other room, initially at 15 volts and then increasing in 15-volt increments.  At 120 volts the man could be heard complaining loudly and demanding to be released from the experiment; at 285 volts the man would scream.  When the subject hesitated, the experimenter would ask them to continue.  In the first run of the experiment, despite showing great discomfort, 65% of participants nonetheless proceeded to give shocks right up to the maximum level of 450 volts, a level marked “XXX” on the instrument panel.

It’s hard to say no to a direct command.  But a deeper insight that can be learned from Milgram’s experiment is that it’s especially hard to change your mind and say no after you’ve already said yes.  I’d bet that far fewer participants would have been willing to give the first shock immediately at the maximum level.  Starting at 15 volts and gradually increasing the voltage gives the subject some time to get used to following orders before the ethical dilemma becomes serious.  It’s not just a matter of obedience; it’s obedience to someone who has come to be considered an authority.  Having started down a path on the assumption that the instructions come from an authority makes it that much harder to go back and reject them.

The lessons of Milgram’s experiment apply to defenses against attacks on the human-computer system.  Suppose Uncle George downloads a fancy new security tool that’s supposed to protect him from spoofing.  One day he comes across a message from his bank telling him to log in right away because there’s a problem with his account.  The security tool detects something suspicious and interjects, warning him not to log in.  At this point the technical security expert may think the problem is solved.  Just heed the warning, and all will be well.

But look at it from George’s perspective.  Now he has conflicting instructions from two different sources.  As far as he’s concerned they’re both just things that were downloaded to his computer.  Why should he trust either one of them over the other?  In fact, the security tool is at a disadvantage because George has already read the message.  He has already accepted that it’s a message from his bank and is already in the process of following the instructions.  It’s going to take a lot to make him go back and reject that authority.

Furthermore, keep in mind that many people see computers as just things you buy to help you get work done.  To them, computers and computer programs are appliances, like a dishwasher or a toaster.  Which has the greater authority, your toaster or your bank?

This is a really tough problem and I don’t claim to have a good answer.  I speculate that one avenue to pursue would be to find ways for trustworthy software (in security terms, the TCB) to more recognizably assert authority — perhaps using images of familiar faces that the user has selected, showing the user’s own face, giving instructions in the user’s own words, or using the user’s prerecorded voice.  For developing a stronger long-term trust relationship with one’s computer, maybe a computer could use its own, permanent, uniquely generated face or voice.
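
To make that speculation slightly more concrete, here is a minimal sketch of a warning dialog that echoes a phrase the user chose at setup time. It assumes Python with Tkinter and a hypothetical protected directory, ~/.tcb_secret, that only the trusted base can read; the names and the storage scheme are illustrative only, not a real TCB mechanism.

    import tkinter as tk
    from pathlib import Path

    # Hypothetical location readable only by the trusted base (illustrative;
    # real isolation would need OS or hardware support, not a home directory).
    SECRET_DIR = Path.home() / ".tcb_secret"

    def show_trusted_warning(message):
        """Display a warning that asserts authority by echoing the user's own words."""
        phrase = (SECRET_DIR / "phrase.txt").read_text().strip()
        window = tk.Tk()
        window.title("Security warning")
        # The user's chosen phrase comes first, so a spoofed dialog that
        # cannot read the secret should look immediately wrong.
        tk.Label(window, text=phrase, font=("Helvetica", 14, "bold")).pack(padx=20, pady=10)
        tk.Label(window, text=message, wraplength=400).pack(padx=20, pady=10)
        tk.Button(window, text="OK", command=window.destroy).pack(pady=10)
        window.mainloop()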

Let’s hear your ideas.

I wonder if the battle is already lost if two commands can come into conflict? Making the White Team’s commands “better” by doing things more authoritatively isn’t going to help, as the Black Team can simply copy them. And at the end of the day, it’s a crap shoot which command is accepted by the user, which works in the attacker’s favour if a mass phish can be launched.

The way high-security systems deal with this is by training in the system, so that redundant information is available for the user to then analyse which command is correct. Also, most security systems have the ability to impose costs on the attacker; the net doesn’t.

Personally, I can’t see any way forward without bifurcating if strong security is required. One machine for trusted stuff, and only really, really truly trusted stuff gets put on there, however it is done. And another machine for games, which gets wiped and reinstalled every time one gets sick of the smell of burning privacy.

Making the White Team’s commands “better” by doing things more authoritatively isn’t going to help, as the Black Team can simply copy them.

Yes, if the Black Team can copy them. Security folks have mainly focused on how the White Team can give commands in a way that can’t be imitated. That’s essential, but it’s only half the problem. The real challenge is to give commands in a way that can’t be imitated and also conveys greater authority to the human user.

This is a very conceptual problem, it seems. I’m having trouble putting meat on the bones.

What is authority? If one thinks in terms of relationship, one could say that authority is greater likelihood of being “within the relationship”. Which would suggest more information that could only be found within; leveraging the relationship again. If I was a bank, I’d use information only known via other means, such as a one-time-codeword sent on mailed paper statements.

(As the White Team pushes out to the user, it is likely to be email, chat, or SMS. For email and for SMS there are ways of making that more authoritative internally. S/MIME could be used, or the user could be encouraged to petname the phone number for SMS messages.)
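
As a toy illustration of petnaming an SMS sender (the number and mapping below are purely hypothetical; a real client would keep this table in the user’s local profile):

    # The user binds raw phone numbers to names of their own choosing;
    # anything outside the table is labelled explicitly as unknown.
    PETNAMES = {
        "+15550100": "my bank",   # hypothetical example entry
    }

    def label_sender(number):
        """Show the user's own petname for a sender instead of the raw number."""
        return PETNAMES.get(number, "UNKNOWN SENDER (" + number + ")")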

If one thinks in terms of relationship, one could say that authority is greater likelihood of being “within the relationship”. Which would suggest more information that could only be found within; leveraging the relationship again.

Pictures and words selected by the user are one kind of information that can be confined within a relationship. But we don’t know to what extent users will understand that such indicators are unspoofable, or to what extent users associate unspoofability with authority. I think this would be a fruitful area for study.


I can’t see any way forward without bifurcating if strong security is required.

“Strong” is a continuum. It’s not so black-and-white. I think we can still move forward by looking for ways to make trustworthy things look and feel more trustworthy and less trustworthy things look and feel less trustworthy.

 
 

It really depends on how you phrase the problem. It isn’t necessarily a question of getting the user to resist authority; the phishers do not HAVE authority…merely the appearance of authority. That appearance persists because when it comes to computers in general and the Internet in particular, people do not have a refined ability to spot an impostor. Hence, authority is wrongfully attributed to the phisher, and the phisher takes the place of the man in the white lab coat in Milgram’s experiment.

To keep the parallel with Milgram…what if another researcher had come into the room and countered the first…saying that the subject should stop? What if the opposing researcher had some symbol indicating a higher rank than the first? I suspect most subjects would have stopped. THAT is what we need to do through the user interface…counter the claims of the phishers to remove the impression that they HAVE authority.

As a workable experiment…I wonder how many users would go through with the phishing attempt if they immediately received some sort of feedback that included a sense of legitimate authority rather than just an automatic-feeling pop-up…

Like…say…if the words “paypal” are detected in the URL or body of a site and the site is not signed by an SSL cert registered to PayPal, a window comes up specifically warning about phishing with a big PayPal logo in it. Okay…fine…that might not be incredibly doable…but it could be a service that companies register with…or the like…*random ponderings*
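
A rough sketch of that kind of check, using Python’s standard ssl module; the brand watch list and the expected organization string are illustrative assumptions, and many legitimate certificates omit the organization field, so this is a heuristic at best:

    import socket
    import ssl

    # Hypothetical watch list: brand keyword -> organization expected in the cert.
    EXPECTED_ORGS = {"paypal": "PayPal, Inc."}

    def cert_organization(hostname, port=443):
        """Return the organizationName from the site's validated TLS certificate."""
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        for rdn in cert.get("subject", ()):
            for key, value in rdn:
                if key == "organizationName":
                    return value
        return None  # domain-validated certs often carry no organization at all

    def looks_like_brand_spoof(hostname, page_text):
        """Flag pages that mention a protected brand but present someone else's cert."""
        haystack = (hostname + " " + page_text).lower()
        for brand, org in EXPECTED_ORGS.items():
            if brand in haystack and cert_organization(hostname) != org:
                return True   # brand name present, certificate owned by someone else
        return False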

It isn’t necessarily a question of getting the user to resist authority; the phishers do not HAVE authority…merely the appearance of authority.

When it comes to the user interface, it’s the user’s beliefs that matter.

There are two steps in a typical phishing attack: first the e-mail message, then the website.

The e-mail step creates the initial appearance of authority (by appearing to come from a known party and issuing a command). I totally agree that preventing masquerading in e-mail is an important goal.

However, when we talk about browser-based measures, we’re already in the situation where an authority has been established. So we really are talking about rejecting authority.

 

THAT is what we need to do through the user interface…counter the claims of the phishers to remove the impression that they HAVE authority.

How do we give the counterclaim greater authority?

 

I don’t think the solution will be technical. Most of the respondents feel that the problem is that the system messages must somehow distinguish themselves from the email. Or that users must be trained to know the difference. Why would that help?

A naive user does not even understand the hierarchy of security here, that messages from my system are to be trusted, whereas messages from the outside world aren’t.

And why should they? The computer didn’t recognize the damn printer last week. On the other hand that email from work was really important.

Basically the insight here is that a human’s perception of authority is dynamic, and whatever protocol we put into the computer is static.

……..

That said, the TCPA people had ideas about this (I saw a talk circa 2002 about it). The idea is to use ubiquitous encryption at a low level in hardware, even up to the monitor, and have another processor watching over everything, checking the signatures of programs to be executed.

One thing you can do here is, at install time, have the user feed the computer an image or something that it will use later to show its authority. This would be stored somewhere absolutely inaccessible to the rest of the system. Maybe it could be a picture of the scowling sysadmin down the hall. ;)

TCPA has a lot of nastier issues but that might do some good.
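
A tiny sketch of that install-time step, assuming a Unix-like system; the directory, the file names, and the owner-only permissions are illustrative, since a filesystem permission bit is obviously far weaker than the hardware isolation TCPA proposes:

    import os
    import shutil
    import stat
    from pathlib import Path

    # Hypothetical storage location owned by the trusted component only.
    SECRET_DIR = Path("/var/lib/trusted-prompt")

    def enroll_authority_image(source_image, user_phrase):
        """At install time, record the image and phrase the computer will later
        use to assert its authority to this particular user."""
        SECRET_DIR.mkdir(mode=0o700, parents=True, exist_ok=True)
        shutil.copy(source_image, SECRET_DIR / "authority.png")
        (SECRET_DIR / "phrase.txt").write_text(user_phrase)
        # Owner read/write only; a real design would keep these secrets out of
        # reach of the rest of the system entirely, as the comment above says.
        for name in ("authority.png", "phrase.txt"):
            os.chmod(SECRET_DIR / name, stat.S_IRUSR | stat.S_IWUSR)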

The idea is to use ubiquitous encryption at a low level in hardware, even up to the monitor, and have another processor watching over everything, checking the signatures of programs to be executed.

i like this concept applied to xen hypervisor instances, with domain0 doing the monitoring, firewalls, traffic shaping, IDS and anomaly / error detection.

the cost of doing this kind of pervasive encryption and multiple instances is dropping rapidly due to increasing availability of hardware crypto (VIA C5 with PadLock) and multiple core designs.

so yeah, there will hopefully be lots of positive things coming out of TCPA rather than negatives like DRM :)

rather than negatives like DRM

I don’t know if you’re serious. TCPA’s only reason for existence is DRM. I heard this straight from the guy who did the first iteration of the project.

The technology *could* be applied to making a visually distinct class of system messages. But a) you’d need to be Microsoft or Intel to make that happen. b) the problem is deeper than just identifying a class of messages. It’s in making ordinary people trust and believe them, when they usually offer technobabble or cry wolf.


I don’t know if you’re serious. TCPA’s only reason for existence is DRM.

Please see:

invisiblog.com/1c801df4aee49232/article/0df117d5d9b32aea8bc23194ecc270ec

“Interesting Uses of Trusted Computing”

just because it was built with DRM in mind does not imply that it is only useful toward such ends.

The street finds its own uses for things. — William Gibson, Neuromancer

 
 

Anyway, Ping *said* all this so I am just reiterating all that stuff with the Palladium or TCPA or TCB or whatever it’s called this week. Sorry.

If I’m adding anything: the main reason why people don’t trust system messages is because they are abused, by the OS manufacturer and the whole industry. So their credibility is very diluted.

I don’t know if cosmetic changes, or even a new technology or output device for extra super important messages, could ever fix that credibility problem.

We’d need a cultural change in our industry to write programs that just NEVER bugged the user unless it was extremely important.

Or a new approach to trusted relationships that happens at a lower level than just the web browser or email program.


Most scams are initiated by some sort of e-mail message, we all know that, and most links in these scam mails use IP numbers instead of domain names, so displaying a warning for the end user when he clicks on such a link might be a very important first step.

A delay of two seconds before the yes/ok or whatever button is enabled will prevent people from stupidly clicking yes/ok. Note that the message should be short and clear, and inform you where the target site is located.

Would you visit a bank, located in some far away random country, when your bank is in fact only two blocks away?
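
A rough sketch of the IP-number check, assuming Python and a hypothetical hook in the mail client that hands it the message text; the two-second button delay is a UI detail left out here:

    import ipaddress
    import re
    from urllib.parse import urlparse

    URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

    def raw_ip_links(message_text):
        """Return links whose host is a literal IP address rather than a domain name."""
        suspicious = []
        for url in URL_RE.findall(message_text):
            host = urlparse(url).hostname or ""
            try:
                ipaddress.ip_address(host)
            except ValueError:
                continue            # host is an ordinary domain name
            suspicious.append(url)  # literal IP: worth a warning before following
        return suspicious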

 

You’ve set the parameters of the thought experiment so that the claim and counter-claim are already present. So in some sense the bad claim’s authority is already accepted, in some measure; otherwise we wouldn’t be here.

> How do we give the counterclaim greater authority?

The only high level or conceptual thing I can think of is more information, and more redundancy. More factors. This in a sense does not change the nature of the attack, just raises the cost. If more factors can be brought into play, ones whose channels are uncorrelated with each other, the cost of duplicating them rises, because more uncorrelated channels have to be attacked.

Printing out IP numbers helps, maybe, but also bear in mind that banks might validly use IP numbers - why ever not? Same with a delayed YES button; why wouldn’t a bank want to do the same thing? They are both - attacker and bank - appealing to the same emotions.

 

“Printing out IP numbers helps, maybe, but also bear in mind that banks might validly use IP numbers - why ever not?”

You don’t have to display any IP numbers, but a link with an IP number should trigger the notification, and again, in combination with the geographic location of the site…that should make the difference. You must be a complete moron if you think that a bank in the US is located in some random far away country, right?

“Same with a delayed YES button; why wouldn’t a bank want to do the same thing?”

You are simply confirming what I’ve already said so many times: chrome alerts/confirmations should be clear. They should tell you that it is a chrome alert/confirmation and not some random alert/confirmation triggered by a website!

Now you say: “they might use images or animated images to simulate the button delay.”

No, that won’t work if you do it right. The alert/confirmation windows should not use the domain name as the title, but some user-oriented information stored in your profile, simply because websites can’t read your profile. However, displaying the IP number/domain name in that alert/confirmation window will be handy additional information.

It all comes down to lack of information and lack of education but this will change when browsers start displaying the additional information. A browser should assist the end user, to help them, and that is clearly not the case, not in any (Mozilla) browser at this moment.

> You don’t have to display any IP numbers, but a link with IP number
> should trigger the notification, and again, in combination with the
> geographic location of the site…that should make the difference.

That’s an arms race suggestion. Yes, it will work, but only because the hacker isn’t expecting that check, if I understand your geographic reference. Given the capability of botnets, that’s the sort of check I would use myself personally, but not bother to code up in a serious distributed tool, because it would just invite response, and it was what certs were supposed to fix.

> You must be a complete moron if you think that a bank in the US
> is located in some random far away country, right?

HJ, I don’t think American hackers will agree with you calling them complete morons! Even the journeyman script-kiddie college dorm hacker would know how to hack a US machine so as to acquire and present a US IP number :-) You won’t catch your average Ukrainian ex-rocket-scientist hackers falling for that silliness; they still have geography lessons over there.

 
 
Amir Herzberg wrote:

I think we should accept that whatever indicators we put in place, there will be a significant number of users who will fail to notice them and believe the message from the attacker. Defense should involve several complementary measures:
1. Highly visible security indicators
2. Training users to identify attacks - possibly, by emulating attacks?
3. Automatically _blocking_ suspected frauds.

There are currently many tools for (1); of course I develop (and recommend :-) TrustBar… Few web tools try (3), but it is common for e-mail; and I can’t think of programs for (2). We now work on an extension, Hey!, to perform 2+3 for browsers, challenging users to detect real and emulated attacks. I’m not sure this will be acceptable to users, though - I am concerned many naive users may be intimidated. But we will experiment and learn…

1. Highly visible security indicators

I would propose adjusting this criterion: indicators need to be not only noticeable but perceived as authoritative. (And for completeness, perhaps we can say “noticeable” rather than “visible”, since other senses may be involved.) Would you agree?

Amir Herzberg wrote:

Sure, I agree with that.

 
 
 

Have you seen the scam e-mails? I have, and there wasn’t a single target server based in the US.

 

a fundamental and difficult / multi-faceted security question indeed.

here is where i think tenable solutions may lie:
- recognizable security that uses direct personal relationships to build a trust network. web of trust tied to self-copying live iso and hdd linux distributions. you inherit trust in a given provider of a distribution by receiving a copy of the person / quorum pub key for auth purposes later. profiling of application behavior using authenticated sessions (file system i/o, system call invocations, network i/o) can also be so inherited and supports anomaly-based attack detection / avoidance.

- trust in the security of identities themselves by clearly defining identity as something you have, know, or are (hardware tokens, pass phrases or PINs, and biometrics respectively) and easily supporting combinations of secure identification techniques as requisites for certain tasks: biometric + interactive passphrase to access my digital currency wallet; passphrase to view email; hardware iButton for server startup, etc.

- trust in the software and process itself by requiring infrastructure that is open and well vetted by the person / quorum that produces the distribution you use. for example, a quorum system that produces source code snapshots and filesystem images (live ISO?) signed with unanimous agreement. the code / configuration changes from a previous version would be highlighted and individually approved (via signature) by one or more of the quorum members. making the interface to this type of security intuitive and implicit is a hard task. (a rough sketch of the unanimous-signature check appears at the end of this comment.)

- distinct domains of security and trust; you may keep financial details on a cryptoloop partition accessible only from a minimal console-based openbsd instance. you would keep nothing personal or persistent on a public linux image used for web surfing and IRC. xen hypervisor and other virtualization techniques are making such configuration of multiple distinct operating system images attractive, especially given the ability to isolate network services and stacks this way. what least privileges you are allowed in each of these instances is clearly based on your identity within that domain and the level of trust it is given.

these are some general themes where i think security improvements will find the best result. any thoughts on related techniques or other newly developed approaches?
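
A rough sketch of the unanimous-signature check mentioned above, assuming detached GPG signatures (one per quorum member, file names illustrative) and that the members’ public keys are already in the local keyring, as in the trust-inheritance idea earlier in this comment:

    import subprocess

    def verify_quorum(image_path, signature_paths, required=None):
        """Accept the filesystem image only if enough quorum signatures verify.
        By default every listed signature must be valid (unanimous agreement)."""
        required = len(signature_paths) if required is None else required
        good = 0
        for sig in signature_paths:
            # 'gpg --verify <sig> <file>' exits 0 only when the detached
            # signature is valid and made by a key in the local keyring.
            result = subprocess.run(["gpg", "--verify", sig, image_path],
                                    capture_output=True)
            if result.returncode == 0:
                good += 1
        return good >= required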

 

Authority is a useful optimisation for coping with the complexity of our world. It means I can trust somebody else to operate on me in an emergency without having to know the details, because they are an authority on surgery. (And if they’re not, but they claim to be, a very different sort of authority will help me prosecute the psycho - assuming I’m still alive afterwards :) However, all optimisations rely on certain assumptions, and the optimisation may fail if the assumptions are not met.

Uncle George’s first problem is that he misidentified the source of the email and consequently he misattributed authority to the email. He assumed he could attribute identity based on the email headers and content. The Authority optimisation relies heavily on correct identification of the relevant authority. Con artists have evolved to take advantage of this opportunity to profit from misidentification of an authority. In cyberspace, I think this problem can be solved with petnames and the elimination of human-readable email addresses.

Uncle George’s second problem is that he’s human. You may be interested in Robert Cialdini’s “Influence: Science and Practice”, which explains the details better than I can here. You should read the chapters on “Commitment and Consistency” and “Authority”. As these are specifically issues fundamental to human psychology, I think the only way for people to avoid being screwed over is to become more aware of the issues themselves. If we were to quiz each kid in high school on a copy of “Influence” instead of classic literature, they’d end up with a significantly more practical education. (While you’re at it, throw in a Finance for Dummies book.)

Then again, I first became aware of the Milgram experiment after learning about Zimbardo’s Stanford Prison Experiment. Really, what authority did Stanley Milgram and Philip Zimbardo have to convince people to behave the way they did?

Perhaps people just *want* to be told what to do (it’s certainly been my experience with many people in work situations), and there is nothing we can do about it. The nice thing about doing what you’re told to do is that you don’t have to take responsibility for your actions. Sometimes you can even claim ignorance and put the blame on somebody else entirely (say, the person who ordered you around)!

As long as people are in that mindset, they’ll just blame “the computer” for whatever went wrong. I suspect that this is more likely to occur with work-related security issues (where indifference is rampant and blame is easy to pass on) than with more personal stuff (like bank accounts). I wonder if this means that we need different software for work than at home.

 
R_Daneel_Olivaw wrote:

Interesting Dilemma.

As Ping already pointed out, there are two steps in a typical phishing attack: first the e-mail message, then the website. So when the end-user clicks on the link to the web-site, (s)he has already accepted an authority twice. Unfortunately for us, the authority of the phisher…

People being people and all end-users being dumb ;) we now have a steep mountain to climb to win back the user’s trust.

Milgram not only raised the issue that Ping is describing here, but also pointed us toward a solution: he found that when the immediacy of the victim was increased, compliance decreased. Therefore we are only faced with establishing, for the end-user, a higher authority than that of the phisher, in a way that can’t be imitated, by getting more intimate with the user.

The KISS solution (Keep It Simple, Stupid) to getting this message across in the GUI is:
1/ Use a funky background and font colour: Gmail uses a white font on a red background.
2/ Use sound: an authoritative voice telling the end-user “SECURITY WARNING! You are being ripped off.”
3/ Use animation: An animated GIF of a wallet being drained of money.
4/ All of the above :)

Or as a famous person once said “…when you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

:)

Daneel

 
