Usable Security » Blog Archive » The Armed Butler

The Armed Butler

May 12, 2005 by Ping

Read a typical news article about computer security and you will see words like “attack” and “defend.” People speak of software being “strengthened” or “hardened” as though it were some kind of physical substance.  That might cause one to envision cannonballs smashing into the high walls of a fortress, where the only hope for safety is to build higher and thicker walls.  Or it might give the impression that software is an imperfect building material, with lots of little holes all over the place, and software engineers must diligently look for these holes and patch them over one by one.  But that is not the case at all.  All of those holes were created by people.

For example, look at the term “Trojan horse” as it is commonly used to describe software that does bad things to your computer.  The story this term conjures is something like the following:

  • You go to the store and buy a television.
  • The store gives you a box.
  • You come home and bring the box into your living room, intending to set up your television.
  • Instead of a television, the box contains a bomb.  When you open it, boom!  You are toast.

If you think in terms of this story, the implied solution is “You had better check your boxes carefully before you open them.” In the physical world, this is indeed the best you can do.  You have to use an X-ray machine or a machine that sniffs for explosives.  If you want to be safe, you have to install these machines at every entrance to your house and post guards to make sure that all boxes are scanned.

Today’s so-called “solution” to Trojan horses is exactly analogous: scan all incoming software, and try to cover all entrances.  The scanning accuracy is far from perfect, but real-world scanners are imperfect too; the analogy makes us demand no better.  When you look at it this way, the situation seems hopeless.

But the above story, while familiar, is totally inaccurate.  A bomb has the physical capacity to affect everything in its vicinity, whereas a software program can do nothing but communicate.  Programs cannot directly damage anything; all they can do is make requests of the operating system.
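This distinction is concrete at the system-call boundary. As a minimal sketch (my own illustration, not from the original post), a program can ask the kernel for a narrowly scoped handle, and any later request that exceeds that grant is simply refused:

```python
import os
import tempfile

# A program never touches the disk directly; every operation is a
# request to the kernel, which checks it against previously granted
# authority.

# Ask for a read-only handle -- a deliberately narrow grant.
fd = os.open(tempfile.gettempdir(), os.O_RDONLY)
try:
    # Writing through this handle is just another request, and it
    # exceeds the read-only grant, so the kernel refuses it.
    os.write(fd, b"boom")
except OSError as err:
    print("request denied: errno", err.errno)
finally:
    os.close(fd)
```

The program "does" nothing by itself; whether the write happens is entirely the kernel's decision, which is exactly why the fortress-wall picture is misleading.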

Thus, here is a more accurate analogy:

  • You go to the store and buy a television.
  • The store gives you a box.
  • You come home and give the box to your butler, asking him to set up your television for you.
  • The butler opens the box.  Inside is a piece of paper.  The piece of paper says “Please kill your master.”
  • The butler dutifully pulls out a gun and shoots you.

This is more like what is really happening in your computer, and it immediately raises some questions.  Why is the butler following instructions from a stranger?  Where did the butler get a gun?  Why does the butler need a gun to set up your television set?

If your butler insists that he requires a gun just to plug in your television, the solution is also obvious.  Fire the butler!

Now, the problem with these examples is that they are all rather extreme. I wouldn’t really consider a bomb in a box a Trojan…I think that is closer to a malicious virus. My example would be something more like:

  • You go to the store and buy a television.
  • The store gives you a box.
  • You come home and bring the box into your living room, intending to set up your television.
  • The box does indeed contain a television, which you set up.
  • What you do not necessarily realize (unless you thoroughly examine and possibly dismantle the television) is that the television is not only keeping track of what you watch, but contains small cameras used to survey your reactions to what you are watching.

Now, the television appears to be serving its intended purpose (i.e. delivering pre-recorded crap directly to your home in all its digitized glory), but at the same time is serving a bunch of other purposes that you didn’t ask for and aren’t aware of.

You could always try to build your own television to get around this problem…but that quickly becomes a pain, as you face the same sorts of threats from all the electronics you purchase. You could trust some underwriting lab to survey the products, but the product could have been modified after they examined it.

No, what is needed is a tech-savvy butler who can examine all of your components for you. Except…you have to hire this guy somewhere…and somehow you have to evaluate HIS credentials and make sure this butler 1) isn’t purposely letting trojaned electronics into your home and 2) isn’t ADDING trojan capabilities to electronics that were benign before he got his hands on them. For him to do this he needs a lot of leeway, and since the whole point for most users is that they need this butler precisely because they do not know enough about their devices to examine them on their own, they pretty much have to trust the butler.

It’s a real shame when the butler is paid more by the companies producing the trojans (somewhere in there it becomes nice and commercial enough that it is no longer called a trojan, but the same idea applies) than he is by you.

Anyhoo…all that aside…back to the butler with the gun. That analogy is also rather extreme. The problem isn’t that the butler has a gun so much as the fact that the butler probably has keys to your gun cabinet (which he might have so that he can clean the guns and otherwise inspect them to assure they haven’t been tampered with in your absence) and can go and GET the gun at whim. And it is really hard NOT to give the butler that key without taking a lot of responsibility on yourself. So there is a trade-off.

That said…I agree with the gist of your point…that the way the whole system is currently arranged is a bit daft. The trade-off point can be pushed to a much safer place than it currently is. There could always be cameras on the gun cabinet, so that you would be able to catch the butler in time to do something about his misbehaving. Things *could* be made transparent enough that even a naive user could keep basic tabs on the butler, and since the butler would likely behave most of the time, things would likely work out a lot better overall.

All we have to do is figure out how to do all that. Let’s get to work. ^_^


Talk about dangerous analogies! What about the following: you go to the store and buy a recipe that you then give to your butler to prepare. You’re hoping for a delicious meal, but the “trojan” recipe, rigorously followed by your butler, makes you very ill.


The analogy with the recipe isn’t quite right, because the way that the recipe makes you ill is that you ingest the food, giving it direct physical access to your body. That’s what would happen if you gave root access to a Trojan, but you don’t necessarily have to give root away freely.

Michael wrote:

I would be really interested in Ping’s opinion about your objections. As far as I can tell, those are the kind of arguments with which one can always shoot down a particular security model, and I’m not sure your examples are any less contrived (the butler has access to the weapon room, and that kind of thing).
I guess the problem here is that Ping used the example to show that the butler, in order to do this particular job, shouldn’t have too many rights (access to the gun cabinet in this case). In the real world the butler really is one person and he might have these rights, but in software there doesn’t have to be a single piece of software that is able to control everything on behalf of a too-dumb user. If there were a piece of software cleaning the guns and keeping them in shape, but that piece couldn’t leave the gun room (in order to use them), and the TV-install software weren’t able to access the gun room but could check the box, then the problems wouldn’t necessarily be there.
So, used right, with software less could be more, and software is anyway not the same as the real world.

Besides that, interesting Blog. :-)


In regard to your “Armed Butler” analogy I submit an alternative analogy which I believe to be closer both to day to day experience and to the situation we face with running software on today’s operating systems:

You decide to hire somebody to - let’s say - landscape your garden (service your vehicle, whatever). You meet them and give them the key to the gate to your yard (key to your vehicle, whatever) on your key chain, along with the keys to your house, vehicles, safe deposit boxes, luggage, office, mail box, neighbor’s house, etc., and then you tell them where your house is and what you want done.

Huh? Better, I believe (more in line with the Principle Of Least Authority, POLA), if you meet them and give them just the key to the gate (vehicle, whatever) and then tell them where your house is and what you want done. You keep the rest of the keys on your key chain with you. You may pay them something up front and something upon completion, and ask them to leave the key to the gate under the door mat.

The former situation (give them all keys/authorities) is how we are forced to operate in today’s operating systems - e.g. Windows and Unix. This is the “ambient authority” model. It is the model under which running software acts as a surrogate for the human user that loaded and started it and enjoys all the authorities of that human user.

A better model, I believe, is the classical capability model. Capabilities are analogous to and the computer embodiment of physical keys. They are the computer representations of authority. Under the capability model whenever you run software you explicitly give it just the authorities (as “capabilities”) that it needs to do its work (POLA). For example, a text or graphics editor you give access to the file(s) to edit and any needed multiplexed input/output (e.g. display, keyboard) devices. Multimedia software packages you give access to their input (perhaps some audio/video files) and output (e.g. a rendered file or perhaps a multiplexed display window). A Web browser you give access to the Internet. Any such software can ask you for additional authorities that it may need that arise dynamically such as a needed authority to write a downloaded file. It can do so via a pop-up window that is actually asking for an authority to write and not just asking for a location where it assumes it already has the authority to write. You may choose to grant or deny or ignore any such requests just as if a service person asked for a key to your house or office.

With the capability/POLA model software is constrained by being given only the minimum set of authorities (access rights) that it needs to do its requested work. With this model the damage that software can do (e.g. as a “Trojan Horse”) is limited to what it can do with the authorities it needs to do its minimally requested work. Certainly if you had a choice with your computer authorities as you do with your physical key authorities you would not consider it reasonable or wise to give all your authorities as a system user to software that you request a limited service from.

With today’s “ambient authority” model (Windows, Unix, nearly all commercially significant systems) all software is given all the authorities of the user that runs it. With this ambient authority model any software you run is a potential “Trojan Horse” that can damage your system or give away your resources, etc. as much as you yourself can. Today’s ambient authority model is broken and is the source of most of the security malaise in today’s IT infrastructure. This model simply needs to be replaced with a model that grants only needed authorities (POLA) to the software that we run on our computer systems.
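The contrast can be sketched in a few lines of Python (a hypothetical illustration with made-up function names, not from this comment). The ambient-authority version takes a file *name* and relies on the process-wide right to open anything; the capability-style version is handed one open object and holds no authority beyond it:

```python
import io

# Ambient authority: the function receives a *name* and uses the
# process's blanket right to open any file the user can open.
def word_count_ambient(path):
    # Nothing stops this code opening some other, sensitive file instead.
    with open(path) as f:
        return len(f.read().split())

# Capability style (POLA): the caller opens the file and delegates only
# that handle; the function can reach nothing but this one object.
def word_count_capability(readable):
    return len(readable.read().split())

# The caller decides exactly which authority to delegate.
print(word_count_capability(io.StringIO("the armed butler")))  # prints 3
```

If `word_count_capability` turned out to be a Trojan, the worst it could do is misuse the single stream it was given, which is this comment’s point about limiting damage to the minimally requested work.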

A change from ambient authority to POLA is a significant undertaking. Fortunately it’s an undertaking that can be accomplished with very few changes to the “user” interfaces that people must deal with. This change requires substantial modifications to the application programming interface to allow programs to designate which authorities should be given to other programs that they run - e.g. a shell or window manager would limit the authorities given to a text/graphics editor or a Web browser or any other program that’s executed on behalf of a user. Such a change would be expensive (not only in terms of design/programming, but also in terms of education for programmers, etc.) and so will of course only be undertaken if some way can be found of generating return from the required investment. I believe a change from ambient authority to POLA is the only opportunity that we have for a real solution to the computer security problems that have been plaguing us evermore in recent years.


i’m enjoying these discussions. i’ve come to the conclusion that distinct domains of authentication (user/process/system authentication) are best supported by providing hypervisor/VM environments for these distinct domains to execute in.

since end user compliance is such a critical aspect of security, the act of ensuring proper delineation between security domains must be intuitive and preferably implicit.

i’m using a live CD based distribution for this software/security model. promiscuous copying and sharing of ISO media is a handy mechanism for out-of-band key [pre]distribution. the constraints inherent in manual distribution of ISO images place effective limits on gaming within this ad-hoc network.

real security is hard; i’m confident that only open and cooperative endeavors will be able to address this issue acceptably. keep up the excellent work.

David Hopwood wrote:

“I’ve come to the conclusion that distinct domains of authentication (user/process/system authentication) is best supported by providing hypervisor/VM environments for these distinct domains to execute in.”

I don’t think there’s much doubt that the security model used by Unix-like operating systems (including WinNT) has failed. But it hasn’t failed because it is being enforced by an operating system rather than a hypervisor or VM. Any of those — or a language — is technically capable of enforcing security. Rather, it has failed for the reason Jed Donnelley gives: applications run with all the authority of their users, and depend on APIs that make it easy to confuse an app into misusing that authority.

A hypervisor/VM is in a somewhat worse position to enforce security models that fix this problem (such as capability models) than an OS or language, because it is separated from applications by another layer — the guest OS — which is not under the control of the hypervisor designer. (This is related to the point Jed was making recently on cap-talk and in … about interfaces.) In addition, there is a severe complexity penalty for hypervisors on an architecture, like x86, that is not naturally virtualizable.

The main reason why hypervisors/VMs seem attractive is that they allow running existing operating systems. As long as the problems in those operating systems are not fixed — and there is no reason to think that they can be — this is at best a stopgap. Even if each app runs in its own OS instance (which is likely to be too inefficient and involve licensing problems for closed-source OSes), it assumes that you can statically assign trust levels to applications that will stop “untrusted” applications from doing “bad” things. What is really needed is to replace the interface between apps and the layer immediately below them (whatever that layer is called) with something that allows components (at a finer grain than current apps) to be given only the authority that they need, and that makes the path of least resistance in using this interface as secure as possible.

Having said all that, a stopgap may be useful, if it is clearly labelled as such, and provided as a compatibility layer to encourage migration to systems that also provide pure capability-based APIs. What I would be worried about is, first, the potential for this to provide only a false sense of security, and second, the developer time and effort that it would take away from working on longer-term solutions.

David Hopwood wrote:

“(This is related to the point Jed was making recently on cap-talk and in … about interfaces.)”

The URL I intended to include here (it got eaten by WordPress) is


A flaw with the butler analogy, I think, is that in the case of computers, everything is done through requests made of the butler (OS). All you can do is give the butler instructions and lists of things to do.

Say you tell the butler, “Follow the instructions on this list,” and hand him one of two pieces of paper. One says “1) Buy a new TV, 2) Follow all instructions in the manual,” and the other says “1) Buy an Acme Suicide Kit, Bullets Included, 2) Follow all instructions in the manual.” (Assume these are both valuable products.)

This brings us to the real issues. Should you have multiple butlers, only some of which have a gun, and send a no-gun butler to buy the TV? Should all butlers be required to ask you “Are you sure?” before using a loaded firearm? How about before installing a new TV?

What if both pages of instructions will cause the butler to approach you with a big cardboard box and say, “This action requires administrator privileges. Are you sure you want me to execute all instructions in this box, signed by Acme Corp., whatever the consequences?” What if they’re the same box?

David G wrote:

Whoops, too many Davids. Call me David G.


The analogies on armed butlers and comparisons with bridge building are dancing around what I’d characterise as a distinction over ‘goods’ to use the economics term. In ’safety goods’, we note there are models of failure that lend themselves to statistical analysis and other mathematics like statics (for bridges), as well as good testability.

In ’security goods’ this modelling breaks down. The biggest difference is the presence of an active attacker - something that no other good or science tends to worry about. Butlers don’t follow notes to kill their masters, and bridges aren’t built to withstand demolitions. Statistics fails to predict, as does testability. Notwithstanding an ability to make proofs at a low level, taking proofs to a high level is fraught and hasn’t really made a mark in the real world. How does one build under these conditions?


[...] and sinker. This is exactly the type of mistaken comparison I was talking about in recent entries. Viruses are dangerous when they enter your body because they have complete physical [...]


