In the design of the web, whenever there was a trade-off between usability and security, usability always won. Worse, those raising usability issues were often not usability experts; they were simply using usability as a wedge to get what they wanted.
Usable security should be considered part of Total Cost of Ownership (TCoO). In a typical system the licensing costs are dwarfed by the TCoO, which includes training, maintenance, and interoperability costs, as well as the cost of constrained adaptability. A truly usable system would have a large impact on the TCoO.
It is especially common to externalize security costs by dumping them on the user.
Administrators are often, in a sense, “the enemy of usability.” It is not necessarily in the interest of administrators and IT staff to push for especially usable systems, because their job security is most stable when end users are clueless and in need of constant help. The more helpless the users, the more valuable those who assist and support them.
How would common metrics of usability help the problem? Some of the most elusive metrics are trust metrics. It is very hard to devise a universal usability metric because products are often perfectly usable in one way but not in another. For instance, if I download a file from iTunes, it is perfectly usable in the ways Apple endorses, but not in the general sense.
So what does it mean for a system to be “usable”? Does context matter? Who defines and sets the context? Are the contexts exclusively set by the producer, or does the user have some say?
Metrics could be established based on the dollar value of employees’ lost time. For example, in many cases accessing corporate resources through the firewall is so painful that many workers simply choose not to log in remotely, which directly impacts work that would otherwise be completed more readily.
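Such a metric can be made concrete with simple arithmetic. The sketch below is illustrative only: the function name and every figure in it (headcount, incident rate, minutes lost, hourly rate) are invented assumptions, not data from the discussion.

```python
# Hypothetical sketch: annual dollar cost of time lost to security friction.
# All figures are illustrative assumptions, not measured data.

def annual_lost_time_cost(num_employees, incidents_per_week,
                          minutes_lost_per_incident, hourly_rate,
                          weeks_per_year=48):
    """Dollar value of employee time lost to security friction per year."""
    hours_lost = (num_employees * incidents_per_week *
                  minutes_lost_per_incident / 60.0 * weeks_per_year)
    return hours_lost * hourly_rate

# e.g. 500 employees who each abandon one painful remote login per week,
# losing 30 minutes of work each time, at an assumed $60/hour loaded rate:
cost = annual_lost_time_cost(500, 1, 30, 60)
print(f"${cost:,.0f} per year")  # -> $720,000 per year
```

Even with crude inputs, a figure like this turns an invisible, externalized usability cost into a line item that can be weighed against the cost of fixing the system.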
It is tempting to simply say “estimate the risk and the potential for vulnerability,” but such estimates are so subjective that they vary widely and likely will continue to do so.
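The standard annualized-loss-expectancy calculation (ALE = single-loss expectancy x annual rate of occurrence) makes that subjectivity concrete; both scenarios below use invented numbers for the same hypothetical breach.

```python
# Annualized loss expectancy: ALE = SLE * ARO.
# Both input sets are hypothetical, chosen to show how subjective
# estimates dominate the result.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized loss expectancy in dollars."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Two plausible-sounding estimates for the same scenario:
optimistic = ale(50_000, 0.1)     # rare, cheap incident
pessimistic = ale(500_000, 2.0)   # frequent, expensive incident
print(optimistic, pessimistic)    # a 200x spread from subjective inputs
```

The formula is arithmetic; the disagreement lives entirely in the two inputs, which is why risk estimates produced this way tend to vary by orders of magnitude.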
There is no good, generally accepted taxonomy for discussing security risks.
There is often a temptation to model and base risk assessment on worst-case scenarios.
There seems to be an implicit assumption that it is valuable to squash bugs. But in some cases it may well be that the company benefits by leaving the bugs in, because doing so enables it to sell patches and perhaps helps sell the next version of the product. Having dealt with the bugs, customers may well feel invested in the product and loyal to the producer for fixing them. There are real costs, both psychological and direct, incurred by fixing bugs.
Costs keep coming up in a negative sense. How do you frame security and usability as value added rather than cost avoided? Which is more motivating? Some members reported getting much more traction with the positive approach than with the more typical negative one.
More and more systems seem to offer insurance against incurred losses. Do these programs really affect purchase decisions? What kind of compensation is realizable? How does one distinguish between a security incident and an availability incident?
How much should humans be included in the loop? It is tempting to take humans out in many cases because their reactions are uninformed and can exacerbate the problem; in addition, many users would rather not have to make security decisions, since they would not know how to respond anyway. On the other hand, there is value in direct communication among security professionals in responding to many types of problems.
Trying to respond immediately to everything may overload internal resources and cause a huge increase in complexity. The Roman Empire succeeded by responding precisely rather than immediately, focusing on containing systemic effects rather than maintaining strict perimeter integrity. Such a model could prove very valuable to the security industry.