Discussion Session: Invisible HCI-SEC: Ways of re-architecting the operating system to increase usability and security

July 16, 2009 by Richard Conlan

Discussion session led by Simson Garfinkel.  Free-form discussion follows.
(There were other sessions, but as I only attended this one, it is the only one I got to blog.)

Simson wants to talk about system constraints rather than usability constraints.  In practice, focusing on one at the detriment of the other simply creates an insecurity at one end or the other.  Instead, focusing on both, ideally by leaving the UI constant, allows for a balanced approach.

Too often people assume that security can be achieved through minor tweaks to the UI.  We should perhaps be more focused on architectural changes that bring the system in line with user expectations in the first place.

It feels like we cannot get ourselves to ask the right questions - too often we expect the humans in the loop to do things humans are known to be bad at.

Least privilege should be adhered to and seriously addressed, in particular, by allowing for a finer granularity.

Least privilege is very nice, but it has a fundamental assumption of static capabilities.  If it isn’t static then it requires privilege management, which becomes more difficult the more privileges there are to manage and the more often they may change.

There are some things that users are pretty good at doing, such as knowing which specific file they want to open.  This alignment is invaluable, because it allows for leveraging of the user’s intuition.

Architecture is key.  UI is, in the end, the presented abstraction.  A simpler, well-designed underlying architecture can allow for simpler, more usable abstractions.

The firewall is a perfect example of this flaw in practice: firewalls were invented to buttress failures at the OS level.  If the firewall is sufficiently strong, an organization can make useful assumptions about the environment inside it.

Windows, Mac, and Linux all run applications with the capability to touch all of the user’s files.  In many ways this is too large a level of granularity.  What about the web?  In many ways, it has a much more usable security model because sites are genuinely separate from one another.

What are areas where we can have big changes with small amounts of work?  For instance, we have encrypted filesystems.  How could we make use of these to, say, enforce least privilege at an architectural level, such as an installer limited to accessing only a specific folder?
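The folder-restricted installer idea can be sketched in a few lines.  This is a toy illustration, not anything discussed in the session — the `RestrictedInstaller` class and its methods are hypothetical names, and in a real OS the check would be enforced by the kernel, not by the installer policing itself:

```python
import os

class RestrictedInstaller:
    """Toy installer that may only write inside one allowed folder."""

    def __init__(self, allowed_root):
        self.allowed_root = os.path.realpath(allowed_root)

    def _check(self, path):
        # Resolve symlinks and ".." so the installer cannot escape its folder.
        real = os.path.realpath(path)
        if os.path.commonpath([real, self.allowed_root]) != self.allowed_root:
            raise PermissionError(f"{path} is outside {self.allowed_root}")
        return real

    def write_file(self, path, data):
        real = self._check(path)
        os.makedirs(os.path.dirname(real), exist_ok=True)
        with open(real, "wb") as f:
            f.write(data)
```

The point of the sketch is the shape of the policy: every write is funneled through a single check against one root, so the installer's capabilities are legible to the OS rather than implicit in whatever the third-party code happens to do.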

What if the installer was part of the OS instead of trusting third-parties to decide what gets installed on your system?  Then the OS has an opportunity to monitor and restrict the capabilities of the installer and limit dangerous behaviors.  A good example is the installer for Google’s Android OS.

Another idea for a trusted path is an equivalent of Alt-Tab on the PC, where there is an app-switching UI that is rendered by the OS.

When you delete a file, it should actually be deleted.  In Windows Vista, when you format a hard-drive it finally actually formats the hard-drive.

If you have version control and good tracking of changes, then write access matters less because you can always go back to an old version.  However, there is a danger then that sharing the file allows those viewing it to see older versions.  One solution to this is for sharing to only share current and future versions but not past versions.
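The "share current and future versions but not past ones" idea amounts to recording, per recipient, the version that was current at share time.  A minimal sketch, with hypothetical names (`VersionedFile`, `share_with`, `visible_to` are illustrations, not any real system's API):

```python
class VersionedFile:
    """Toy versioned file: a share grants visibility only from the
    version current at share time onward; older versions stay private."""

    def __init__(self):
        self.versions = []   # full history, visible only to the owner
        self.shares = {}     # user -> index of first visible version

    def save(self, content):
        self.versions.append(content)

    def share_with(self, user):
        # Recipient sees the current version and anything newer.
        self.shares[user] = len(self.versions) - 1

    def visible_to(self, user):
        start = self.shares.get(user)
        return [] if start is None else self.versions[start:]
```

Because each share stores only an index, revoking past-version access costs nothing; the open question the session raised — whether write access matters less once history is tracked — is unchanged.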

Most organizations, given the choice, would rather have better backups that cannot be erased than an assured ability to securely redact old data.

Automatic updates are a double-edged sword.  On one hand, they ensure security patches are distributed and installed.  On the other, they cause the system to be vulnerable to newly introduced bugs and security configuration issues.

HP has a process called CATA, which builds security reviews in from the earliest design stages.  They have found an 87% reduction in the number of vulnerabilities that occur after release.  It is mainly an up-front cost to ensure discipline in conducting reviews.

For a long time something like half the CERT advisories were buffer-overflow advisories.  These days the dominant advisories are cross-site scripting attacks.

Evidence-carrying patches could be required to prove that they are sufficiently restricted/limited before execution.

Assigning semantic tags to files and setting permissions based on these tags may allow for a closer alignment with users’ mental models.
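One way to make the tag idea concrete: attach tags to files, grant tags to applications, and allow access only when the grants cover the file's tags.  This is a toy sketch of one possible policy (requiring *every* tag on the file is my assumption, not something specified in the discussion; all names here are hypothetical):

```python
class TagPolicy:
    """Toy permission check driven by semantic tags rather than paths."""

    def __init__(self):
        self.file_tags = {}    # filename -> set of tags on the file
        self.app_grants = {}   # app name -> set of tags it may read

    def tag_file(self, filename, *tags):
        self.file_tags.setdefault(filename, set()).update(tags)

    def grant(self, app, *tags):
        self.app_grants.setdefault(app, set()).update(tags)

    def may_read(self, app, filename):
        # Assumed policy: the app must hold every tag on the file.
        needed = self.file_tags.get(filename, set())
        return needed <= self.app_grants.get(app, set())
```

The appeal is that grants are phrased in the user's vocabulary ("financial", "photos") rather than in paths or UIDs, which is closer to how users already think about their data.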

If the average user logs into their mail account and finds their mail is missing, are they more likely to think that a user hacked into their account and deleted the messages or that there was a system failure that lost the data?  It was suggested that users tend to assume an error occurred before suspecting an attacker.

Trusted favorites - i.e. ways of handing links to another user such that the recipient is assured that the link they received and clicked on goes to the location the sender intended.

Application deletion on most platforms is hard, but it is especially hard on Windows due to some design decisions made for Windows 3.1.  OSes that do it well include Android and the OLPC.

Some of these are great ideas.  But what is the low-hanging fruit we can do simply?

Hi Richard,

Thanks for blogging these! One request - can you include the papers' authors in the blog post? It adds a lot of useful context not to have to peek into the PDF (or infer a lead author name from the filename).

Adam