Ubiquitous Systems and the Family: Thoughts about the Networked Home

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a6-little.pdf
Linda Little, Elizabeth Sillence and Pam Briggs

Overall, the well-being of a family depends on how well its members communicate and interact.  If we are creating products and services for families, it is important to recognize that dynamics can vary greatly from one family to another.  The project organizers sought to account for this in designing an accessible vision of the future, getting different stakeholders on board by eliciting user feedback at a series of workshops.

The researchers developed four scenarios related to everyday tasks and showed them to thirty-eight focus groups in the UK, organizing participants by gender, technical education level, and age.  Overall there were 325 participants ranging from age sixteen to eighty-nine.  The scenario presented here at SOUPS focused on the networked home, envisioned as a smart home able to respond to wants, needs, and desires while delivering personalized services.  A key question is what trust, privacy, and security issues emerge.

The participants initially discussed how neat some of the futuristic ideas were, but then delved into questions of who controls the system and who is able to see what.  Questions arose such as: Who controls the system?  Who adds/removes items and determines which alerts are displayed, and to whom?  Who within the family is able to see the information?  Who outside the family?  Is it usable by everyone, or limited to parents?  Is it open enough to be extended by external parties?

Challenges in Supporting End-User Privacy and Security Management with Social Navigation

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a5-goecks.pdf
Jeremy Goecks, W. Keith Edwards and Elizabeth D. Mynatt

Discussions of privacy and security management often talk about users engaging in boundary management, where decisions are made about what can cross the boundary.  However, as the boundary often changes with context and task, this can be very hard to automate.  Social navigation is seen on Amazon, NYT, Slashdot, and elsewhere.  It is present whenever a system provides information about what others have done to help the current user make decisions.  It turns out that social navigation can be harnessed to help and support privacy and security management tasks.

Acumen was a system designed to expose privacy information about websites in a compact presentation.  It also indicates what choices other users have made given the available information.  It was then extended to indicate not only what the community tended to do, but what privacy experts tended to do, thus providing another level of data to the end user.  Herding occurs when users follow the crowd rather than digging into the details of the choice to be made; it was hoped that expert opinions would encourage good herding.  The preliminary deployment of Acumen ran with nine participants for about six weeks across 2,650 websites.  It was difficult to evaluate the motivation underlying user decisions.  Unfortunately, experts weren’t trusted as much as had been expected, and people tended to herd with the crowd rather than with the experts.  Building upon Acumen, the study organizers then developed Bonfire.  Bonfire allowed users to tag their decisions with explanations of why they made the choices they did.  It was hoped that this would reduce blind herding.
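
To make the herding data concrete, below is a toy sketch of the kind of per-site decision counts a social-navigation system like Acumen surfaces.  The field names and numbers are invented for illustration; this is not Acumen's actual data model.

    # Toy sketch of social-navigation data: per-site counts of community and
    # expert decisions (all names/numbers are invented, not Acumen's real model).
    from dataclasses import dataclass

    @dataclass
    class SiteStats:
        community_block: int
        community_allow: int
        expert_block: int
        expert_allow: int

        def summary(self) -> str:
            total = self.community_block + self.community_allow
            pct = 100 * self.community_block // max(total, 1)
            return (f"{pct}% of {total} users block cookies here; "
                    f"experts: {self.expert_block} block / {self.expert_allow} allow")

    print(SiteStats(40, 10, 3, 0).summary())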

The lessons learned from Acumen and Bonfire are that it is important to support the use of multiple information sources for decision making.  Managing herding turns out to be difficult because of “information cascading,” and because of a general distrust of the experts.  Information cascading occurs when a user goes against their own inclinations because the majority have chosen otherwise; an initial cascade creates a larger majority, which amplifies the effect.  Information cascades occur in all kinds of systems, but are especially problematic for privacy and security decisions, which often involve incomplete information and a lot of variability in expertise and personal preferences.
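
A quick toy simulation (not from the paper) shows how a cascade takes hold: agents decide in sequence and defer to the majority once it leads by two, so after an early run of similar choices, later agents' own signals stop mattering.

    # Toy information-cascade simulation: once the observed majority leads by
    # two, each agent herds regardless of their own private signal.
    import random

    random.seed(1)
    choices = []
    for _ in range(20):
        private_signal = random.choice(["allow", "block"])  # agent's own inclination
        lead = choices.count("allow") - choices.count("block")
        if lead >= 2:
            decision = "allow"        # the majority outweighs the private signal
        elif lead <= -2:
            decision = "block"
        else:
            decision = private_signal
        choices.append(decision)

    print(choices)  # after an early lead, the rest of the sequence is uniform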

A “Nutrition Label” for Privacy

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a4-kelley.pdf
Patrick Gage Kelley, Joanna Bresee, Lorrie Faith Cranor, and Robert W. Reeder

Privacy policies in their current form are typically long, dense, and ignored by users.  P3P is an XML format that allows websites to specify their privacy policies in a machine-readable manner.  The study’s initial attempt at visualizing the P3P data was to expand it into a grid that tried to specify all available options, but the result was massively cumbersome and not easy to decipher.  As the study organizers sought a way to compress the information into a more digestible format, they began looking into nutrition labels on food in supermarkets.  They also looked at the draft labels being considered in the financial services industry.
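
For readers unfamiliar with P3P, here is a minimal sketch of pulling label-style rows out of a simplified P3P-like policy.  The snippet is invented for illustration, though the element names (STATEMENT, PURPOSE, DATA) follow the P3P 1.0 vocabulary; real policies also carry namespaces and more structure.

    # Minimal sketch: extract (data, purposes) rows from a simplified
    # P3P-style policy. The XML below is invented for illustration.
    import xml.etree.ElementTree as ET

    POLICY = """
    <POLICY name="example">
      <STATEMENT>
        <PURPOSE><admin/><develop/></PURPOSE>
        <DATA-GROUP>
          <DATA ref="#user.home-info.telecom.telephone"/>
          <DATA ref="#dynamic.cookies"/>
        </DATA-GROUP>
      </STATEMENT>
    </POLICY>
    """

    root = ET.fromstring(POLICY)
    for stmt in root.iter("STATEMENT"):
        purposes = [p.tag for p in stmt.find("PURPOSE")]
        for data in stmt.iter("DATA"):
            # Each pairing becomes one row/cell in a label-style grid.
            print(f"{data.get('ref')}: used for {', '.join(purposes)}")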

They brought people into the lab and showed them different designs of P3P-based labels to build up a sense of what users understood the labels to communicate.  This included five focus groups with seven to eleven participants each, with design iterations between group sessions.  The study proper included twenty-four participants asked to answer questions about eight single policies, plus four comparison tasks, using either the privacy label or the natural-language policy.  The most noticeable result came when participants were asked about something that wasn’t collected, because natural-language policies tend not to mention things they do not collect.  Another interesting result came when a question was complex enough to require combining different aspects of the policy, which was done much more easily with the label than by reading over the entire natural-language policy to find the relevant details.

The study also included a likability survey and found that participants found the label understandable, easier to use, and more enjoyable for making comparisons than the natural-language policy.  The final label is color-coded so that a high-level understanding can be gained at a glance, uses well-defined terms with a limited set of icons plus a legend describing them, and is designed to fit within a single-page printout for those not wanting to read it off the computer screen.  After another focus-group session following the study, they decided to also indicate when a given type of information is collected but an opt-out option is available.  They are preparing for a larger online study and exploring a condensed version of the label.

School of Phish: A Real-World Evaluation of Anti-Phishing Training

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a3-kumaraguru.pdf
Ponnurangam Kumaraguru, Justin Cranshaw, Alessandro Acquisti, Lorrie Cranor, Jason Hong, Mary Ann Blair and Theodore Pham

How do we train users to not be phished?  There are existing materials out there that are pretty good, but they could be better.  Regardless, most people don’t proactively go looking for security training materials and “security notice” e-mails sent to users tend to be ignored.

This study explored the use of PhishGuru.  Essentially, the tactic is to send users phishing e-mails where a phished user is directed to a site explaining that they were phished and how they can avoid being phished in the future.  The idea is that users who realize they have just been fooled may be in a “teaching moment” where they would be interested in improving their defenses.
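
As a rough illustration of the embedded-training tactic (not PhishGuru's actual code), the sketch below sends a benign “phishing” e-mail whose link points at a training page, so anyone who clicks lands on the lesson at the teachable moment.  All addresses, hosts, and URLs here are invented.

    # Illustrative sketch of embedded training: the "phish" link leads to a
    # lesson page, not a credential form. All names/URLs are invented.
    import smtplib
    from email.message import EmailMessage

    TRAINING_URL = "https://training.example.edu/lesson"  # hypothetical landing page

    def send_simulated_phish(smtp_host: str, recipient: str) -> None:
        msg = EmailMessage()
        msg["From"] = "helpdesk@example.edu"   # plausible-looking but benign sender
        msg["To"] = recipient
        msg["Subject"] = "Action required: verify your account"
        msg.set_content(f"Your account will be locked.\nVerify here: {TRAINING_URL}")
        with smtplib.SMTP(smtp_host) as smtp:  # connect and send
            smtp.send_message(msg)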

Past lab studies showed that PhishGuru was effective in training users.  This study sought to evaluate the effectiveness of PhishGuru training in the real world.  It investigated user retention of anti-phishing knowledge and tactics after one week, two weeks, and four weeks.  It also sought to compare the effectiveness of sending the user two training messages instead of only one.  The study included 515 participants divided into a control group, a group that received one training message, and a group that received two training messages.  Over a twenty-eight-day period they sent out seven simulated phishing messages and three legitimate e-mails from CMU’s Information Security Office.  The campus help desk and all spoofed departments were notified before messages were sent (this was deemed necessary after a previous study attempt was foiled by the ISO sending out a campus-wide phishing warning).

On day zero, 52% of the control group and 48% of the users who would receive PhishGuru training fell for phishing attempts.  On day twenty-eight, 44% of the control group but only 24% of PhishGuru-trained users fell for phishing attempts!  Across the entire study period, those who had been trained with PhishGuru in response to initially falling for a phish were consistently better than the control group at avoiding phishing attempts.  There was little difference between those trained once and those trained twice.  After the study a survey was sent out to participants, with 80% recommending that PhishGuru training be continued.

Social Applications: Exploring A More Secure Framework

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a2-besmer.pdf
Andrew Besmer, Heather Lipford, Mohamed Shehab and Gorrell Cheek

Social applications are apps built on top of social network platforms such as Facebook or Google’s OpenSocial.  They are intended to leverage the social network to provide value to users.

Typically, when installing an app, the user is presented with a screen prompting them to approve the app's access to some set of their personal information.  As these are social apps, this generally includes information about their friends…who may not be okay with the app having their profile information.  But the app doesn't have to ask anybody except the installing user, since the installing user already has access to all the other info!  On Facebook the policies are highly permissive by default.  Google’s OpenSocial is more adaptable, allowing for site-specific defaults, but is generally similar.

This study proposed the addition of a User-to-Application policy that would allow the user to expose a selected subset of their info rather than granting blanket permission.  Whether this improves user privacy depends on whether users actually use these settings appropriately.  The study implemented a series of Facebook apps with an extended UI, allowing the application to specify which bits of information it required to run at all and which were optional.  The extended UI also made explicit which friend info is implicitly shared by using the app.  The study included 17 participants and had each install all 11 applications, resulting in 187 “application scenarios”.  Across the 187 scenarios, 150 were installed, with 103 just choosing the fully permissive defaults.  Motivated users averaged 23 seconds per scenario, tended to consider which information to share, and allowed only contextually appropriate information, whereas unmotivated users averaged 10 seconds per scenario and just accepted the defaults even when the app asked for information clearly unrelated to its intended usage.
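
A toy sketch of such a user-to-application policy appears below: the app declares required versus optional fields, and the install succeeds only if the user grants at least the required set.  The field names are invented; this is not the Facebook or OpenSocial API.

    # Toy user-to-application policy: the app declares required vs. optional
    # fields and the user grants a subset. All field names are invented.
    REQUIRED = {"name"}                       # app cannot run without these
    OPTIONAL = {"birthday", "photos", "friend_list"}

    def effective_grant(user_choices: set[str]) -> set[str] | None:
        """Fields the app may read, or None if the install must fail."""
        if not REQUIRED <= user_choices:
            return None                       # a required field was withheld
        # Expose only the optional fields the user explicitly opted into.
        return REQUIRED | (user_choices & OPTIONAL)

    print(effective_grant({"name", "photos"}))  # {'name', 'photos'}
    print(effective_grant({"photos"}))          # None: 'name' was required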

Revealing Hidden Context: Improving Mental Models of Personal Firewall Users

July 16, 2009 by Richard Conlan

http://cups.cs.cmu.edu/soups/2009/proceedings/a1-raja.pdf
Fahimeh Raja, Kirstie Hawkey and Konstantin Beznosov

A tenet of Usable Security put forth by Ka-Ping Yee and others is that the user should always be able to view and understand their current security state.  As users become more mobile this becomes even more important because the underlying state may be dynamic and so security implications are in flux.

The Windows Vista firewall can be configured to automatically adjust its security based on the type of network connection in use.  The user can change these configurations through two interfaces.  The advanced interface lets the user twiddle settings per network location, but is hard for normal users to understand.  The basic interface is easier to use, but drops the contextual info, and its changes twiddle the settings for whatever the current network location happens to be.  It is reasonable for such a user to think changes made apply everywhere, but this is not the case.  Worse, a user wishing to ensure security in all contexts has to remember to return and change the settings upon entering each new context.
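
The pitfall is easy to model.  In the minimal sketch below (profile names mirror Vista's network locations; everything else is invented), a change made through the basic UI silently lands on only the current location's profile:

    # Minimal model of per-location firewall profiles and the basic-UI pitfall.
    profiles = {
        "Private": {"block_all_incoming": False},  # invented defaults
        "Public":  {"block_all_incoming": False},
    }
    current_location = "Private"

    def basic_ui_set(block: bool) -> None:
        # The basic UI edits only the *current* location's profile.
        profiles[current_location]["block_all_incoming"] = block

    basic_ui_set(True)           # user tightens settings, assuming it applies everywhere
    current_location = "Public"  # the laptop later joins a cafe network
    print(profiles["Public"])    # {'block_all_incoming': False} -- unchanged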

The study added UI elements to the advanced view to try to make it more understandable, adding graphics to communicate the effects of the settings for each context.  They also presented the settings in an NxN grid to make it more obvious what the settings were for each context.  The study was a within-subjects lab study with 13 pilot testers and 60 participants in the study proper (half using Vista’s UI followed by the study UI, and half using the study UI followed by Vista’s UI).  The majority of participants used Vista on a laptop as their normal OS, but the study did include users of Windows XP/2K, Mac OS, and Linux on laptops and desktops.

Success of the UIs was evaluated by examining the users’ mental models of what was going on while using each UI.  In both orderings participants had a better mental model after using the study UI than before, generally improving to 100% accuracy.  Startlingly, the group that used the study UI followed by the default Vista UI saw their mental models deteriorate after using Vista’s UI!  93% of participants reported a preference for the study UI.

Invited Talk: Redirects to login pages are bad, or are they?

July 16, 2009 by Richard Conlan

Speaker: Eric Sachs

Usability “experts” claim that websites should just ask a person for their login information directly instead of redirecting them.
Security “experts” claim that redirects promote phishing (and want to shoot the usability experts).

Turns out, having sites prompt for a password is annoying!

Some % of users couldn’t immediately remember their password.  Another large group just found it annoying and indicated web engineers were being lazy.  A significant % of users were concerned something phishy was going on.

Redirect for login:

- Most users don’t notice the redirection.  They think of it like signing a form to let a department store check their credit.
- A big usability concern has been users who were not already logged in, but that group is very small and actually has minimal impact on approval rate.
- The primary factor impacting approval is simplicity of the page.
- OAuth (and equivalents) provide ongoing access to the data, which is a usability win!
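
For context, here is a rough sketch of the redirect step in an OAuth-style authorization flow; the endpoint, client ID, and scope values are hypothetical and not from the talk.

    # Rough sketch of building the authorization redirect in an OAuth-style
    # flow. Endpoint, client_id, and scope values are invented.
    from urllib.parse import urlencode

    AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth/authorize"  # hypothetical IDP

    def build_redirect(client_id: str, redirect_uri: str, scope: str, state: str) -> str:
        """URL the website sends the user to; the IDP shows the approval page."""
        params = {
            "response_type": "code",       # ask for an authorization code
            "client_id": client_id,
            "redirect_uri": redirect_uri,  # where the IDP sends the user back
            "scope": scope,
            "state": state,                # anti-CSRF token echoed back by the IDP
        }
        return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

    print(build_redirect("demo-app", "https://app.example.com/cb", "contacts.read", "xyz"))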

Unfortunately the service provider can easily hurt usability by adding additional text: explanations, warnings, and disclaimers just confuse users.  Generally it is more usable to get to the point and provide a link to learn more.  Forcing the user to make more decisions typically makes those users angry.

Key lesson: If usability is poor, websites will stick with screen scraping.

But what about phishing?

The alternative is that the website asks for the person’s e-mail and password directly and scrapes their address book.  There is a reason Microsoft, MySpace, Facebook, Google, Yahoo!, Twitter, etc., all support OAuth (or equivalents): their data indicates it is a better overall win.

So, it may theoretically promote phishing, but the alternative is worse.

Let’s move on to OAuth’s cousin, OpenID.

Usability “experts” claim OpenID is too hard to use.
Security “experts” claim that redirects promote “phishing”.

In 2008, Yahoo!, Google, and Microsoft all took a stab at being OpenID IDPs.  None were especially usable.  Though they are competitors, it became apparent that sharing the results of usability work on these UIs was a win-win scenario.  Since then all have improved their UIs and usability.  All are still exploring possibilities.

What did we learn?

- The primary factor impacting approval is simplicity of the page.  Minimal text + one button is best.
- Auto-approval is a usability win!

Providing the user’s e-mail address is a huge usability win.  Users understand it.  Websites need it.  It provides an invisible upgrade.

Security “experts” claim this is a privacy loss.  If the alternative of e-mail+password logins on websites didn’t exist, maybe they would be right.  But real-world data shows that e-mail addresses are helpful regardless of federated login.  Specialized websites can still use OpenID without e-mail addresses.

What are the alternatives to using an e-mail address?

- Usernames without federated login?  C
- Per-RP IDs with federated login?  C
- Client-side software?  F

Security “experts” claim that IDPs will track who you visit.  But if they are your e-mail provider they can look through your e-mail anyway, and your ISP can just track this via IP addresses.  Why don’t they?  Because of Terms of Service agreements and the like, which could also exist for IDPs.

We’ve made some unintuitive usability discoveries.  Don’t bother telling users about the benefits of federated login; it actually hurts the completion rate because it is distracting.  Don’t go directly from the e-mail to the OAuth+OpenID page; some context is needed.  When you are done, tell users how to sign in the next time.  The target success rate is 80-90%.

So…does OpenID increase phishing?

Well, what is phishing?  The reality is that users reuse passwords, so prompting users to log in with an e-mail and password on any site will tend to capture their e-mail password.  Creating multiple logins across multiple sites tends to leak passwords anyway.

Maybe OpenID is phishable.
But it’s probably better than the alternative.

SOUPS 2009

July 16, 2009 by Richard Conlan

Welcome to SOUPS 2009! 

SOUPS 2009 is being held at Google in Mountain View, CA.

http://cups.cs.cmu.edu/soups/2009/

SOUPS 2009?

July 25, 2008 by Richard Conlan

This brings us to the close of SOUPS 2008.  Hope y’all learned something interesting.

SOUPS 2009 will be held from July 15-17, 2009 in Mountain View, CA.

Analyzing Websites for User-Visible Security Design Flaws

July 25, 2008 by Richard Conlan

http://cups.cs.cmu.edu/soups/2008/proceedings/p117Falk.pdf

Media buzz about this paper:
* Information Week: Most Bank Sites Are Insecure
* Slashdot: Most Bank Websites Are Insecure
* Network World: Bank Web sites full of security holes, University of Michigan survey finds
* Ars Technica: Study: websites of financial institutions insecure by design

The study was highly motivated by personal experiences dealing with banks and banking.  Online banking tends to have login boxes on insecure pages.  When needing to reach customer service, the contact information is also commonly on an insecure page.  Setting up a retirement account online required using an SSN as an ID.

The goal of the study was to examine not bugs or browser flaws, but design flaws that would confuse users and even cause problems for security-savvy ones.  The study analyzed 214 websites (mostly banks) and searched for design issues.

One of the most dangerous flaws was the tendency of banks to serve login pages over HTTP.  In the presence of a DNS attack, everything about the legitimate page is indistinguishable from a forged page, since even the browser URL would be correct.  Many, many banks do this, and it is completely insecure; even a security-aware user could not distinguish the page unless they somehow detected the DNS spoofing.  Given the recently reported DNS vulnerability, this is a very realistic and dangerous attack vector.
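
To give a flavor of the kind of user-visible check the study performed, the sketch below flags login forms that are rendered on, or submit over, plain HTTP.  The URLs are hypothetical, and a real survey would also need to follow redirects and frames.

    # Minimal sketch: flag a login form rendered on or submitting over HTTP.
    # URLs are hypothetical; this mirrors only one of the paper's checks.
    from urllib.parse import urlparse, urljoin

    def flag_insecure_login(page_url: str, form_action: str) -> list[str]:
        flaws = []
        if urlparse(page_url).scheme != "https":
            flaws.append("login box rendered on an HTTP page (spoofable via DNS)")
        action = urljoin(page_url, form_action)  # resolve relative form actions
        if urlparse(action).scheme != "https":
            flaws.append("credentials would be submitted over HTTP")
        return flaws

    print(flag_insecure_login("http://bank.example.com/", "/login"))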

It is also quite dangerous to put contact information on an HTTP page.  Once again, the page could be spoofed to include the attacker’s contact information instead of the bank’s, allowing for a trivial social-engineering attack when the customer calls in.

Another common vulnerability is the practice of banks delegating certain tasks to third-party sites.  Often the third-party site has no clear connection to the bank, creating a break in the chain of trust, since the user cannot distinguish between being sent to a legitimate or a malicious third-party site.  This is especially bad because even if the bank’s site were HTTPS, an attacker could detect the point at which the session changes IP address (implying a change of servers) and present some other site in place of the actual third-party site.  Again, even a careful user would have no real method to detect such an attack.

Many banks have unclear policies that don’t allow the customer to predict the security of the bank’s actions.  For instance, many banks offer to “e-mail statements”.  Most likely this means they will send an e-mail announcing the availability of the statement online, but as worded the user cannot predict this and must decide amidst the ambiguity.

76% of the banks analyzed had at least one of the above-mentioned flaws.

The Challenges of Using an Intrusion Detection System: Is It Worth the Effort?

July 25, 2008 by Richard Conlan

http://cups.cs.cmu.edu/soups/2008/proceedings/p107Werlinger.pdf

This paper sought to examine, as its title suggests, whether IDSs help or hinder incident detection and response.  It was motivated by a discussion group at CHI 2007.

Current IDSs still need human intervention to account for false positives and make use of the results.  The study included 34 interviews with people working in security and intrusion detection, 9 of whom were confirmed to have experience with IDSs.  They also conducted ~15 hours of participatory observation.

Those who supported IDSs suggested:
- IDSs help identify problems
- they reduce uncertainty about the effectiveness of security measures
- they allow monitoring of the network without overly compromising user privacy

Those who were against IDSs suggested:
- they were expensive
- much work and time was required to tune the system
- they were unreliable, buggy, and caused dropped packets
- they lacked clear utility; it was hard to see a concrete improvement
- they often sat idle because of the cost overhead of using them

During the participatory observation there were a number of issues encountered deploying the IDS.  To connect the IDS, two ports were needed, but they were unable to find two available ports where they wanted them, so they ended up choosing two ports on a less interesting network.  The “quick tuning” option in the GUI was insufficient for configuring anything of any complexity.  Because they were trying to configure it in a distributed environment, they encountered extra overhead getting approval from all of the stakeholders.

Ideally the IDS would have been deployed in a critical network, but they were unable to do so.  It is hard to assess the IDS utility without full deployment.

A User Study of Off-the-Record Messaging

July 25, 2008 by Richard Conlan

http://cups.cs.cmu.edu/soups/2008/proceedings/p95Stedman.pdf

Instant messaging has become a common form of communication on the Internet, but most of the available services are not secure.  There are available solutions, such as SecureIM, Pidgin-Encryption, and SILC, but they all have shortcomings compared to OTR (Off-the-Record).

The goal of OTR is to make conversations online as private and secure as face-to-face conversations.  OTR was recently redesigned to be more easily used by non-technical users.  The researchers for this study performed a user study on the new version of OTR.

Optimally using OTR requires initiating encryption per conversation and authenticating the user at the other end of the connection.  In the original version of OTR the only way to authenticate was by manually verifying each user’s key fingerprint.  The newer version allows users to authenticate by entering a shared secret, such as the place they first met.
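
As a toy illustration of the original fingerprint flow (simplified; the hash and key details do not match OTR's exact construction), each side derives a readable digest of the peer's public key, and the users compare the digests out of band:

    # Toy fingerprint rendering for manual verification. Simplified; not
    # OTR's actual fingerprint construction.
    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        """Render a key digest as five groups of eight hex digits."""
        digest = hashlib.sha1(public_key_bytes).hexdigest().upper()
        return " ".join(digest[i:i + 8] for i in range(0, len(digest), 8))

    print(fingerprint(b"...bob's public key..."))
    # Note the one-way pitfall from the study: each side must verify the
    # other's key, or only one direction is actually authenticated.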

The study was conducted using the “think aloud” method and included four pairs of friends.  In some sessions friends were paired together, and in others a friend from one pair talked with somebody from another pair.  This latter setup was intended to test usability among users who didn’t know each other well.  To test learnability of the system they ran a second session in which the users were paired differently.

By default OTR initiates encryption automatically, so nobody had problems getting the crypto going.  Participants did, however, have trouble authenticating one another.  The most common first attempt was to press the OTR button, but this does not authenticate a session (it actually rekeys it).  The next step was commonly to click the injected “authenticate” link provided in the IM window, which brings the user to a help page.  Unfortunately, this did not actually help any participants because it did not say to right-click.  Many users just looked at the images on the help page, which unfortunately led to authentication errors because an image of how not to authenticate is pictured before the one describing how to do it properly.

Two participants tried to perform the “old style” authentication, which led to much confusion: one buddy thought they were verified while the other was not, because the fingerprint verification method is one-way and must be performed on each side of the connection.

From these results the researchers proposed:
- the OTR menu should open when the user left-clicks the button
- the help page needs clearer information, such as saying to right-click on the button
- the help page should make it clearer that the “what not to do” image shows what not to do, by crossing it out or otherwise pictorially indicating the danger
- the authentication interface should itself help guide the user towards proper use of the system
- the interface should provide a box for a “question” in addition to the shared secret input