
Off Your Face On Facebook Case Study

Privacy and Facebook: The Visible and the Invisible

The privacy concerns delineated above are confirmed by several reports and studies on Facebook. In a report on 23 Internet service companies, the watchdog organization Privacy International charged Facebook with severe privacy flaws and put it in the second-lowest category for “substantial and comprehensive privacy threats” (“A Race to the Bottom,” 2007). Only Google scored worse; Facebook tied with six other companies. This rating was based on concerns about data matching, data mining, transfers to other companies, and in particular Facebook's curious policy that it “may also collect information about [its users] from other sources, such as newspapers, blogs, instant messaging services, and other users of the Facebook service” (“Facebook Principles,” 2007, Information We Collect section, para. 8).

As early as 2005, Jones and Soltren identified serious flaws in Facebook's set-up that facilitated privacy breaches and data mining. At the time, nearly 2 years after Facebook's inception, users' passwords were still being sent without encryption and could thus easily be intercepted by a third party (Jones & Soltren, 2005). This has since been corrected. A simple algorithm could also be used to download all public profiles at a school, since Facebook used predictable URLs for profile pages (Jones & Soltren, 2005); the sketch below illustrates the idea. The authors also noted that Facebook gathered information about its users from other sources unless the user specifically opted out. As of September 2007, the opt-out choice was no longer available but the data collection policy was still in force (“Facebook Principles,” 2007).
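To make the enumeration risk concrete, here is a minimal sketch of the kind of harvesting loop Jones and Soltren warn about. It assumes a hypothetical site whose profile pages use sequential numeric IDs; the domain, URL pattern, and ID range are illustrative stand-ins, not Facebook's actual 2005 scheme:

```python
# Minimal sketch: harvesting public profiles when URLs are predictable.
# The domain and URL pattern below are hypothetical stand-ins.
import urllib.request

URL_TEMPLATE = "http://social-network.example.com/profile.php?id={}"

def harvest_profiles(first_id: int, last_id: int) -> dict:
    """Download every publicly reachable profile page in an ID range."""
    profiles = {}
    for profile_id in range(first_id, last_id + 1):
        url = URL_TEMPLATE.format(profile_id)
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                profiles[profile_id] = response.read()
        except OSError:
            # Restricted, nonexistent, or unreachable pages are skipped.
            continue
    return profiles

# A campus-sized block of sequential IDs can be swept in one run.
campus_profiles = harvest_profiles(100_000, 110_000)
```

The attack requires no credentials and no insight into the site's internals; the predictability of the URL space alone does the work, which is why unguessable identifiers and search restrictions matter.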

Even the most lauded privacy feature of Facebook, the ability to restrict one's profile so that only friends can view it, failed for the first 3 years of its existence: Information posted on restricted profiles showed up in searches unless a user explicitly opted his or her profile out of searches (Jones & Soltren, 2005). This glitch was fixed in late June 2007, but only after a technology blogger made the loophole public and contacted Facebook (Singel, 2007). Recent attempts to make the profile restrictions more user-friendly and comprehensive seem mostly PR-driven and still include serious flaws (Soghoian, 2008a).

In September 2006, Facebook introduced the “News Feed,” which tracks and displays the online activities of a user's friends, such as uploading pictures, befriending new people, or writing on someone's wall. Although none of the individual actions were private, their aggregated public display on the start pages of all friends outraged Facebook users, who felt exposed and deprived of their sense of control over their information (boyd, 2008). Protest groups formed on Facebook, among them the 700,000-member group “Students Against Facebook News Feed” (Romano, 2006, para. 1). Subsequently, Facebook introduced privacy controls that allowed users to determine what was shown on the News Feed and to whom.

The implementation of a platform for programs created by third-party developers in summer 2007, and the ensuing flood of applications that track user behaviors or make information from personal profiles available for targeted advertising, do not inspire trust in Facebook's privacy policy (Schonfeld, 2008; Soghoian, 2008b). Most notably, the Facebook Ads platform has raised serious questions. In an attempt to capitalize on social trust and taste, Facebook's “Beacon” online ad system tracks user behavior, such as online shopping. Initially, this information was broadcast to users' friends. This led to angry protests in November 2007 and the formation of a Facebook group called “Petition: Facebook, Stop Invading My Privacy!” that gained over 70,000 members within its first two weeks. Facebook responded by introducing a feature that allowed users to opt out of the broadcasting, yet Beacon continues to collect data “on members' activities on third-party sites that participate in Beacon even if the users are logged off from Facebook and have declined having their activities broadcast to their Facebook friends” (Perez, 2007).

Additional concerns have been raised about Facebook's use by government agencies such as the police or the Central Intelligence Agency, and about links between Facebook and those agencies. In a rather benign example, a police officer resorted to searching Facebook after witnessing a case of public urination outside a fraternity house at the University of Illinois at Urbana-Champaign, where the only other witness on the scene claimed not to know the lawbreaker. Once on Facebook, the officer searched that witness's friend list and found the lawbreaker he was looking for. The first man received a $145 ticket for public urination; the other received a $195 ticket for obstructing justice (Dawson, 2007). Additionally, the Patriot Act allows state agencies to bypass privacy settings on Facebook in order to look up potential employees (NACE Spotlight Online, 2006). An online presentation, “Does what happens in the Facebook stay in the Facebook?” (2007), points out a number of connections between various Facebook investors and In-Q-Tel, the not-for-profit venture capital firm funded by the CIA to invest in technology companies that serve the CIA's information technology needs. Facebook's chief privacy officer, Chris Kelly, accused the video of “strange interpretations of our policy” and “illogical connections” but did not substantially rebut the allegations (Kelly, 2007).

Further criticism is based on the fact that third parties can use Facebook for data mining, phishing, and other malicious purposes. Creating digital dossiers of college students containing detailed personal information would be a relatively simple task; a clever data thief could even deduce social security numbers, which can often be inferred from information such as a 5-digit ZIP code and date of birth, from the data posted on almost half the users' profiles (Gross & Acquisti, 2005; see the sketch after this paragraph). Social networks are also ideal for mining information about relationships or common interests in groups, which can be exploited for phishing. For example, Jagatic, Johnson, Jakobsson, and Menczer (2005) launched a phishing experiment on selected college students at Indiana University, using social network sites to get information about the students' friends. The experiment had an alarmingly high 72 percent success rate within the social network, as opposed to 16 percent within the control group. The authors add that other phishing experiments by different researchers showed similar results: “We must conclude that the social context of the attack leads people to overlook important clues, lowering their guard and making themselves significantly more vulnerable” (Jagatic et al., 2005, p. 5). A high level of vulnerability is also engendered by the fact that many users post their address and class schedule, thus making it easy for potential stalkers to track them down (Acquisti & Gross, 2006; Jones & Soltren, 2005). Manipulating user pictures, setting up fake user profiles, and publicizing embarrassing private information to harass individuals are other frequently reported forms of malicious mischief on Facebook (Kessler, 2007; Maher, 2007; “Privacy Pilfered,” 2007; Stehr, 2006).
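A back-of-the-envelope calculation shows why seemingly innocuous profile fields matter. All counts below are hypothetical assumptions for illustration, not actual SSN assignment data; the point is only how drastically a ZIP code and a birth date can shrink a guesser's search space:

```python
# Illustrative only: hypothetical counts showing how public profile data
# (ZIP code, date of birth) could shrink the space of candidate SSNs.
FULL_SPACE = 10 ** 9              # all 9-digit numbers: one billion

area_candidates = 5               # assumed: plausible 3-digit area numbers
                                  # for a given ZIP code of issuance
group_candidates = 4              # assumed: 2-digit group numbers in use
                                  # around the person's date of birth
serial_candidates = 10 ** 4       # 4-digit serial number, unconstrained

reduced_space = area_candidates * group_candidates * serial_candidates
print(f"{reduced_space:,} candidates instead of {FULL_SPACE:,} "
      f"(a {FULL_SPACE // reduced_space:,}x reduction)")
# -> 200,000 candidates instead of 1,000,000,000 (a 5,000x reduction)
```

Even under these loose assumptions the search space collapses by orders of magnitude, which is why Gross and Acquisti treat fields like hometown and birthday as sensitive.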

While Facebook's privacy flaws are well documented and have made it into the news media, relatively little research is available on how exactly these problems play out in the social world of Facebook users and how much users know and care about these issues. In their small-sample study on Facebook users' awareness of privacy, Govani and Pashley (2005) found that more than 80 percent of participants knew about the privacy settings, yet only 40 percent actually made use of them. More than 60 percent of the users' profiles contained specific personal information such as date of birth, hometown, interests, relationship status, and a picture.

The study by Jones and Soltren (2005) showed that 74 percent of users were aware of the privacy options in Facebook, yet only 62 percent actually used them. At the same time, users willingly post large amounts of personal information (Jones and Soltren found that over 70 percent posted demographic data such as age, gender, location, and interests) and demonstrate disregard for both the privacy settings and Facebook's privacy policy and terms of service: Eighty-nine percent admitted that they had never read the privacy policy, and 91 percent were not familiar with the terms of service. This failure to understand Facebook's privacy policies and terms of service is widespread (Acquisti & Gross, 2006; Govani & Pashley, 2005; Gross & Acquisti, 2005). In their before-and-after study, Govani and Pashley (2005) noticed that most students did not change their privacy settings on Facebook, even after they had been educated about the ways they could do so. Several studies found that there is little relationship between social network site users' disclosure of private information and their stated privacy concerns (Dwyer, Hiltz, & Passerini, 2007; Livingstone, 2008; Tufekci, 2008). However, a recent study showed that actual risk perception significantly correlates with fear of online victimization (Higgins, Ricketts, & Vegh, 2008). Consequently, the authors recommend better privacy protection, greater transparency about who is visiting one's page, and more education about the risks of posting personal information to reduce risky behavior.

Tufekci (2008) also asserted that students may try “to restrict the visibility of their profile to desired audiences but are less aware of, concerned about, or willing to act on possible ‘temporal’ boundary intrusions posed by future audiences because of persistence of data” (p. 33). The most obvious and readily available mechanism for controlling the visibility of profile information is restricting it to friends. However, Ellison, Steinfield, and Lampe (2007) discovered that only 13 percent of the Facebook profiles at Michigan State University were restricted to “friends only.” Moreover, the category “friend” is very broad and ambiguous in the online world; it may include anyone from an intimate friend to a casual acquaintance or a complete stranger known only by an online identity. Though Jones and Soltren (2005) found that two-thirds of the surveyed users never befriend strangers, their finding also implies that one-third are willing to accept unknown people as friends.

This is confirmed by the experiment of University of Missouri student Charlie Rosenbury, who wrote a computer program that enabled him to invite 250,000 people to be his friend; 30 percent added him as their friend (Jump, 2005). Similarly, the IT security firm Sophos set up a fake profile to determine how easy it would be to data-mine Facebook for the purpose of identity theft. Out of 200 contacted people, 41 percent revealed personal information, either by responding to the contact (and thus making their profile temporarily accessible) or by immediately befriending the fake persona. The divulged information was enough “to create phishing e-mails or malware specifically targeted at individual users or businesses, to guess users' passwords, impersonate them, or even stalk them” (“Sophos Facebook ID,” 2007).

These findings show that Facebook and other social network sites pose severe risks to their users' privacy. At the same time, they are extremely popular and seem to provide a high level of gratification to their users. Indeed, several studies found that users continually negotiate and manage the tension between perceived privacy risks and expected benefits (Ibrahim, 2008; Tufekci, 2008; Tyma, 2007). The most important benefit of online networks is probably, as Ellison, Steinfield, and Lampe (2007) showed, the social capital that results from creating and maintaining interpersonal relationships and friendships. Since the creation and preservation of this social capital is systematically built upon the voluntary disclosure of private information to a virtually unlimited audience, Ibrahim (2008) characterized online networks as “complicit risk communities where personal information becomes social capital which is traded and exchanged” (p. 251). Consistent with this, social network site users have been found to exhibit greater risk-taking attitudes than individuals who are not members of an online network (Fogel & Nehmad, 2008).

It can therefore be assumed that the expected gratification motivates users to provide and frequently update very specific personal data that most of them would immediately refuse to reveal in other contexts, such as a telephone survey. Thus, social network sites provide an ideal, data-rich environment for microtargeted marketing and advertising, particularly when user profiles are combined with functions that track user behavior, such as Beacon. This commercial potential may explain why Facebook's valuation has reached astronomical levels, albeit on the basis of speculation. Since fall 2007, when Microsoft expressed interest in buying a 1.6 percent stake for $240 million, estimates of the company's value have ranged as high as $15 billion, the figure implied by extrapolating Microsoft's price to the whole company ($240 million / 0.016 = $15 billion) (Arrington, 2008; Sridharan, 2008; Stone, 2007).

For the average user, however, Facebook-based invasion of privacy and aggregation of data, as well as their potential commercial exploitation by third parties, tend to remain invisible. In this respect, the Beacon scandal was an accident from Facebook's perspective, because it made users aware of Facebook's vast data-gathering and behavior-surveillance system. Facebook's owners quickly learned their lesson: The visible part of Facebook, the innocent-looking user profiles and social interactions, must be neatly separated from the invisible parts. As with an iceberg, the visible part makes up only a small fraction of the whole (see Figure 1).

The invisible part, on the other hand, is constantly fed by the data that trickle down from the interactions and self-descriptions of the users in the visible part. To maintain the separation (and the user's motivation to provide and constantly update his or her personal data), any marketing and advertising based on these data must be unobtrusive and subcutaneous, not in the user's face like the original version of Beacon.

Theoretical Approach

The conceptual framework of our research is a combination of three media theories: the “uses and gratifications” theory, the “third-person effect” approach, and the theory of “ritualized media use.”

While this study does not test these three media theories, they are relevant as an analytical background and a framework from which to explain and contextualize our findings. The uses and gratifications approach looks at how people use media to fulfill their various needs, among them the three dimensions of (1) the need for diversion and entertainment, (2) the need for (para-social) relationships, and (3) the need for identity construction (Blumler & Katz, 1974; LaRose, Mastro, & Eastin, 2001; Rosengren, Palmgreen, & Wenner, 1985). We assume that Facebook offers a strong promise of gratification in all three dimensions—strong enough to possibly override privacy concerns.

The third-person effect theory states that people expect mass media to have a greater effect on others than on themselves. This discrepancy between self-perception and assumptions about others is known as the perceptual hypothesis within the third-person effect approach (Brosius & Engel, 1996; Davison, 1983; Salwen & Dupagne, 2000). Though this approach has far-reaching implications with respect to people's support for censorship (known as the behavioral hypothesis), our interest is mostly focused on the perceptual side: How do Facebook users perceive effects on privacy caused by their use of Facebook, and what consequences do they draw from this? Together with the uses and gratifications theory, the third-person effect would explain a certain economy of effect perception (i.e., negative side effects are ascribed to others, while positive effects are ascribed to oneself).

The theory of ritualized media use states that media are not just consumed for informational or entertainment purposes; they are also habitually used as part of people's everyday routines, as diversions and pastimes. Media rituals are often connected to temporal structures, such as a favorite TV show at a particular time, and to specific social rituals, such as ritualized meetings of friends to watch a favorite TV show (Couldry, 2002; Liebes & Curran, 1998; Pross, 1992; Rubin, 1984). It can be expected that the use of Facebook is at least to some degree ritualized and (subcutaneously) built into its users' daily lives—a routinization (Veralltäglichung) in the sense of Max Weber (1921/1972). In conjunction with the two other approaches, this theory would further explain the enormous success of Facebook and users' lack of attention to privacy issues.

Based on the literature and theories examined above, the following four hypotheses for the survey and four open-ended research questions to guide the interviews were proposed:

H1: Many if not most Facebook users have a limited understanding of privacy issues in social network services and therefore will make little use of their privacy settings.

H2a: For most Facebook users, the perceived benefits of online social networking will outweigh the observed risks of disclosing personal data.

H2b: At the same time, users will tend to be unaware of the importance of Facebook in their life due to its ritualized use.

H3: Facebook users are more likely to perceive risks to others' privacy than to their own privacy.

H4: If Facebook users report an invasion of personal privacy, they are more likely to change their privacy settings than if they report an invasion of privacy happening to others.

Research questions:

RQ1: How important is Facebook to its users, and what role does it play in their social life?

RQ2: To what extent is Facebook part of everyday rituals, or has it created rituals of its own?

RQ3: What role does Facebook play in creating and promoting gossip and rumors?

RQ4: What negative effects, particularly with respect to privacy intrusions, does Facebook have?


Like you and everyone else on the internet, I was dumbstruck when Facebook’s Zuckerberg announced that his company would be acquiring Oculus VR, the makers of the Oculus Rift virtual reality headset, for $2 billion. The two companies are so utterly different, with paths so magnificently divergent, that it’s hard to see the acquisition as anything more than the random whim of a CEO who was playing a billion-dollar game of Acquisitory Darts. Or perhaps Zuckerberg just enjoyed playing Doom as a child and thought, what’s the point in being one of the world’s richest people if you can’t acquire your childhood idol, John Carmack?

Anyway, instead of writing something reactionary and vitriolic like every other journalist, I decided to sleep on it. Now, after a night of vivid fever dreams (and more scenes involving a topless Zuckerberg than I initially anticipated), I can tell you that I’ve seen the future of Facebook, Oculus Rift, and virtual reality — and it’s pretty damn awesome.

One wonders what Carmack’s long-term plans are, after being acquired by Facebook

Be patient

First, it’s important to remember that, in the short term, the Oculus Rift is unlikely to be negatively affected by this acquisition. According to Oculus VR co-founder Palmer Luckey, thanks to Facebook’s additional resources, the Oculus Rift will come to market “with fewer compromises even faster than we anticipated.” Luckey also says there won’t be any weird Facebook tie-ins; if you want to use the Rift as a gaming headset, that option will still be available.

Longer-term, of course, the picture is a little murkier. Zuckerberg’s post explaining the acquisition makes it clear that he’s more interested in the non-gaming applications of virtual reality. “After games, we’re going to make Oculus a platform for many other experiences… This is really a new communication platform… Imagine sharing not just moments with your friends online, but entire experiences and adventures.”

Facebook for your face: An Oatmeal comic that successfully predicted Facebook’s acquisition some months ago.

Second Second Life

Ultimately, I think Facebook’s acquisition of Oculus VR is a very speculative bet on the future. Facebook knows that it rules the web right now, but things can change very, very quickly. Facebook showed great savvy when it caught the very rapid consumer shift to smartphones — and now it’s trying to work out what the Next Big Thing will be. Instagram, WhatsApp, Oculus VR: these acquisitions all make sense, in that each had the potential to disrupt Facebook’s position as the world’s most important communications platform.

While you might just see the Oculus Rift as an interesting gaming peripheral, it might not always be so. In general, new technologies are adopted by the military, gaming, and sex industries first — and then eventually, as the tech becomes cheaper and more polished, they percolate down to the mass market. Right now, it’s hard to imagine your mom wearing an Oculus Rift — but in five or 10 years, if virtual reality finally comes to fruition, then such a scenario becomes a whole lot more likely.

Who wouldn’t want to walk around Second Life with a VR headset?

For me, it’s easy to imagine a future Facebook where, instead of sitting in front of your PC dumbly clicking through pages and photos with your mouse, you sit back on the sofa, don your Oculus Rift, and walk around your friends’ virtual reality homes. As you walk around the virtual space, your Liked movies would be under the TV, your Liked music would be on the hi-fi (which is linked to Spotify), and your Shared/Liked links would be spread out on the virtual coffee table. To look through someone’s photos, you might pick up a virtual photo album. I’m sure third parties, such as Zynga and King, would have a ball developing virtual reality versions of FarmVille and Candy Crush Saga. Visiting fan pages would be pretty awesome, too — perhaps Coca-Cola’s Facebook page would be full of VR polar bears and happy Santa Clauses, and you’d be able to hang out with the VR versions of your favorite artists and celebrities too, of course.

And then, of course, there are all the other benefits of advanced virtual reality — use cases that have been bandied around since the first VR setups back in the ’80s. Remote learning, virtual reality Skype calls, face-to-face doctor consultations from the comfort of your home — really, the possible applications for an advanced virtual reality system are endless and very exciting.

But of course, with Facebook’s involvement, those applications won’t only be endless and exciting — they’ll make you fear for the future of society as well. As I’ve written about extensively in the past, both Facebook and Google are very much in the business of accumulating vast amounts of data, and then monetizing it. Just last week, I wrote about Facebook’s facial recognition algorithm reaching human levels of accuracy. For now, Facebook and Google are mostly limited to tracking your behavior on the web — but with the advent of wearable computing, such as Glass and Oculus Rift, your real-world behavior can also be tracked.

And so we finally reach the crux of the Facebook/Oculus story: the dichotomy of awesome, increasingly powerful wearable tech. On the one hand, it grants us amazingly useful functionality and ubiquitous connectivity that really does change lives. On the other hand, it warmly invites corporate entities into our private lives. I am very, very excited about the future of VR now that Facebook has signed on — but at the same time, I’m incredibly nervous about how closely linked we are becoming to our corporate overlords.