Privacy: The Need for Digital Nudging to Limit Oversharing on Social Media
Grant Powell
School of Behavioral and Brain Sciences, University of Texas at Dallas
PSYC 6532 Cognitive Psychology Essentials for Cybersecurity
Professor Daniel Krawczyk
May 4, 2022
The Behavior of Oversharing
Oversharing, or excessive self-disclosure, is a common behavior that has been observed on social media sites (SMSs) such as Facebook and Twitter. Sharing information is, in and of itself, one of the foundational objectives SMSs were created to achieve. Users create accounts called profiles and share personal information about themselves so they can search for and connect with long-lost childhood friends, classmates, distant family members, like-minded people, and others, or make it easier for those people to find them. The more users interact with SMSs and socialize with others, the more information they share. This is a boon to SMSs because the more information users share, the more revenue the sites generate through advertising. Not only does increased sharing strengthen relationships between users and attract more primary users, but it also attracts more secondary users: the advertisers. This raises concerns over privacy and cybersecurity.
One concern is that users may become victims of cybercrime (Ziegeldorf, 2016, p. 226). Increased sharing on SMSs always carries the risk of information abuse (Kroll, 2021, p. 3). One's information could be shared with and collected by third parties such as advertisers and used for malicious purposes such as cyberstalking and identity theft (Kroll, 2021, p. 3). Cyberstalking can expose users to threats, sexual harassment, and conflicts (Kroll, 2021, p. 4). Identity theft can begin with a cybercrime called spear-phishing, in which emails are targeted at a specific individual based on personal information the attacker knows about them. The attacker uses that information to craft an email that looks legitimate in the hope of coaxing the individual into divulging more sensitive personal information, such as the login credentials to a bank account. One way personal information falls into an attacker's hands is through oversharing on SMSs: attackers can combine whatever information one shares to infer patterns in one's lifestyle and habits. Attackers collect this information by acting as trolls or by employing social bots.
Trolls are real human users on SMSs operating fake accounts; social bots are computer programs posing as real human users with fake accounts. Both can exploit the algorithms that advertisers rely on to determine which advertisements a specific user is most likely to be interested in seeing, determinations based on what users share, post, like, click, and divulge. The same signals help trolls and social bots infer patterns in a specific user's lifestyle and habits. They also create opportunities to sway election outcomes by spreading inflammatory political content, true or not, to the users most likely to believe it, thereby creating "echo chambers" and "us versus them" confrontations on SMSs that prey on our cognitive biases. Such content can also cost users career opportunities: a user drawn into using harmful language on SMSs that they would never use in face-to-face interactions may embarrass themselves out of ever being hired, or get fired (Ziegeldorf, 2016, p. 226). Researchers have proposed different reasons why users overshare on SMSs and put themselves at risk.
Despite their privacy concerns, users do not always practice privacy-protecting behaviors that reflect those concerns (Wisniewski, 2017, p. 95). This could result from limited motivation: users settle for a privacy strategy that is good enough rather than one that is optimal (Wisniewski, 2017, p. 95). Biases may also work against users' better judgment, as when users experience information overload or feel they have nothing to hide (Bergram, 2020, p. 2). Bergram et al. (2020) pointed out that 62% of a sample of users surveyed in the U.S. believed that if a website had a privacy policy, it could not share their data with other companies. This is not surprising, because it is hard to get a sense of any SMS's position on the problem of oversharing; they are still businesses (Kroll, 2021, p. 2). They will do whatever is necessary, with whatever security system they put in place, to increase user interaction and sharing (Kroll, 2021, p. 2). It is hard to know whether the secure, trustworthy feeling they promote is all security theater or a genuine attempt to toughen security measures (Kroll, 2021, p. 2), and the distinction matters because the desire to share is influenced by trust in the SMS and a feeling of control (Kroll, 2021, p. 4). Even when privacy controls are available, users may be unwilling or unable to take the time to learn the controls they could use to their advantage (Wisniewski, 2017, p. 95).
This could be because users have trouble navigating the elaborate privacy options available to them (Wisniewski, 2017, p. 95). Wisniewski et al. (2017) found that 48% of SMS users expressed difficulty understanding how to manage their privacy settings. Most Facebook users do not seem to grasp the consequences of configuring their privacy settings a particular way; it is common for users to share content that contradicts their own disclosure intentions (Wisniewski, 2017, p. 96). Because of these behaviors, users do not always use the privacy controls available to them to take control of their data. By staying with permissive default privacy controls and lacking the knowledge or motivation to manage their own privacy, users make oversharing predominant (Wisniewski, 2017, p. 96). To curb the problem of oversharing, researchers should pursue several research possibilities.
Potential Research Ideas for Solutions to Limit Oversharing
One research possibility is to investigate how many users have added random strangers as friends and, further, how often they do so on a frequency scale (i.e., all the time, most of the time, sometimes, never). To connect this to the basis of why users overshare, researchers should also seek to understand why users add random strangers as friends, or request their friendship, in the first place (Kroll, 2021, p. 4). There may be a psychological basis involving low self-esteem and the desire to feel wanted and accepted by others. For some users, involvement on social media may be more of a popularity contest, since some users have succeeded as social media influencers and made a living out of that lifestyle. This raises privacy concerns about oversharing because a user may not know whether a random stranger is a troll, social bot, hacker, or attacker who could prey upon that user's oversharing to commit crimes such as cyberstalking or identity theft. Research in this area could inform the design of privacy nudges, or improve existing ones, that help users avoid or reconsider adding users they do not know to their circle of friends. It could also reveal coping strategies users employ that would be helpful in designing privacy nudges.
Research is also needed into what coping strategies users employ to limit their own self-disclosure (Kroll, 2021, p. 4). Researchers could ask users whether they have stopped disclosing information by not creating further posts, deleting past posts, deleting their account, changing the visibility of new and old posts, or deleting friendships with remote strangers (Kroll, 2021, p. 4). From there, researchers could glean additional strategies users have employed after regretting adding difficult strangers as friends or after regretting sharing certain content online. This could yield a classification system of coping strategies to inform the design, or redesign, of privacy nudges that steer users toward those strategies and thus limit oversharing.
Another research possibility is to investigate whether privacy campaigns increase users' awareness of privacy and of the privacy settings available to them, helping them make informed decisions about how to control their sharing (Kroll, 2021, p. 5). Researchers also need to discover which privacy settings users find most desirable, and whether the settings SMSs are creating match the kinds that researchers and policymakers are calling for (Kroll, 2021, p. 5). It would likewise be worthwhile to investigate the role that attention plays in users' awareness of privacy settings and features.
Research is needed on the role attention plays when users' privacy behavior on SMSs, such as oversharing, does not align with their stated privacy concerns. SMSs lay out an enormous amount of information on screen, and each piece constantly competes for our attention. This could be why some users claim not to notice, or not to remember seeing, a privacy notification about their current privacy settings for controlling oversharing (Kroll, 2021, p. 13). Eye-tracking data could provide insight into where on an SMS's screen users fixate most, and could guide the placement of privacy nudges so that they land within a user's visual focus of attention.
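As an illustration, a simple analysis might bin fixation coordinates into screen regions and flag the most-fixated region as a candidate location for a nudge. The sketch below is a hypothetical example of this idea; the screen size, grid, and fixation data are invented and do not come from the cited studies:

```python
# A minimal sketch of turning eye-tracking fixations into a coarse heatmap
# to suggest where on screen a privacy nudge might be placed.
# All coordinates and dimensions below are hypothetical.
import numpy as np

# Hypothetical fixation samples: (x, y) screen coordinates in pixels.
fixations = np.array([[640, 200], [660, 210], [120, 700], [655, 195], [630, 205]])

# Bin fixations into a 4x3 grid over a 1280x960 screen.
heatmap, xedges, yedges = np.histogram2d(
    fixations[:, 0], fixations[:, 1],
    bins=[4, 3], range=[[0, 1280], [0, 960]])

# The hottest cell marks the region of greatest visual attention.
ix, iy = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"Most-fixated region: x in [{xedges[ix]:.0f}, {xedges[ix+1]:.0f}], "
      f"y in [{yedges[iy]:.0f}, {yedges[iy+1]:.0f}]")
```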
Another research possibility is to continue investigating how much better personalized privacy nudges, tailored to a specific user, are at limiting oversharing than standardized, one-size-fits-all nudges (Wisniewski, 2017, p. 96). Further research here could build upon existing classification systems of user types, grouping users into privacy proficiency levels according to privacy management strategies derived from their privacy behavior and their awareness of an SMS's privacy features (Wisniewski, 2017, p. 106). This needs to be investigated not only on one SMS such as Facebook but also on others such as Twitter, Instagram, and Reddit, because users' privacy behaviors will differ across SMSs that offer different privacy features and settings. Still, consider Facebook as an example.
The classification system that Wisniewski et al. (2017) created for Facebook could underpin machine learning applications that sort the platform's users into categorical groups based on the privacy management strategies they use. Facebook could then generate personalized privacy nudges for each user according to where that user falls in the classification.
To understand how this works, the Facebook users Wisniewski et al. (2017) analyzed were grouped into the categories of "Privacy Minimalists," "Self-Censors," "Time Savers/Consumers," "Selective Sharers," "Privacy Balancers," and "Privacy Maximizers" based on which of 11 privacy behaviors they commonly utilized and which of 6 privacy features they were commonly aware of. The privacy management strategies each group exhibited, combining privacy behavior on Facebook with awareness of Facebook's privacy features, were then used to further classify users as "Novices," "Near-Novices," "Mostly Novices," having "Some Expertise," "Near-Experts," or "Experts" (Wisniewski, 2017, p. 103). Putting this together, Wisniewski et al. (2017) classified "Privacy Maximizers" as either "Experts" or "Near-Experts"; "Privacy Balancers" as "Experts," "Near-Experts," "Some Expertise," or "Novices"; "Privacy Minimalists" as either "Mostly Novices" or "Near-Novices"; "Selective Sharers" as either "Experts" or "Near-Experts"; and both "Time Savers/Consumers" and "Self-Censors" as either "Mostly Novices" or "Some Expertise."
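To make the machine learning idea concrete, a very simple implementation could represent each user as a vector of behavior and feature-awareness rates and assign them to the nearest group centroid. The sketch below is a hypothetical illustration only; the centroid values are randomly generated placeholders, not data from Wisniewski et al. (2017):

```python
# A minimal nearest-centroid sketch for sorting users into profile groups.
# Centroid values are invented placeholders for illustration.
import numpy as np

GROUPS = ["Privacy Minimalists", "Self-Censors", "Time Savers/Consumers",
          "Selective Sharers", "Privacy Balancers", "Privacy Maximizers"]

# Each vector: 11 privacy-behavior rates + 6 feature-awareness rates in [0, 1].
rng = np.random.default_rng(0)
centroids = rng.random((len(GROUPS), 17))  # placeholder group profiles

def classify(user_vector):
    """Assign a 17-dimensional user vector to the nearest group centroid."""
    distances = np.linalg.norm(centroids - user_vector, axis=1)
    return GROUPS[int(np.argmin(distances))]

# A hypothetical user, whose nudges would then be tailored to their group.
print(classify(rng.random(17)))
```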
As one can see, a classification system like this still needs refinement through more research, because there is not a clean one-to-one relationship between these categories (Wisniewski, 2017, p. 103). However, it is a step in the right direction toward making privacy nudges more effective at limiting oversharing. Another idea that has shown promising results, but still needs more research to refine it, is a system devised by Ziegeldorf et al. (2016) called Comparison-based Privacy (CbP).
CbP is built on the same premise as the classification system of Wisniewski et al. (2017): the goal is not standardized, one-size-fits-all privacy nudges but nudges personalized to a specific type of user. The CbP approach is not really a classification system, however, but a comparison-based one, designed to be implemented as a computer application that draws on aspects of machine learning. In essence, CbP compares a user's sharing behavior to the sharing behavior of different groups of users within an SMS (Ziegeldorf, 2016, p. 227). The user is not compared to random groups; the comparison rests on defined comparison metrics and comparison groups (Ziegeldorf, 2016, p. 227).
It works as follows. First, the user's sharing behavior is measured along comparison metrics such as the amount of shared content and usage patterns (Ziegeldorf, 2016, p. 227). Second, the user's overall profile (family, friends, colleagues, profession, age, etc.) is collected to identify user groups the user relates to, which serve as the comparison group (Ziegeldorf, 2016, p. 227). Third, the comparison group's averaged sharing behavior is compared to the user's (Ziegeldorf, 2016, p. 227). Finally, personalized privacy nudges are generated and shown to the user to limit any sharing behavior uncommon in the comparison group (Ziegeldorf, 2016, p. 227). The CbP approach found some relative success when applied to a collection of tweets from the SMS Twitter (Ziegeldorf, 2016, p. 232).
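These four steps lend themselves to a straightforward implementation. The sketch below is a hypothetical rendering of the CbP flow, not Ziegeldorf et al.'s (2016) actual code; the metric names, tolerance factor, and data are all invented for illustration:

```python
# A minimal sketch of the Comparison-based Privacy (CbP) flow:
# measure the user's metrics, average the comparison group's, and nudge
# where the user's behavior noticeably exceeds the group norm.
from statistics import mean

def cbp_nudges(user_metrics, comparison_group, tolerance=1.25):
    """Return a nudge for each metric well above the group average."""
    nudges = []
    for metric, user_value in user_metrics.items():
        group_avg = mean(member[metric] for member in comparison_group)
        # Relative threshold: flag only behavior uncommon in the group.
        if group_avg > 0 and user_value > tolerance * group_avg:
            nudges.append(f"You share '{metric}' more than similar users "
                          f"({user_value:.2f} vs. group average {group_avg:.2f}).")
    return nudges

# Hypothetical comparison group and user: fraction of geo-tagged posts
# and average posts per day.
group = [{"geo_tagged_share": 0.05, "posts_per_day": 3.0},
         {"geo_tagged_share": 0.02, "posts_per_day": 4.5}]
user = {"geo_tagged_share": 0.40, "posts_per_day": 4.0}
print(cbp_nudges(user, group))
```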
Based on half a million tweets from 1,839 Twitter users (659 teachers, 542 nurses, 559 journalists, and 79 U.S. senators), it was found that all groups were very restrictive about their location, keeping their tweets from being tagged with a geo-location: over 90% of users disclosed their location in fewer than 7.8% of their tweets (Ziegeldorf, 2016, p. 232). Ziegeldorf et al. (2016) found that the CbP approach was able to reflect that fact. For the "abusive language" metric, journalists and politicians used very little abusive language, whereas nurses and teachers used quite a bit of it (Ziegeldorf, 2016, p. 232). However, Ziegeldorf et al. (2016) found that CbP would sooner nudge the politician than the nurse: because CbP works from relative privacy norms, and lacking user-specific comparison groups, some amount of abusive language registered as tolerable within certain groups.
Furthermore, using the top 300 job-haters from a site called "FireMe!" as a contrast group, with metrics such as disclosing more locations than others, significantly higher rates of abusive language, and clearly more negative tweet sentiment, the CbP approach could not reveal to individual users that they were at risk of losing their jobs, but it could still nudge them away from those harmful sharing behaviors. This shows that although the approach has the potential to effectively nudge users toward limiting their sharing, its functionality still needs polishing through further study.
Finally, another research possibility is to investigate which privacy features and settings from each SMS perform best at helping users display greater privacy awareness. This could help researchers and policymakers identify which kinds of features and settings could serve as privacy nudges and improve the design of privacy and security interfaces across all SMSs. Now, consider what is known about the problem of oversharing, its research, and the research possibilities above, and how a possible applied cybersecurity policy could be implemented.
Potential Applied Cybersecurity Policy to Help Limit Oversharing
One idea that can be implemented in an applied cybersecurity policy is to use privacy nudges to encourage users to reconsider accepting friend requests from users they have weak ties to. Users with weak ties are random strangers or volatile acquaintances with low intimacy, whereas users with strong ties are good friends or relatives with whom a user is already familiar and close (Kroll, 2021, p. 4). Encouraging users to reconsider such friendships, or to delete friendships with remote strangers, could protect them from becoming victims of cyberstalking or identity theft (Kroll, 2021, p. 4).
A second idea is to organize privacy controls and settings so that they are more transparent, consistent, and user-friendly, because the very availability and presence of privacy settings can increase users' willingness to protect their personal data (Kroll, 2021, p. 4). We must consider the possibility that most users' privacy behavior fails to reflect their privacy concerns because privacy nudges designed to limit oversharing are not always placed within their visual focus of attention: SMSs put such a high volume of content on screen that it is hard to wade through all the pieces of information competing for attention (Kroll, 2021, p. 13). With this in mind, an important privacy nudge to place within the visual focus of first-time SMS users would redirect them to set up their privacy preferences if they have not already done so, so that they do not make their first post under the default settings (Kroll, 2021, p. 5).
A third idea is to inform users where their post is going and whom it will reach, making them aware of the audience for the content they share. This may help prevent users from sharing unwanted posts (Kroll, 2021, p. 5). Experiments with Facebook users reported by Kroll et al. (2021) showed that displaying a post's reach, or delaying the post to give the user a chance to reconsider sending it, was valued by users and reduced regret over sharing content, especially among users who had relied on coping mechanisms to counter regret after sharing content they felt they should not have shared. Reducing regret over shared content can make users' social networking experience more enjoyable.
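A "reach and delay" nudge of this kind could be as simple as surfacing the estimated audience and holding the post briefly before publishing. The following sketch is purely illustrative; the function, delay, and audience estimate are hypothetical and not drawn from Kroll et al.'s experiments:

```python
# A minimal sketch of a "show reach, then delay" posting nudge.
import time

def nudged_post(text, estimated_reach, delay_seconds=10):
    """Show the post's estimated reach, then hold it briefly so the
    user can cancel before it is published."""
    print(f"This post may reach about {estimated_reach} people.")
    print(f"Publishing in {delay_seconds} seconds; press Ctrl+C to cancel.")
    try:
        time.sleep(delay_seconds)  # the reconsideration window
    except KeyboardInterrupt:
        return "Post canceled."
    return f"Published: {text}"

print(nudged_post("Off to the airport for two weeks!", estimated_reach=480))
```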
A fourth and final idea is to provide privacy nudges personalized for a specific user rather than standardized, one-size-fits-all nudges (Wisniewski, 2017, p. 95). Research suggests personalized nudges may be more effective, since a nudge that runs contrary to a user's own privacy management strategy can be viewed as a hindrance (Wisniewski, 2017, p. 96). Every default setting, piece of feedback, or optimal hint may need to differ for each user (Wisniewski, 2017, p. 96). It is important to remember that privacy education designed to teach users an elaborate set of privacy controls usually fails to take a user's existing proficiency into account (Wisniewski, 2017, p. 96). Every user is different and complex.
The classification system laid out by Wisniewski et al. (2017), although it may need refinement, is a good framework from which to design and deliver personalized privacy nudges that limit oversharing. The CbP approach of Ziegeldorf et al. (2016), which also needs refinement, is another good system to build on for the same purpose. However, the best-practice recommendation when deploying an application that uses the CbP approach is to keep it out of the hands of third-party operators, so the user is not required to trust an additional entity with their privacy (Ziegeldorf, 2016, p. 231).
Two ways to do this are, first, to let the SMS's site operator itself run the application or, second, to let the user run the application as a browser plugin (Ziegeldorf, 2016, p. 231). A second best-practice recommendation is to avoid information leaks, keeping each user's information private, by applying differential privacy (Ziegeldorf, 2016, p. 231). That is, the individual user's information is withheld from the site operator when information about a dataset describing the behavioral patterns of groups within it is publicly shared (Ziegeldorf, 2016, p. 231). The final best-practice recommendation, given that an SMS's site operator may not always be trustworthy, is to ensure the CbP application filters out outliers, both to prevent an attacker from manipulating a comparison group's aggregated behavior and to prevent that data from unintentionally steering the user toward ill-advised privacy decisions.
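For a sense of how differential privacy could protect the published group statistics that CbP relies on, the sketch below adds Laplace noise to a group average before release. It is a generic illustration of the Laplace mechanism, not Ziegeldorf et al.'s (2016) implementation; the epsilon value and data are assumptions:

```python
# A minimal sketch of the Laplace mechanism: noise is added to a published
# group average so no single user's contribution can be inferred from it.
import numpy as np

def dp_average(values, lower, upper, epsilon=1.0):
    """Publish a differentially private mean of bounded per-user values."""
    values = np.clip(values, lower, upper)
    # Sensitivity of the mean of n values, each bounded in [lower, upper].
    sensitivity = (upper - lower) / len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.mean(values)) + noise

# Hypothetical: each group member's fraction of geo-tagged posts.
print(dp_average([0.05, 0.02, 0.10, 0.00], lower=0.0, upper=1.0))
```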
Overall, the hope is that delineating the problem of oversharing on SMSs shows how the behavior grows out of the very concept SMSs are built on. Understanding this should also raise awareness of the potential consequences of oversharing. A further understanding of its psychological underpinnings makes clear why we as users need to heed those consequences, because anyone who engages in this behavior risks becoming a victim of cybercrime. Finally, the hope is that the research possibilities and applied cybersecurity policy ideas presented here can advance our understanding of oversharing on SMSs and support the continued development of better solutions.
References
Bergram, K., Gjerlufsen, T., Maingot, P., Bezençon, V., & Holzer, A. (2020). Digital nudges for privacy awareness: From consent to informed consent? Twenty-Eighth European Conference on Information Systems, 1-13.
Kroll, T., & Stieglitz, S. (2021). Digital nudging and privacy: Improving decisions about self-disclosure in social networks. Behaviour & Information Technology, 40(1), 1-19. https://doi.org/10.1080/0144929X.2019.1584644
Wisniewski, P. J., Knijnenburg, B. P., & Lipford, H. R. (2017). Making privacy personal: Profiling social network users to inform privacy education and nudging. International Journal of Human-Computer Studies, 98, 95-108. http://dx.doi.org/10.1016/j.ijhcs.2016.09.006
Ziegeldorf, J. H., Henze, M., Hummen, R., & Wehrle, K. (2016). Comparison-based privacy: Nudging privacy in social media. Springer International Publishing Switzerland, 226-234. https://doi.org/10.1007/978-3-319-29883-2_15