Given the continually rising complexity of information security, the employee is most often observed to be the weakest link. End users repeatedly make poor security decisions that appear irrational and careless, placing their employers at significant business risk.
One emerging school of thought for combating this seemingly nonchalant user attitude toward information security is to enforce a consequence-driven work culture that aims to promote positive cultural change by maximizing accountability (e.g., employees caught placing confidential data on unencrypted stores, or who bring personal wireless access points to work, are terminated per formal policy).
This discussion brings together insights from the field of behavioral economics and recent research literature to demonstrate that 1) a purely consequence-driven approach that relies solely on accountability does not promote risk reduction for the enterprise, and 2) employees are rational in rejecting information security rules even when clear consequences are published and enforced.
1. Introduction
When it comes to following basic information security best practices, many employees appear irrational despite well-advertised consequences: they click indiscriminately while browsing the web and download malware to their corporate laptops, they work around password complexity policies to choose the weakest passwords they can get away with [1][2], they ignore certificate errors and accept security warnings without reading them [3], and they inadvertently expose confidential data on social media sites [4]. Clearly, there are specific cases of malicious intent on the part of unethical employees who are aware of their actions. However, the scope of this discussion is limited to employees who have no malicious intent, yet place the enterprise at risk by rejecting repeated security advice.
In this discussion, I argue that a purely consequence- and accountability-driven approach to influencing user behavior is a myopic strategy that is unlikely to promote significant risk reduction for the enterprise. I argue that to influence users and promote positive cultural change in security-related behavior, the enforcers must account for additional variables: the difference between the individual's and the enterprise's perspective on risk, psychological biases, and simple behavioral economics.
2. The Perspective of the End User
Using basic risk assessment principles, it is possible to estimate the risk to a business should certain events occur. While these events may pose a significant risk to the enterprise as a collective, the probability and cost to the individual employee can differ. The realization of this variance in perspective between the individual and the collective sets the stage for further discussion on psychology and economics covered in the following sections.
2.1 Collective Risk to the Enterprise Versus the Individual
Consider the perspective of an enterprise when a number of employees do not follow best-practice security advice, such as securing their laptops at their workplace. This behavior can contribute to a high probability of adverse consequences to the business. However, the same amount of risk is not borne by the individual employee.
As an illustration, consider the case of laptop theft. Assume a situation where an employer repeatedly warns employees to secure laptops using the cables provided. In a mid-sized company of 5,000 employees, even if 90% of the employees follow instructions, the remaining 500 employees who reject the advice pose a significant risk to the business, since assuming that at least 1 of the 500 unsecured laptops is stolen annually is a realistic (and conservative) estimate.
The cost to the enterprise per stolen (unencrypted) laptop is estimated at around $49,246 [5]. This figure is composed of the following components: replacement cost, detection, forensics, data breach implications, lost intellectual property, lost productivity, and legal, consulting and regulatory expenses. In addition to the measurable cost to the business, the loss due to brand damage can be, and often is, significant.
Given our conservative guess of 1 laptop stolen annually for every 500 employees who do not bother to secure their laptops, the risk to the enterprise is a near certainty. But what about the individual employee? The employee risks termination, yet the probability that her particular laptop is the one stolen from the pool of 500 unsecured laptops is low, on the order of 1 in 500.
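To make the arithmetic concrete, the following is a minimal sketch of this scenario. The per-laptop theft probability is an assumption chosen so that roughly one theft per year is expected across the 500 unsecured laptops, matching the conservative guess above; the per-laptop cost is the Ponemon estimate [5].

```python
# Minimal sketch of the laptop-theft scenario; the theft probability is an
# assumption chosen to match the "about one theft per year" guess above.
UNSECURED_LAPTOPS = 500           # employees who ignore the cable-lock mandate
COST_PER_STOLEN_LAPTOP = 49_246   # estimated cost of one lost, unencrypted laptop [5]
P_THEFT_PER_LAPTOP = 1 / 500      # assumed annual theft probability per unsecured laptop

# Enterprise view: chance of at least one theft this year, and expected annual loss.
p_at_least_one_theft = 1 - (1 - P_THEFT_PER_LAPTOP) ** UNSECURED_LAPTOPS
expected_annual_loss = UNSECURED_LAPTOPS * P_THEFT_PER_LAPTOP * COST_PER_STOLEN_LAPTOP

# Individual view: chance that it is *my* laptop that is stolen.
p_individual = P_THEFT_PER_LAPTOP

print(f"Enterprise: P(at least one theft) = {p_at_least_one_theft:.0%}, "
      f"expected annual loss = ${expected_annual_loss:,.0f}")
print(f"Individual: P(my laptop is stolen) = {p_individual:.1%}")
```

Under these assumptions, the enterprise faces roughly a 63% chance of at least one theft and an expected annual loss near $49,000, while any single non-compliant employee faces only about a 0.2% chance of being the one whose laptop is taken.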
The purpose of this hypothetical discussion is purely to point out the variance in risk between the perspective of the enterprise as a collective and that of the individual employee. In the majority of situations, the risk borne by the business as a whole is easily grasped by those who assess enterprise risk, yet the individual worker may not share a similar perspective on risk.
2.2 Game Theory
In the previously discussed scenario, the calculations demonstrate the variance between the cost to the individual and the cost to the collective enterprise. However, it can be argued that the employee’s job security relies upon the overall well-being of the enterprise. This sentiment promotes the notion that employees can be expected to collectively cooperate and follow security mandates. In this sense, the operative word is “cooperate”: we should examine whether employees’ willingness to follow policies for the overall good of the enterprise is influenced by what they notice their peers doing.
The “Prisoner’s Dilemma” [6] is a popular game theory [7] problem that has been used to show why two people may not cooperate even when it is in both their interests to do so. The problem has been extended to study the cooperation, or lack thereof, of individual entities in promoting a common interest. For example, environmental studies have provided clear evidence of the perils mankind faces from the climate change crisis. Individual countries know they will ultimately benefit from everyone doing their part to promote a stable climate in the future. However, in game theory experiments derived from the “Prisoner’s Dilemma”, results repeatedly demonstrate that while individual countries agree with the rationale that everyone must contribute to the greater good, they fail to rationalize doing their own part in the equation [8].
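Before extending this to the security setting, the following minimal sketch enumerates a classic two-player payoff matrix (the payoff values are hypothetical, chosen only to exhibit the dilemma structure) and shows why defection dominates even though mutual cooperation is the better joint outcome.

```python
# A minimal Prisoner's Dilemma sketch; the payoff values are hypothetical and
# chosen only so that defection strictly dominates while mutual cooperation
# remains the best joint outcome.
PAYOFFS = {
    # (my_move, peer_move): (my_payoff, peer_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(peer_move: str) -> str:
    """Return the move that maximizes my payoff given the peer's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, peer_move)][0])

for peer_move in ("cooperate", "defect"):
    print(f"If my peer chooses to {peer_move}, my best response is to "
          f"{best_response(peer_move)}")
# Defection is the best response either way, yet (defect, defect) pays (1, 1),
# which is worse for both players than (cooperate, cooperate) at (3, 3).
```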
Extending the general findings from “Prisoner’s Dilemma” experiments to the hypothetical scenario presented in the earlier section, it is possible to see why individuals who are consciously aware of the need to cooperate in following security mandates may fail to do so. This situation lends itself to a free-riding phenomenon: users who do not follow security requirements rely on the notion that the remaining majority of users are following the security advice, thereby lowering the probability of harm to the well-being of the collective [9].
Acknowledging these differences in perspectives on risk, and comprehending the cognitive decision-making processes behind them, will assist us in enforcing security mechanisms that are well designed to engage active participation from end users. In the next section, we will build on this discussion with examples of the psychological and economic underpinnings at work that can further improve our understanding.
3. Psychology and Economics at Play
Given that the scope of this discussion is to hypothesize why individuals do not follow mandates even when consequences are clearly advertised, let us first look at examples of how psychological biases come into play. We will then briefly discuss the cost:benefit calculation individuals are likely to make before deciding whether to invest effort in performing requested tasks.
Once we have gone through examples of how psychology and economics are used to make decisions, we will be in a better position to discuss recommendations on how to leverage research within these fields to drive better adoption of information security mandates.
3.1 Psychological Biases
Security controls are often designed with the misguided assumption that human rationality is void of biases. On the contrary, human decisions consistently contain psychological biases that are predictable and measurable. As such, let us discuss two examples of biases that are often activated when individuals seek to comprehend and act upon information security events.
Valence Effect: The Valence Effect is the tendency of individuals to overestimate the probability of positive outcomes. For example, in one experiment, all other things being equal, participants assigned a higher probability to picking a card that had a smiling face on its reverse side than to picking one that had a frowning face [10].
Extending the results of repeated experiments demonstrating the valence effect, it is easy to see how this bias influences employee behavior: individuals who do not follow security mandates carry a psychological bias toward the idea that other individuals are more likely to cause adverse incidents. In a similar vein, studies have shown that online social media users believe that providing personal information publicly could cause privacy problems for other users, while the same users do not seem concerned about the probability of privacy issues they themselves could face for sharing similar information [11].
Anchoring: Anchoring is a cognitive bias that describes the common human tendency to rely too heavily (“anchor”) on one trait or piece of information when making decisions.
Research has shown that individuals often believe “neat looking” websites are more trustworthy from a privacy and security standpoint [12], anchoring on their visual experience and correlating polished website design with previously successful transactions. It is easy to understand how this bias can cause individuals to bypass advice from security awareness programs on how to identify and steer clear of risky situations.
Research in psychology has uncovered empirical evidence to support various categories of biases. As the examples describe, such biases help explain why many individuals fail to execute security requirements.
3.2 Rational Rejection of Security Mandates
Individuals implicitly perform a cost:benefit calculation when deciding whether or not to execute a previously taught security mandate. This hypothesis derives from work in the field of behavioral economics as well as recent information security research experiments. Coupled with a comprehension of individual perspectives and psychology, understanding how individuals perform implicit cost:benefit decisions will ultimately help organizations create security requirements that are designed to appeal to the human psyche, thus driving increased adoption.
When people decide whether to perform a given task, they quickly calculate whether the cost of performing it is worth the return. The cost to the individual can be expressed in terms of financial harm, time taken and effort required. In addition to the biases discussed previously, users quickly decide whether the total gain from following security advice is worth the effort. In many security research experiments, the data shows that users reject security advice because the cost of completing the security requirements is too high.
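A minimal sketch of this implicit rule follows; the variable names and example figures are mine, chosen purely for illustration, and are not drawn from the cited studies.

```python
def follows_security_advice(p_harm_to_me: float,
                            my_loss_if_harmed: float,
                            my_cost_of_compliance: float) -> bool:
    """Implicit cost:benefit rule: comply only when the expected personal
    loss avoided exceeds the personal cost of compliance."""
    return p_harm_to_me * my_loss_if_harmed > my_cost_of_compliance

# Reusing the laptop example: a 0.2% individual probability of harm and a
# $500 personal loss yield only $1 of expected benefit, which does not justify
# even a modest $5 compliance cost, so the advice is rationally rejected.
print(follows_security_advice(0.002, 500, 5))  # False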
Consider the case of phishing websites that have the potential to steal corporate and user data by posing as legitimate sites. Employees are repeatedly taught to inspect their web browser’s address bar to make sure they are browsing legitimate websites. However, even the most well-known domain names of well-respected institutions repeatedly redirect the user to multiple locations. In this situation, the user must have the technical ability to dissect the browser address bar and distinguish the host name, the domain name, the path to the website resources, and any applicable parameters. To the average non-technical individual, the burden is too high [3]. Should the individual expose corporate data to a malicious website, the cost of the data breach is borne by the corporation.
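As an illustration of that parsing burden, the sketch below breaks a phishing-style URL (a made-up address, not a real institution’s) into the pieces a user is expected to reason about.

```python
# A minimal sketch of the parsing burden described above; the URL is a
# made-up phishing-style example, not a real institution's address.
from urllib.parse import urlparse, parse_qs

url = ("https://accounts.example-bank.com.login-verify.example.net"
       "/secure/login?session=abc123&next=%2Faccount")

parts = urlparse(url)
print("scheme:", parts.scheme)            # https
print("host:  ", parts.hostname)          # accounts.example-bank.com.login-verify.example.net
print("path:  ", parts.path)              # /secure/login
print("query: ", parse_qs(parts.query))   # {'session': ['abc123'], 'next': ['/account']}

# To judge legitimacy, the user must know that the registrable domain is read
# from the right of the host ("example.net" here), even though the familiar
# "example-bank.com" string appears earlier in the host name.
```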
Also consider the case of SSL certificate warnings displayed by web browsers. Users are instructed to be cognizant of such warnings because they may indicate an ongoing Man-in-the-Middle attack that can jeopardize corporate information. However, research has shown that, from the end user’s perspective, close to 100% of such warnings are false positives [3]. It is easy to see how a high probability of a false positive, combined with near-zero personal benefit and a nonzero effort cost, makes it rational for users to decide against following the security advice.
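The sketch below plugs hypothetical, illustrative figures into the same cost:benefit frame; only the near-100% false positive rate is drawn from the discussion above [3].

```python
# Hypothetical, illustrative figures; only the near-100% false-positive rate
# reflects the discussion above [3].
warnings_per_year = 100            # certificate warnings a user encounters annually
minutes_per_warning = 2            # time to read and investigate each warning
p_warning_is_real_attack = 1e-5    # i.e. virtually every warning is a false positive
personal_loss_if_attacked = 200    # assumed cost borne by the individual, in dollars
value_of_time_per_hour = 30        # assumed value the user places on an hour

annual_effort_cost = warnings_per_year * minutes_per_warning / 60 * value_of_time_per_hour
expected_loss_avoided = warnings_per_year * p_warning_is_real_attack * personal_loss_if_attacked

print(f"Annual effort cost of heeding warnings: ${annual_effort_cost:.2f}")
print(f"Expected personal loss avoided:         ${expected_loss_avoided:.2f}")
# Roughly $100 of effort buys about $0.20 of expected personal benefit, so
# ignoring the warnings is the individually rational choice.
```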
Having discussed examples of how psychology and decision economics influence decisions, it is easy to see why these variables must be accounted for when developing security requirements. If we want users to actually adopt and execute security mandates, we need to make sure the requirements are designed to appeal to human cognition.
4. Recommendations
Based on the investigation of psychological perspectives and cost:benefit analysis using behavioral economic principles, the research community has gained further insight into why individuals often reject security mandates. In these situations, with all other variables being equal, escalating accountability by enforcing stricter consequences is not likely to positively influence user behavior.
Businesses that seek to improve their risk posture by influencing users and promoting positive culture change in information security should consider the following recommendations:
Identify and automate security responses that can be machine parsed and computed instead of relying on human decisions.
Re-evaluate business risk assessment methodologies to account for differences in collective and individual perspective based on risk, cost, and probability.
Discover and calculate the influence of popular psychological biases to ascertain why employees may have the tendency to bypass advertised security requirements.
Detect cases where security requirements carry a high cost for the individual. In such cases, evaluate whether the issue poses risk to the enterprise, and if so, consider redesigning the usability or altering the control so that the user is psychologically influenced to engage.
Leverage well known psychological biases for the benefit of information security related communications.
Security mandates are important, and it is only fair that employees who do not follow instructions, thereby putting the enterprise at risk, face clear consequences. However, depending solely on this approach ignores vital variables such as individual perspectives, psychology and simple behavioral economics. Information security personnel should be monitored to make sure they are not solely pushing for a consequence-driven culture that makes their job easier by promoting irrational and high-cost security requirements to end users under the guise of accountability.
References
[1] Florencio, D., Herley, C. A Large-Scale Study of Web Password Habits. 2007.
[2] Morris, R., Thompson, K. Password Security: A Case History. Comm. ACM, 1979.
[3] Herley, C. So Long, and No Thanks for the Externalities. Microsoft Research, 2009.
[4] Statement by CEO of Tri-City Medical Center, 2010. Retrieved from http://www.tricitymed.org/news/2010/patient-privacy-and-social-media.aspx.
[5] Ponemon Institute. The Cost of a Lost Laptop, 2009.
[6] Flood, M., Dresher, M. Prisoner’s Dilemma, 1950.
[7] Neumann, J. Theory of Games and Economic Behavior, 1944.
[8] The Economist. Playing Games with the Planet, 2007.
[9] Baddeley, M. Security and Human Behaviour, Workshop on Security and Human Behaviour, 2010.
[10] Taylor, N. Making Actuaries Less Human: Lessons Learned from Behavioral Finance, 2000.
[11] Acquisti, A., Gross, R. Imagined Communities: Awareness, Information Sharing, and Privacy on the Facebook, Carnegie Mellon University 2006.
[12] Kahneman, D., Tversky, A. On the Psychology of Prediction, Psychological Review, 1973.