Originally published in the April 2018 issue of the ISSA Journal
Risk management is a fundamental principle in information security. Techniques for handling risk typically include avoidance, transference, mitigation, and acceptance, with many information security risks being addressed by the latter two. Given the frequency and types of attacks organizations are handling, are security practitioners accepting more risk than they realize? Or worse, are they accepting too much risk simply to meet a compliance requirement or avoid a failing report? This article details the prevalence of risk acceptance within organizations, why IT security departments may be putting too much confidence in their controls, and how excessive risk acceptance is often cultural.
Risk management is a part of many industries. The entire insurance industry is built on risk management; however, there are many relevant examples in other verticals, such as the following:
- Technology – What is the possibility a natural disaster results in a shortage of a critical component within a supply chain?
- Banking – What is the probability too many borrowers default on their mortgages?
- Automotive – What are the acceptable testing parameters for safety parts?
In each of these examples, companies have to accept a certain amount of risk; however, by accepting too much, a company may jeopardize its ability to stay in business. In 2011, floods in Thailand affected many technology companies that required a steady supply of hard drives. Many technology manufacturers had multiple suppliers, but the primary plants of those suppliers were concentrated in one country: Thailand. The result was that primary, secondary, and, if they even existed, tertiary suppliers all had supply issues. Companies such as Western Digital saw a sharp decline in production to end the year, and the resulting uncertainty was evident in quarterly filings and comments.1 The shortage affected business operations for many technology companies, but it was not catastrophic to their businesses. Unfortunately, this is not always the case, as the second and third examples show. In 2009 and 2010, nearly 300 banks failed.2 The cause was generally excessive risk taking and non-performing loans during the 2008 financial crisis. Finally, in the automotive example, Takata, until recently one of the world’s largest automotive safety suppliers, filed for bankruptcy after its airbags were found to have caused deaths and injuries. The root cause of the problem was degradation of a key chemical used in its airbag inflators when subjected to certain environmental conditions over time.3 It was Takata’s responsibility to test its product and ensure it met the required safety standards across a variety of conditions and an extended time frame, a risk potentially overlooked, or possibly accepted, when the company decided to use certain compounds as a propellant.
Information security risk frameworks and tools
The three examples provided differ from what information security teams address, but the approach to understanding and handling the risk is similar. When addressing a risk within information security, the equivalent question may simply boil down to: what is the probability of a breach? Mature organizations may go further and consider the likely cost of a breach. However, while vendors and organizations such as the Ponemon Institute publish annual studies on the cost of a breach, this cost is very rarely quantified by an individual organization.
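The probability-times-cost framing above is often formalized as annualized loss expectancy (ALE). The following is a minimal sketch of that calculation; the asset value, exposure factor, and occurrence rate are hypothetical figures, not drawn from any study cited here.

```python
# Sketch of the classic quantitative risk formulas:
#   SLE (single loss expectancy)      = asset value x exposure factor
#   ALE (annualized loss expectancy)  = SLE x annual rate of occurrence (ARO)

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """Expected loss from a single occurrence of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """Expected yearly loss; useful for comparing against the cost of a control."""
    return sle * aro

# Hypothetical example: a $500,000 customer database, 40% of which could be
# exposed in a breach expected roughly once every five years (ARO = 0.2).
sle = single_loss_expectancy(500_000, 0.40)  # $200,000 per incident
ale = annualized_loss_expectancy(sle, 0.2)   # $40,000 per year
print(f"SLE=${sle:,.0f} ALE=${ale:,.0f}")
```

A control costing more than the ALE it reduces is, by this simple model, a candidate for risk acceptance instead.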
Frameworks and assigning controls
Organizations must assess and handle risk on a daily basis. There are many risk frameworks and formulas available to assist companies in quantifying and qualifying risk. Some of the most commonly cited include:
- OCTAVE - Operationally Critical Threat, Asset, and Vulnerability Evaluation from the Carnegie Mellon University (CMU) Software Engineering Institute (SEI)4
- COBIT - Control Objectives for Information and Related Technologies from the Information Systems Audit and Control Association (ISACA)5
- NIST 800-30 - Guide for Conducting Risk Assessments from the National Institute of Standards and Technology (NIST)6
To better understand the various risks a company faces, categories of risk can be used such as financial risk (e.g., loss of revenue) and reputational risk (e.g., brand image in the media). Additionally, there are controls that can be applied to address the risk. Organizations can use high-level categories such as preventative, detective, and corrective controls or get more granular with something like the Center for Internet Security (CIS) Top 20 Controls.7 Finally, it is commonly said that controls can be implemented with people, processes, and technologies.
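The mapping described above, a risk assigned a category with controls grouped by type, can be sketched as a simple risk-register entry. The names and mappings below are hypothetical illustrations, not taken from any particular framework.

```python
# Illustrative risk-register entry: one risk, its category, and the
# preventative/detective/corrective controls applied to it.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str
    category: str                 # e.g., "financial", "reputational"
    controls: dict = field(default_factory=dict)  # control type -> measures

entry = RiskEntry(
    risk="Customer data exposure via web application",
    category="reputational",
    controls={
        "preventative": ["input validation", "web application firewall"],
        "detective":    ["log monitoring", "intrusion detection alerts"],
        "corrective":   ["incident response plan", "patching process"],
    },
)
print(entry.category, sorted(entry.controls))
```

Even a lightweight structure like this forces a team to record which control types, if any, actually cover a given risk, which is where gaps tend to surface.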
Lack of implementation
Much of the content in the aforementioned frameworks and approaches has been around for many years. COBIT celebrated a 20-year anniversary in 2016; NIST 800-30 was originally published in 2002; and some practitioners (or CISSP candidates) may recall the Department of Defense (DoD) Rainbow Series dating back to the mid-1980s.8 The content in these frameworks is still very relevant today, particularly since some of them are continually updated. However, many organizations only pay lip service to the frameworks by referencing them in policy, while the detailed components and best practices behind them are barely used. This practice alone indicates the lack of a true commitment to risk management within an organization.
The cost of not implementing controls
Organizations have limited resources, in particular time and money, and firms need to use them wisely. While reports from Gartner9 and other research firms forecast increased yearly security spending, within an organization security enhancements may fall behind other initiatives, especially when an improvement requires time from a development team. For example, a feature request from a large client may take higher priority than a security improvement to a web service used by a subset of clients.
To take the software development example a step further, one only needs to look at the cost of a software defect. It is well known that the cost of fixing a software bug is dramatically lower when it is found early in the development life cycle, such as in the requirements gathering or initial design phases, than when the issue is identified in production. Security flaws, when possible, should likewise be fixed early in the life cycle. The longer they exist, the higher the cost will likely be to fix them later.
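The cost-growth pattern described above is often summarized with phase multipliers. The multipliers below are illustrative placeholders in the spirit of commonly cited industry figures, not measured data.

```python
# Illustrative relative cost of fixing the same defect, by the phase in
# which it is discovered (multipliers are hypothetical).
RELATIVE_FIX_COST = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost: float, phase: str) -> float:
    """Estimated cost to fix a defect first discovered in a given phase."""
    return base_cost * RELATIVE_FIX_COST[phase]

# A flaw costing $500 to address during requirements gathering...
early = fix_cost(500, "requirements")
# ...may cost orders of magnitude more once it reaches production.
late = fix_cost(500, "production")
print(f"requirements: ${early:,.0f}, production: ${late:,.0f}")
```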
There is also a hidden cost of not prioritizing security improvements: if they are not addressed today, they may never get implemented. This may also lead to a long-term tendency to downplay other security enhancements.
Having a culture of risk acceptance
There are organizations that have great defense-in-depth strategies, have the latest security technologies in place, or possess some of the smartest minds in information security. However, many organizations do not have the resources for all three, and even those that do will have some risks that must be accepted. Risk acceptance is needed in practice, as organizations can neither afford to, nor should, spend their limited resources on trivial risks. Additionally, many of the most prevalent compliance frameworks take compensating controls and mitigating factors into consideration when assessing risk.
There is a tendency to become comfortable accepting small amounts of risk and then, over time, to become comfortable accepting larger amounts, often on the basis that earlier risk acceptance caused no harm. More importantly, meeting minimum compliance requirements often gives organizations a false sense of security that they can accept risks that fall outside those requirements.
What types of risks are accepted?
This is a broad question, but it sets up the discussion below of rationalizations and responses to identified risks. Examples that organizations may deal with include:
- Unpatched systems
- Systems configured with less than ideal security parameters
- Application vulnerabilities
- Lack of end-to-end encryption
- Human/social engineering threats
The following sections address common rationalizations and arguments for accepting risks.
On compensating controls – How attackers bypass single controls
When an information security risk is identified and cannot be easily corrected or mitigated, there is a propensity to reduce the risk through compensating controls. There is nothing wrong with this approach, and when a set of controls is properly implemented, there is often a valid reason to do so. However, reducing the risk of a given finding based on the presence of a single control likely gives too much credence to a particular scenario, as individual controls can be bypassed. Below are ten of the most common controls cited when attempting to reduce risk, each with a consideration to keep in mind when relying on it.
- Strong or two-factor authentication (2FA) – Requiring an extra level of authentication beyond username and password is a necessary control to mitigate the potential of weak passwords, brute force attacks, and other attacks against authentication mechanisms. However, man-in-the-middle (MitM) attacks against these mechanisms can be difficult, but they do occur. NIST issued SP 800-63B, Digital Identity Guidelines,10 which includes a comprehensive list of threats and security considerations for many commonly used additional-authentication technologies.
- Transport layer security (TLS) – Encryption in transit is a necessary control to protect data in transit, particularly over the Internet, but there are many other attack vectors an attacker would likely try before capturing the traffic in transit.
- Not Internet facing – While assets on the Internet do face more frequent attacks, non-Internet-facing assets are frequently less hardened and likely contain the sensitive data an attacker is looking for. Also, encryption in transit may not be in place at all times, such as for database traffic on the internal network.
- Segmented network – Well-designed network segmentation can create an additional layer of defense, but there are always some assets that can get into the segmented network, such as a jump server, assets on a management virtual local area network (VLAN), or a vulnerability scanning box. Additionally, some segmented networks can grow to be quite large and over time essentially
About the Author
Matt Wilgus is a Practice Director at Schellman & Company, Inc. Matt leads the Security Testing and Assessment offerings. In this role he heads the delivery of Schellman’s penetration testing services related to 3PAO and PCI assessments, as well as other regulatory and compliance programs. Matt has over 19 years’ experience in information security, with a focus on identifying, exploiting and remediating vulnerabilities, in addition to extensive experience enhancing client security programs while effectively meeting compliance requirements.