Informal Risk Acceptance
Why known security risks remain - and what they cost your P&L
I see the same pattern again and again in client engagements: a risk is known, a measure has been decided, and yet nothing happens for a long time. The reasons are rarely irrational. On the contrary, they seem plausible: implementation is complex, resources are tied up, other issues seem more urgent. Yet this is precisely where the problem lies. In these situations a decision is not avoided, but made - only implicitly. The risk is not actively accepted, but borne over time.
I call this phenomenon informal risk acceptance. And in many organizations it is significantly more expensive than it appears at first glance.
Carsten Reffgen
March 2026
The most expensive category of risks
From a management perspective, risks fall roughly into three categories: unknown risks, correctly treated risks - and a third category that is particularly relevant in practice: known risks whose agreed measures are never implemented. It is these risks that do not disappear. They reappear in every report, absorb attention, and generate costs over time that are rarely recorded systematically.
These costs are not abstract. They manifest concretely in higher insurance premiums, additional audit requirements, restrictions on business decisions, or operational exposure when an incident occurs. Many organizations take this state of affairs as a given. In fact, it is the result of repeated decisions.
The key question is therefore not just how great a risk is, but why it persists.
Why known risks persist
While investigating social engineering attacks, we found a recurring pattern in human decision-making. I refer to this as the Emotion-Bias-Principle (EBP) Effect Model. It describes how emotional relief, cognitive plausibility, and social legitimacy work together to stabilize decisions - even when those decisions are objectively disadvantageous.
The model is not limited to analyzing security contexts; it also applies to dealing with risk in general. Under uncertainty, people and organizations tend to prefer decisions that provide orientation and emotional security in the short term - and thereby unintentionally reinforce existing risks.
The model thus adds a central dimension to classic risk assessment: it explains not only how risks are assessed, but why they persist over time.
A typical case from practice
One example I regularly see in this or a similar form is postponed network segmentation. The underlying risk is known and the measure is technically undisputed, but implementation involves considerable effort. As a result, the decision is postponed again and again. The reasons remain the same: too complex, too resource-intensive, not a current priority.
Over time, this creates a state that was not actively chosen, but which stabilizes. This is precisely the crucial point: the organization makes a decision without treating it as such. The risk is not formally accepted, but implicitly passed on.
If an incident occurs in such constellations - a ransomware attack, for example - the consequences are immediate. Instead of isolating individual systems, larger parts of the infrastructure often have to be shut down. The resulting costs and business interruptions frequently far exceed the effort the postponed measure would have required.
What organizations can do in concrete terms
Adopting this perspective also shifts the approach to improvement. The key lever lies not primarily in prioritizing individual risks, but in understanding the decision-making logic that leads to their persistence.
A pragmatic first step is to identify risks that appear unchanged in reporting over several cycles. In these cases, not only the level of risk should be considered, but in particular the question of why implementation is not taking place. In many cases, this shift in perspective already provides clarity as to where implicit decisions are being made - and where there are specific starting points for reducing risk and costs.
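The first step described above can be sketched as a simple filter over a risk register. The sketch below is purely illustrative: the field names (`risk_id`, `cycle`, `status`) and the threshold of three cycles are assumptions for the example, not taken from any specific GRC tool or reporting format.

```python
# Minimal sketch: flag risks that remain open across several reporting
# cycles - candidates for informal risk acceptance. Field names and the
# threshold are illustrative assumptions, not a real tool's schema.
from collections import defaultdict

def persistent_risks(register, min_cycles=3):
    """Return IDs of risks with status 'open' in at least `min_cycles`
    distinct reporting cycles."""
    open_cycles = defaultdict(set)
    for entry in register:
        if entry["status"] == "open":
            open_cycles[entry["risk_id"]].add(entry["cycle"])
    return sorted(rid for rid, cycles in open_cycles.items()
                  if len(cycles) >= min_cycles)

# Example register: one row per risk per quarterly report.
register = [
    {"risk_id": "R-012", "cycle": "2025-Q1", "status": "open"},
    {"risk_id": "R-012", "cycle": "2025-Q2", "status": "open"},
    {"risk_id": "R-012", "cycle": "2025-Q3", "status": "open"},
    {"risk_id": "R-044", "cycle": "2025-Q2", "status": "open"},
    {"risk_id": "R-044", "cycle": "2025-Q3", "status": "closed"},
]

print(persistent_risks(register))  # ['R-012']
```

The output of such a filter is only the starting point: for each flagged risk, the question to pursue is not its score, but why implementation has stalled.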
Conclusion: The biggest risks are often the known ones
Many organizations assume that their biggest risks are the ones they don't know about. In practice, however, the picture is often different: the biggest risks are those that are known - and yet remain.
In many cases, this is precisely where the greatest leverage lies for a sustainable improvement in the security and cost position.
Contact and further reading
At EOS, we support organizations in making these decision-making logics visible and changing them in a targeted manner. I developed the underlying EBP model through the analysis of social engineering attacks and transferred it to decision-making processes in risk and security management.
If you have 2-3 risks that have been in your reporting for years, it is worth taking a closer look.
👉 Arrange a 30-minute executive call and we will analyze your situation together - from a security and P&L perspective.
What is Informal Risk Acceptance?
Informal risk acceptance describes the situation in which organizations do not consciously accept known risks, but instead implicitly bear them over time. This form of decision stabilization leads to hidden costs and increased risk exposure.
If you would like to delve deeper, you will find the underlying analysis in the following working paper:
Reffgen, Carsten (2026): The Stability of Known Risk: Decision Stability and Informal Risk Acceptance under Normative Uncertainty
DOI: 10.2139/ssrn.6267599
What is the EOS EBP assessment?
In the Emotion-Bias-Principle (EBP) Assessment we make visible why risks persist in your organization. We analyze how decisions are actually made under uncertainty - and identify the mechanisms that lead to the stable continuation of risks.
This reveals where risks are not decided but borne - and where avoidable costs arise.
Literature
Kahneman, D. (2011). Thinking, fast and slow (1st ed.). Farrar, Straus and Giroux.
Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. William Collins.
Lerner, J. S., Li, Y., Valdesolo, P., & Kassam, K. S. (2015). Emotion and Decision Making. Annual Review of Psychology, 66(1), 799–823. https://doi.org/10.1146/annurev-psych-010213-115043
March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. Free Press.
Reffgen, C. (2026). The Emotion-Bias-Principle (EBP) Effect Model: An Analytical Framework for Social Engineering as Decision-Making under Uncertainty. SSRN. https://doi.org/10.2139/ssrn.6139207
Reffgen, C. (2026). The Stability of Known Risk: Decision Stability and Informal Risk Acceptance under Normative Uncertainty. SSRN. https://doi.org/10.2139/ssrn.6267599
Suchman, M. C. (1995). Managing Legitimacy: Strategic and Institutional Approaches. The Academy of Management Review, 20(3), 571. https://doi.org/10.2307/258788
Thornton, P. H., Ocasio, W., & Lounsbury, M. (2012). The institutional logics perspective: A new approach to culture, structure and process. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199601936.001.0001
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science (New York, N.Y.), 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124