Q&A Follow-up from “Risk Assessment on the Shoulders of Scientists”

Category

Risk Modeling, Quantitative Risk

In our Hive Live episode Risk Assessment on the Shoulders of Scientists, Corey walked us through what the research says about expert estimates and how we can use that knowledge to improve cybersecurity decision-making. We received a number of follow-up questions (keep them coming!), and we’ve answered the latest batch below!

“How do you recommend starting a quantitative approach to assessing risk?”

You can get started by modifying your existing risk assessment tool or by trying out our free risk assessment spreadsheet.

To modify your existing risk tool, add three new fields (or columns) next to where your experts currently enter their risk impact scores. You can name those new columns Least, Most Likely, and Highest. Next time you collect scores from your experts, collect their “ballpark” impact estimates here in dollars. This will give you practice performing quantitative risk assessment, give your estimators practice giving quantitative estimates, and give your stakeholders a preview of what a fully quantitative program would look like. At some point you’ll be able to hide the “vestigial” score columns and eventually delete them. For more guidance and an example risk assessment tool, check out our upcoming ACT post. The first tab in our free spreadsheet tool shows a fully quantitative assessment, while the second tab shows score columns side by side with quantitative estimate fields.
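
To make the payoff of those three columns concrete, here is a minimal Python sketch (our illustration, not part of the spreadsheet tool) that turns a Least / Most Likely / Highest dollar estimate into a simulated loss range. The triangular distribution and the example figures are assumptions made for illustration; a fuller model would also include a probability of occurrence.

```python
# Minimal sketch: turn a three-point dollar estimate into a simulated loss range.
# The triangular distribution and the example figures are illustrative assumptions.
import random

def simulate_impact(least, most_likely, highest, trials=10_000):
    """Draw impact samples across the expert's estimated range."""
    samples = sorted(random.triangular(least, highest, most_likely) for _ in range(trials))
    return {
        "mean": sum(samples) / trials,
        "p10": samples[int(0.10 * trials)],
        "p90": samples[int(0.90 * trials)],
    }

# Example: one expert's ballpark impact estimate for a single risk scenario.
print(simulate_impact(least=25_000, most_likely=100_000, highest=750_000))
```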

“Gigerenzer makes a case for the use of heuristics (i.e., fast-and-frugal trees) and reference classes in certain situations. Can you please comment on when it is appropriate to use those methodologies in cybersecurity?”

We haven’t come across scientific research suggesting that heuristics or fast-and-frugal trees are appropriate for cybersecurity problems.

The examples of successfully applied heuristics that we did find discussed them in the context of sports, medicine, litigation, and counterterrorism. We didn’t find strong enough similarities between these fields and cybersecurity that would allow us to say “this heuristic applied in X field is sufficiently similar, so it probably works on cybersecurity problems.” We’ll keep searching and keep you informed!

Outside of the scientific literature, heuristics are behind many attempts to automate risk assessment, security event categorization, and email filtering. For example, does your data classification policy have something like the table below?

Description | Examples | Data Risk Classification
Data that is already publicly available or has no negative consequences if made public | Public-facing website readable text and images | Low Impact
Data that, if made public or destroyed, incurs moderate cost or decreases the value of company assets | Internal documents, databases, and other files | Medium Impact
Data that is regulated, or that incurs significant cost or loss of value if destroyed or accessed by unauthorized parties | PCI, PII, ePHI, FERPA, trade secrets | High Impact

Tables like this one encode a person’s heuristic rule set, e.g. “IF a thing contains regulated data THEN risk = High.” We’re not advocating the use of such tables. There is no evidence that they improve decisions or make us more secure. A quantitative approach would at least enable us to apply bias-reducing methods to our decision-making process, including Gigerenzer and colleagues’ findings on Reference Classes.
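As a concrete (and deliberately oversimplified) illustration of what such a table encodes, here is a short Python sketch of the underlying IF/THEN rule set. The marker keywords and categories are made up for this example; the point is that the heuristic is a fixed chain of rules, not an estimate of likelihood or loss.

```python
# Illustrative sketch of the heuristic rule set a data classification table encodes.
# The marker keywords below are invented for this example.
REGULATED_MARKERS = {"pci", "pii", "ephi", "ferpa", "trade secret"}

def classify(data_description: str) -> str:
    text = data_description.lower()
    if any(marker in text for marker in REGULATED_MARKERS):
        return "High Impact"    # IF it contains regulated data THEN risk = High
    if "internal" in text:
        return "Medium Impact"  # IF it is internal-only THEN risk = Medium
    return "Low Impact"         # everything else defaults to Low

print(classify("Internal HR database"))           # Medium Impact
print(classify("Customer PII export for audit"))  # High Impact
```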

Let's continue the conversation on heuristics and fast-and-frugal decision trees in cybersecurity.

“How do you deal with a lack of data regarding future risks? For example, we know that X ransomware attack happens every N years, but what about proactive security? Is there a way to incorporate it into this method?”

Use what you do know to model the future and update that model as the future unfolds. Humans do this in their heads anyway to make decisions. Putting that thought process on paper (or in a spreadsheet) allows us to account for more factors than the brain can handle. It also gives us the ability to collaborate and build something more reliable and robust.

If you have statistics about ransomware attacks that you trust are applicable to your organization, then I recommend using them to establish your baseline probability of what the future might look like.

One transparent and systematic way to do that is to perform a quantitative risk assessment. You can use the results of such an assessment to prioritize your security projects and understand how much money is too little or too much to spend on them.
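
To make “update that model as the future unfolds” a little more tangible, here is a small Python sketch of a Beta-Binomial update of the annual probability of a ransomware incident. The prior counts and the follow-on observations are made-up numbers for illustration only.

```python
# Sketch: start from a baseline probability, then fold in new observations over time.
# All counts below are invented for illustration.

def annual_probability(prior_hits, prior_misses, new_years=0, new_incidents=0):
    """Beta-Binomial posterior mean for 'at least one incident in a given year'."""
    hits = prior_hits + new_incidents
    misses = prior_misses + (new_years - new_incidents)
    return hits / (hits + misses)

# Assumed baseline from industry statistics: roughly 1 incident-year in 5.
baseline = annual_probability(prior_hits=2, prior_misses=8)
# Three more years of your own history, with one incident observed:
updated = annual_probability(prior_hits=2, prior_misses=8, new_years=3, new_incidents=1)
print(f"baseline: {baseline:.2f}, updated: {updated:.2f}")  # baseline: 0.20, updated: 0.23
```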

“Where can people find training for calibration?”

Check back for our upcoming ACT post on Calibration Training in Cybersecurity to learn more about the science supporting calibration and for a list of free resources, web apps, and question sets to get you started.

“How can you translate the "gut instinct" of cyber professionals into real data?”

Gut instincts are not reliable decision-making sources unless your occupation is “high-validity.” If your experts haven’t routinely and directly experienced the events that you are asking them to estimate, they do not have a gut instinct for them. There may be aspects of cybersecurity jobs that are high-validity and help us develop an instinct (e.g., I have a sense for what kind of queries will crash the log monitoring system), but cybersecurity professionals and their management rarely, if ever, experience the full lifecycle of a cybersecurity event. For a more thorough discussion of this issue, check back for our upcoming ACT post There Are No Experts In Cybersecurity.

This doesn’t mean you can’t use what your experts do know to make increasingly well-informed decisions. That is why we build models and integrate data with expert estimates! 

If you’re interested in reading more, check out our upcoming ACT post on Starting Quantitative Risk Assessment.

 
