class: center, middle, title-slide
# context and strategy

## navigating an uncertain world

### (updated: 2022-04-22)

---

# What and Why

* .large[Governance: collection of capabilities driving principled performance]
  * Example: requiring code reviews
  * Example: forced 2-week vacations [FDIC](https://www.fdic.gov/news/news/financial/1995/fil9552.html)
* .large[Compliance: externally imposed standards]
  * UL listing for electrical appliances
  * PCI standard for card processing
  * FIPS 140-2 for crypto modules
  * Whether your workers are contractors or employees
* .large[Risk: Market, Credit, Operational]
  * Great introduction in the free and open source book [_Financial Analytics Using R_](https://bookdown.org/wfoote01/faur/)

???

One way to think about governance is that it is typically about setting up forces that will act on a situation where you don't know any of the details or the people. It has to be vague enough that it is possible for people to do the right thing.

Consider code review: imagine trying to design a control that makes things safer regardless of technology, language, or team size. Imagine trying to make a rule that will prevent fraud in companies that haven't been formed yet, using technologies that haven't been invented yet!

In highly technical fields, compliance standards can be vague or lag technology so much that the challenge is interpreting the standards in a responsible and balanced way. Healthcare and cybersecurity standards in particular tend toward the over-prescriptive.

The only reason we care about governance and compliance is because we care about risk. Risk is a very overloaded word, even worse than security. In addition to market risk and credit risk, we might also think of liquidity risk. Often security people lose sight of the fact that cybersecurity is only one component of operational risk. Correctly placing cybersecurity as a component of operational risk is important for making balanced business decisions, and opens the door for folks working on cybersecurity to use more mature and structured ways of measuring and prioritizing risk. The old RED/YELLOW/GREEN arbitrary rankings are not sufficient.

---

# Defining Operational Risk

## RISK = Loss Event Frequency x Loss Magnitude

- A measurable event
- No such thing as "a risk": probabilities of losses associated with scenarios over a period of time
- A **loss event** is a **threat** acting on an **asset** causing an **effect**
- Effects are CIA
  - Confidentiality (data breach)
  - Integrity (fraud)
  - Availability (theft, DoS, outage)
- Effects have a magnitude, or cost

???

There is no such thing as "a risk"; there are amounts of risk associated with scenarios over a period of time. Typically we would set the time period to one year.

An example scenario would be laptop loss/theft over a year. You know how many laptops you have, how much travel happens, and how many laptops were lost last year: in 5 minutes you have some of the necessary inputs to calculate the probabilities of loss over the next year.

What is NOT risk?

* A control deficiency: untested recovery process, weak passwords
* A threat community (disgruntled employees, cybercriminals)
* An asset (database of customer records)
* An attack that does not cause loss (successfully defended, or asset of no value)

---

# What are losses?

## FAIR six forms of loss

* Primary or direct losses
  * Productivity (business interruption)
  * Replacement (capital assets)
  * Response (crisis management, forensic investigation)
* Secondary or indirect losses
  * Fines & Judgements
  * Reputation
  * Competitive Advantage

???

Sometimes it's useful to split these into two buckets, primary and secondary losses. Primary losses are incurred directly by the organization; secondary losses are imposed later by forces outside the organization (market, government).

We are NOT trying to completely eliminate losses! We are trying to make better decisions about them, to manage them. Can you imagine an investor deciding to pursue a zero-losses investment strategy? They might do it, but they would also have zero gains.
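To make the RISK = Loss Event Frequency x Loss Magnitude formula concrete, here is a minimal Monte Carlo sketch of a single scenario, in the spirit of the laptop example above. This is not from the deck: the Poisson/lognormal distribution choices and every parameter value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 10_000  # number of simulated years for one scenario

# Loss event frequency: how many loss events occur in each simulated year.
event_counts = rng.poisson(lam=3.0, size=years)

# Loss magnitude: cost of each event, summed into an annual loss figure.
annual_loss = np.array([
    rng.lognormal(mean=8.0, sigma=1.0, size=n).sum()
    for n in event_counts
])

print(f"median annual loss: ${np.median(annual_loss):>10,.0f}")
print(f"95th percentile:    ${np.percentile(annual_loss, 95):>10,.0f}")
```

Note that the output is a distribution of annual losses, not a single point; that distribution is what the next slides summarize and compare.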
---

# How to express the measurement?

## Probability of loss for a scenario

* Probability of loss event
* Probability of magnitude
* Not a single-point ordinal number!
* Example from [TidyRisk Evaluator](https://evaluator.tidyrisk.org/reports/evaluator_risk_analysis.html)

???

People often get hung up on measurement versus expert estimation. How can an estimate be a measurement? Two super important points:

* Measurement is reduction in uncertainty. Let's talk through an example.
* When we record an estimate, we want to INCLUDE the uncertainty so that it doesn't get lost when we combine multiple estimates.

If you want to go deep on this, I recommend the book _Bayesian Methods for Hackers_; it's on GitHub. I also recommend Doug Hubbard's book _How to Measure Anything in Cybersecurity Risk_.

---

background-image: url(img/evaluator-loss-exceedance.png)
background-position: 10% 50%
background-size: 80%
class: center

???

This is an example from the free and open source TidyRisk toolkit by David Severski. A loss exceedance curve is a common way to review the expected losses in a year. This example aggregates 56 different risk scenarios. The probabilities can be added up, so you get a similar curve whether looking at a single scenario, rolled up for the entire enterprise, split out by category, etc.

This example shows the 80% likelihood of losses exceeding a certain level. Another way to think about 80% is 4 out of 5 years. So based on the estimates that went into this example risk analysis, in 4 out of 5 years they expect at least $4.2M in losses.

The immediately useful thing about this is that you can have a business conversation around a budget. And you can focus your risk reduction efforts in the right spot! Maybe the right thing to do is reducing threats, but maybe the right thing to do is reducing the magnitude of loss. Would you spend $5 million to prevent $4M in loss?

---

# heatmaps - red flag for poor analysis

* Cannot add risk scenarios together - how much risk if you add 2 red risks?
* Range compression - just barely red vs very red
* Doesn't allow for expression of uncertainty
* Cannot calculate reduction in risk per dollar spent on security
* Use online tools to get a feel for building a model from range estimates
* Qualitative labels as summaries of ranges are fine

https://www.fairinstitute.org/blog/heat-maps-dont-support-iso-31000

https://www.researchgate.net/publication/266666768_The_Risk_of_Using_Risk_Matrices

https://medium.com/guesstimate-blog

???

One of the most crucial concepts to take away here is that measurement is reduction in uncertainty. Experts making subjective estimates is still useful, but we want to be careful to capture the uncertainty in those estimates so that when we combine estimates to build a model, we don't throw away information and make gross math errors. There are solid, well-known math techniques for working with uncertainty (probabilities), while counting the number of "red" threats is actively harmful. Cybersecurity practitioners should be expected to learn the same basic numeracy that we expect from finance folks.
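A hedged sketch of what "adding two risks" looks like when done with distributions instead of colors: sum the scenarios' simulated annual losses, then read a loss exceedance level off the combined distribution, as in the TidyRisk chart two slides back. Both scenarios and all parameter values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 20_000

def annual_losses(lam, mu, sigma):
    """Simulated annual losses: Poisson event counts, lognormal magnitudes."""
    counts = rng.poisson(lam=lam, size=years)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# Two scenarios that a heatmap might both paint "red"
rare_severe = annual_losses(lam=0.5, mu=13.0, sigma=1.2)
frequent_moderate = annual_losses(lam=6.0, mu=9.5, sigma=0.8)

# Probability distributions add together meaningfully; ordinal colors don't.
combined = rare_severe + frequent_moderate

# Loss exceedance: the level exceeded with 80% probability,
# i.e. in 4 out of 5 simulated years.
print(f"80% chance annual losses exceed ${np.percentile(combined, 20):,.0f}")
```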
---

background-image: url(lines.png)
background-position: bottom -135px center
background-size: 30%
class: center, middle

# "You can outsource your operations, but you cannot outsource your risk"

US Department of Homeland Security, Cybersecurity and Infrastructure Security Agency (CISA)

[Awareness Briefing on Chinese Cyber Attacks](https://www.us-cert.gov/event/chinese-cyber-activity) [(slides)](https://www.us-cert.gov/sites/default/files/c3vp/Chinese-Cyber-Activity-Targeting-Managed-Service-Providers.pdf)

???

In February 2019 Kindly Ops attended this government briefing reviewing a sustained campaign by the Chinese government against managed service providers. During the briefing they referenced the January 2019 Worldwide Threat Assessment of the US Intelligence Community, published by Daniel Coats, Director of National Intelligence, which stated "Cyber is the top threat to national security".

The WHY: when a breach occurs at any company, we want to be sure that commercially reasonable defenses were in place, and that we had proactively analyzed and managed the risks, not just done the minimum mandated for compliance without considering organization-specific risk scenarios.

---

class: center, middle

# STRATEGY

???

All the moving parts are too big to fit inside any one expert's head. We don't want to waste time and money overprotecting things; the security vendors will happily play off the growing concern and hype. While the technical details of cybersecurity are changing rapidly, we have managed operational risk in the face of grave threats and significant uncertainty for as long as we have had insurance companies. We need to use good tools to help us make structured judgements. The next slides cover some useful models and frameworks.

---

background-image: url(img/kindlyops-grc-infographic.png)
background-position: 10% 50%
background-size: 80%
class: center

???

There is often friction between technology builders, product and business owners, and security and governance teams. Starting with empathy makes the rest of the process much more efficient. It is common for companies to work on this baseline from a top-down and a bottom-up approach at the same time. Next we will describe some of the control and risk frameworks that are useful to accelerate development of a security program.

Has your organization done initial scenario-based risk analysis based on prioritized threats? Do you have metrics or KPIs, such as FORCE, that encourage a learning organization? Building empathy starts with this presentation, helping people understand the bigger picture rather than expecting them to react perfectly to ever-increasing tensions.

---

# Mental Models and Organizational Learning

* .large[Lance Hayden, _People-Centric Security_]
  * [CC-licensed toolkits](http://lancehayden.net/culture/)
* .large[Security Culture Diagnostic Survey]
  * [Haven open source SCDS implementation](https://github.com/kindlyops/havengrc)
* .large[Security FORCE metrics]
  * Maturity models like CSF give an initial lift, not sustained improvement
  * Are these metrics more governance or compliance?

---

class: center, middle

# Baseline, top-down
# NIST Cyber Security Framework (CSF)

???

This is the top-down cybersecurity framework: a good list of best practices around cybersecurity and risk management that organizations can follow as a guide. It can be used as a maturity model, where activities are ranked on ordinal scales of 1 to 5 (see the sketch at the end of these notes). This is what many firms are doing enterprise-wide.

Important: NIST CSF does not define what cyber risk is, and does not provide guidance on how to assess which areas have the biggest probable loss exposure or how to prioritize based on risk reduction.
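A minimal sketch of what a "ranked on ordinal scales 1 to 5" maturity self-assessment might look like. The five function names are the real NIST CSF functions, but the scores and the rollup are invented for illustration, not part of the deck.

```python
# Hypothetical maturity scores (1-5) for the five NIST CSF functions.
csf_scores = {
    "Identify": 3,
    "Protect": 2,
    "Detect": 2,
    "Respond": 1,
    "Recover": 1,
}

# Print the weakest functions first, with a simple text bar per score.
for function, score in sorted(csf_scores.items(), key=lambda kv: kv[1]):
    print(f"{function:<8} {'#' * score} ({score}/5)")

# A rollup like this is a useful baseline, but as noted above it says
# nothing about probable loss exposure -- that is what FAIR adds.
print(f"average maturity: {sum(csf_scores.values()) / len(csf_scores):.1f}/5")
```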
---

class: center, middle

# Baseline, bottom-up
# CIS Top 20, AWS Foundations Benchmark

???

This is the bottom-up control framework: a prioritized set of controls that protect against known cybersecurity attack vectors, informed by a consortium of industry experts and prioritized based on analysis of attacks across the industry.

---

class: center, middle

# Baseline, breadth
# AWS Well-Architected Framework

Security

Operational Excellence

Reliability

Performance Efficiency

Cost Optimization

???

Twelve years of cloud operations experience distilled into best practices across 5 areas of focus. Given that many of the attacks that happen in the industry are against companies running on AWS, there is a great deal of detailed knowledge about how to run securely.

---

class: center, middle

# FAIR (Factor Analysis of Information Risk)

https://www.fairinstitute.org/

???

Factor Analysis of Information Risk (FAIR) has emerged as the standard Value at Risk (VaR) framework for cybersecurity and operational risk. Originally developed at an insurance company, then published as an Open Group standard, it is now being adopted rapidly across the industry. It is complementary to and compatible with CSF, and provides a way of adding structure to any discussion about controls.

---

# ASSESSING CURRENT STATE

* .large[CSF Maturity Model assessment]
* .large[CIS Cloud foundations benchmark]
* .large[Security Culture Diagnostic]

---

# Emerging work

* .large[FAIR Privacy]
  * [NIST work on quantitative privacy risk for individuals](https://github.com/usnistgov/PrivacyEngCollabSpace/tree/master/tools/risk-assessment/FAIR-Privacy)
* .large[NIST further endorsement of FAIR] (September 2019)
  * [NIST mapping of FAIR to CSF](https://www.nist.gov/cyberframework/informative-references/informative-reference-catalog)

???

The privacy work is particularly interesting because it is applied to individuals, not organizations. The way you model loss for individuals is super interesting compared to organizations. If there is a 95% chance that I'm going to die from my terminal cancer in the next 6 months, would I take a medication that has a 50% chance of killing me in 36 months?

---

# Questions?
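???

Picking up the FAIR Privacy medication question from the notes on the previous slide: a minimal worked comparison, assuming (purely for illustration) that the medication fully treats the cancer and that the two outcomes can be compared over the same 36-month window.

```python
# Without the medication: a 95% chance the cancer is fatal within 6 months,
# so at most a 5% chance of being alive at 36 months.
p_survive_without = 1 - 0.95

# With the medication (assumed here to treat the cancer): a 50% chance the
# medication itself is fatal within 36 months.
p_survive_with = 1 - 0.50

print(f"36-month survival without medication: <= {p_survive_without:.0%}")
print(f"36-month survival with medication:       {p_survive_with:.0%}")
# A 50% risk sounds terrifying in isolation, but against a 95% baseline it is
# clearly the better bet: loss for an individual only makes sense relative to
# the alternatives.
```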