Lead Author: Pavel Krcal Co-author(s): Ola Bäckström Ola.Backstrom@lr.org
Pengbo Wang Pengbo.Wang@lr.org
Implementation of Conditional Quantification in RiskSpectrum PSA
Basic events in Probabilistic Safety Assessment (PSA) models are typically quantified independently of the accident sequence and of other failures that lead to system unavailability. This simplifies the quantification of undesirable consequences, and in most situations this approximation does not distort safety indicators. However, there are emerging needs for dependency handling between basic events, such as (1) dependencies between operator actions, (2) correlations between events in PSA, e.g., incurred by seismic events, and (3) common cause failure modeling. In these situations, improved handling of dependencies could yield more realistic analysis results and thereby increase the applicability of safety indicators.
Conditional quantification of basic events provides a flexible, simple, and transparent tool for modeling these dependencies. At the same time, it poses theoretical and algorithmic challenges to analysis tools. We describe the implementation of the first release of this feature in RiskSpectrum PSA (version 1.5.0, released in 2021), focusing on the choices taken and the solutions applied. The aim of the feature is to enable users to specify conditional probabilities of basic events when needed and appropriate, with a primary focus on Human Reliability Analysis (HRA) applications. The solution treats conditional quantification of basic events correctly throughout the whole analysis chain: the generation of minimal cut sets (MCS), the quantification of the generated MCS list (including the MCS BDD algorithm), the merging and post-processing of MCS lists, as well as the importance, sensitivity, time-dependency and uncertainty analyses.
Dependency treatment for operator actions removes the undeserved bonus obtained when several human failures within one scenario are quantified independently. Subsequent failures might depend on the fact that the operator has already failed a previous action. Conditional quantification then conservatively increases the human error probability, typically according to one of the pre-defined formulas specified in the applied HRA method. The implemented algorithm allows users to specify conditional probabilities as part of the model and then run a single analysis that efficiently generates all cut sets and correctly applies the cutoff, so that no cut set whose value exceeds the cutoff after conditional quantification is lost. Basic events that obtain a new value from a conditional probability are treated as separate events. This resolves possible under-approximations due to the success treatment of conservatively estimated dependent basic events, especially in the MCS BDD quantification of the MCS list. Experiments on industrial-sized models show that our method, compared with the standard practice of replacing HRA events in post-processing, efficiently generates minimal cut sets which would otherwise be missing or discarded.
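To illustrate the kind of conditional adjustment involved, the sketch below shows, in Python, the widely used THERP dependency formulas (Swain and Guttmann, NUREG/CR-1278). It is a minimal illustration of the principle, not RiskSpectrum code; the dependency level assigned to a pair of actions would in practice come from the applied HRA method.

    # Minimal sketch of conditional HEP adjustment, assuming THERP-style
    # dependency formulas (NUREG/CR-1278). Not RiskSpectrum code.
    def conditional_hep(p: float, level: str) -> float:
        """Conditional human error probability of an action, given that a
        preceding operator action in the same scenario has already failed."""
        formulas = {
            "zero":     lambda p: p,                  # no dependence
            "low":      lambda p: (1 + 19 * p) / 20,
            "moderate": lambda p: (1 + 6 * p) / 7,
            "high":     lambda p: (1 + p) / 2,
            "complete": lambda p: 1.0,
        }
        return formulas[level](p)

    # Example: a nominal HEP of 1e-3 raised under high dependence.
    print(conditional_hep(1e-3, "high"))  # approx. 0.5, conservatively increased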
Further development will focus on extending conditional quantification to other applications. Common cause failure modeling might benefit from a more flexible way of specifying dependencies between events; for instance, not fully symmetrical situations might be transparently modeled by specifying conditional probabilities. Correlations between seismic events represent another possible area for conditional quantification, where this concise way of specifying dependencies might improve both modeling and result precision.
Bio: Pavel Krcal received his PhD in Theoretical Computer Science (Formal Verification of Real-Time Systems) from Uppsala University, Sweden, in 2009. Since then, he has been working as part of the software development team of RiskSpectrum, where he has gained profound expertise in Reliability Theory and is now responsible for R&D in the area of modeling and analysis. Pavel also maintains the thought-leader profile of RiskSpectrum through collaboration with universities and through scientific publications.
Country: SWE Company: LR RiskSpectrum Job Title: RiskSpectrum Methods Research Lead
Paper 2 PA131
Lead Author: Pavel Krcal Co-author(s): Ola Bäckström Ola.Backstrom@lr.org
Helena Troili Helena.Troili@lr.org
Control Logic Encoding using RiskSpectrum ModelBuilder
A goal of model-based safety assessment is to bring dependability modeling closer to the system design and to allow for automated analysis of these high-level models. A system design description consists of system components and their relations. In many applications, a dependability model can mirror the system design very closely. The dependability logic can be specified in a generic form per component type, applicable to all instances of that component type. A model might require a limited amount of specific, irregular dependability information, such as relations or conditions affecting failures and their propagation. To prepare a model for analysis, it then only remains to specify a configuration and the safety/availability/production criteria.
The modeling language used in RiskSpectrum ModelBuilder for describing the dependability logic of component types is called Figaro; it has evolved and matured over decades. It is an object-oriented modeling language with elements of declarative programming. It allows interactions between components to be specified in first-order logic, so that a general description applies to all valid system topologies. The expressive power of the language has been demonstrated by numerous applications, especially in the Nuclear Safety domain, and a series of publications describes use cases where it has been successfully applied.
In this paper, we demonstrate how the Figaro language and the concept of knowledge bases empower dependability experts. Figaro allows them to formalize and codify dependability knowledge for a specific domain or application type. The knowledge base can then be used by non-experts, in the form of a component library, to build any model from this domain, and it can be systematically updated or extended whenever there is a need.
We focus on the possibilities for encoding complex control logic in the component definitions. In general, one can specify any logic that can be described by a finite state machine or by a flow chart. We also discuss communication between components, interactions between the state of a component and the states of related components, and the interleaving of stochastic events with the control actions necessary for the control.
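As a minimal illustration of such control logic, the sketch below encodes a hypothetical pump controller as a finite state machine. It is written in Python for accessibility rather than in Figaro, and the component, its states, and its events are invented for illustration; a knowledge base would declare an analogous machine once per component type and instantiate it for every component of that type.

    # Hypothetical component control logic as a finite state machine
    # (Python illustration, not Figaro syntax).
    class PumpController:
        TRANSITIONS = {
            ("STANDBY", "demand"): "RUNNING",  # control action: start on demand
            ("RUNNING", "stop"):   "STANDBY",  # control action: stop with demand
            ("STANDBY", "fail"):   "FAILED",   # stochastic event: failure to start
            ("RUNNING", "fail"):   "FAILED",   # stochastic event: failure to run
        }

        def __init__(self):
            self.state = "STANDBY"

        def fire(self, event: str) -> None:
            # Events with no transition from the current state are ignored.
            self.state = self.TRANSITIONS.get((self.state, event), self.state)

    controller = PumpController()
    for event in ("demand", "fail"):  # a demand arrives, then the pump fails
        controller.fire(event)
    print(controller.state)  # FAILED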
We exemplify the power of Figaro on Digital I&C for Nuclear Power Plants, where the ModelBuilder approach makes it relatively easy to extend the modeling to include intelligent voting. Automatic fault tree generation avoids the tedious and error-prone process of modeling this complicated feature manually. We also develop the main features of a control unit for a heterogeneous power generating station that schedules different power sources to match the demand. As the last example, we consider the control of a Spent Fuel Pool that takes the water level in the pool into account. The latter two applications utilize Monte Carlo simulations for the analysis.
Bio: Pavel Krcal received his PhD in Theoretical Computer Science (Formal Verification of Real-Time Systems) from Uppsala University, Sweden, in 2009. Since then, he has been working as part of the software development team of RiskSpectrum, where he has gained profound expertise in Reliability Theory and is now responsible for R&D in the area of modeling and analysis. Pavel also maintains the thought-leader profile of RiskSpectrum through collaboration with universities and through scientific publications.
Country: SWE Company: LR RiskSpectrum Job Title: RiskSpectrum Methods Research Lead
Paper 3 MA105
Lead Author: Mattias Håkansson Co-author(s): Gunnar Johanson gunnar.johanson@afry.com
C-Book: Common Cause Failure Reliability Data Book (Update)
Common Cause Failure (CCF) events can significantly impact the availability of safety systems of nuclear power plants. In recognition of this, CCF data are systematically being collected and analysed in several countries under the framework of the International Common Cause Data Exchange (ICDE) project.
In 2017, the first version of the CCF data book (C-book) was published by the Nordic PSA Group (NPSAG). The C-book provides Nordic PSA practitioners with CCF reliability data for the dependency analysis considered in the compulsory probabilistic safety assessment (PSA) of nuclear power plants. The C-book should be considered an important step in the continuous effort to collect and analyse data on CCF of safety components at NPPs, and to improve the quality of data in PSA.
In 2021, a second version of the C-book was published by the NPSAG, reflecting that the amount of collected data has doubled since the first version. This second version presents the methodology for quantification of CCF rates, CCF probabilities and alpha factors for k-out-of-n failures, together with generic CCF reliability data tables supported by sensitivity cases, general trends, and comparisons with other CCF data sources.
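As an indication of how such alpha factors translate into CCF probabilities, the sketch below applies one common variant of the alpha-factor model (the non-staggered-testing formula found in standard CCF references such as NUREG/CR-5485); the event counts are invented for illustration and are not C-book data.

    # Illustrative alpha-factor CCF quantification (non-staggered-testing
    # variant, cf. NUREG/CR-5485); the event counts below are invented.
    from math import comb

    def alpha_factors(counts):
        """Point estimates alpha_k = n_k / sum(n_j) from counts n_1..n_n of
        events failing exactly k of the n components in the group."""
        total = sum(counts)
        return [n_k / total for n_k in counts]

    def ccf_probability(k, n, alphas, q_total):
        """Probability of a basic event failing a specific group of k of the
        n components, given the total failure probability q_total."""
        alpha_t = sum((j + 1) * a for j, a in enumerate(alphas))
        return k / comb(n - 1, k - 1) * alphas[k - 1] / alpha_t * q_total

    # Hypothetical counts for a group of n = 3: 100 single, 5 double, 2 triple.
    alphas = alpha_factors([100, 5, 2])
    print(ccf_probability(3, 3, alphas, q_total=1e-3))  # approx. 5.2e-5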
The C-book includes a comprehensive, validated procedure covering all steps from CCF event input data, via event impact vectors, to final CCF parameters. The procedure provides a common basis of methods and guidelines for data classification and assessment, and it establishes a format that allows data to be shared for quantification and raw data to be interpreted for exchange and use in quantification models.
The input data to the analyses represent homogeneous subsets of the data reported to the ICDE, where events are analysed and reviewed by a team to assure quality. The quantification tasks are presented in a transparent way, including the data analysis for impact vector construction and the Bayesian parameter estimation.
The sensitivity cases address important aspects of data subsets, especially by separating design-related and human-error-related events.
In conclusion, the updated CCF data book, which contains generic and plant-specific CCF rates, probabilities, and alpha factors, will improve the quality of data for the dependency analysis in the PSA of nuclear power plants.
Bio: Mattias has about 11 years of experience with safety analysis within the nuclear power industry in Sweden. Mattias has participated in several international research projects with focus on CCF (common-cause failure) analysis, addressing both qualitative and quantitative aspects.
Country: SWE Company: Risk Pilot AB Job Title: Technical consultant
Paper 4 MA26
Lead Author: Vladimir Marbukh
Towards Reliability/Security Risk Metrics for Large-Scale Networked Infrastructures: Work in Progress
Realistic systems contain potential vulnerabilities which can be activated by natural events or by malicious agents. System reliability/security risk metrics quantify the potential economic and other system losses due to the possible activation of these vulnerabilities. Evaluating these metrics requires assessing the unconditional probabilities of successful activation of various subsets of potential vulnerabilities. These probabilities are affected by (a) Dependency Relationships (DeR) among potential system vulnerabilities, encoded by fault trees, attack graphs, etc., and (b) conditional probabilities of the individual exploits, given that all the prerequisites for a potential vulnerability are satisfied. While reliability models assume fixed conditional probabilities of individual exploits, security models assume the possibility of adversarial selection of these probabilities. Combining a cycle-free system DeR with the conditional probabilities of individual exploits allows one to employ the powerful methodology of Bayesian Network (BaN) analysis for evaluating the system reliability/security risk metrics.
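For a DeR without cycles, this BaN evaluation amounts to applying the chain rule along the dependency graph. A minimal sketch, with hypothetical probabilities and a two-exploit dependency in which exploit B requires exploit A:

    # Minimal chain-rule computation for an acyclic dependency relationship:
    # exploit B can only be attempted once exploit A has succeeded.
    # All probabilities are hypothetical.
    p_a = 0.1          # conditional probability that exploit A succeeds
    p_b_given_a = 0.5  # conditional probability of exploit B, given A

    # The chain rule yields the unconditional activation probabilities:
    p_b = p_a * p_b_given_a  # unconditional probability of exploit B
    print(p_a, p_b)          # 0.1 0.05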
However, the DeR of highly interconnected large-scale networked infrastructures often contain cycles. In such cases, the combination of the system DeR with the conditional probabilities of individual exploits may not uniquely determine the corresponding unconditional probabilities of exploits, and thus the system reliability/security risk metrics. Existing attempts to resolve this issue are not completely satisfactory, since they effectively alter the system DeR, either by removing cycles or by imposing additional constraints which are not intrinsic to the system. Another unresolved issue is modelling security risk metrics by accounting for adversarial selection of the conditional probabilities of the individual exploits.
Our work in progress proposes resolving the issue of cycles in the system DeR by assuming that the unconditional probability distribution of the successful exploits maximizes entropy over all probability distributions on the subsets of the feasible vulnerabilities which are consistent with the system DeR. Besides the methodological plausibility of this procedure, which yields the “most typical” unconditional distribution consistent with the empirical data, its advantages include consistency with the BaN approach for DeRs without cycles and computational advantages for large-scale infrastructures, obtained by leveraging approximations developed in statistical physics. Our analysis under the mean-field approximation suggests that cycles in the system DeR may indicate a possibility of cascading failures and are thus essential for systemic risk assessment. We also discuss the evaluation of system security metrics by replacing expected system losses with risk-adjusted system losses based on Value at Risk (VaR), Conditional VaR (CVaR), or Entropic VaR (EVaR) measures of risk-adjusted performance.
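Stated compactly, in our notation rather than the author's, the proposed selection rule is the maximum-entropy program below, where S ranges over subsets of the potential vulnerabilities and \mathcal{C} denotes the set of distributions consistent with the system DeR and the conditional exploit probabilities:

    % Maximum-entropy selection of the unconditional exploit distribution
    % (illustrative notation): \mathcal{C} is the set of distributions over
    % subsets S of potential vulnerabilities consistent with the system DeR
    % and the conditional probabilities of individual exploits.
    p^{*} = \arg\max_{p \in \mathcal{C}} H(p),
    \qquad
    H(p) = -\sum_{S} p(S)\,\log p(S).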
Bio: Vladimir Marbukh was born and educated in Leningrad, Soviet Union. His professional interests cover various areas of applied mathematics. In 1989 he moved to the United States. Since then, his affiliations have included the Mathematics Department of Bell Labs at Murray Hill and the NIST Mathematical and Computational Sciences Division in Gaithersburg, Maryland.
Country: USA Company: NIST Job Title: technical staff member