Lead Author: Alexandra Loumidis Co-author(s): Todd Paulos, todd.paulos@jpl.nasa.gov
Andrew Ho, andrew.h.ho@jpl.nasa.gov
Douglas Sheldon, douglas.j.sheldon@jpl.nasa.gov
Markov Modeling of Redundant System-On-Chip (SOC) Systems
The increasing cost and decreasing availability of space-rated
custom System on Chip (SoC) components have led to interest in
using commercial components from terrestrial industries in space
environments. Along with this interest comes a need to
understand how the reliability of the chips, including common
cause upsets, can impact mission risk and the probability of
mission success.
This project modeled the failure and recovery of a system
consisting of two Qualcomm Snapdragon processors with five
upset types each. Four Markov models were created, modeling both
recoverable and non-recoverable systems. Models 1 through 3
assume the system is recoverable while Model 4 was created to
account for non-recoverable upsets. Model 1 assumes the rate of
recovering two upset items is the same as the rate of recovering one
upset item. Model 2 assumes that items recover one at a time at two
different recovery rates. Model 3 assumes that the boot-up time of a
second processor is greater than the recovery time for a single
processor.
MATLAB scripts were produced to plot availability of each mode...
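To make the modeling approach concrete, the following is a minimal sketch (not the authors' MATLAB scripts) of a Model-1-style continuous-time Markov availability calculation for a dual-redundant pair; the upset and recovery rates, and the three-state reduction of the five upset types, are invented for illustration.

```python
# Minimal sketch of a Model-1-style Markov availability calculation for a
# dual-redundant processor pair. All rates are illustrative placeholders,
# not values from the paper.
import numpy as np
from scipy.linalg import expm

lam = 1e-3   # hypothetical upset rate per processor (per hour)
mu = 0.5     # hypothetical recovery rate, same whether one or two items are upset

# States: 0 = both healthy, 1 = one upset, 2 = both upset.
# Recovery from state 2 is interpreted here as one transition restoring both items.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [      mu,          0.0,  -mu],
])

p0 = np.array([1.0, 0.0, 0.0])        # start with both processors healthy
for t in (1.0, 10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)              # state probabilities at time t
    print(f"t = {t:6.0f} h  availability = {p[0] + p[1]:.6f}")  # up if >= 1 healthy
```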
Lead Author: Todd Paulos Co-author(s): Andrew Ho andrew.h.ho@gmail.com
Curtis Smith curtis.smith@inl.gov
Reliability Modeling of Complex Components Using Simulation
This paper is a continuation of papers presented at the 15th and 16th Probabilistic Safety Assessment and Management Conferences, which discussed modeling failure modes of complex components and the effects of censor bias. The first paper demonstrated how the typical method of treating failure modes as exponential in nature gives optimistic predictions of how improvements to subcomponents will perform in the real world. Instead of relying on traditional analytical methods, it is more accurate to model the failure modes as a race in time; unfortunately, this does not give a closed-form solution. A simulation solution was presented that demonstrated the optimistic predictions of the classical techniques. The second paper demonstrated the effect of censor bias when dealing with large amounts of success-only testing at the failure modes. The censor bias, too, contributed to optimistic results.
In our quest for closed-form solutions and simplicity, the world of reliability engineering relies on everything behaving like an exp...
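To make the race-in-time idea concrete, here is a toy Monte Carlo sketch under assumed Weibull failure modes (all parameters invented); it shows how the simulated mission failure probability diverges from the classical constant-rate composition.

```python
# Toy Monte Carlo of failure modes racing in time. Weibull shapes and scales
# are invented; the point is only the contrast with the exponential shortcut.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
t_mode1 = rng.weibull(2.0, n) * 8000.0   # wear-out mode 1 (hours)
t_mode2 = rng.weibull(3.0, n) * 6000.0   # wear-out mode 2 (hours)
t_comp = np.minimum(t_mode1, t_mode2)    # the component fails at the first mode to occur

mission = 2000.0
p_sim = np.mean(t_comp < mission)

# Classical shortcut: give each mode a constant rate from its mean life, then add rates.
rate = 1.0 / t_mode1.mean() + 1.0 / t_mode2.mean()
p_exp = 1.0 - np.exp(-rate * mission)

print(f"race-in-time simulation : P(fail by {mission:.0f} h) = {p_sim:.4f}")
print(f"exponential composition : P(fail by {mission:.0f} h) = {p_exp:.4f}")
```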
Lead Author: Courtney Otani Co-author(s): Mihai Diaconeasa, madiacon@ncsu.edu;
Steven Prescott, steven.prescott@inl.gov;
Arjun Earthperson, aarjun@ncsu.edu;
Robby Christian Robby.Christian@inl.gov
Probabilistic Methods for Cyclical and Coupled Systems with Changing Failure Rates
Advancements in the design of nuclear systems with automated control features have led to increasingly complex coupled systems and dynamic failure scenarios. This is especially true for microreactor designs, where components are not expected to be replaced during the reactor's lifetime, so the life of the system needs to be evaluated in addition to its safety. Modeling these sequences of time-dependent events requires addressing cyclical processes and changing failure rates in ways that represent the true dynamics of the system, in contrast to a single sampling of a component's time to failure. This research presents two distinct analytical methods, for several failure distributions, that evaluate a final time to failure in scenarios where the time to failure must be sampled multiple times. The first method applies when evaluating a component whose failure rate increases due to an outside event occurring after the initial sampling but before the initially sampled time to failure. The second method applies when evaluating multiple identical components or a component that has been replaced...
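As a concrete illustration of the first situation, the following sketch resamples a time to failure after an outside event raises the failure rate, assuming exponential lifetimes and placeholder rates (the paper covers several distributions; this is not the authors' derivation).

```python
# Sketch of the first method for the exponential case: a component is sampled
# at rate lam1, an outside event at t_event raises the rate to lam2, and the
# remaining life is re-sampled. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(7)

def time_to_failure(lam1, lam2, t_event):
    """Final failure time when the rate jumps from lam1 to lam2 at t_event."""
    t1 = rng.exponential(1.0 / lam1)
    if t1 <= t_event:
        return t1                          # failed before the stressing event
    # Exponential lifetimes are memoryless, so survival to t_event lets us
    # simply restart the clock at the higher rate.
    return t_event + rng.exponential(1.0 / lam2)

samples = [time_to_failure(1e-4, 1e-3, 5000.0) for _ in range(100_000)]
print(f"mean time to failure with the rate change: {np.mean(samples):,.0f} h")
```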
Lead Author: Fernando Ferrante Co-author(s): Ken Kiper, kiperkl@westinghouse.com
Carroll Trull, trullca@westinghouse.com
Matt Degonish, degonimm@westinghouse.com
Development of Good Practices in the Implementation of Common Cause Failure in PRA Models
Modeling Common Cause Failure (CCF) quantitatively in Probabilistic Risk Assessment (PRA) models using parametric approaches has become significantly complex and challenging for risk-informed decision-making (RIDM) purposes, as the state-of-practice now includes an extensive consideration of CCF modeling. Different approaches are needed for modeling CCF in PRA models, depending on the specialized topic (e.g., support system initiating event, inter-system, functional dependency).
How to model dependencies appropriately is a critical aspect, as different topics may be better addressed via different solutions (e.g., CCF basic event derived via parametric modeling versus direct inclusion of dependencies in the PRA logic structure). For example, the distinction between “inter-system” and “intra-system” CCF is artificial and potentially misleading, as different types of dependencies can be misinterpreted under each term. A better approach to distinguishing how dependencies and CCF need to be handled, regardless of such artificial definitions, would better serve the PRA community. ...
Session Chair: Johan Sorman (johan.sorman@lr.org)
Paper 1 AG46
Lead Author: Andrei Gribok Co-author(s): Curtis L. Smith curtis.smith@inl.gov
Support Vector Analysis for Computational Risk Assessment, Decision Making, and Vulnerability Discovery in Complex Systems
A primary limitation of modern probabilistic risk assessment (PRA) is that, since the risk scenarios and system vulnerabilities are manually developed by analysts, they critically depend on the analysts’ qualifications, available information about the system, and ability to understand and “discover” the system vulnerabilities (as well as to properly describe them using Boolean logic). In other words, modern PRA is a method of documenting analysts’ discoveries rather than suggesting new, previously unknown risks. The paper describes a method for auto-detecting possible vulnerabilities in system designs, thus revealing previously unseen issues and reducing human error/costs by enabling analysts to focus on critical areas via intelligent, efficient sampling of the system’s parameter space.
For existing systems with available fault trees, we developed a proof-of-principle methodology in which the proposed approach first stochastically generates large volumes of training data by “rewiring” fault trees of the target system and then learns the most important fea...
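The learning step can be illustrated with a heavily simplified toy: label samples generated from an invented fault tree with its top-event state, then read component importance from a linear support vector machine's weights. Everything in the sketch (tree structure, event probabilities) is an assumption for illustration, not the authors' implementation.

```python
# Heavily simplified toy of the "learn important features" step: label samples
# with an invented fault tree's top event and read component importance from a
# linear SVM's weights. Tree structure and probabilities are assumptions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 20_000
# Basic events A..D fail (1) with assumed probabilities.
x = (rng.random((n, 4)) < [0.1, 0.1, 0.3, 0.3]).astype(int)
a, b, c, d = x.T
top = a | (b & c)        # hypothetical top event: A OR (B AND C); D is pure noise

clf = LinearSVC(C=1.0, max_iter=10_000).fit(x, top)
for name, w in zip("ABCD", clf.coef_[0]):
    print(f"event {name}: weight {w:+.3f}")   # A should dominate; D should be near 0
```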
Lead Author: Lavínia Maria Mendes Araújo Co-author(s): Isis Didier Lins - isis.lins@ufpe.br, Diego Andrés Aichele Figueroa - diego.aichele@ufpe.br, Caio Bezerra Souto Maior - caio.maior@ufpe.br, Marcio das Chagas Moura - marcio.cmoura@ufpe.br, Enrique Lopez Droguett - eald@g.ucla.edu
A REVIEW OF QUANTUM(-INSPIRED) OPTIMIZATION METHODS FOR SYSTEM RELIABILITY PROBLEMS
Many industrial systems demand equipment with high levels of reliability. Over the years, companies and academia have developed mathematical methods and advanced engineering techniques to help maintain active and reliable systems. There are several optimization problems in this context, notably (1) the redundancy allocation problem (RAP), (2) the reliability allocation problem, and (3) the reliability-redundancy allocation problem (RRAP). System reliability design challenges involve multiple, frequently conflicting objectives, yet a few are universal, such as maximizing reliability and minimizing costs. Many solution methods have already been applied to these problems, e.g., dynamic, linear, integer, and nonlinear programming, as well as classical metaheuristics based on evolutionary algorithms, such as the Genetic Algorithm (GA). In any case, these methods are tailored to the specificities of the systems. However, these approaches can be very computationally expensive depending on the problem instances. Meanwhile, quantum computing has ...
Paper 3 AN79
Lead Author: Anders Olsson Co-author(s): Francesco Di Dedda (francesco.didedda@vysusgroup.com)
Lovisa Nordlöf (lovisa.nordlof@okg.uniper.energy)
Availability and reliability analysis of Independent Core Cooling at Oskarshamn 3
In unit 3 at the Oskarshamn NPP, an independent core cooling function (OBH) has been installed. Besides incorporating the OBH function in the PSA, an analysis has also been performed of its availability and reliability. As the OBH function is designed to operate over long periods and under severe conditions, the mission time was extended to 72 hours.
Even though a “standard PSA” is supposed to be a realistic assessment, compromises are necessary in terms of assumptions and simplifications, which may or may not contribute to the results of the PSA. Such assumptions and simplifications are of course an important aspect of the uncertainties. When a single system or function is analysed, their importance may be more significant. A crucial part of the analysis was therefore to identify and reduce embedded conservatisms in the PSA.
As a prolonged mission time was studied, another important aspect covered in the analysis was the repair of failed components. As the analysis covered both power operation mode and shutdown, different conditions for conducting repair were consid...
Name: Anders Olsson (anders.olsson@vysusgroup.com)
Paper 4 EM255
Lead Author: Enrique Meléndez Co-author(s): Miguel Sánchez-Perea, msp@csn.es
César Queral, cesar.queral@upm.es
Marcos Cabezas
Sergio Courtin, sergio.courtin@upm.es
Rafael Iglesias
Julia Herrero-Otero
Alberto Garcia-Herranz
Carlos París
Standardized Probabilistic Safety Assessment Models: Applications of SPAR-CSN Project
Regulatory activity requires that oversight of licensee performance be conducted from an independent position. This position is better served when the regulatory body develops its own methodologies and tools. In particular, in the matter of probabilistic risk analysis, even if the licensees' analyses are subject to peer review and/or are reviewed by the regulatory body, it is very difficult to manage the large number of hypotheses and assumptions behind the model. Thus, the development of a PRA model for regulatory use improves knowledge of the NPP risks and can be seen as an enhancement of regulatory practice.
In this regard, the Spanish Regulatory Body (CSN), in collaboration with the Universidad Politécnica de Madrid (UPM), has developed its own generic standardized model (SPAR-CSN) for 3-loop PWR-WEC designs. The present paper shows an example of the application of the model to an event that occurred in a nuclear power plant....
Session Chair: Vincent Paglioni (paglioni@umd.edu)
Paper 1 JJ250
Lead Author: Andreas Bye Co-author(s): Jeffrey A. Julius, jjulius@jensenhughes.com
Dr. Ronald Boring, ronald.boring@inl.gov
HRA Challenges in New Nuclear Power Plant Designs
Digital instrumentation and control systems are being added to operating nuclear power plants (NPPs) and included in the designs of the next generation of NPPs. Further, the newer advanced reactors are not only incorporating digital systems but are also increasing the amount of automation to improve plant safety and decrease reliance on human operators. In each of these cases, advancements in system design such as instrumentation, controls, automation, and data collection have significantly altered the human-machine interface. Human factors insights related to tasks, procedures, training, and allocation of functions help to improve safety and reliability. Operating experience with NPPs and other systems tells us that, improvements notwithstanding, a design is not guaranteed to be free from errors. To evaluate the effectiveness of these improvements in digital I&C systems, a human reliability analysis (HRA) as part of a risk assessment can provide insights into what is likely to go wrong and the consequences of errors. For the first generation of power ...
Lead Author: Ahmad Al-Douri Co-author(s): Camille S. Levine clevine1@umd.edu
Katrina M. Groth kgroth@umd.edu
Identifying Human Failure Events (HFEs) for External Hazard Probabilistic Risk Assessment
In recent years, several advancements in nuclear power plant (NPP) probabilistic risk assessment (PRA) have been driven by increased understanding of external hazards, plant response, and uncertainties. However, major sources of uncertainty associated with external hazard PRA remain. One source discussed in this study is the close coupling of physical impacts on plants and overall plant risk under hazard events, due to the significant human actions carried out to enable plant response and recovery from natural hazard events. This makes human reliability and human-plant interactions important elements to consider in enhancing PRA to address external hazards.
One of the challenges in considering human responses is that most existing human reliability analysis (HRA) models, such as SPAR-H and THERP, were not developed for assessing ex-control room actions and hazard response. To support this new scope for HRA, HRA models will need to be developed or modified to support identification of human activities, causal factors, and uncertainties inherent in external hazard respons...
Lead Author: Seungwoo Lee Co-author(s): Yongjin Lee, k730lyj@kins.re.kr
Namchul Cho, namchul.cho@kins.re.kr
INSIGHTS FOR HUMAN RELIABILITY ANALYSIS METHOD GAINED FROM ROOT CAUSES ANALYSIS OF HUMAN ERROR EVENTS OCCURRED IN KOREA
Human error is known to be a dominant contributor to the risk of complex systems such as nuclear power plants (NPPs). In analyzing human error, both a prospective approach, whose purpose is to estimate the probability of human error quantitatively, and a retrospective approach, whose purpose is to identify the root causes of human error qualitatively, are used.
Regarding the prospective approach, various methods have been suggested to assess human reliability in the field of risk assessment of nuclear power plants. Most of these approaches revisit the results of the retrospective approach to develop their own models for estimating human error probability or to describe mechanisms of human error. It is also notable that qualitative analysis is one of the key steps in most of the prospective approaches.
In this manner, this study aims to provide insights for current HRA methods by investigating the results of the retrospective approach. When a human error occurs in Korea, investigators from KINS (Korea Institute of Nuclear Safety) analyze the human error using HuRAM+ (Human related eve...
Lead Author: Jinkyun Park Co-author(s): Inseok Jang, isjang@kaeri.re.kr
Jooyoung Park, jooyoung.Park@inl.gov
Ronald L. Boring, Ronald.Boring@inl.gov
Thomas A. Ulrich, thomas.ulrich@inl.gov
A framework to integrate HRA data obtained from different sources based on the complexity scores of proceduralized tasks
Since the TMI accident, PSA (Probabilistic Safety Assessment), or PRA (Probabilistic Risk Assessment), has evidently been used as one of the representative techniques for enhancing the safety of nuclear power plants (NPPs) by visualizing the catalog of potential hazards in a systematic way. Since human error represents one of the potential hazards, diverse HFEs (Human Failure Events) should be incorporated into the development of the PSA model. Typical HFEs include “the purpose of the task cannot be achieved” or “the task fails to be completed” [1]. Accordingly, in terms of conducting the PSA, it is indispensable to quantify the likelihood of HFEs (or Human Error Probabilities, HEPs). For this reason, many kinds of HRA (Human Reliability Analysis) methods have been proposed over the last several decades.
In general, the HRA process can be done with three steps: (1) task analysis, (2) qualitative analysis, and (3) quantitative analysis. Brief explanations on these steps are as follows: “Task analysis is the process of collecting and analyzing relevant information on th...
A modeling framework for assessing resilience of urban infrastructure systems considering multiple interdependency and uncertainty
1. Introduction
Damage to infrastructure during disasters is a major challenge for countries around the world, and various measures, such as seismic retrofitting of infrastructure facilities, are put in place to mitigate it. Simulation-based evaluations are commonly used to determine the most appropriate measure, and various modeling frameworks have been developed. Many of these frameworks focus on reproducing infrastructures precisely, but as urban systems are complex and composed of a variety of interacting elements, it is necessary to include not only physical lifeline infrastructures but also human activities that are important to society. Against this background, Kanno et al. developed a human-centered modeling framework of urban systems that captures various types of interdependencies underlying urban sociotechnical and socioeconomic systems by integrating three subsystems into the model: civil life, manufacturing/service industry, and lifeline infrastructure. Following that, Wakayama et al. applied this model to optimize the post-disaster recovery of a water supply system and showed its practical feasibility. However, since those simulations only assumed specific fixed disaster scenarios, cases with low probability of occurrence but extensive damage could not be considered. Therefore, we developed a modeling framework that can consider both interdependency and uncertainty, by modifying the model of the previous study to enable random scenario generation and applying the Monte Carlo method. To evaluate the validity of the proposed framework, we compared two sets of results: with and without simple post-disaster countermeasures.
2. Method
2.1 Overview of the model
The simulation model consists of three main subsystems: civil life, manufacturing/service industry, and lifeline infrastructure, and nine different types of interdependencies existing within and between these subsystems are considered. For example, citizens need to use lifelines to live, while lifelines need workers to commute to the lifeline facilities to work properly. Lifelines are expressed as multi-layered networks of links and nodes, with each link having parameters that represent the connection and length. Enterprises and citizen agents are placed on the road network. When a disaster occurs, it is assumed that some lifeline links will be out of service, and repair squads will be dispatched to repair those links. The resilience index of the system is evaluated by calculating the resilience triangle throughout the simulation, where performance of the whole urban system is calculated as a linear sum of each of the subsystems, and the objective is to minimize the resilience index. A brute-force Monte Carlo method is applied to this simulation, and the lifeline links that get destroyed after the disaster are sampled from a uniform distribution for each run.
2.2 Application to the water supply network
In this study, we focused on the resilience of the water supply system. Each water pipe was given an additional parameter indicating whether it is a larger-diameter main pipe, and we assumed that only water pipes are damaged when a disaster occurs. We then compared the results with and without restoration planning. With a restoration plan, main pipes were prioritized, and among pipes of the same type, priority was given to the pipe closer to upstream. Without a restoration plan, the order of restoration was selected randomly.
2.3 Simulation Settings
The road network model consists of 2310 nodes and 1951 links, on which 6990 citizen agents (each representing 4 people) and 628 enterprises are placed. As for the water distribution network, 4910 pipes are placed, and 13 water repair squads work to repair the pipes after a disaster occurs. 100,000 simulations were conducted, each lasting 20 days, with a disaster occurring on the fifth day in which 100 water pipes are randomly destroyed.
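A minimal sketch of the Monte Carlo loop described above is given below; an invented per-pipe importance weight stands in for the full agent-based network simulation, so it reproduces the with/without-plan comparison only qualitatively, not the paper's numbers.

```python
# Minimal sketch of the Monte Carlo loop above, with an invented per-pipe
# importance weight standing in for the full network simulation. It mimics the
# with/without-plan comparison qualitatively, not the paper's numbers.
import numpy as np

rng = np.random.default_rng(42)
N_PIPES, N_DAMAGED, N_RUNS = 4910, 100, 1000
REPAIRS_PER_DAY, DAYS = 13, 20
weight = rng.uniform(0.2, 1.0, N_PIPES)      # toy importance of each pipe
weight /= weight.sum()

def run_once(prioritize):
    broken = rng.choice(N_PIPES, N_DAMAGED, replace=False)   # uniform damage sample
    order = broken[np.argsort(-weight[broken])] if prioritize else rng.permutation(broken)
    queue, lost = list(order), 0.0
    for _ in range(DAYS):
        lost += weight[queue].sum()          # resilience triangle: integrate lost performance
        del queue[:REPAIRS_PER_DAY]          # squads repair the next pipes in the plan
    return lost

for plan in (False, True):
    r = [run_once(plan) for _ in range(N_RUNS)]
    print(f"restoration plan={plan}: mean={np.mean(r):.4f}  max={np.max(r):.4f}")
```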
3. Results
Without a restoration plan, the average resilience index was 16.1 and the maximum was 758.2. With a restoration plan, the average was 14.4 and the maximum was 82.5. In both cases, over 99% of the simulations resulted in a resilience index below 50, so the worst cases, where the index exceeds 50, could have been overlooked without the Monte Carlo method. In addition, while the numerical comparison alone shows only a slight improvement in the average resilience index, a graphical comparison shows a significant reduction in the possibility of an extremely high one.
4. Conclusion
We developed a modeling framework based on previous research to enable resilience assessment of urban infrastructure systems considering multiple interdependency and uncertainty. Simulation results showed that the effectiveness of measures can be evaluated from more perspectives than by comparing numbers obtained from a simulation with a fixed scenario. The next step will be randomizing the scale of the disaster and adding attributes that affect the failure probability of the pipes. As the number of random factors increases, the variance of the simulation results will increase and more simulation runs will be needed, so we will apply variance reduction methods to reduce computational effort. After that, we will implement more reality-based pre-disaster and post-disaster measures and compare their effectiveness, so that the model can be put to practical use.
References
T. Kanno, T. Suzuki, S. Koike, and K. Furuta, “Human centered modeling framework of multiple interdependency in urban systems for simulation of post disaster recovery processes,” Cognition, Technology & Work, 21(6), pp. 301-316, 2019.
K. Wakayama, T. Kanno, Y. Kawase, H. Takahashi, and K. Furuta, “Comparison of the post-disaster recovery of water supply system by GA optimization and heuristics,” Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference, 2020.
Lead Author: Brian D. Ehrhart Co-author(s): Benjamin B. Schroeder bbschro@sandia.gov
Ethan S. Hecht ehecht@sandia.gov
Quantitative Risk Assessment Sensitivity Study as the Basis for Risk-Informed Consequence-Based Setback Distance Requirements for Liquid Hydrogen Storage Systems
A quantitative risk assessment on a representative liquid hydrogen storage system was performed to identify the main drivers of individual risk and provide a technical basis for revised separation distances for bulk liquid hydrogen storage systems in regulations, codes, and standards requirements. The quantitative risk assessment framework in Hydrogen Plus Other Alternative Fuels Risk Assessment Models (HyRAM+) was used, and multiple relevant inputs to the risk assessment (e.g., system pipe size, ignition probabilities) were individually varied. For each set of risk assessment inputs, the individual risk as a function of the distance away from the release point was determined, and the risk-based separation distance was determined from an acceptable risk criterion. These risk-based distances were then converted to equivalent leak size using consequence models that would result in the same distance to selected hazard criteria (i.e., extent of flammable cloud, heat flux, and peak overpressure). The leak sizes were normalized to a fraction of the flow area of the source piping. The resul...
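The distance-determination step can be sketched as follows, with an invented risk curve and an assumed acceptance criterion (the study's actual HyRAM+ inputs and criterion are not reproduced here).

```python
# Sketch of the distance-determination step: find where individual risk falls
# below an acceptance criterion. The risk curve and criterion are invented,
# not HyRAM+ outputs.
import numpy as np

distance = np.linspace(1.0, 100.0, 200)      # m from the release point
risk = 1e-3 * np.exp(-distance / 15.0)       # toy individual risk (per year)
criterion = 2e-5                             # assumed acceptance criterion (per year)

# Interpolate the crossing point on a log-risk scale (risk decreases with distance).
setback = np.interp(np.log(criterion), np.log(risk[::-1]), distance[::-1])
print(f"risk-based setback distance ≈ {setback:.1f} m")
```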
Study of Bridge Damage Prediction Leveraging Kinematic Contact Enforcement-based Moving Load and Deep Learning Techniques
This paper presents a study on the development of hybrid models featuring structural health monitoring (SHM) inspection using bridge weigh-in-motion (BWIM) physics-based models. Artificial intelligence (AI) techniques helped improve structural damage prediction during SHM inspection of bridges. This study introduces a comprehensive assessment of 1) a unique finite element (FE) simulation approach, which leverages the kinematic contact enforcement (KCE) method, verified against vehicle-bridge interaction (VBI) theory, and 2) machine learning (ML) techniques to identify and automatically predict structural damage from the structural response. The KCE method is a new approach to simulating vehicle motion in a BWIM model, used to reproduce actual structural responses to motion. The KCE method thus provides contact conditions between elements (i.e., contact type, material properties, and element moving speed), which enables realistic vehicle motion and structural response to be simulated. The FE model is designed with four different classes of damage with three different ...
Session M05 - Pressurized Water Reactor Owners Group (PWROG) Applications
Session Chair: James Lin (jlin@absconsulting.com)
Paper 1 SU155
Lead Author: Sue Sallade Co-author(s): N. Reed LaBarge (labargnr@westinghouse.com)
Alvin Robertson (robertac@westinghouse.com)
Joe W. Loesch (joseph.w.loesch@xcelenergy.com)
Jim M. Lynde (James.Lynde@exeloncorp.com)
Kyle S. Shearer (sheareks@westinghouse.com)
Laura L. Genutis (genutill@westinghouse.com)
Operational Guidance to Improve Industry Benefit of Alternate Equipment and Strategies
The PWROG has recently championed several efforts to increase collaboration between Probabilistic Risk Assessment (PRA) experts and nuclear power plant operators and procedure writers. These efforts have allowed nuclear power plants to risk-inform plant procedures to reduce risk and promote the health and safety of the public.
A key insight from this collaboration was the need for, and development of, an industry framework for generating written guidance to increase the benefit of alternate strategies. As the industry expands the use of alternate equipment (installed or portable), standardization of terminology, application, and PRA modeling of flexible equipment operation is desired. To that end, a new category of industry Guidelines has been defined: “Flex Support Guidelines Additional Defense in Depth” (FSG+DD). The new Guideline categorization maximizes the use of defense in depth and minimizes the need for flexible equipment to fall into regulatory required maintenance (Maintenance Rule Scope). FSG+DD strategies may be utilized to supplement current ...
Paper 2 RO150
Lead Author: Richard Rolland Co-author(s): Raymond Schneider
schneire@westinghouse.com
Lessons Learned in PRA Modeling of Digital Systems
As the existing nuclear plant fleet ages and evolutionary plants are added to the nuclear generation capacity, the analog safety systems that have been the mainstay of nuclear plant protection systems have started to become obsolete. These obsolescence issues are causing analog systems to be replaced with digital systems. The digital replacements offer several advantages over their analog counterparts, including the ability to self-diagnose failures and place systems in safe, stable states. While these features increase the overall reliability of the system and reduce maintenance costs, they increase the complexity of the system. Digital systems still have the possibility of global failures of the digital safety function via common cause failure of software. It is helpful to build a PRA model for the digital system to fully understand the risk impact of the analog-to-digital transition. The complexity of, and relationships among, the diverse and redundant system components introduce challenges to the modeling of these systems.
This paper discusses developing a digital I&C PRA model and expl...
Lead Author: N. Reed LaBarge Co-author(s): Jayne E. Ritter (jayne.e.ritter@xcelenergy.com)
Roy Linthicum (roy.linthicum@exeloncorp.com)
Kyle S. Shearer (sheareks@westinghouse.com)
Damian Mirizio (mirizids@westinghouse.com)
Benefits and Lessons Learned from PRA / Operations Interface at Nuclear Power Plants
The PWROG has recently championed several efforts to increase collaboration between Probabilistic Risk Assessment (PRA) experts and nuclear power plant operators and procedure writers. These efforts have been successful in promoting realism in the probabilistic representation of the role of the human interface with nuclear power plants (i.e., Human Reliability Analysis (HRA)). Specifically, a number of meetings and workshops have been held with PRA analysts, HRA experts, and representatives from plant operations, procedure writing, and training to review current state-of-practice HRA methods and assumptions and to identify areas that may be overly conservative. Guidance and recommendations have been developed that in some cases challenge state-of-practice HRA assumptions and methodologies. In these cases, specific methods or assumptions have been identified that could be revised to better reflect modern plant operational practices. Other areas of HRA have also been identified as opportunities for future research in order to ensure realistic treatment of the role of human operat...
Lead Author: Roy Linthicum Co-author(s): Mike Powell Michael.Powell@aps.com
Presenter of this paper: Michael Powell (michael.powell@aps.com)
Flex Equipment Reliability Data
Post-Fukushima, utilities have invested significant resources in procuring Flex equipment and developing guidance for using the equipment for beyond-design-basis external hazards. NEI 16-08, “Guidance for Optimizing the Use of Portable Equipment,” urges utilities to leverage this investment by using Flex or other portable equipment to provide additional safety benefits. These safety benefits can be quantified by including Flex equipment in the site-specific PRA models. This can provide additional margin for various risk-informed applications, such as TSTF-505, “Provide Risk-Informed Extended Completion Times - RITSTF Initiative 4b,” Significance Determination Process evaluations, and the Mitigating Systems Performance Index. Modeling Flex equipment in utility PRA models requires the development of reliability data, which is currently unavailable. The PWROG, with support from the BWROG, is currently developing failure data for the most commonly credited Flex equipment. This paper provides the final results of this evaluation, including the approach used in developing the data, comp...
Presenter Name: Michael Powell (michael.powell@aps.com)
Bio: Michael (Mike) E. Powell is Director of Strategic Initiatives at Palo Verde Generating Station (Palo Verde) for Arizona Public Service Company (APS). Powell joined APS/Palo Verde in 1990. Powell has provided leadership to the US nuclear industry; he is currently the Chairman & Chief Operating Officer of the PWROG. In roughly 32 years at the site, Powell has held positions of increasing responsibility in various departments, including leadership roles in nuclear licensing, fire protection, nuclear projects, design engineering, maintenance engineering, and nuclear fuel management. Powell has been a member of Senior Leadership at the station since 2005.
Powell has served as a Technical Consultant to the IAEA and is a contributing author for several IAEA Technical Documents.
Powell holds a bachelor's degree in electrical engineering from the State University of New York at Stony Brook and a master's degree in nuclear engineering from the Georgia Institute of Technology.
Session M11 - Fukushima
Session Chair: Craig Primer (craig.primer@inl.gov)
Paper 1 MA57
Lead Author: Shuhei Matsunaka
Seismic Probabilistic Risk Analysis of Transmission Systems for Kashiwazaki-Kariwa NPS using Deaggregation Hazard taking account of Non-Specified Source Faults
In the 2011 Fukushima Dai-ichi nuclear power plant accident, power transmission facilities were damaged by the earthquake, and the emergency diesel generators failed to perform their intended function due to the subsequent tsunami, resulting in a station blackout (SBO) scenario. Although the seismic reliability of the offsite AC power supply has been evaluated in conventional Seismic Probabilistic Risk Assessment (SPRA), typical practical evaluations in the United States have applied generic fragilities. Previous Kashiwazaki-Kariwa (KK) NPS SPRAs have incorporated the plant-specific fragility of the seismically weakest onsite power components (i.e., ceramic insulators), but this could be a non-conservative evaluation. Although there are evaluation examples for the transmission system, they involve assumptions and limitations that may become key in an actual plant evaluation (such as insufficient consideration of non-specified source faults).
In this study, the seismic reliability of the offsite power supply for the actual plant was evaluated in detail by calcul...
Lead Author: Sung-yeop Kim Co-author(s): Yun Young Choi (choi930121@nims.re.kr)
Soo-Yong Park (sypark@kaeri.re.kr)
Application of Deep Learning Models to Estimate Source Release of NPP Accidents
In the event of a nuclear power plant (NPP) accident, estimation of the source release should be performed quickly and accurately in order to support decisions on public protection. In the case of the Fukushima Dai-ichi NPP accident, even though the System for Prediction of Environmental Emergency Dose Information (SPEEDI) had been developed and prepared, it was not used to support decision making on public protection due to the lack of the source term information that should have been provided to the system. To overcome the limitations of existing methods in terms of quick and accurate source term estimation, this study applies a deep learning approach using various NPP safety parameters as the learning input and releases of radioactive materials as the learning output. A variety of deep learning models were explored, such as an ANN with pre-assigned function, an encoder model of a Transformer followed by a fully connected layer, and a multi-stage Transformer, in order to find and develop an optimized deep learning model to estimate the source release of NPP accidents....
Lead Author: Ali Ayoub Co-author(s): Haruko M Wainwright, hmwainw@mit.edu
Giovanni Sansavini, sansavig@ethz.ch
Closing the Planning-to-Implementation Gap in Nuclear Emergency Response: Lessons Learned and Methodological Advances
Post-accident mitigation and consequence analysis have been subjects of extensive research in the nuclear industry. Strict regulatory guidelines and radiation monitoring networks are usually in place to support the prompt implementation of protective actions (evacuation, sheltering, etc.) in case of emergency. Most of the emergency actions and Emergency Planning Zones (EPZ) are pre-planned based on presumptions, coupling accident scenarios with potential offsite radiological consequences (Probabilistic Risk Assessment and atmospheric transport and dispersion models). However, the Fukushima Daiichi nuclear accident has exposed the challenges in nuclear emergency responses, since the existing plans had to be adapted several times, and monitoring data as well as dispersion codes could not be used as planned, hence aggravating the situation. In this paper, we present a comprehensive review of existing international emergency response plans and guidelines. In addition, we investigate a list of well-documented atmospheric transport and dispersion codes that are typically used for emergency...
Identification of Superimposition Events Induced by a Combination of Seismic and Tsunami Impacts for Developing a Multi-Hazard Probabilistic Risk Assessment Method
The Fukushima-Daiichi Nuclear Power Plant accident in March 2011 revealed the risk that combined seismic and tsunami hazards pose to nuclear power plants (NPPs). To evaluate and improve the safety of NPPs against seismic and tsunami hazards, probabilistic risk assessment (PRA) methods have been developed for each hazard. However, each method focuses on its target single hazard and ignores almost all impacts of the other. For a precise risk assessment of the combination of the two hazards, a new PRA method needs to be developed that considers not only the impact of each hazard but also the impacts of their combination. This study identifies, in general, what kinds of events can be induced by the combination of seismic and tsunami impacts at a typical coastal NPP site. These events are classified based on their causes and the locations where they occur. How each event can be considered in PRA is briefly described. ...
Session M12 - Maintenance Modeling and Optimization
Session Chair: Dusko Kancev (dkancev@kkg.ch)
Paper 1 VI320
Lead Author: Vivek Agarwal Co-author(s): Vaibhav Yadav, Andrei V. Gribok, Matthew Yarlett, and Brad Diggans
Preventive Maintenance Optimization based on Historical Data
Nuclear power plants (NPPs) follow an established maintenance plan to ensure safe and reliable plant operation. These established maintenance plans are part of a preventive maintenance (PM) strategy and are defined for all equipment and systems at a plant site. For the time-based PM tasks, workers from electrical, mechanical, and instrumentation and controls maintenance perform different activities, like inspection, calibration, replacement, and refurbishment, at a defined interval, usually without taking into consideration the condition of the equipment and system. Due to the high cost and labor-intensive nature of the PM strategy, it is important to optimize these intervals through PM Optimization (PMO). In this presentation, historical data for a motor-driven pump are used to evaluate the change in the risk of failure of a system or component if these PM intervals are revised. The presentation will also discuss the potential cost savings from the recommended change in interval, with minimal to no added risk of failure of the system. ...
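The interval-revision question can be illustrated with a small sketch, assuming a Weibull life model with placeholder parameters rather than the pump's actual fitted values.

```python
# Sketch of the interval-revision question under an assumed Weibull life model;
# the shape/scale below are placeholders, not the pump's fitted values.
import numpy as np

beta, eta = 2.5, 60_000.0                    # assumed Weibull shape and scale (hours)
weibull_cdf = lambda t: 1.0 - np.exp(-(t / eta) ** beta)

for interval in (8_760.0, 17_520.0):         # current 1-year vs. proposed 2-year PM
    print(f"PM every {interval / 8760:.0f} y: "
          f"P(failure before next PM) = {weibull_cdf(interval):.2e}")
```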
Risk Of Maintenance Resource Sharing In Transport Systems
The authors investigate the problem of maintenance resource sharing for transport systems in the context of operational risk assessment. The paper includes a short introduction to maintenance problems and a discussion of maintenance resource sharing issues in the effective performance of transport systems. A three-parameter risk assessment ratio is then introduced; it includes the probability of disruption occurrence, the consequences of disruption occurrence, and a new measure characterizing maintenance resource availability. The problem of maintenance resource availability is then analyzed and discussed. Finally, a short case study is introduced. The paper makes it possible to identify research gaps and future research directions connected with the optimization of maintenance problems in transportation organizations. ...
Lead Author: Tamer Tevetoglu Co-author(s): Bernd Bertsche bernd.bertsche@ima.uni-stuttgart.de
A Machine Learning Approach to Enhance the Information on Suspensions in Life Data Analysis
Increasing digitalization and implementation of sensors in systems result in high data availability, which enables and benefits data-driven approaches. Commonly, these approaches revolve around predictive maintenance, anomaly detection, or clustering. In this paper, we analyze the practicality and performance of life data analyses based on neural networks. To this end, the Weibull analysis is extended with a machine learning approach and compared with conventional approaches in a laboratory test setup.
Reliability engineers usually face budget and time constraints regarding testing strategies. These constraints manifest as an inability to accurately verify a system's reliability with a pre-defined confidence due to small sample sizes, an insufficient number of failures from testing, or an inadequate choice of life data analysis methods. Conventional approaches in life data analysis counteract these constraints by taking suspensions into account or by allowing the bias to be corrected when computing parameter estimates and confidence bounds. Hence, engineers have only a limited number of tools in...
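As a reference point for the conventional baseline mentioned above, here is a minimal sketch of a maximum-likelihood Weibull fit that accounts for suspensions, run on synthetic data rather than the paper's laboratory test setup.

```python
# Minimal sketch of the conventional baseline: a maximum-likelihood Weibull fit
# that accounts for suspensions (right-censored units), on synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
true_beta, true_eta = 2.0, 1000.0
t = rng.weibull(true_beta, 50) * true_eta
censor_time = 900.0
failed = t < censor_time                     # units still running at 900 h are suspensions
t = np.minimum(t, censor_time)

def neg_log_lik(log_params):
    beta, eta = np.exp(log_params)           # optimize in log space to keep both positive
    z = (t / eta) ** beta
    log_pdf = np.log(beta / eta) + (beta - 1) * np.log(t / eta) - z
    return -(log_pdf[failed].sum() + (-z[~failed]).sum())  # suspensions add survival terms

res = minimize(neg_log_lik, x0=np.log([1.0, t.mean()]), method="Nelder-Mead")
beta_hat, eta_hat = np.exp(res.x)
print(f"beta ≈ {beta_hat:.2f}, eta ≈ {eta_hat:.0f} (true: {true_beta}, {true_eta:.0f})")
```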
Session Chair: Jeffrey Julius (jjulius@jensenhughes.com)
Paper 1 MI240
Lead Author: Michelle Kichline Co-author(s): Jing Xing, Jing.Xing@nrc.gov
James Chang, James.Chang@nrc.gov
Dependency Analysis Using the Integrated Human Event Analysis System Human Reliability Analysis Methodology
Dependency in the context of human reliability analysis (HRA) refers to the impact of success or failure of a human action on performance of subsequent human actions. Existing dependency models assess the level of dependency between two consecutive human failure events (HFEs) based on the coupling factors or commonalities that exist for both HFEs. The U.S. Nuclear Regulatory Commission (NRC) developed a new dependency model that is informed by behavioral and cognitive science and expands on existing dependency models by identifying the specific cognitive failure modes (CFMs), performance influencing factors (PIFs), and PIF attributes that are impacted by dependency. This new dependency model identifies and evaluates how failure of the first human action affects the context of subsequent human actions. The NRC presents the model in NUREG-2198, “The General Methodology of an Integrated Human Event Analysis System (IDHEAS-G),” issued May 2021. The NRC staff developed IDHEAS-G as a new HRA methodology for agency use. IDHEAS-G is a new general HRA methodology that can be used to devel...
Models and Knowhow for Human Reliability Analysis on Portable Equipment
The use of portable equipment such as mobile diesel pumps and power generators may be necessary to respond to severe accidents at nuclear power plants; however, the development and application of human reliability analysis (HRA) methods for portable equipment is scarce worldwide. Our past study described the models and know-how we developed for HRA on portable equipment, with an application example in tsunami probabilistic risk assessment. It presented definitions of types of steps, example application rules for the table of estimated human error probabilities (HEPs) in the THERP method for on-site operation/work, and a task timeline diagram developed for organizing actors, locations, and time information (e.g., the time required for executing a task). The present study has improved these and prepared additional models and know-how for HRA on portable equipment, as follows: 1) an example application rule of the Cause-Based Decision Tree Method for the emergency operations facility, 2) a re-definition of types of steps, 3) an example application rule of the table of estimated HEPs in the...
Lead Author: Vincent Paglioni Co-author(s): Torrey Mortenson (torrey.mortenson@inl.gov)
Katrina M. Groth (kgroth@umd.edu)
The human failure event: what is it and what should it be?
The human failure event (HFE) is typically the intersection point between the probabilistic risk assessment (PRA) of a complex engineered system (e.g., a nuclear power plant (NPP)) and a separate human reliability analysis (HRA) methodology. In that lens, the HFE represents the total accounting of human error in the safety assessment of NPPs, and it is a pivotal aspect of any PRA. Identifying and quantifying the human error probability (HEP), the probability associated with an HFE, has typically been the focus of HRA methods in nuclear power. However, “HFE” has not been rigorously defined for either HRA or PRA, so the field of risk analysis lacks a formal definition of what constitutes an HFE. An HFE is, of course, a failure of some sort, but at what level of abstraction should it be defined? Is the HFE simply the result of any failed task, or should it represent something larger in scope than a single task or even a set of tasks can define? In this pape...
Lead Author: Yochan Kim Co-author(s): Sun Yeong Choi (sychoi@kaeri.re.kr)
Jinkyun Park (kshpjk@kaeri.re.kr)
Jaewhan Kim (jhkim4@kaeri.re.kr)
Statistical evidence of minimum human error probability for an emergency event from simulation records
Human reliability analysis estimates the error probabilities of human operators under given contextual conditions in order to predict the probabilistic risk of complex systems. Human error probabilities have often been determined based on limited operating data and simplified cognitive models; hence, there has been a recognition in the field of HRA that a conservative value needs to be assigned to very low error probabilities in consideration of various uncertainty factors. For example, EPRI [2010] suggested assigning different minimum values, such as 1.0E-04, 1.0E-05, and 1.0E-06, according to the contextual factors, by referring to typical hardware failure probabilities. Whaley et al. [2011] recommended using 1.0E-5 as a lower bound of the HEP in consideration of its usefulness in cut-set investigations. However, no objective evidence supporting the reasonableness of these minimum values was presented. This paper attempts to generate statistical information to determine the minimum error probability bound based on human error data from simulation records. The data...
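The kind of statistical bound at issue can be sketched with a simple Bayesian calculation; the opportunity and error counts below are invented, and the Jeffreys prior is one common choice, not necessarily the paper's.

```python
# Sketch of the kind of statistical bound at issue: with k errors observed in
# N opportunities, a Jeffreys Beta(0.5, 0.5) prior gives a posterior for the
# HEP. Counts are invented; the prior choice is ours, not necessarily the paper's.
from scipy.stats import beta

n_opportunities = 1200      # hypothetical task opportunities in simulator records
n_errors = 0                # suppose no errors were observed

posterior = beta(0.5 + n_errors, 0.5 + n_opportunities - n_errors)
print(f"posterior mean HEP       : {posterior.mean():.2e}")
print(f"95% credible upper bound : {posterior.ppf(0.95):.2e}")
# Zero observed errors alone cannot justify an HEP far below ~1/N, which is why
# HRA methods impose explicit minimum values.
```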
Lead Author: Hassane Chraibi Co-author(s): Jean-Christophe HOUDEBINE (Jean-christophe.houdebine@ariste.fr)
Integrated dynamic probabilistic safety assessments with PyCATSHOO: a new coupling approach.
Integrated dynamic probabilistic safety assessment (IDPSA) approaches provide a valuable complement to classic PSA methods, a complement that no longer needs to be justified.
These hybrid approaches gather in the same model both stochastic discrete-event behavior and the deterministic, time-dependent behavior that accounts for physical phenomena.
However, these approaches still face several challenges, such as modeling complexity, computational costs, data availability, and post-processing difficulties. EDF contributes to meeting these challenges by developing the PyCATSHOO tool, which, among others, addresses modeling complexity and calculation costs. The improvement effort on this tool is ongoing and focuses on another challenge, namely the coupling methods between models that deal with discrete stochastic aspects and physical codes. In most experiments conducted to date, this coupling has been carried out via ad hoc solutions and has required significant effort. However, a solution exists that could benefit IDPSA models. This solution is the FMI (Functional Mockup Interface) stan...
Lead Author: Junyong Bae Co-author(s): Jong Woo Park, jongwoo822@unist.ac.kr
Seung Jun Lee, sjlee420@unist.ac.kr
Deep learning for Guided Simulation of Scenarios for Dynamic Probabilistic Risk Assessment
One of the practical challenges of simulation-based dynamic risk assessment is to optimize the large number of scenarios that must be analyzed by computationally expensive codes such as thermal-hydraulic system codes. To tackle this challenge, this research suggests a guided simulation framework, inspired by the human reasoning process, that utilizes deep learning. The framework employs a deep neural network to estimate the consequences of assumed scenarios based on the results obtained from already-simulated scenarios, and it quantifies the estimation confidence using Monte Carlo dropout. In addition, an autoencoder and mean-shift clustering are implemented to group long sequential records of simulation results. As a result, the framework can point out the scenarios that should be analyzed preferentially. This consequence-based optimizing framework could be applied as a scenario screening engine for an advanced dynamic risk assessment framework, alongside a probability-based optimizing framework....
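A minimal sketch of the Monte Carlo dropout step follows, using a toy network in place of the framework's actual architecture.

```python
# Minimal sketch of the Monte Carlo dropout step, with a toy network standing in
# for the framework's actual architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)
model.train()                        # keep dropout active at inference time

x = torch.randn(1, 8)                # stand-in for an encoded scenario
with torch.no_grad():
    preds = torch.stack([model(x) for _ in range(100)])   # 100 stochastic passes

mean, std = preds.mean(), preds.std()
print(f"estimated consequence: {mean:.3f} ± {std:.3f}")
# A large std flags scenarios the network is unsure about, so they are sent
# preferentially to the expensive thermal-hydraulic code.
```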
Lead Author: Pavel Krcal Co-author(s): Ola Bäckström, Ola.Backstrom@lr.org
Pengbo Wang, Pengbo.Wang@lr.org
Transparency of Dynamic Calculation Approaches
Classical fault tree and event tree models give up the ability to express the order of events, standby backup systems triggered only when needed, repairs, or grace delays in exchange for two important advantages: scalability of analysis and easy interpretation of models and results. The latter aspect cannot be quantified and is rarely explicitly reflected. Correctness arguments for various modeling patterns or model modifications need to find shared acceptance among modelers, system experts, reviewers, and regulators. Conclusions drawn from analysis results must be supported by shared interpretations accessible to analysts, regulators, operators, and owners. The concept of (mostly) independent basic events and failure propagation defined by Boolean logic offers a common ground for shared trust in the model.
In many applications, the static way of modeling inherent in fault trees is sufficient for the purpose, for example for PSA Level 1 analyses. Conservatism caused by a limited handling of time and repairs stays under control and does not skew analysis results. Certain applications, on the other han...
Lead Author: Steve Prescott Co-author(s): James Knudsen (james.knudsen@inl.gov)
Stephen T. Wood (stephen.wood@inl.gov)
Automated-fire PRA Scenario Modeling in SAPHIRE Using FRI3D
Current fire modeling practices require many manual steps and processes to transfer fire scenario information and data between software tools such as CAFTA, CFAST, databases, etc. To help minimize these manual steps, the Fire Risk Investigation in 3D (FRI3D) software was developed. The primary goal of FRI3D is to automate as many steps as possible and to link the 3D spatial information with the probabilistic risk assessment (PRA) information. Initially, FRI3D was coupled directly with the existing FRANX fire input data and CAFTA to perform the fire analysis. Its modular design allows for coupling with other PRA tools, and the PRA software Systems Analysis Programs for Hands-on Reliability Evaluations (SAPHIRE) was incorporated for a facility pilot project. SAPHIRE does not have designated tools for assigning specific fire scenarios; the fire scenarios therefore need to be added manually, which is time-consuming and potentially error-prone. To make FRI3D compatible with SAPHIRE and simplify scenario modeling, a new module was created which automatically generates t...
Lead Author: Yoshikazu Deguchi Co-author(s): Keisuke Himoto, himoto-k92ta@mlit.go.jp
HIERARCHICAL BAYESIAN ESTIMATION OF FIRE GROWTH RATE FOR VARIOUS BUILDING USAGES BASED ON THE FIRE INCIDENTS REPORT
In the evacuation safety design of a building, it is necessary to set an appropriate design fire source, which can be represented by the fire growth rate α. In the basic Japanese procedure for evacuation safety design, the Verification Method for Evacuation Safety, the fire growth rate α is defined as the sum of αf, a value calculated from the calorific value per unit floor area of loaded combustibles ql, and αm, a value correlated with the type of interior finish. However, it is not clear what levels of the fire growth rate α of the Verification Method are required for an actual fire.
Deguchi et al. (2011) statistically derived the distributions of the fire growth rate for rooms of selected usages by using statistics on burnt floor area, extinction time, etc. of fires reported from 1995 to 2008. The authors concluded that the fire growth rates of offices, residences, restaurants, and retail stores can be approximated by lognormal distributions. However, fire growth rates could be obtained only for usages for which sufficient data had been avail...
Lead Author: Ayako Hirose Co-author(s): Daisuke Takeda (d-takeda@criepi.denken.or.jp)
Kohei Nonose (nonose@criepi.denken.or.jp)
Koji Tasaka (kotasaka@criepi.denken.or.jp)
An Exploratory Study on Decision to Main Control Room Abandonment due to Fire-Induced Loss of Habitability: Using a VR Nuclear Power Plant Main Control Room Simulator
In Fire PRA/HRA on nuclear power plant, the criteria related to concentration of smoke or room temperature for main control room (MCR) abandonment due to loss of habitability (LOH) are given by NUREG-6850. However, the criteria are so severe for human body that it is unclear at what point operators will decide to abandon the MCR if a fire actually breaks out. Also, because MCR fire events are very rare and impossible to be simulated in real MCR, it would be difficult for operators themself to answer clearly about their own timing to decide to abandon during MCR fires. Using virtual reality (VR) technology to allow operators to experience MCR fires may make it easier to collect data about their timing of MCR abandonment. Therefore, this study aims to develop a virtual environment of an MCR fire using a VR nuclear power plant MCR simulator and collect data on decision-makings on MCR abandonment to improve fire HRA.
Based on NUREG-1934, a scenario of an electrical cabinet fire in the MCR was developed using the VR nuclear power plant MCR simulator. To make the content more realis...
Lead Author: Yochan Kim Co-author(s): Sun Yeong Choi (sychoi@kaeri.re.kr)
Jinkyun Park (kshpjk@kaeri.re.kr)
Jaewhan Kim (jhkim4@kaeri.re.kr)
Statistical evidence of minimum human error probability for an emergency event from simulation records
Human reliability analysis (HRA) estimates the error probabilities of human operators under given contextual conditions to predict the probabilistic risk of complex systems. Human error probabilities have often been determined from limited operating data and simplified cognitive models; hence, there has been a recognition in the HRA field that a conservative value must be appropriately assigned to very low error probabilities in consideration of various uncertainty factors. For example, EPRI [2010] suggested assigning different minimum values such as 1.0E-04, 1.0E-05, and 1.0E-06 according to the contextual factors, by reference to typical hardware failure probabilities. Whaley et al. [2011] recommended using 1.0E-05 as a lower bound of the HEP in consideration of its usefulness in cut-set investigations. However, no objective evidence supporting the reasonableness of these minimum values has been presented. This paper attempts to generate statistical information to determine the minimum error probability bound based on human error data from simulation records. The data...
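The statistical question can be illustrated with a hedged sketch: given k observed errors in n task opportunities from simulation records, a Jeffreys Beta posterior (or, for zero errors, the rule of three) bounds how low an HEP the data can support. The counts below are hypothetical, and this is not the paper's actual estimator.

```python
from scipy import stats

def hep_interval(k_errors: int, n_opportunities: int, conf: float = 0.95):
    """Jeffreys Beta(k + 1/2, n - k + 1/2) credible interval for an error probability."""
    post = stats.beta(k_errors + 0.5, n_opportunities - k_errors + 0.5)
    return post.ppf((1 - conf) / 2), post.ppf(1 - (1 - conf) / 2)

# Hypothetical: 0 errors observed in 12,000 recorded task opportunities.
lo, hi = hep_interval(0, 12_000)
print(f"95% interval: ({lo:.2e}, {hi:.2e})")
print(f"rule-of-three upper bound: {3 / 12_000:.2e}")
# With ~1e4 opportunities the data cannot discriminate HEPs much below 1e-4,
# which is why minimum-HEP assignments such as 1.0E-05 need pooled evidence.
```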
Lead Author: Edward Chen Co-author(s): Bao Han han.bao@inl.gov
Tate Shorthill tate.shorthil@inl.gov
Nam Dinh ntdinh@ncsu.edu
Failure Mechanism Traceability and Application in Human System Interface of Nuclear Power Plants using RESHA
In recent years, there has been considerable effort to modernize existing and new nuclear power plants with digital instrumentation and control (DI&C) systems. However, there has also been considerable concern, both by industry and regulatory bodies, about the risk and consequence analysis of these systems. Of particular concern are digital common cause failures (CCFs) resulting from failures or “misbehaviors” of the software in both control and monitoring. While many new methods have been proposed to identify potential failure modes, such as Systems-theoretic Process Analysis (STPA) and Hazard and Consequence Analysis for Digital Systems (HAZCADS), these methods focus primarily on the control action pathway of a system. Unlike the control pathway, the information feedback pathway lacks control actions, which are typically modeled as software basic events; thus, assessment of software basic events in such systems is uncertain. In this work, we present the idea of intermediate processors and unsafe information flow (UIF) to help safety analysts trace failure modes in ...
Lead Author: Han Bao Co-author(s): Hongbin Zhang: hzhang@terrapower.com
Tate Shorthill: Tate.Shorthill@inl.gov
Edward Chen: echen2@ncsu.edu
Common Cause Failure Evaluation of High Safety-significant Safety-related Digital Instrumentation and Control Systems using IRADIC Technology
Digital instrumentation and control (DI&C) systems in nuclear power plants (NPPs) have many advantages over analog systems but also pose different engineering and technical challenges, such as potential threats due to common cause failures (CCFs). This paper proposes an integrated risk assessment technology for DI&C systems (IRADIC), developed by Idaho National Laboratory, for dealing with potential software CCFs in the DI&C systems of NPPs. The paper illustrates the methodology development of the IRADIC technology for the quantitative evaluation of software CCFs in high safety-significant, safety-related DI&C systems in NPPs. In IRADIC, qualitative hazard analysis and quantitative reliability and consequence analysis are successively implemented to obtain quantitative risk information, compare it with the respective risk evaluation acceptance criteria, and provide suggestions for risk reduction and design optimization. A comprehensive case study was also performed and documented in this paper. Results show that the IRADIC technology can effectively identify potential digital-based CCFs, est...
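IRADIC's quantitative treatment is not reproduced here; as a generic illustration of how a software CCF term enters the reliability model of a redundant DI&C channel pair, the classical beta-factor model can be sketched as follows (all numbers hypothetical):

```python
# Beta-factor CCF model for a two-channel redundant digital function
# (generic illustration, not the IRADIC method; all numbers hypothetical).
p_total = 1.0e-4              # software failure probability per demand, per channel
beta = 0.05                   # fraction of failures assumed common cause

p_ccf = beta * p_total        # both channels fail together
p_ind = (1 - beta) * p_total  # independent failure of a single channel

p_system = p_ind**2 + p_ccf   # 2-out-of-2 failure: two independents or one CCF
print(f"independent-only estimate: {p_total**2:.2e}")
print(f"with beta-factor CCF:      {p_system:.2e}")   # CCF term dominates
```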
Lead Author: Sung-Min Shin Co-author(s): Sang Hun Lee / k753lsh@kins.re.kr
Seung Ki Shin / skshin@kaeri.re.kr
A novel approach for quantitative importance analysis of DI&C systems in NPP
The safety-related I&C systems of nuclear power plants (NPPs) have quite complex interactions between their components, in accordance with the redundancy/diversity design concepts applied to ensure their functions, and this complexity has further increased with the recent introduction of digital characteristics. Meanwhile, safety signals can be generated and executed not only automatically but also manually; however, the linkage between these two paths has been insufficiently considered in the PSA process, the existing analysis framework for NPP I&C systems. Moreover, it is very difficult to secure the quantified failure information for digital components required to analyze a DI&C system within the PSA framework. Therefore, this study proposes a new approach to resolve these problems: the complex interactions between system components, the insufficient consideration of the relation between automatic and manual safety signal generation/execution, and the difficulty of securing failure information on digitalized components for PSA analysis.
The method proposed in this study basically includes ...
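The proposed approach itself is only summarized above; for orientation, the classical quantitative importance measures that such an analysis builds on, Fussell-Vesely and Birnbaum, can be computed from minimal cut sets as in this sketch (hypothetical cut sets and probabilities):

```python
# Fussell-Vesely and Birnbaum importance from minimal cut sets
# (standard PSA measures, not the paper's new approach; data hypothetical).
probs = {"CPU_A": 1e-4, "CPU_B": 1e-4, "MANUAL": 1e-2, "RELAY": 1e-3}
cut_sets = [{"CPU_A", "CPU_B"}, {"CPU_A", "MANUAL"}, {"RELAY"}]

def top_prob(p):
    """Rare-event approximation: sum of the cut-set probability products."""
    total = 0.0
    for cs in cut_sets:
        prod = 1.0
        for ev in cs:
            prod *= p[ev]
        total += prod
    return total

base = top_prob(probs)
for ev in probs:
    always = top_prob({**probs, ev: 1.0})   # component failed with certainty
    never = top_prob({**probs, ev: 0.0})    # component perfectly reliable
    birnbaum = always - never
    fv = (base - never) / base              # fraction of risk involving ev
    print(f"{ev:7s}  FV = {fv:.3f}  Birnbaum = {birnbaum:.2e}")
```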
Development of a new plant-specific, full-scope industrial-scale L1/L2 PSA-model with the application of the new RiskSpectrum® Model Builder Tool
In July 2020, the NPP Goesgen-Daeniken AG (KKG) launched a new project (PSASPECTRUM) to refurbish and restructure its plant PSA model in a new software environment using new software tools. The main purpose of this project is the migration of the current KKG PSA model into the RiskSpectrum® software, including its update to account for all additional plant modifications, model and documentation review, necessary model and documentation corrections, an increased level of modelling detail, and improved consistency of modelling and documentation. Hence, with the PSASPECTRUM project, KKG will achieve a consistent and comprehensive PSA model and documentation, compiled within a state-of-the-art PSA modelling software environment, allowing the relevant PSA applications to be performed and national regulatory requirements to be fulfilled more effectively and efficiently.
The ultimate goal of this project is to produce a full-scope, all POSs (full-power, low power & shutdown), all IEs classes (internal & external), all hazards (i...
Lead Author: James Knudsen Co-author(s): Curtis L. Smith (Curtis.Smith@inl.gov)
Michael Calley (Michael.Calley@inl.gov)
Issues and Approaches Regarding Success Terms for Probabilistic Risk Assessment Models
Solving event tree accident sequences in probabilistic risk assessments (PRAs) involves assumptions about the success of systems (i.e., event tree top events). The primary assumption is that failure of the system is a rare event (i.e., a low probability of failure); therefore, the success probability is very close to 1.0. Under most conditions, this assumption is valid. However, such is not always the case. When event tree top events have higher failure probabilities—and thus success probabilities that are not close to 1.0—this assumption causes the sequences with success branches to be incorrect. To address this issue, it is necessary to properly account for the success probability of event tree top events in order to better quantify each sequence of the event tree.
The current state of practice allows for some success event tree top events to be included in the accident sequence cut sets (i.e., only single basic events representing the success term and its value). In addition, sequence cut sets are typically quantified using the minimal cut set upper bound approximation versus...
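The numerical effect is easy to demonstrate: the minimal cut set upper bound estimates a top event probability as 1 - prod(1 - P(MCS_i)), and treating the corresponding success branch as exactly 1.0 overstates sequences whenever the failure probability is not small. The sketch below uses hypothetical numbers:

```python
# Effect of approximating a success branch as 1.0 (illustrative numbers).
def mcub(cut_set_probs):
    """Minimal cut set upper bound: 1 - prod(1 - P_i)."""
    prod = 1.0
    for p in cut_set_probs:
        prod *= (1.0 - p)
    return 1.0 - prod

p_top_fail = mcub([0.2, 0.15])      # top event with a high failure probability
p_success = 1.0 - p_top_fail        # true success probability = 0.68

p_init, p_down = 1e-3, 1e-2         # initiator and downstream failure
seq_approx = p_init * 1.0 * p_down        # success branch assumed ~1.0
seq_exact = p_init * p_success * p_down   # success term retained
print(f"approx: {seq_approx:.2e}, exact: {seq_exact:.2e}")
# The shortcut is harmless when success probabilities are near 1.0;
# here it overstates the sequence frequency by about 47%.
```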
Lead Author: Joy Shen Co-author(s): Michelle T Bensi: mbensi@umd.edu
Mohammad Modarres: modarres@umd.edu
Synthesis of Questionnaire Insights Regarding Current PRA and Additional Tools
This paper presents and analyzes the responses to a questionnaire on the nuclear industry's insights on current probabilistic risk assessment (PRA) tools. Many PRA tools traditionally use event tree and fault tree analysis. While these tools are well-established and developed, they are not without limitations. Additionally, the PRA scope is evolving to include other areas of interest, such as external hazard PRAs, human reliability assessments, or dynamic PRAs. A questionnaire was created to engage PRA practitioners and gather insights about the community's views on the advantages/limitations of PRA tools and trending needs. The questionnaire requested anonymous feedback and insights on applying legacy, conventional, and other PRA tools. Additionally, a review/survey was performed of existing resources to supplement the questionnaire and develop a comprehensive analysis. The questionnaire and survey reveal that the needs of the PRA community are evolving, as are the tools they use to accomplish this change. The high-level needs of the PRA community include dynamic PRAs, external haza...
Session M23 - Diamond Anniversary of THERP Panel Session
Session Chair: Ronald Boring (ronald.boring@inl.gov)
Paper 1 RO172
Lead Author: Ronald Boring
The Diamond Anniversary of THERP: Reflections on Human Reliability Analysis at Sixty Years
The Technique for Human Error Rate Prediction (THERP) was first revealed at a symposium of the then Human Factors Society in 1962. Since that time, THERP has shifted from an initial approach used to support the safety of weapons work, to its appearance as a nuclear regulatory document to support risk analysis of nuclear power plants, to a ubiquitous legacy method. Much of the framework that the field of human reliability analysis (HRA) uses is based on the foundations of THERP, and newer methods are still benchmarked against it. Yet, since its completed appearance as NUREG/CR-1278 in 1983, much has changed. THERP predates the advent of digital control room technologies and the emergence of automation; many safety industries beyond nuclear power are using HRA but must contend with a method calibrated to nuclear operations; many areas of human performance such as cognition, teamwork, and decision making are largely absent in THERP; tasks outside the control room are not adequately modeled in THERP; many accident types including severe accidents were not considered in THERP; THERP does ...
Session M24 - Risk Governance and Societal Safety I
Session Chair: Sai Zhang (sai.zhang@inl.gov)
Paper 1 WA49
Lead Author: Wasin Vechgama Co-author(s): Mr. Watcha Sasawattakul, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, 17th floor, Engineering 4 Building (Charoenvidsavakham), Phayathai Road, Wang Mai, Pathumwan, Bangkok, 10330, Thailand, Email: watcha.sasawattakul@gmail.com.
Development of Classification Model for Public Perception of Nuclear Energy in Social Media Platform using Machine Learning: Facebook Platform in Thailand
Following the severe accidents at the Fukushima Daiichi Nuclear Power Plant (NPP) in 2011, public acceptance of nuclear energy decreased significantly in many countries, including Thailand. After 2011, the Thai government continuously postponed its NPP project until, in 2018, the project was dropped from the latest Power Development Plan. Apart from concerns about the safety of NPPs, public apprehension was an important factor affecting Thailand's nuclear energy plan. In the past, public acceptance surveys were conducted using questionnaires to capture people's opinions about nuclear energy in Thailand, especially after the Fukushima disaster. However, questionnaire-based surveys were limited in the people they could reach and were costly and time-consuming. Because computation and social media now significantly influence people around the world, including in Thailand, data for direct and indirect surveys in various fields have increasingly been collected through social medi...
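As a hedged sketch of the kind of classifier such a study might train (the actual Facebook dataset, labels, and model are not shown here), TF-IDF features can feed a linear classifier over labeled posts:

```python
# Minimal public-perception text classifier (illustrative only; the study's
# real Facebook dataset, labels, and model are not shown here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [                                     # hypothetical example posts
    "nuclear power can lower electricity costs",
    "another Fukushima could happen here",
    "new reactor designs are much safer now",
    "radioactive waste scares me",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["is nuclear energy safe for Thailand?"]))
```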
Lead Author: Garill Coles Co-author(s): Steve M. Short, steve.short@pnnl.gov
Steve J. Maheras, steven.maheras@pnnl.gov
Harold E. Adkins, harold.adkins@pnnl.gov
Risk-Informed Approach for Regulatory Approval of Microreactor Transport
Pacific Northwest National Laboratory (PNNL) was tasked to develop and evaluate regulatory options for the transport of microreactors. The work was funded by the National Reactor Innovation Center, a Department of Energy program within the Office of Nuclear Energy that supports demonstration of microreactor technology. PNNL developed a risk-informed regulatory framework for licensing the transportation of microreactors, focused on the most challenging case: the transportation of irradiated nuclear fuel that is assumed to be an integral component of the microreactor transportation package. The framework lays out a viable regulatory pathway, including decision points for regulatory options and the supporting technical evaluations for those options, in phases from the near to the long term.
Microreactors are very small nuclear reactors of 20 megawatts electric (MWe) or less, designed to be factory-built, and may be transportable. The microreactor designs being considered in this evaluation are tristructural isotropic (TRISO) fueled using high-as...
Lead Author: Ben Chen Co-author(s): Bruce Hamilton, bhamilton@anl.gov
Dave Grabaskas, dgrabaskas@anl.gov
Mark Cunningham, mark.cunningham@anl.gov
Sinem Perk, sperk@anl.gov
Success Path Analysis as a Recommended Practice for enhanced Quality in High Reliability Organizations
Since 2012, Argonne National Laboratory has worked with the U.S. Bureau of Safety and Environmental Enforcement (BSEE) to develop and implement tools that support risk-informed decision making for the oil and gas industry. The Success Path Analysis Method that was developed helped visualize risk in an easy-to-understand way, provided a common language and systematic process for understanding and managing high-risk activities and equipment, enabled operational risk to be quantified, and proved to be an effective tool to facilitate communication and prioritize discussion topics among operators and BSEE with a focus on improving safety.
A Success Path begins with a diagram of the hardware, software, and human actions needed to ensure safe operation of a system or component. Success Paths provide a "chain of causality" illustrating what (hardware, software, and human actions) must go right to ensure safe operations. Visualizing what must go right helps us understand, manage, and respond to what can fail. The benefits of the application of the Success Path Method in the work with BSEE have...
Lead Author: Jeeyea Ahn Co-author(s): Wooseok Jo, cws5528@unist.ac.kr;
Byung Joo Min, bjmin135@unist.ac.kr;
Seung Jun Lee, sjlee420@unist.ac.kr
A methodology for measuring the difficulty of nuclear safety culture and safety management factors
Since a strong safety culture is required for safety, the evaluation of safety culture and safety management must also meet the requirements of safety evaluation. However, at present, there is no suitable method to evaluate safety culture or safety management indices in such a systematic way. Most existing safety culture evaluation methods mainly focus on evaluating the maturity level of the target organization's safety culture, do not take into account the concept of a graded approach, and generally do not move beyond the framework of discovering and removing vulnerable elements. Moreover, because there is currently no unified safety culture model, communication is hampered by differences between organizations in their understanding of the same factors during safety culture-related exchanges. In this regard, this research aims to develop an in-depth analysis tool for safety culture, enabling detailed analysis by safety culture attribute, and to propose a methodology to promote mutual understanding of safety culture by institution. This method can be utilized in the di...
Use of a Risk-Informed Performance-Based Advanced Reactor Design Standard to Categorize and Classify Components for a Current Generation Nuclear Power Plant
A number of standards and guidelines have been developed for use in the US for the purpose of risk-informing the design of advanced reactors. Some of these documents are directed at specific reactor types; others are technology neutral. The objectives of each of these standards and guidelines are directed at appropriately combining deterministic, probabilistic, and performance-based design methods during development of the plant design.
This paper describes the application of one of these risk-informed performance-based (RIPB) design standards to the categorization and classification of components for a current generation light water reactor. While the selected standard was developed for use in the design of an advanced reactor, the purpose of this exercise was to identify what components would have been classified as safety related, non-safety related (with special treatment requirements) and non-safety related (no special treatment requirements) had insights from the plant specific PRA been available at the time the plant was licensed.
The plant is a PWR that began operation in ...
Paper 2 AN186
Lead Author: Antonios Zoulis
Expansion and Use of Risk-Informed Process for Evaluations
The Risk-Informed Process for Evaluations (RIPE) can be used to defer or eliminate compliance issues with minimal safety impact using existing regulations. The Nuclear Regulatory Commission (NRC) approved this initiative, which utilizes licensees' previously approved risk-informed initiatives to inform licensing actions, in January 2021. The initiative leverages current regulations and uses risk information to identify issues of low safety significance for which US licensees can submit plant-specific regulatory actions that would support a streamlined review by the NRC, using existing programs and processes consistent with Regulatory Guide 1.174, “An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis” (Agencywide Documents Access and Management System (ADAMS) Accession No. ML100910006). The NRC expanded the use of RIPE to include other risk-informed initiatives and to allow for Technical Specification changes. In addition, the first application using the RIPE process was submitted involvin...
Risk Management and Risk Aversion, from Benefit to Impediment
Risk management is a critical tool for improving the probability of project success by identifying, assessing, prioritizing, and attempting to control threats to project realization. For industries that require high operational reliability due to the potential consequences of system failure, such as the nuclear, aerospace, and chemical sectors, a major focus of risk management is the preservation of process safety. Due to the nature of the processes or systems under consideration, the associated process safety analyses (such as risk and safety assessments) and safety features can require significant resources. These costs are typically tolerated either due to the need to satisfy regulatory requirements or based on the assumption that they generally decrease the occurrence of unwanted events and therefore improve the probability of project success. However, as the level of acceptable or tolerable risk from unwanted events decreases, the required resources necessary for ensuring and demonstrating satisfaction of these criteria can grow and in turn can become one of the dominant impedim...
Lead Author: Fernando Ferrante Co-author(s): Karl Fleming, karlfleming@comcast.net
Ed Parsley, eparsley@jensenhughes.com
Charlie Young, cyoung@jensenhughes.com
Leo Shanley, lshanley@jensenhughes.com
Enhancement of the Use of Defense-in-Depth and Safety Margin for Decision-Making Purposes
Both Defense-In-Depth (DID) and Safety Margin (SM) have been longstanding key concepts in nuclear applications, well before Probabilistic Risk Assessment (PRA) became a staple of risk applications in this field. A detailed review of key references on the subject of DID/SM in risk-informed decision-making (RIDM) indicates that these topics are overdue for a more efficient, integrated approach, as RIDM applications continue to gain acceptance and implementation experience worldwide. As the use of PRA and RIDM continues to expand, different perspectives on DID/SM can challenge the incorporation of additional risk modeling and a wider, more comprehensive application of PRA in nuclear power plants (NPPs). In particular, challenges from a deterministic-oriented perspective against more risk-informed applications, as well as their expansion into areas where PRA is not used as heavily, can lead to misperceptions that DID/SM principles are not aligned with risk insights obtained via risk assessment inputs.
A careful investigation and discussion of DID/SM as overarching principles of nuclear safety was performed to hi...
Lead Author: Tomasz KISIEL Co-author(s): Artur KIERZKOWSKI artur.kierzkowski@pwr.edu.pl
A method for managing a security checkpoint through multi-criteria analysis with consideration of safety and process performance.
The purpose of this paper is to develop a method for configuring a security screening checkpoint at an airport. The method is based on managing personnel so that the most advantageous ratio of safety to process efficiency is achieved. The paper uses a computer simulation method, on the basis of which a multi-criteria analysis is conducted; two criteria are taken into account: safety and process performance. Such a model has not yet been developed in the scientific literature and can be of significant interest to airport security control managers. The article is part of work related to the project: "Development of an innovative desk of the primary and supplementary training of the security control operator at the airport". ...
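The paper's exact aggregation is not reproduced here; the core idea, trading a safety criterion off against process performance for each candidate checkpoint configuration produced by the simulation, can be sketched as a weighted-sum ranking (all numbers hypothetical):

```python
# Weighted-sum ranking of checkpoint configurations (illustrative; each entry
# stands in for outputs of the simulation model, and the weights are hypothetical).
configs = {
    "2 lanes, 3 operators": {"safety": 0.95, "throughput": 180},
    "3 lanes, 4 operators": {"safety": 0.90, "throughput": 260},
    "3 lanes, 5 operators": {"safety": 0.93, "throughput": 240},
}
w_safety, w_perf = 0.6, 0.4
max_tp = max(c["throughput"] for c in configs.values())

def score(c):
    # normalize throughput to [0, 1] so the two criteria are commensurate
    return w_safety * c["safety"] + w_perf * c["throughput"] / max_tp

for name, c in sorted(configs.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: score = {score(c):.3f}")
```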
Lead Author: Agnieszka Tubis Co-author(s): Szymon Haładyn szymon.haladyn@pwr.edu.pl
Human error risk analysis in the cargo logistics process in the airport zone
Human errors are a common cause of disruptions in logistics service processes. To eliminate them, enterprises introduce automated solutions for routine logistics operations. However, it is not possible to automate load handling in every system. Some operations still require human intervention, or the economic calculation does not justify autonomous solutions. In these situations, it is necessary to implement solutions that eliminate or reduce the impact of the human factor on the risk of adverse events. The article aims to analyze the risk of human error in cargo logistics handling at the airport. Field tests were carried out at a selected airport. On this basis, adverse events were identified and analyzed using qualitative and quantitative risk assessment methods. The assessment results made it possible to develop dedicated solutions to improve crew competencies with the use of Virtual Reality technology....
Lead Author: Franciszek Restel Co-author(s): Lukasz Wolniewicz, lukasz.wolniewicz@pwr.edu.pl
Identification of safety relevant activities of train crews using the Functional Resonance Analysis Method (FRAM)
In the classical training process, train crews are trained on real vehicles. This approach has two disadvantages. First, the vehicles are withdrawn from commercial service, which creates high costs; to minimize them, the training time is kept as short as possible. Second, there is no possibility to train for dangerous situations, such as a fire on board a train. Thus, using Virtual Reality in the training process is a key undertaking to improve the safety and efficiency of railway operation processes. The problem arises of how to choose safety-relevant situations for implementation as scenarios in the Virtual Reality environment. The paper proposes a method for determining train crew activities based on the variability of activity execution. The variability of activity execution is characterized by precision and timeliness. The precision and timeliness of train crew activity performance were estimated mainly from a survey of train crews, as well as operation data from the Polish Railway Network Manager.
The research problem is focused on selection of the mo...
Risk Of Maintenance Resource Sharing In Transport Systems
The authors investigate the problem of maintenance resource sharing in transport systems in the context of operational risk assessment. The paper includes a short introduction to maintenance problems and a discussion of maintenance resource sharing issues in the effective performance of transport systems. Next, a three-parameter risk assessment ratio is introduced, comprising the probability of disruption occurrence, the consequences of disruption occurrence, and a new measure characterizing maintenance resource availability. The problem of maintenance resource availability is then analyzed and discussed. Finally, a short case study is introduced. The paper makes it possible to identify research gaps and possible future research directions connected with the optimization of maintenance problems in transport organizations. ...
Session T02 - Operational Experience and Data Analysis
Session Chair: Anders Olsson (anders.olsson@vysusgroup.com)
Paper 1 ER221
Lead Author: Erik Sparre Co-author(s): Carl Eriksson, carl.eriksson@riskpilot.se
Mattias Håkansson, mattias.hakansson@riskpilot.se
Jacob Larsson, jacob.larsson@riskpilot.se
Gunnar Johanson, gunnar.johanson@riskpilot.se
Project DIOR (Deeper Investigation of Repairability of Failures)
The overall purpose of the DIOR project is to better understand the failure data used in PRA models. Deeper knowledge about repairability, the timing of failures, and which failures (causes, coupling mechanisms) occur early or late gives the analyst valuable insights when developing state-of-the-art PSA models. Such knowledge is relevant when introducing repair into PRA models, assigning failure data for longer time windows, and defining CCFs (concerning, for example, failure modes and repair possibilities).
Information about single failures in Nordic nuclear power plants is gathered in the so-called TUD database. This database has primarily been used to calculate failure probabilities and failure rates for components, but since it also contains information about repair times and other timing information, it is possible to calculate measures such as the mean time to repair (MTTR). Information about CCFs is collected within the ICDE project and compiled in databases.
Therefore, events from the TUD and ICDE databases have been analyzed for a selected number of components (ce...
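The underlying bookkeeping is straightforward: given per-event repair times and a cumulative exposure time from such a database, point estimates of MTTR, failure rate, and unavailability follow directly. The field names and values below are hypothetical, not the TUD schema:

```python
# MTTR and failure-rate point estimates from repair records
# (hypothetical records and field names, not the actual TUD schema).
records = [
    {"component": "pump-01", "repair_hours": 6.0},
    {"component": "pump-01", "repair_hours": 14.5},
    {"component": "pump-01", "repair_hours": 3.0},
]
operating_hours = 250_000.0       # cumulative exposure for this component group

n = len(records)
mttr = sum(r["repair_hours"] for r in records) / n
lam = n / operating_hours         # maximum-likelihood exponential failure rate
print(f"MTTR = {mttr:.1f} h, lambda = {lam:.2e} /h")
print(f"unavailability ~ lambda * MTTR = {lam * mttr:.2e}")
```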
Paper 2 HA48
Lead Author: Hayat Chatri Co-author(s): Gunnar Johanson; gunnar.johanson@afconsult.com
Jan Stiller; Jan.Stiller@grs.de
Jeffery Wood; Jeffery.Wood@nrc.gov
Recent Insights from the International Common Cause Failure Data Exchange (ICDE) Project
CCF events can significantly impact the availability of safety systems of nuclear power plants. For this reason, the ICDE Project was initiated by several countries in 1994. Since 1997 it has operated within the OECD NEA framework, and the project has successfully run over seven consecutive terms (the current term being 2019-2022). The ICDE Project allows multiple countries to collaborate and exchange common-cause failure (CCF) data to enhance the quality of risk analyses that include CCF modelling. Because CCF events are typically rare, most countries do not experience enough of them to perform meaningful analyses. Information combined from several countries, however, has yielded sufficient data for more rigorous analyses.
The ICDE project has meanwhile published eleven reports on collection and analysis of CCF events of specific component types (centrifugal pumps, emergency diesel generators, motor operated valves, safety and relief valves, check valves, circuit breakers, level measurement, control rod drive assemblies, heat exchangers) and five topical re...
Lead Author: Marco Arndt Co-author(s): Philipp Mell, M.Sc.; E-mail: philipp.mell@ima.uni-stuttgart.de
Dr.-Ing. Martin Dazer; E-mail: martin.dazer@ima.uni-stuttgart.de
Prof. Dr.-Ing. Bernd Bertsche; E-mail: bernd.bertsche@ima.uni-stuttgart.de
Generic effects of deviations from test design orthogonality on test power and regression modelling of Central-Composite Designs
In the context of design of experiments (DoE), the quantitative dependency of a nonlinear target parameter on a few factors must often be determined for parameter prediction. For such cases, from the group of response surface designs, test plans following the structure of the Central-Composite Design (CCD) are used. Based on full-factorial test plans, they feature additional test runs in the center of the design space (center runs) as well as along the main axes (star runs), which yield the information required for a quadratic model while remaining highly efficient. The leverage value α predefines the relative distance of the star runs from the center along each axis. The determination of the value of α, as well as the specific arrangement of the test runs in the design matrix, follows a generic mathematical approach to match required DoE properties. The most essential such property here is orthogonality, which is required in order to treat the regression coefficients as uncorrelated and independent and to establish regression models, guaranteeing...
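The geometry can be made concrete: for k coded factors, a CCD comprises the 2^k factorial corners, 2k star points at distance α along the axes, and replicated center runs. The sketch below builds the k = 2 matrix with the rotatable choice α = (2^k)^(1/4); the orthogonal choice of α depends on the run counts, which is exactly where the deviations studied in the paper arise:

```python
# Central-Composite Design matrix for k = 2 coded factors (illustrative).
import itertools
import numpy as np

k = 2
factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))  # 2^k corners
alpha = len(factorial) ** 0.25      # rotatable choice: (2^k)^(1/4) = sqrt(2)
star = np.vstack([a * row for a in (-alpha, alpha)
                  for row in np.eye(k)])       # 2k axial (star) runs
center = np.zeros((3, k))                      # replicated center runs
design = np.vstack([factorial, star, center])
print(design)
# Orthogonality of the quadratic regression depends on how alpha relates to
# the numbers of factorial, star, and center runs; deviating from the exact
# value correlates the coefficient estimates, which is the paper's topic.
```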
Name: Marco Arndt (marco.arndt@ima.uni-stuttgart.de)
Paper 4 SJ53
Lead Author: Jan Stiller
Modelling and Quantification of Correlated Failures of Multiple Components due to Asymmetries of the Electrical Power Supply System of Nuclear Power Plants in PSA
Failures of multiple redundant trains of the electrical power supply system of nuclear power plants (NPPs) have recently gained increasing attention from the nuclear community. This was triggered by events at different NPPs where single causes led to such failures. For example, at the Byron NPP (USA), asymmetries in the power supply system arose from a single failure of an insulator in the switchyard of the plant. The asymmetry did not cause the reactor protection system (RPS) to initiate the isolation of the emergency bus bars and the start of the emergency diesel generators. At the Forsmark NPP (Sweden), an open phase condition that was not detected by the RPS was caused by the failure of one pole of a breaker. In both events, the electrical consumers remained connected to the faulted supply and were exposed to an asymmetric voltage, leading to unavailability and destruction of safety-related electrical equipment. Similar events have occurred at other plants as well, e.g., at Vandellòs, Unit 2 (Spain) in 2006, at Dungeness, Unit B (United Kingdom) in 2007 and at Bruce, Unit A-1 (C...
Modeling and simulation of in-hospital medical processes in the event of disaster for the evaluation of BCP and training
Background and purpose: Disaster-base hospitals are expected to play a central role in regional disaster medicine and are required to provide medical treatment to many injured patients in the event of a large-scale disaster. To be well prepared for disasters and improve response capability, disaster response training based on well-designed business continuity plans (BCPs) is needed for hospital staff. However, traditional live exercise simulation is costly and faces many restrictions from ordinary hospital operations and activities. Also, there is no established method to evaluate the effectiveness and resilience of BCPs. Against this background, this study aims to develop a highly realistic computer simulation model of the whole process of disaster medicine in disaster-base hospitals that can be used for training and BCP evaluation. As the first step, we are developing a model of the in-hospital medical processes in the event of a disaster. Methods: By analyzing relevant documents including disaster training scenarios, observing an actual disaster response training, and interviewing with t...
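As a hedged illustration of the modeling approach (not the authors' model), a minimal discrete-event sketch of a triage-then-treatment patient flow using the simpy library, with entirely hypothetical staffing and timing parameters:

```python
# Minimal in-hospital disaster-response flow as a discrete-event simulation
# (simpy sketch; not the authors' model, and all parameters are hypothetical).
import random
import simpy

def patient(env, name, triage, treatment, done):
    with triage.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 5.0))    # ~5 min triage
    with treatment.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 30.0))   # ~30 min treatment
    done.append((name, env.now))

def arrivals(env, triage, treatment, done):
    for i in range(60):                                   # surge of 60 casualties
        env.process(patient(env, f"p{i}", triage, treatment, done))
        yield env.timeout(random.expovariate(1 / 2.0))    # ~2 min between arrivals

random.seed(0)
env = simpy.Environment()
triage = simpy.Resource(env, capacity=2)       # staffing levels taken from a BCP
treatment = simpy.Resource(env, capacity=5)
done = []
env.process(arrivals(env, triage, treatment, done))
env.run(until=8 * 60)                          # one 8-hour shift, in minutes
print(f"{len(done)} patients treated; last finished at t = {max(t for _, t in done):.0f} min")
```

Varying the resource capacities then shows how alternative BCP staffing assumptions change queueing delays and throughput.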
Safety of GFRP-Reinforced Concrete Columns Subjected to Sustained Intensity
This research examines the safety of concrete columns reinforced with glass fiber reinforced polymer (GFRP) composite bars under sustained loading over a service period of 100 years. An analytical model is formulated based on force equilibrium and strain compatibility to predict the behavior of the columns and to assess their performance reliability. Upon validation of the model against published experimental data, an extensive parametric study is conducted to elucidate the safety of the columns subjected to the long-term load. The implications of the constituents are examined with a focus on creep, shrinkage, and stress redistributions. The vulnerability to premature failure is discussed for practitioners to consider when implementing GFRP-reinforced concrete columns in the field....
Lead Author: Kim Hintz Co-author(s): Dr.-Ing. Martin Dazer, martin.dazer@ima.uni-stuttgart.de
Prof. Dr.-Ing. Bernd Bertsche, bernd.bertsche@ima.uni-stuttgart.de
Availability Analysis of Photovoltaic System Concepts to Derive Reliability Requirements for Inverters within Different Application Scenarios
The energy concept based on fossil fuels is unsustainable in the long term due to the ongoing shortage of resources and the associated negative impact on the climate. The remedy is offered by sustainable, nearly inexhaustible, and almost emission-free renewable energy sources. Photovoltaics (PV) play a key role, converting solar radiation into electrical energy without emissions and thus making solar energy usable in a decentralized manner.
To promote the expansion of PV systems in urban environments, manufacturers must guarantee ecological, safe, but above all economical operation for the customer.
Various concepts are used for grid-connected PV systems, which differ in their electronic design and circuit layout. These include concepts based on string or module inverters as well as concepts using power optimizers. The concepts offer various advantages and disadvantages, such as control of individual PV modules at the optimal operating point, module-level monitoring, and flexibility of installation. Furthermore, the profitability of the PV system depend...
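The comparison underlying such an analysis can be sketched with a steady-state availability model, A = MTBF / (MTBF + MTTR), applied to the different inverter concepts; all figures below are hypothetical:

```python
# Steady-state availability comparison of PV inverter concepts (illustrative).
def availability(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

a_string_inv = availability(mtbf_h=80_000, mttr_h=72)    # hypothetical figures
a_module_inv = availability(mtbf_h=120_000, mttr_h=72)

# String concept: one inverter failure takes the whole array offline,
# so the expected energy fraction equals the single inverter's availability.
print(f"string concept expected energy fraction: {a_string_inv:.5f}")
# Module concept: each of n module inverters fails independently; the expected
# online fraction is the per-module availability, and the array degrades
# gracefully (one failed module of 20 costs 5% of output, not 100%).
print(f"module concept expected energy fraction: {a_module_inv:.5f}")
```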
Lead Author: Somayeh Mohammadi Co-author(s): Michelle Bensi (mbensi@umd.edu), Zhegang Ma (zhegang.ma@inl.gov), and Kaveh Faraji Najarkolaie (kfaraji@terpmail.umd.edu)
Uncertainty in Predicted Tropical Cyclone Path and Landfall Characteristics for Landfalling Storms to Support External Hazard Probabilistic Risk Assessments for Critical Infrastructure – A Preliminary Analysis
Critical infrastructure facilities, such as nuclear power plants, are often located in coastal regions exposed to tropical cyclones (TCs). These facilities may employ permanent protective measures as well as strategies that require manual (human) actions to install temporary features (e.g., flood protection berms and pumps). In addition to the possibility of hardware failures, there is a possibility that actions will be unsuccessful due to delayed organizational decision-making, human errors, and differences between predicted and experienced hazard characteristics. Accurate external hazard probabilistic risk assessments (XHPRAs) must quantify these error probabilities, which depend on factors such as the information available to support decisions, the time available to perform actions, and the environmental conditions under which actions are performed. These factors are subject to uncertainty due to uncertainty in TC forecasts. To support XHPRAs for critical infrastructure facilities, this paper seeks to explore uncertainty in the conditions under which human actions will be performe...
Session T04 - Hydrogen Production
Session Chair: Richard Boardman (richard.boardman@inl.gov)
Paper 1 RI309
Lead Author: Tyler Westover Co-author(s): Richard Boardman;
Stephen Hancock
Presenter of this paper: Kurt Vedros (kurt.vedros@inl.gov)
Thermal and Electrical Coupling to Electrolysis Plants
Close coupling of nuclear and electrolysis plants requires modifications of the power transmission station to take advantage of the low-cost, clean electricity produced by nuclear plants. In addition, high-temperature electrolysis can reduce the cost of hydrogen production when steam produced by nuclear plants is used to supply the electrolysis process. RELAP and HYSYS models provided the basis for preliminary hydrogen production studies. This work supports a conceptual architectural engineering design of the thermal energy delivery systems. Understanding the thermal and electrical interfaces is essential for PRAs and for ensuring the safety of plant operations and the protection and stability of the power systems and reactor core....
Presenter Name: Kurt Vedros (kurt.vedros@inl.gov) Bio: Kurt is a lead risk assessment engineer in the Reliability, Risk, and Resilience Sciences Group of Idaho National Laboratory's Nuclear Science and Technology division. Kurt has over 25 years of experience in reliability and risk engineering. His research interests are static and dynamic probabilistic risk assessment of advanced reactors and in support of sustainability improvements for existing nuclear power plants, power-analysis-informed PRA of electrical grids, development of community chemical risk assessment techniques, Bayesian parameter estimation, and external environmental event hazards assessment. He has a Bachelor of Science in Nuclear Engineering from Idaho State University and has completed reliability institutes at the University of Arizona.
Paper 2 KU224
Lead Author: Kurt Vedros Co-author(s): Robby Christian, robby.christian@inl.gov
Austin Glover, amglove@sandia.gov
Curtis Smith, curtis.smith@inl.gov
Probabilistic Risk Assessment of a Light Water Reactor Coupled with a High-Temperature Electrolysis Hydrogen Production Plant – Part 1: Hazards Analysis
The profitability of existing nuclear power plants (NPPs) can be enhanced by using excess thermal energy to supply industrial processes. While the decision to modify the NPP to supply thermal energy externally to the plant is an economic one, the licensing permission for the modification is based on safety. To investigate the safety acceptance for such a modification, two generic probabilistic risk assessment (PRA) models were developed in this study to evaluate the effect on safety of the addition of a heat extraction system (HES) to a light water reactor (LWR). The two PRA models are for a pressurized-water reactor (PWR) and a boiling water reactor (BWR), respectively.
The introduction of an HES has the goal of putting heat that would normally be wasted to use for new revenue generation through processes such as making hydrogen. This HES module feeds process heat to a High Temperature Electrolysis Facility (HTEF). The PRAs used in this assessment of the HES are generic, and therefore some assumptions are made to preserve generality. A Failure Mode and Effects Analysis (FMEA) ...
Lead Author: Richard Boardman Co-author(s): S. Jason Remer, Sherman.Remer@inl.gov; Jack Cadogan; Tyler Westover, Tyler.Westover@inl.gov (CERTREC authors to be added)
Hydrogen Regulatory Research Review Group
The formation of the Hydrogen Regulatory Research Review Group (H3RG) is a natural outgrowth of feedback and discussions from ongoing LWRS Flexible Plant Operations and Generation stakeholder meetings. The role of the H3RG is to identify licensing considerations in support of practical nuclear fleet integration with high-temperature electrolysis (HTE). This collaboration engages expertise from DOE-supported national laboratories, independent R&D organizations, electric utility licensing personnel, nuclear architect engineering firms, Advanced Reactor Demonstration Program (ARDP) applicants, and hydrogen vendor research organizations. The primary objective of the H3RG in 2022 is to identify licensing approaches (based on traditional Nuclear Regulatory Commission (NRC) requirements) to introduce hydrogen production by HTE as an alternate energy stream at nuclear facilities. ...
Lead Author: Austin Glover Co-author(s): Kurt Vedros- Kurt.Vedros@inl.gov; Austin Glover- amglove@sandia.gov; Courtney Otani- Courtney.Otani@inl.gov
Assessment of Hydrogen Plant Risks For Siting Near Nuclear Power Plants
An underlying basis for the PRA of a nuclear plant that is connected to a hydrogen plant is understanding the safety risks associated with hydrogen production, storage, and pipeline infrastructure located in close proximity. Sandia National Laboratories has used historical data to provide relevant accident scenarios and frequencies. In addition, potential hazards associated with local hydrogen leaks, flames, and explosion scenarios have been evaluated to determine the impact on the nuclear plant facility, related power systems, and other plant targets. The outcomes for a large-scale hydrogen plant located within 0.5 to 1.0 kilometers have been evaluated. The minimum separation distance of the HTEF is calculated based on the target fragility criterion of 1 psi defined in Regulatory Guide 1.91....
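The screening logic can be illustrated with the cube-root blast-scaling law behind the 1 psi criterion, R = k * W^(1/3), where W is the TNT-equivalent mass. The constant of roughly 45 ft/lb^(1/3) is the commonly cited Regulatory Guide 1.91 value, and the masses below are illustrative, not results from this assessment:

```python
# Cube-root blast scaling behind the 1 psi screening criterion (illustrative).
K_1PSI = 45.0          # ft per lb^(1/3); commonly cited RG 1.91 constant

def standoff_ft(tnt_equivalent_lb: float) -> float:
    """Distance at which W lb of TNT equivalent gives ~1 psi overpressure."""
    return K_1PSI * tnt_equivalent_lb ** (1.0 / 3.0)

# Hypothetical detonable hydrogen releases expressed as TNT equivalents:
for w_lb in (1_000, 10_000, 100_000):
    d = standoff_ft(w_lb)
    print(f"W = {w_lb:>7,} lb TNT-eq -> 1 psi at ~{d:,.0f} ft ({d * 0.3048:,.0f} m)")
```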
Session Chair: Craig Primer (craig.primer@inl.gov)
Paper 1 VI321
Lead Author: Vivek Agarwal Co-author(s): Matthew Yarlett and Brad Diggans
Digital Nuclear Platform for Automation of Maintenance Activities
This presentation describes requirements established through the development and design of the Nuclear Digital Platform (NDP) by PKMJ Technical Services, referred to as the PKMJ NDP. The PKMJ NDP was designed as a centralized solution to support large-volume storage and access, large-volume data processing, advanced data analytics techniques (e.g., artificial intelligence/machine learning), data and information visualization, and reporting. PKMJ utilizes enhanced business intelligence techniques in support of NPP customers; however, these tools are used in industries throughout the world. The PKMJ NDP takes input from nuclear industry subject matter experts and combines it with input from mixed-discipline teams of data experts in order to apply cutting-edge principles to rapidly explore data, unlock and test new ideas, and turn those ideas into services. The presentation also describes the development of automated work packages in the PKMJ NDP, reducing the risk of human error, optimizing resource allocation, automating the process, and leading to cost savings if implemented....
Name: Vivek Agarwal (vivek.agarwal@inl.gov)
Paper 2 AH36
Lead Author: Ahmad Al Rashdan Co-author(s): Brian Wilcken Brian.Wilcken@inl.gov
Kellen Giraud Kellen.Giraud@inl.gov
Svetlana Lawrence Svetlana.Lawrence@inl.gov
Using Automated Trending to Inform Nuclear Power Plant Compliance
To remain competitive in current and future energy markets, the nuclear power industry must reduce operating costs while improving operational performance, through comprehensive plant modernization and by transforming the model of operations from being historically labor-centric to one that is technology-centric. Many nuclear power plant (NPP) activities could be transformed using recent advances in automation. Specifically, activities performed in an NPP to achieve or demonstrate compliance with regulatory requirements could be performed more efficiently, from both a licensee and a regulator point of view.
A corrective action program (CAP) is a key component of NPP compliance and performance-tracking activities. In addition to documenting and tracking the resolution of issues that occur in NPPs, a CAP is a primary resource for monitoring the trending of issues and causes of undesirable events across a plant, fleet of plants, or even the industry as a whole. Those conditions are documented in condition reports (CRs). Thus, CRs contain invaluable information about plant performance. H...
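A toy illustration of automated trending over CRs (entirely hypothetical; not the pipeline developed in this work): count reports per cause category per month and flag categories whose latest rate runs well above their history:

```python
# Toy trending over condition reports: flag cause categories whose latest
# monthly count runs well above history (illustrative; not the INL pipeline).
from collections import Counter

crs = [("2021-01", "valve"), ("2021-01", "seal"),          # (month, cause) pairs
       ("2021-02", "valve"),                               # as extracted from CRs
       ("2021-03", "valve"), ("2021-03", "valve"),
       ("2021-03", "valve"), ("2021-03", "seal")]

by_cat_month = Counter(crs)
months = sorted({m for m, _ in crs})
for cat in sorted({c for _, c in crs}):
    counts = [by_cat_month[(m, cat)] for m in months]
    history, recent = counts[:-1], counts[-1]
    mean = sum(history) / len(history)
    if recent > 2 * mean:                                  # crude trending threshold
        print(f"trend flag: '{cat}' at {recent}/month vs historical mean {mean:.1f}")
```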
Name: Ahmad Al Rashdan (ahmad.alrashdan@inl.gov)
Paper 3 KO218
Lead Author: Koushik Araseethota Manjunatha Co-author(s): Vivek Agarwal, vivek.agarwal@inl.gov
Randall D. Reese, randall.reese@inl.gov
Federated Transfer Learning for Scalable Condition-based Monitoring of Nuclear Power Plant Components
Condition-based monitoring (CBM) techniques are being widely adopted for maintenance activities in nuclear power plants. Asset operational data are collected by smart sensors mounted on and around components. The sensed data are often gathered and processed by a monitoring and diagnostic center to extract component fault signatures. These fault signatures are subsequently used as input to train predictive machine learning (ML) models for the specific component. Development of ML models requires a significant amount of healthy and fault data. As faults are rare events, it is highly unlikely that all potential fault modes are captured for a single component. Moreover, new components without historical data cannot contribute to ML model development. Additionally, fault signatures extracted from a single component are not robust enough to handle unseen fault patterns in the same or different components. Privacy, security, legal, and commercial concerns often prevent data sharing across different plant systems.
This research presents federated transfer learning (FTL) to scale M...
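The federated step can be sketched as federated averaging: each site trains on its own fault-signature data and only model weights, never data, are shared and averaged. This is a generic FedAvg sketch with synthetic data; the paper's federated transfer learning adds adaptation steps not shown here:

```python
# Federated averaging over per-site model weights (generic FedAvg sketch; the
# paper's federated *transfer* learning adds adaptation steps not shown here).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Logistic-regression gradient steps on one site's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
sites = []
for _ in range(3):                        # three plants; data never leaves a site
    X = rng.normal(size=(40, 4))          # synthetic fault-signature features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(4)
for _ in range(10):                       # communication rounds
    local = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local, axis=0)     # the server aggregates weights only
print("global model weights:", np.round(global_w, 2))
```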
Paper 4 M.110
Lead Author: Marcin Hinz Co-author(s): Doha Meslem, dmeslem.dm@gmail.com
Stefan Bracke, bracke@uni-wuppertal.de
Application of Gaussian Mixture Models for the clustering of fine grinded surfaces
The optical perception of high-precision, fine-grinded surfaces plays a major role, especially in various consumer goods. The very complex manufacturing process for many of these products involves a variety of parameters such as feed rate, cutting speed, grinding disc, cutting fluid, contact force, and process time. Changing a parameter setting has a direct effect on the surface topography. Therefore, a standardized and optimized configuration of process parameters enables the desired product quality. By varying the process parameters of the high-precision fine grinding process, a variety of cutlery samples with different surface topographies were manufactured.
Surface topographies of grinded surfaces are measured using classical methods (roughness measuring device, gloss measuring device, spectrophotometer). To improve on the conventional methods, a new image-processing analysis approach is needed to enable faster and more cost-effective analysis of produced surfaces. For the recognition of product quality, a systematic analysis based on unsupervised learning tec...
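For the unsupervised step, Gaussian mixture clustering over per-sample surface features might look like the following scikit-learn sketch (synthetic stand-ins for roughness and gloss measurements, not the study's data):

```python
# Gaussian mixture clustering of grinded-surface features (illustrative;
# the features are synthetic stand-ins for roughness/gloss measurements).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Two hypothetical process settings producing distinct surface topographies:
fine = rng.normal(loc=[0.2, 80.0], scale=[0.05, 4.0], size=(50, 2))
coarse = rng.normal(loc=[0.8, 55.0], scale=[0.10, 6.0], size=(50, 2))
X = np.vstack([fine, coarse])             # columns: Ra [um], gloss units

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)
print("cluster sizes:", np.bincount(labels))
print("cluster means (Ra, gloss):")
print(np.round(gmm.means_, 2))
```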
Session T11 - National Academies' Panel on Evaluation of the Transport Airplane Risk Assessment Methodology
Paper 1 ZA334
Lead Author: Zahra Mohaghegh Co-author(s): Eric Allison, eric@joby.aero; Leticia Cuellar-Hengartner, leticia@lanl.gov; George Ligler, gtligler@tamu.edu
National Academies' Panel on Evaluation of the Transport Airplane Risk Assessment Methodology
The U.S. Congress mandated the FAA to enter into an agreement with the National Academies of Sciences, Engineering, and Medicine (National Academies) to develop a report regarding the methodology and effectiveness of the Transport Airplane Risk Assessment Methodology (TARAM) process used by the FAA. TARAM is used to identify potential risks in currently operating airplanes, which will alert FAA officials if they need to take action to prevent potential accidents. The National Academies appointed an ad hoc committee of 12 members to undertake a study to evaluate the FAA’s TARAM process. This panel discussion will cover: Role and objectives of TARAM within the FAA’s overall safety oversight system; Assessment of the TARAM analysis process; Effectiveness of TARAM for the purposes of improving aviation safety; Recommendations to improve the methodology and effectiveness of TARAM as an element of aviation safety. ...
Paper 2 GT336
Lead Author: George Ligler Co-author(s): Eric Allison, eric@joby.aero; Leticia Cuellar-Hengartner, leticia@lanl.gov; Zahra Mohaghegh, zahra13@illinois.edu
National Academies' Panel on Evaluation of the Transport Airplane Risk Assessment Methodology
The U.S. Congress mandated the FAA to enter into an agreement with the National Academies of Sciences, Engineering, and Medicine (National Academies) to develop a report regarding the methodology and effectiveness of the Transport Airplane Risk Assessment Methodology (TARAM) process used by the FAA. TARAM is used to identify potential risks in currently operating airplanes, which will alert FAA officials if they need to take action to prevent potential accidents. The National Academies appointed an ad hoc committee of 12 members to undertake a study to evaluate the FAA’s TARAM process. This panel discussion will cover: Role and objectives of TARAM within the FAA’s overall safety oversight system; Assessment of the TARAM analysis process; Effectiveness of TARAM for the purposes of improving aviation safety; Recommendations to improve the methodology and effectiveness of TARAM as an element of aviation safety....
Name: George Ligler (gtligler@tamu.edu)
Paper 3 EA339
Lead Author: Eric Allison Co-author(s): Leticia Cuellar-Hengartner, leticia@lanl.gov
George Ligler, gtligler@tamu.edu
Zahra Mohaghegh, zahra13@illinois.edu
National Academies' Panel on Evaluation of the Transport Airplane Risk Assessment Methodology
The U.S. Congress mandated the FAA to enter into an agreement with the National Academies of Sciences, Engineering, and Medicine (National Academies) to develop a report regarding the methodology and effectiveness of the Transport Airplane Risk Assessment Methodology (TARAM) process used by the FAA. TARAM is used to identify potential risks in currently operating airplanes, which will alert FAA officials if they need to take action to prevent potential accidents. The National Academies appointed an ad hoc committee of 12 members to undertake a study to evaluate the FAA’s TARAM process. This panel discussion will cover: Role and objectives of TARAM within the FAA’s overall safety oversight system; Assessment of the TARAM analysis process; Effectiveness of TARAM for the purposes of improving aviation safety; Recommendations to improve the methodology and effectiveness of TARAM as an element of aviation safety....
Paper 4 LE338
Lead Author: Leticia Hengartner Co-author(s): Eric Allison, eric@joby.aero
George Ligler, gtligler@tamu.edu
Zahra Mohaghegh, zahra13@illinois.edu
National Academies' Panel on Evaluation of the Transport Airplane Risk Assessment Methodology
The U.S. Congress mandated the FAA to enter into an agreement with the National Academies of Sciences, Engineering, and Medicine (National Academies) to develop a report regarding the methodology and effectiveness of the Transport Airplane Risk Assessment Methodology (TARAM) process used by the FAA. TARAM is used to identify potential risks in currently operating airplanes, which will alert FAA officials if they need to take action to prevent potential accidents. The National Academies appointed an ad hoc committee of 12 members to undertake a study to evaluate the FAA’s TARAM process. This panel discussion will cover: Role and objectives of TARAM within the FAA’s overall safety oversight system; Assessment of the TARAM analysis process; Effectiveness of TARAM for the purposes of improving aviation safety; Recommendations to improve the methodology and effectiveness of TARAM as an element of aviation safety....
Paper LE338
Name: Leticia Hengartner (leticia@lanl.gov)
Session T12 - Light Water Reactor Sustainability
Session Chair: Kurt Vedros (kurt.vedros@inl.gov)
Paper 1 SV67
Lead Author: Lana Lawrence Co-author(s): Curtis L. Smith, Curtis.Smith@inl.gov
Diego Mandelli, Diego.Mandelli@inl.gov
Ronald L. Boring, Ronald.Boring@inl.gov
OVERVIEW OF RISA PROJECTS' VALUE TO THE INDUSTRY
The United States nuclear industry is facing a strong challenge to maintain regulatory-required levels of safety while ensuring the economic competitiveness needed to stay in business. Safety remains a key parameter for all aspects related to operation of light water reactor (LWR) nuclear power plants (NPPs) and can be achieved more economically by using a risk-informed ecosystem such as that being developed by the Risk-Informed Systems Analysis (RISA) Pathway under the Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program. The LWRS Program is promoting a wide range of research and development (R&D) activities with the goal of maximizing the safety, economics, and performance of NPPs through improved scientific understanding, especially given that many plants are considering second license renewal.
Within the RISA Pathway, the application of risk is somewhat unconventional: the R&D that is applied through the Pathway is not just centered on traditional safety margin; instead, we take a broader view of risk that encompasses integrated models that can better represent plant margi...
Lead Author: Yong-Joon Choi Co-author(s): Yunyeong Heo (yyheo0207@unist.ac.kr)
Eunseo So (eunseo.so@inl.gov)
Mohammad Abdo (mohammad.abdo@inl.gov)
Cole Blakely (cole.blakely@inl.gov)
Carlo Parisi (carlo.parisi@inl.gov)
Jarrett Valeri (jarrett.valeri@fpolisolutions.com)
Chris Gosdin (cgosdin@fpolisolutions.com)
Gabrielle Palamone (gabrielle.palamone@fpolisolutions.com)
Cesare Frepoli (frepolc@fpolisolutions.com)
Jason Hou (jhou8@ncsu.edu)
Demonstration of the Plant Fuel Reload Process Optimization for an Operating PWR
The US Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) Program, Risk-Informed Systems Analysis (RISA) Pathway plant reload optimization project aims to develop and demonstrate an automated generic platform that can generate optimized fuel load configurations in the reactor core of a nuclear power plant. The project targets the optimization of reactor core thermal limits through the implementation of state-of-the-art computational and modeling techniques. The optimization of core thermal limits allows a smaller fuel batch size to produce the same amount of electricity, which reduces new fuel costs and saves a significant amount of money on the back-end of the fuel cycle by reducing the volume of spent fuel that needs to be processed. The cost of a typical fuel reload for a light water reactor is about $50M, and this project estimates that a cost reduction of at least 5% is attainable by consolidating methods and core design procedures and practices. This equates to a savings in excess of $2M per reactor per reload. There could be additional savings achievable in the back-end cos...
Sustaining the Existing Fleet of Light Water Reactors
Sustaining the value of the US nuclear power fleet can be achieved through cost-effective, reliable operation to deliver diversity, robustness, environmental benefits, and national leadership. Many owners plan to operate nuclear plants for 60 years and more to capture this value. Doing so requires ensuring the integrity of key materials and the economic viability of these plants in current and future energy markets. This paper presents a summary of research activities conducted by the DOE-sponsored Light Water Reactor Sustainability program to address key issues pertaining to the continued operation of the existing fleet of operating light water reactors....
Paper BR12
Name: Bruce Hallbert (bruce.hallbert@inl.gov)
Paper 4 RI307
Lead Author: Richard Boardman
U.S. H2@Scale Vision and Hydrogen Earthshot Goal
The market for hydrogen is driven by several factors. Hydrogen is required for ammonia-based fertilizers and petroleum refining. Future uses include steel production, transportation, and production of biofuels and synthetic jet and diesel fuel. Hydrogen is a leading energy storage option and can be used to arbitrage the energy produced by nuclear power plants. This can help firm variable renewable power generation. The U.S. Department of Energy established the H2@Scale Initiative and is sponsoring R&D to help accelerate technology development and demonstration projects. Nuclear energy is considered a leading option for large-scale hydrogen production. The joint efforts of the LWRS and Hydrogen/Fuel Cell Programs can help achieve the U.S. Earthshot goal of producing a kilogram of hydrogen for $1.00 by the end of the decade (1-1-1)....
Session Chair: Dave Grabaskas (dgrabaskas@anl.gov)
Paper 1 FL328
Lead Author: Florian Berchtold Co-author(s): Gerhard Mayer mayer.gerhard@grs.de
Marina Röwekamp marina.roewekamp@grs.de
Presenter of this paper: Marina Roewekamp (marina.roewekamp@grs.de)
Consideration of Long-Lasting External Flooding Within PSA – Modelling Supplementary Emergency Measures
A methodological approach has been developed to adequately consider supplementary emergency measures in long-lasting hazard events within Level 1 probabilistic safety analysis (PSA). Those measures are established in the short term, using the time available during long-lasting events, and have not been planned in advance. The methodology has been applied to a riverine flooding event at a reference nuclear power plant (NPP) site. In this context, two assumptions have been made: the site will not be accessible by road or rail, and there will be a consequential complete loss of offsite power. Consequently, the operation of the emergency diesel generators over a period exceeding seven days and a permanent exchange of the shift personnel are needed.
Two supplementary emergency measures ensuring the supply of the plant with fuel and shift personnel have been analysed in detail. The supply is ensured via a ferry service using amphibious vehicles or with helicopter transport. These measures were implemented and quantified in an existing Level 1 PSA model of the reference NPP, taking...
Presenter Name: Marina Roewekamp (marina.roewekamp@grs.de) Bio: - Diploma in Physics; PhD (Dr. rer. nat.) in Physical Chemistry / Materials Science from the University of Bonn
- Senior Chief Expert for Hazards and PSA at GRS – the Federal German Nuclear Technical Safety Organization – for > 33 years
- PSA work: mainly performing and/or reviewing Level 1 PSA, particularly for Internal and External Hazards (incl. hazard combinations)
- Member of the German PSA Expert Panel for > 15 years
- Former Chair and current Vice Chair of OECD/NEA/CSNI Working Group on Risk Assessment (WGRISK)
- Chair of OECD/NEA CSNI Expert Group on Fire Risk (EGFR) and of Management Board of OECD/NEA FIRE (Fire Events Records Exchange) Project
- Consultant and/or reviewer for various IAEA Guides (SSG-64, SSG-67, SSG-68, DS523 (revision of SSG-3 on Level 1 PSA), DS528 (revision of SSG-4 on Level 2 PSA), TECDOCs on MUPSA, Advanced PSA Methods, Safety Assessment of Nuclear Installations Against Combinations of External Hazards, etc.)
- IAPSAM Board of Directors member since
Paper 2 JS148
Lead Author: Joy Shen Co-author(s): Michelle T Bensi: mbensi@umd.edu
Mohammad Modarres: modarres@umd.edu
External Flood Fragility Development and Integrating Novel Tools to the XFPRA framework
This presentation explores the development of external flood fragility functions for nuclear power plants. It also explores the stochastic conversion of external flood height to an internal flood height using novel probabilistic risk assessment (PRA) tools such as Bayesian networks augmented with Monte Carlo simulations. PRAs in the nuclear industry are traditionally performed with event tree and fault tree analyses (ETA/FTA). The challenges to these tools include the binary state assumptions (failed or not failed), the limitations in modeling dependencies, and the static treatment of time. These challenges are particularly relevant and limiting when modeling external flood risks. For example, components may partially fail, requiring a multi-state failure model. Furthermore, correlations between components may exist due to location, manufacturing, and maintenance. In addition, the effects of a flood event are spatially and temporally dynamic. These challenges prompt an investigation to integrate other tools into the traditional ETA/FTA framework. This presentation describes the initial ...
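A minimal Python sketch of this kind of stochastic height conversion, assuming a hypothetical lognormal barrier fragility and illustrative hazard parameters (none of the values or names below come from the paper):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def barrier_fragility(h, median=1.5, beta=0.4):
    # Lognormal fragility: P(barrier failure | external flood height h, in metres).
    return norm.cdf(np.log(h / median) / beta)

n = 100_000
h_ext = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # sampled external flood height [m]
failed = rng.random(n) < barrier_fragility(h_ext)    # per-sample barrier failure draw
h_int = np.where(failed, 0.8 * h_ext, 0.05 * h_ext)  # near-full ingress vs. minor seepage

print("P(internal height > 0.5 m) =", (h_int > 0.5).mean())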
Lead Author: Michelle Bensi Co-author(s): • Somayeh Mohammadi (somayeh@terpmail.umd.edu)
• Tao Liu (liut8@vcu.edu)
• Camille Levine (clevine1@umd.edu)
• Ahmad Al-Douri (clevine1@umd.edu)
• Ray Schneider(schneire@westinghouse.com)
• Zeyun Wu (zwu@vcu.edu)
• Katrina Groth (kgroth@umd.edu)
• Zhegang Ma (zhegang.ma@inl.gov)
Uncertainty in External Hazard Probabilistic Risk Assessment (PRA): A Structured Taxonomy
Probabilistic risk assessment (PRA) systematically assesses the risks posed to or by a complex system such as a nuclear power plant. Such systems are often comprised of physical structures, mechanical components, and humans that interact with or control the system or respond to system upsets. This complex interdependence leads to many sources of uncertainty that must be characterized in the PRA. This problem is exacerbated when exploring the impacts of external hazards (e.g., earthquakes, floods, and fires) on complex systems due to the additional need to understand and characterize the hazard and its impacts. This study defines a framework for identifying and categorizing the common sources of uncertainty encountered in performing external hazard PRAs for nuclear power plants. The framework may be more generally applicable to the assessment of a wide range of facilities involving potentially high consequence external hazard events. Commentary on drivers of uncertainty in external hazard PRA (XHPRA) (particularly within the context of external flooding hazards) and current gaps/chall...
Lead Author: Tamas Siklossy Co-author(s): Attila Bareith bareith@nubiki.hu
Barnabas Toth tothb@nubiki.hu
Bence Burjan burjan@nubiki.hu
Assessing the Impact of Combined External Events on the Safety of NPP Paks
Originally, mostly single external hazards were considered in the definition of the design basis of the Paks NPP in Hungary. Accordingly, the analysis and evaluation of plant resistance against design basis loads as well as the probabilistic safety assessment (PSA) of the plant have been limited to these single external hazards. However, some specific requirements of the Hungarian Nuclear Safety Codes as well as international recommendations and lessons learned from the Fukushima Dai-ichi NPP accident pointed out the need for systematically assessing the combinations of external hazards for the Paks NPP so that their impact on plant safety can be determined, understood and evaluated. The assessment has lately been completed by following the commonly exercised steps in this domain: hazard selection and screening, probabilistic hazard assessment, evaluation of plant protection, plant response and fragility analysis, development of event sequence models for hazard initiated plant transients, and risk quantification and evaluation of results. The analysis proved to be a significant c...
Lead Author: Pavel Krcal Co-author(s): Ola Bäckström Ola.Backstrom@lr.org
Pengbo Wang Pengbo.Wang@lr.org
Implementation of Conditional Quantification in RiskSpectrum PSA
Basic events in Probabilistic Safety Assessment (PSA) models are typically quantified independently of the accident sequence and of other failures that lead to a system unavailability. This simplifies quantification of undesirable consequences, and in most situations this approximation does not distort safety indicators. However, there are emerging needs for dependency handling between basic events, such as (1) dependencies between operator actions, (2) correlations between events in PSA, e.g., induced by seismic events, and (3) common cause failure modeling. In these situations, improved handling of dependencies could yield more realistic analysis results and thereby increase the applicability of safety indicators.
Conditional quantification of basic events presents a flexible, simple, and transparent tool to model these dependencies. At the same time, it poses theoretical and algorithmic challenges to analysis tools. We describe the implementation of the first release of this feature in RiskSpectrum PSA (version 1.5.0, released in 2021), focusing on the choices taken and solutions ap...
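To illustrate why conditional quantification matters, a toy two-event cut set with hypothetical numbers (not taken from RiskSpectrum) shows how an independence assumption can understate risk when operator actions are dependent:

p_a = 1e-2            # P(operator action A fails)
p_b = 1e-2            # P(operator action B fails), marginal value
p_b_given_a = 1e-1    # dependency: B is more likely to fail once A has failed

q_independent = p_a * p_b          # conventional, independence-based quantification
q_conditional = p_a * p_b_given_a  # conditional quantification of the same cut set

print(f"independent: {q_independent:.1e}  conditional: {q_conditional:.1e} "
      f"({q_conditional / q_independent:.0f}x higher)")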
Lead Author: Pavel Krcal Co-author(s): Ola Bäckström Ola.Backstrom@lr.org
Helena Troili Helena.Troili@lr.org
Control Logic Encoding using RiskSpectrum ModelBuilder
A goal of model-based safety assessment is to bring dependability modeling closer to the system design and allow for automated analysis of these high-level models. A system design description consists of system components and their relations. In many applications, a dependability model can copy the system design very closely. The dependability logic can be specified in a generic form per component type, applicable to all instances of this component type. A model might require a limited amount of specific, irregular, dependability information, such as relations or conditions affecting failures and their propagation. To prepare a model for an analysis, it remains to specify a configuration and safety/availability/production criteria.
The modeling language for describing the dependability logic of component types used in RiskSpectrum ModelBuilder is called Figaro and has evolved and matured over decades. It is an object-oriented modeling language with elements of declarative programming. It allows specifying interactions between components in first-order logic. By this, a general de...
Lead Author: Mattias Håkansson Co-author(s): Gunnar Johanson gunnar.johanson@afry.com
C-BOOK: COMMON CAUSE FAILURE RELIABILITY DATA BOOK (UPDATE)
Common Cause Failure (CCF) events can significantly impact the availability of safety systems of nuclear power plants. In recognition of this, CCF data are systematically being collected and analysed in several countries under the framework of the International Common Cause Data Exchange (ICDE) project.
In 2017, the first version of the CCF data book (C-book) was published by the Nordic PSA Group (NPSAG). The C-book provides Nordic PSA practitioners with CCF reliability data for the dependency analysis that is considered in the compulsory probabilistic safety assessment (PSA) of nuclear power plants. The C-book should be considered an important step in the continuous effort to collect and analyse data on CCF of safety components at NPPs, and to improve the quality of data in PSA.
In 2021, a second version of the C-book was published by NPSAG, reflecting the fact that the volume of collected data has doubled since the first version. This second version of the C-book presents the methodology for quantification of CCF rates, CCF probabilities, and alpha factors for k-out-of-n failures. Generic C...
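As an illustration of the kind of quantification the C-book supports, here is a sketch of standard alpha-factor point estimates for a common cause group of size four, with hypothetical event counts; the formula follows the usual non-staggered alpha-factor model (e.g., NUREG/CR-5485), not the C-book's own derivation:

from math import comb

def alpha_factors(event_counts):
    # event_counts[k-1] = number of observed events failing exactly k of m components.
    total = sum(event_counts)
    return [n / total for n in event_counts]

def ccf_probability(k, m, alphas, q_total):
    # Q_k: probability of a CCF event failing exactly k of m components
    # (non-staggered-testing form of the alpha-factor model).
    alpha_t = sum(j * a for j, a in enumerate(alphas, start=1))
    return k * alphas[k - 1] / (comb(m - 1, k - 1) * alpha_t) * q_total

counts = [120, 8, 2, 1]   # hypothetical ICDE-style counts for a group of m = 4
alphas = alpha_factors(counts)
for k in range(1, 5):
    print(f"alpha_{k} = {alphas[k-1]:.3f}  Q_{k} = {ccf_probability(k, 4, alphas, 1e-3):.2e}")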
Towards Reliability/Security Risk Metrics for Large-Scale Networked Infrastructures: Work in Progress
Realistic systems contain potential vulnerabilities which can be activated by some natural events or by malicious agents. System reliability/security risk metrics quantify the potential economic and other system losses due to possible activation of potential system vulnerabilities. Evaluation of these metrics requires assessment of unconditional probabilities of successful activation of various subsets of potential vulnerabilities. These probabilities are affected by (a) Dependency Relationships (DeR) among potential system vulnerabilities encoded by fault tree, attack graph, etc., and (b) conditional probabilities of the individual exploits, when all the prerequisites for a given potential vulnerability are satisfied. While reliability models assume fixed conditional probabilities of individual exploits, security models assume a possibility of adversarial selection of these probabilities. Combination of system DeR without cycles with conditional probabilities of individual exploits allows one to employ powerful methodologies of Bayesian Network (BaN) analysis for evaluation of the s...
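A minimal sketch of the acyclic-dependency evaluation mentioned above, using a hypothetical tree-shaped attack chain; a general DAG with shared prerequisites would need full Bayesian network inference rather than this simple product:

from functools import cache
from math import prod

graph = {                     # node: (prerequisites, P(exploit | prerequisites met))
    "phish":    ([], 0.30),
    "foothold": (["phish"], 0.80),
    "escalate": (["foothold"], 0.50),
    "exfil":    (["escalate"], 0.90),
}

@cache
def activation_probability(node):
    # Unconditional probability that a potential vulnerability is activated;
    # exact for tree-shaped dependency relationships (branches share no ancestors).
    prereqs, p_cond = graph[node]
    return p_cond * prod(activation_probability(q) for q in prereqs)

for node in graph:
    print(f"P({node} activated) = {activation_probability(node):.3f}")

# A loss-based risk metric then weights terminal events (hypothetical loss figure):
print("expected loss =", activation_probability("exfil") * 1_000_000)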
Session T15 - Machine Learning II / Digital Transformation
Session Chair: Yail Kim (jimmy.kim@ucdenver.edu)
Paper 1 NI33
Lead Author: Nicola Tamascelli Co-author(s): Antonio Javier Nakhal Akel, Riccardo Patriarca, Nicola Paltrinieri, Ana Maria Cruz
Are we going towards "no-brainer" risk management? A case study on climate hazards
Industry is stepping into its 4.0 phase by implementing and increasingly relying on cyber-technological systems. Wider networks of sensors may allow for continuous monitoring of industrial process conditions. Enhanced computational power provides the capability of processing the collected “big data”. Early warnings can then be picked up and lead to suggestions for proactive safety strategies, or directly initiate the action of autonomous actuators ensuring the required level of system safety. But have we reached these safety 4.0 promises yet, or will we ever reach them? A traditional view on safety defines it as the absence of accidents and incidents. A forward-looking perspective on safety affirms that it involves ensuring that “as many things as possible go right”. However, in both views there is an element of uncertainty associated with the prediction of future risks and, more subtly, with the capability of possessing all the necessary information for such prediction. This uncertainty does not simply disappear once we apply advanced artificial intelligence (AI) techniques to ...
Lead Author: Diego Andrés Aichele Figueroa Co-author(s): Lavínia Maria Mendes Araújo - lavinia.mendes@ufpe.br
Thais Campos Lucas - thaiscamposlucas@hotmail.com
Marcio das Chagas Moura - marcio.cmoura@ufpe.br
Isis Didier Lins - isis.lins@ufpe.br
Enrique Lopez Droguett - eald@g.ucla.edu
Diagnosis of Failure Modes from Bearing Data via Deep Learning Variational Autoencoder Method
Bearings and gears are indispensable equipment in complex machinery. Many studies have developed analyses to improve the effectiveness of predictive maintenance for these components. In particular, Deep Learning (DL) models for the diagnosis and prognosis of equipment failure modes can be highlighted. For this purpose, many applications have used supervised learning methods, such as support vector machines, multilayer perceptrons, and convolutional neural networks. However, in practice, labeled data connected to the conditions of real-time systems can be more complex and costly to obtain. In this sense, we highlight unsupervised learning models, where the algorithm discovers by itself, through data exploration, the possible relationships between data points. Hence, this paper aims to apply the unsupervised Variational Autoencoder method to diagnose failure modes of bearings and gears. Six databases available in the literature will be used for analysis purposes. In addition, the model's hyperparameters are optimized to perform more efficient assessments. Finally, the results will be c...
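A minimal sketch of the unsupervised approach described above, using PyTorch and random stand-in data in place of the bearing datasets; the architecture and sizes are illustrative, not the authors':

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # Minimal variational autoencoder for fixed-length vibration windows.
    def __init__(self, n_in=256, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 256)   # stand-in for a batch of normalized vibration windows

for _ in range(100):
    x_hat, mu, logvar = model(x)
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer
    loss = recon + kl
    opt.zero_grad(); loss.backward(); opt.step()

# After training on healthy data, high reconstruction error flags anomalous windows,
# and structure in the latent space can separate distinct failure modes.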
Paper 3 SI295
Lead Author: Sizarta Sarshar
Towards utilization of digital technology for railway infrastructure
The Norwegian railway is being modernized, and emerging technologies allow for a digital transformation to improve efficiency and safety. New digital enablers include, among others, fast, high-bandwidth communication systems, new sensors, internet of things (IoT) solutions, and the use of artificial intelligence (AI) and digital twins to support, e.g., automated train operations and smart asset management.
To better understand some of the needs and challenges of introducing digital twins as a digital enabler in the railway domain in Norway, a pre-project was established by the Norwegian Railway Directorate and carried out by the research institute IFE and the infrastructure owner Bane NOR. The intention was to identify gaps where research is needed.
Two use cases were studied to explore a holistic approach for utilizing digital twins where relevant end users were interviewed to identify needs and gaps:
(1) Maintenance and inspection of a bridge, which relates to asset management, and
(2) Train operations, which relates to planning and dispatching train movements
This paper discusses the fin...
Lead Author: Young Ho Chae Co-author(s): Hyeonmin Kim, hyeonmin@kaeri.re.kr
Poong Hyun Seong, phseong1@kaist.ac.kr
Development of a physics informed neural network based simulation methodology for DPSA
A nuclear power plant is a safety-critical system of large size and high complexity. Therefore, various methods have been developed to identify possible accidents and deal with them. Broadly classified, there are experiment-based methods and simulation-based methods.
However, the experiment-based methods have several practical limitations. Therefore, various simulation-based analysis methods were developed. Most simulation-based analysis methods are highly dependent on numerical methods: if the nodalization and time steps are refined to increase the analysis resolution, the computation time tends to increase exponentially.
Therefore, in this paper, we developed an artificial-intelligence-based method to accelerate the simulation. As the algorithm for AI-based simulation, a physics-informed neural network is implemented for convergence speed and extrapolation robustness.
By using the suggested method, dynamic event tree based dynamic probabilistic safety assessment can be cond...
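A minimal sketch of the physics-informed idea, assuming for illustration a simple first-order cooling equation rather than the plant thermal-hydraulics actually targeted; the network is trained on the ODE residual instead of node-by-node numerical output:

import torch
import torch.nn as nn

# Train u(t) to satisfy du/dt = -k (u - u_env) with u(0) = u0 on t in [0, 10].
k, u_env, u0 = 0.5, 20.0, 90.0
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    t = (torch.rand(128, 1) * 10.0).requires_grad_(True)   # random collocation points
    u = net(t)
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    physics = (du_dt + k * (u - u_env)).pow(2).mean()      # ODE residual loss
    initial = (net(torch.zeros(1, 1)) - u0).pow(2).mean()  # initial-condition loss
    loss = physics + initial
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    print("u(2) PINN:", net(torch.tensor([[2.0]])).item(),
          "exact:", u_env + (u0 - u_env) * torch.exp(torch.tensor(-k * 2.0)).item())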
Lead Author: Nancy Lindsey Co-author(s): Jeff Dawson, jeffrey.w.dawson@nasa.gov
Doug Sheldon, douglas.j.sheldon@jpl.nasa.gov
Anthony DiVenti, anthony.j.diventi@nasa.gov
Lionel Sindjui, lionel-nobel.w.sindjui@nasa.gov
NASA Physics of Failure (PoF) for Reliability
An item’s reliability or longevity is dependent not only on its design but also on how it is used, manufactured, and tested, and on the stresses it has experienced or will experience. Stresses include operational and environmental exposures to thermal, voltage, current, age/exposure, mechanical, and radiation mechanisms. Therefore, in reliability analysis, it is important to consider the contributions of all of these factors when predicting the failure rates of components. Historically, there has been a reliance on handbook data (e.g., MIL-HDBK-217), but experience has shown that these values and distributions are not representative of actual performance (1,2). Therefore, to make more credible reliability and risk assessments for its missions, NASA must transition to estimating likelihoods of failure based on the reliability/longevity factors (the physical susceptibilities and strengths impacting the design’s performance) an item has experienced or will experience, wh...
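As one concrete example of a physics-of-failure style estimate (not taken from the paper), the widely used Arrhenius model translates thermal stress into an acceleration factor between test and use conditions; the activation energy and temperatures below are hypothetical:

import math

def arrhenius_af(t_use_c, t_stress_c, ea_ev=0.7):
    # Acceleration factor between stress and use temperature (Arrhenius model).
    k_b = 8.617e-5                                   # Boltzmann constant [eV/K]
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / k_b) * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical case: 1000 h of life testing at 125 C for a part fielded at 55 C,
# with an assumed activation energy of 0.7 eV:
af = arrhenius_af(55.0, 125.0)
print(f"AF = {af:.0f}; 1000 test hours ~ {1000 * af:,.0f} equivalent field hours")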
Lead Author: Scott Lawrence Co-author(s): Susie Go, susie.go@nasa.gov
Amir Levine, amir.levine02@gmail.com
Failure Propagation Simulation for Launch Vehicle Safety Estimates
An important component in the assessment of launch vehicle safety is the estimation of the likelihood of a large-scale explosion given the existence of a failure that manifests as a relatively localized energy release. Here, a model for simulating cascading failures of energetic components in proximity, where the primary modes of energy transfer are fragments/shrapnel and blast overpressure, is presented. This model is an extension of the model developed by Mathias and Motiwala (2015) [REF1] with enhancements that include the effects of fragment ricochet, fragment drag, ballistic limit equation fragment thresholds, and blast impulse thresholds. A series of sensitivity studies have been carried out for a generic launch vehicle engine section and results are presented that illustrate the effects of the model modifications.
REF1: Mathias, D., & Motiwala, S. (2015). Simulation of Liquid Rocket Engine Failure Propagation Using Self-Evolving Scenarios. 2015 Annual Reliability and Maintainability Symposium (RAMS 2015) . Palm Harbor, FL: Institute of Electrical and Electronics Engin...
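A toy Monte Carlo sketch of cascading energetic failure, reducing the fragment/overpressure physics of the model above to a hypothetical trigger probability that decays with distance; this is purely illustrative, not the Mathias-Motiwala model:

import numpy as np

rng = np.random.default_rng(7)
positions = np.array([0.0, 1.0, 2.0, 3.5, 5.0])   # component locations [m]; 0 fails first

def p_trigger(distance):
    # Hypothetical chance that fragments/blast from one component detonate another.
    return 0.9 * np.exp(-distance / 1.5)

def simulate():
    exploded, frontier = {0}, [0]
    while frontier:
        src = frontier.pop()
        for j in range(len(positions)):
            if j not in exploded and rng.random() < p_trigger(abs(positions[j] - positions[src])):
                exploded.add(j)
                frontier.append(j)
    return len(exploded)

sizes = np.array([simulate() for _ in range(20_000)])
print("P(large-scale explosion, all 5 components involved) =", (sizes == 5).mean())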
Session Chair: Jeffrey Julius (jjulius@jensenhughes.com)
Paper 1 CE253
Lead Author: César Queral Co-author(s): Sergio Courtin, sergio.courtin@upm.es
Rafael Iglesias
Marcos Cabezas
Alberto Garcia-Herranz
Julia Herrero-Otero
Enrique Meléndez, ema@csn.es
Rafael Mendizábal, rmsanz@csn.es
Miguel Sánchez-Perea, msp@csn.es
On the use of SPAR-CSN models for identifying DEC-A sequences. First ideas.
The Fukushima accident has brought, among other things, an increase in and improvement of the safety requirements for nuclear power plants. One important requirement established in many countries is that the so-called Design Extension Conditions (DEC) need to be fully considered in the assessment of current and new advanced nuclear power plants. This safety improvement of NPPs focuses on conditions involving multiple failures of safety systems, as included in guidance currently available worldwide (e.g., as proposed by IAEA SSR-2/1). The associated applications and practices are beginning to emerge, given that the topic of DEC is advancing rapidly both nationally and internationally. Nevertheless, these practices are still not comprehensive, in particular regarding the interface with the plant design basis, its role in Defence-in-Depth, the selection of requirements, the impact on operating limits and conditions, and/or the selection of DEC sequences to be included in the analyses.
As for the selection of DEC sequences, most, if not all, current practices/guidance indicate that...
Paper 2 SA151
Lead Author: Tatsuya Sakurahara Co-author(s): Zahra Mohaghegh, zahra13@illinois.edu
Risk-informed Analysis for Advanced Nuclear Power Reactors: Pipe Reliability Case Study and Lessons Learned
To facilitate the design and licensing of advanced nuclear power reactors, it is imperative to conduct risk-informed analysis prior to, or in parallel with, technology developments. Significant efforts have been dedicated to the development of Probabilistic Risk Assessment (PRA) and the establishment of the risk-informed decision-making framework for advanced reactors, such as the Licensing Modernization Project (LMP), development of Title 10 of the Code of Federal Regulations, Part 53 and other regulatory guidance by the Nuclear Regulatory Commission (NRC), as well as the issuance of the ASME/ANS Non-LWR Probabilistic Risk Assessment Standard (RA-S-1.4-2021). In this realm, the authors’ team has participated in an International Atomic Energy Agency (IAEA) Coordinated Research Project, “Methodology for Assessing Pipe Failure Rates in Advanced Water-Cooled Reactors,” 2018-2021. This presentation summarizes the research findings and lessons learned from the authors’ activities under this IAEA CRP, aimed at advancing the pipe failure rate analysis methodologies for advanced rea...
Paper SA151
Name: Tatsuya Sakurahara (sakurah2@illinois.edu)
Paper 3 MA193
Lead Author: Tatsuya Sakurahara Co-author(s): Sari Alkhatib, sarifa2@illinois.edu; Mohammad Albati, malbati2@illinois.edu; Seyed Reihani, sreihani@illinois.edu; Ernie Kee, erniekee@illinois.edu; Zahra Mohaghegh, zahra13@illinois.edu; Terry L. von Thaden, vonthade@illinois.edu; Richard Kesler, rkesler2@illinois.edu; Farzaneh Masoud, fmasoud2@illinois.edu; Brian Ratté, bdratte@STPEGS.COM; Mary Anne Billings, mabillings@STPEGS.COM
Academia-Industry Collaboration to Advance Fire Probabilistic Risk Assessment of Nuclear Power Plants
This presentation reports on the academia-industry project supported by the U.S. Department of Energy. This project aims to improve the operational efficiency of Nuclear Power Plants (NPPs) by enhancing the realism of the Fire Probabilistic Risk Assessment (PRA). In previous work by the Socio-Technical Risk Analysis (SoTeRiA) Laboratory at the University of Illinois at Urbana-Champaign, the Fire PRA realism associated with fire progression and damage modeling and the modeling of interactions between fire progression and manual suppression was advanced by developing an Integrated PRA (I-PRA) methodological framework. This latest academia-industry project has advanced the Fire I-PRA methodological framework, focusing on the current Fire PRA challenges in the nuclear industry, and scaled up Fire I-PRA to a full-scope plant. This project has been conducted in three phases. Phase I developed a streamlined approach to perform a more efficient screening of fire zones and ignition sources in Fire PRA. The advanced screening approach was mapped to the NUREG/CR-6850 procedure to demonstrate th...
Paper MA193
Name: Tatsuya Sakurahara (sakurah2@illinois.edu)
Paper 4 CY266
Lead Author: Young Ho Chae Co-author(s): Hyeonmin Kim, hyeonmin@kaeri.re.kr
Poong Hyun Seong, phseong1@kaist.ac.kr
Development of a physics informed neural network based simulation methodology for DPSA
A nuclear power plant is a safety-critical system of large size and high complexity. Therefore, various methods have been developed to identify possible accidents and deal with them. Broadly classified, there are experiment-based methods and simulation-based methods.
However, the experiment-based methods have several practical limitations. Therefore, various simulation-based analysis methods were developed. Most simulation-based analysis methods are highly dependent on numerical methods: if the nodalization and time steps are refined to increase the analysis resolution, the computation time tends to increase exponentially.
Therefore, in this paper, we developed an artificial-intelligence-based method to accelerate the simulation. As the algorithm for AI-based simulation, a physics-informed neural network is implemented for convergence speed and extrapolation robustness.
By using the suggested method, dynamic event tree based dynamic probabilistic safety assessment can be cond...
Lead Author: Michelle Bensi Co-author(s): • Katrina Groth (kgroth@umd.edu)
• Zeyun Wu (zwu@vcu.edu)
• Zhegang Ma (zhegang.ma@inl.gov)
• Ray Schneider(schneire@westinghouse.com)
• Somayeh Mohammadi (somayeh@terpmail.umd.edu)
• Tao Liu (liut8@vcu.edu)
• Ahmad Al-Douri (clevine1@umd.edu)
• Camille Levine (clevine1@umd.edu)
Identifying and Prioritizing Sources of Uncertainty in External Hazard Probabilistic Risk Assessment: Project Activities and Progress
There are inherently significant sources of uncertainty in external hazard probabilistic risk assessment (XHPRA) for nuclear power facilities. The state of knowledge and practice associated with these uncertainties varies across hazard groups (e.g., earthquakes, wind, and flooding). There is currently a research need to build upon the existing state of knowledge to develop a technically sound, risk-informed strategy for identifying and characterizing drivers of hazard uncertainty in XHPRA for multiple classes of hazards. This paper summarizes the ongoing progress of a multi-year research project that seeks to: (1) develop a structured process for identifying, evaluating, categorizing, and communicating the impact of uncertainties on XHPRAs, (2) investigate the spectrum of uncertainties associated with realistically parsing hazard information into the XHPRA, (3) understand how uncertainties in the physical hazard characteristics/timing interface with plant response strategies (e.g., plant physics and human reliability), and (4) assess the combined impact of these uncertainties (and ...
Lead Author: Albena Stoyanova Co-author(s): Olivier Nusbaumer (olivier.nusbaumer@kkl.ch)
Pavol Zvoncek (pavol.zvoncek@kkl.ch)
Devi Kompella (devi.kompella@relsafe.co.in)
Karthik Ravichandran (karthik.ravichandran@relsafe.co.in)
Rahul Agarwal (rahul.agarwal@relsafe.co.in)
Seismically-induced Internal Fire and Flood (SIFF) Assessment at Leibstadt NPP, Switzerland
Seismically induced secondary hazards such as seismically-induced fire and flood (SIFF) events have gained greater attention following major earthquakes like the Tōhoku and Virginia earthquakes in 2011 and the Kashiwazaki-Kariwa earthquake in 2007. Secondary hazards/effects proved to be of higher likelihood than originally assumed in the plant design basis. Different organizations such as the USNRC, EPRI, ASME, IAEA, and WENRA have worked to strengthen the requirements and methods for evaluating the protection against such hazards. The Swiss regulatory authority (ENSI) has also issued heightened requirements for seismic verification of existing Nuclear Power Plants (NPPs), mandating deterministic and probabilistic SIFF assessments.
To fulfil ENSI requirements and to stay up-to-date with international developments, KKL performed a systematic and comprehensive assessment of SIFF events based on the state-of-the-art EPRI methodology (EPRI-3002012980), covering both deterministic and probabilistic aspects in an integrated manner.
The main objectives of the project were to: (i) apply a robust, stage-...
Lead Author: Daichi Nagai Co-author(s): Koji Shirai, shirai@criepi.denken.or.jp / Koji Tasaka, kotasaka@criepi.denken.or.jp
Internal Flooding Fragility Experiments using Full-scale Fire Doors to Evaluate Door Failure Water Height and Leakage Flow Rate under Hydrostatic Pressure Loads
The Central Research Institute of Electric Power Industry (CRIEPI) is developing an internal flooding PRA (IF-PRA) guide for nuclear power plants in Japan. One of the questions of interest in internal-flooding scenario modeling is how susceptible structures and components are to failure from water ingress or spray. In IF-PRA, doors can be flooding propagation paths: flooding could propagate to adjacent areas through gaps between doors and frames, as well as through doors opened by the static head due to flooding depths. Therefore, it is very important to clarify the characteristics of flooding propagation modeling through doors. CRIEPI has performed hydrostatic pressure tests using a one-hour-rated fireproof door of the kind often used in nuclear power plants. In this study, a door about 1.0 m wide and 2.1 m high was attached to a 7.5 m3 rectangular water tank. By filling the tank with water until the door swung open due to door latch failure, the water height and leakage flow rate were measured. According to the test results, we found that the door failure could occur...
Lead Author: Marina Roewekamp Co-author(s): John A. Nakoski, John.Nakoski@nrc.gov
Attila Bareith, bareith@nubiki.hu
Dana Havlín Nováková, dana.havlinnovakova@sujb.cz
Consideration of Combined Hazards Within PSA - A WGEV and WGRISK Perspective
After the Fukushima reactor accidents, combinations of hazards have gained more and more attention. As a result, the importance of adequate consideration of hazard combinations, particularly involving external hazards, in all types of safety assessments of nuclear installations has been recognized by the experts involved.
Several national as well as international activities are ongoing with respect to combinations of hazards, including the extension of IAEA Safety Guides to better cover combined hazards, and one of the currently ongoing activities conducted jointly by the OECD Nuclear Energy Agency (NEA) Working Groups on Risk Assessment (WGRISK) and on External Events (WGEV) entitled “Combinations of External Hazards – Hazard and Impact Assessment and Probabilistic Safety Analysis (PSA) for Nuclear Installations”. This activity was initiated in 2020 addressing the state-of-the-art practices used in considering combinations of external hazards in the design and safety assessment of nuclear installations.
The first phase of the activity covered a survey of members of both Worki...
Session T24 - Risk Governance and Societal Safety II
Session Chair: Sai Zhang (sai.zhang@inl.gov)
Paper 1 JO179
Lead Author: Johan Sorman Co-author(s): Yi Zou, Yi.Zou@lr.org
Experience from using a tool for analysing the significance of events at nuclear power installations following the NRC Significance Determination Process
The NRC's significance determination process: “The ‘Significance Determination Process’ (SDP) is an organized, planned process to evaluate the risk or safety significance of conditions, events or findings at nuclear power reactors.” The process is described in detail in the publicly available document, NRC Inspection Manual, Manual Chapter 0609.
This process is not only used by the NRC, but indeed used by nuclear operators to determine the significance of events as well as providing an engineering understanding of their risk and safety significance.
This paper outlines the findings from developing and implementing a tool that supports the SDP.
The tool provides assistance for all the steps in the SDP process. Events that are found to be of low significance are screened out but stored in the database for documentation purposes. Events that are determined to be of significance are further analysed following the SDP. All details of the events are documented in the tool and based on this information, the impact on risk is quantified using the full PSA model. The results are categoriz...
Lead Author: Richard Kim Co-author(s): Tina Diao, tina.diao@aerospacetechnical.com
Madison Coots, madison.coots@aerospacetechnical.com
Perspectives on Managing Risks in Energy Systems
Modern enterprise risk management (ERM) for complex engineered systems is ultimately concerned with making high-quality resource allocation decisions at the organizational level with the goal of minimizing the risk of these systems. Effective ERM is challenging in several ways: 1) it necessitates the continuous assessment of a comprehensive set of risks; 2) it requires an intelligent measure of value and objectives; and 3) it requires a normative decision framework that is grounded in quantitative measures, rather than heuristics. In this paper, we propose a principled approach for ERM to address these pressing challenges in complex systems across numerous industries. We begin by outlining and explaining the methods comprising the Risk Management Toolkit: a set of rigorously tested quantitative methods with a proven track record for bolstering the efficacy of modern ERM programs. We then outline a set of organizational characteristics that we believe play instrumental roles in ensuring effective ERM across an organization. Finally, we use an illustrative example system from the energ...
A PSAM Profile is not yet available for this author.
Paper 3 TH14
Lead Author: Thor Myklebust Co-author(s): Tor Stålhane stalhane@sintef.no
Purchasers and integrators of safety components and products, which information should we ask for?
Several manufacturers of safety products and safety systems have to purchase and integrate components and products produced elsewhere and sometimes for another environment or use. Examples of components and products that manufacturers integrate are microchips, libraries, openSafety protocols, COTS (Commercial Off The Shelf) software, sensors, and valves.
One could divide this integration into three categories: components and products having a (1) SIL (Safety Integrity Level) compatibility certificate, (2) integrator and supplier have DIA (Development Interface Agreement) or similar, and (3) COTS or similar.
This paper focuses on suppliers that deliver components or products, including a SIL compatibility certificate and six other relevant documents (safety manual, safety case including safety-related application conditions (SRAC) and hazard log, safety assessment report, certificate report, and user manual). We start with an explanation of the relevant documents and which safety standards include requirements for such documents.
This paper aims to aid purchasers and integrators wit...
Lead Author: Austin Lewis Co-author(s): Katrina M. Groth (kgroth@umd.edu)
Impact of Complex Engineering System Data Stream Discretization Techniques on the Performance of Dynamic Bayesian Network-Based Health Assessments
Critical infrastructure in the energy and industry sectors is dependent on the reliability of complex engineering systems (CESes), such as nuclear power plants or manufacturing plants; it is important, therefore, to be able to monitor their system health and make informed decisions on maintenance and risk management practices. One proposed approach is the use of a causal-based model such as a Dynamic Bayesian Network (DBN) that contains the structural logic of and provides a graphical representation of the causal relationships within engineering systems. A current challenge in CES modeling is fully understanding how different data stream discretizations used in developing the underlying conditional probability tables (CPTs) impact the DBN's system health estimates. Using a range of metrics designed for comparing health management models, this paper demonstrates the impact that different time discretization strategies have on the performance of DBN models built for CES health assessments. Using simulated nuclear data of a sodium fast reactor (SFR) experiencing a transient overpower (T...
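A small sketch of the discretization sensitivity at issue, comparing how equal-width and quantile binning of the same simulated data stream change the empirical transition probabilities that would populate a DBN's CPTs; the data and bin counts are stand-ins, not the SFR dataset:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 5000)
stream = 350.0 + 40.0 * t + rng.normal(0.0, 2.0, t.size)   # simulated sensor ramp

def transition_cpt(states, n_states):
    # Empirical P(state_t | state_{t-1}) with Laplace smoothing.
    counts = np.ones((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

n_bins = 4
strategies = {
    "equal-width": np.linspace(stream.min(), stream.max(), n_bins + 1)[1:-1],
    "quantile":    np.quantile(stream, np.linspace(0.0, 1.0, n_bins + 1)[1:-1]),
}
for name, edges in strategies.items():
    cpt = transition_cpt(np.digitize(stream, edges), n_bins)
    print(f"{name:11s} self-transition probabilities:", np.round(np.diag(cpt), 3))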
Paper 2 JO134
Lead Author: Josefin Lindström Co-author(s): Jonas Johansson, jonas.johansson@risk.lth.se
Towards Conceptualizing and Modelling Critical Flows
Continuous access to flows of goods and services, such as energy, transport, information, and food, is essential and the basis for functioning modern societies. Hence, they are critical and need to be secured, as highlighted in USA Presidential Policy Directive (PPD-21) and EU Directives (EPCIP, NIS). Mega-trends such as climate change, changing geopolitical environment, hybrid- and cyber threats, globalization of supply chains, and rapid technological advances come with new challenges and emerging threats to securing flows. Given the complexities involved in securing critical flows, the focus should not only be on protection but also resilience. Critical flows are fundamentally enabled and accommodated by critical infrastructures and supply chains of diverse natures. Further, the interconnectedness of infrastructures and leanness of supply chains lead to an increased vulnerability where no single entity has a comprehensive overview and understanding of their interconnected form and function, i.e., their joint structure, connectivity, flow volumes, and spatial and temporal distribu...
Reliability of CFRP-Prestressed Concrete Girders for Highway Bridges
This research discusses the reliability of concrete bridge girders prestressed with carbon fiber reinforced polymer (CFRP) composite tendons. Of interest is the calibration of strength reduction factors to accommodate the safe operation of such bridge structures. Benchmark superstructures are designed in accordance with the American Association of State Highway Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) Bridge Design Specifications (BDS) and the American Concrete Institute Committee 440 document. Uncertainty is taken into consideration through stochastic modeling, which is validated against published data. Design proposals are provided for practitioners to implement....
Lead Author: Shunichi Tada Co-author(s): Taro Kanno, kanno@sys.t.u-tokyo.ac.jp
Applying the Genetic Algorithm for Finding the Worst Scenario for the Post-Disaster Recovery of Water Distribution Network
Because water is an essential resource for numerous activities, a water distribution network (WDN) is one of the most important lifelines; therefore, considerations must be made to prepare for the restoration of WDNs during post-disaster periods. To evaluate restoration plans for damaged pipes, we are developing an agent-based simulation that can reproduce the restoration processes of WDNs and how the restoration plan affects the performance of each subsystem of a city during post-disaster periods. In much previous research, the damage scenario was manually generated; the number of damaged pipes was estimated by an empirical equation considering the magnitude of the earthquake and the properties of the pipes, while the geographical distribution of the damaged pipes was randomly selected. In our previous research, it was found that the performance of the restoration plan is highly dependent on the geographical distribution of the damaged pipes. Therefore, it is difficult to appropriately evaluate the resilience of WDNs using such randomly generated scenarios; rather, it is necessa...
Name: Shunichi Tada (tada-shunichi513@g.ecc.u-tokyo.ac.jp)
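A compact sketch of the genetic search described in the abstract above, with a trivial surrogate standing in for the agent-based restoration simulation; the pipe count, criticality weights, and GA settings are all hypothetical:

import numpy as np

rng = np.random.default_rng(1)
N_PIPES, N_DAMAGED, POP, GENS = 40, 8, 60, 80
criticality = rng.random(N_PIPES)        # stand-in for each pipe's hydraulic importance

def fitness(genome):
    # Surrogate for the restoration simulation: impact grows with summed criticality.
    return criticality[genome].sum()

def repair(genome):
    # Keep exactly N_DAMAGED distinct pipes after crossover/mutation.
    genome = np.unique(genome)
    while genome.size < N_DAMAGED:
        genome = np.union1d(genome, rng.integers(N_PIPES, size=1))
    return genome

pop = [rng.choice(N_PIPES, size=N_DAMAGED, replace=False) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)   # keep the most damaging scenarios
    parents, children = pop[: POP // 2], []
    while len(children) < POP - POP // 2:
        a, b = rng.choice(POP // 2, size=2, replace=False)
        child = rng.choice(np.union1d(parents[a], parents[b]), size=N_DAMAGED, replace=False)
        if rng.random() < 0.2:            # mutation: re-draw one damaged pipe
            child[rng.integers(N_DAMAGED)] = rng.integers(N_PIPES)
        children.append(repair(child))
    pop = parents + children

best = max(pop, key=fitness)
print("worst-case damaged pipes:", sorted(best.tolist()), f"impact = {fitness(best):.2f}")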
Session W01 - Physical Security I
Session Chair: Gregory Wyss (gdwyss@sandia.gov)
Paper 1 SH288
Lead Author: Shawn St. Germain Co-author(s): Robby Christian, robby.christian@inl.gov
Vaibhav Yadav, vaibhav.yadav@inl.gov
Steven Prescott, steven.prescott@inl.gov
A Risk-Informed Approach to Linked Safety-Security Modeling
The requirements for U.S. nuclear power plants to maintain a large onsite physical security force contribute to their operational costs. The cost of maintaining the current physical security posture is approximately 10% of the overall operation and maintenance budget for commercial nuclear power plants. The goal of the Light Water Reactor Sustainability (LWRS) Program Physical Security Pathway is to develop tools, methods, and technologies and provide the technical basis for an optimized physical security posture. The conservatisms built into current security postures may be analyzed and minimized in order to reduce security costs while still ensuring adequate security and operational safety. The research performed at Idaho National Laboratory within the LWRS Program Physical Security Pathway has successfully developed a dynamic force-on-force (FOF) modeling framework using various computer simulation tools and integrated them with the dynamic assessment Event Modeling Risk Assessment using Linked Diagrams (EMRALD) tool....
Lead Author: Brian Cohn Co-author(s): Emily Sandt, esandt@sandia.gov
Douglas Osborn, dosborn@sandia.gov
Tunc Aldemir, aldemir.1@osu.edu
A Dynamic, Integrated Approach to Vital Area Identification
The Vital Area Identification (VAI) process is a widely used method to determine which locations at a nuclear power plant (NPP) site need to be protected from sabotage. The intent of VAI is to identify a combination of systems that, if successfully protected, ensure that adversary sabotage cannot cause significant core damage. However, the VAI process does not consider what happens if a vital area is sabotaged by adversaries. Security analysis assumes that the sabotage of any vital area results in an imminent onset of core damage, even if there is other, non-vital, equipment that could be used to perform the same function as the sabotaged equipment.
Integrated safety-security (2S) assessment using dynamic probabilistic risk assessment (DPRA) has been explored as a method to determine the consequences of sabotage of a vital area, and previous efforts have successfully demonstrated that the 2S methodologies are able to incorporate the loss of reactor safety systems and mitigation efforts on the reactor response for a previously identified attack scenario. However, current methods are u...
Lead Author: Andrew Thompson Co-author(s): Dusty Brooks, dbrooks@sandia.gov
Douglas Osborn dosborn@sandia.gov
Risk-Informing Access Delay Timelines
The Light Water Reactor Sustainability (LWRS) program has developed a new method to modernize how access delay timelines are developed and utilized in physical security system evaluation. This new method utilizes Bayesian methods to combine subject matter expert (SME) judgement and small performance test datasets in a consistent and defensible way. It will enable a more holistic view of delay performance that provides distributions of task times and task success probabilities to account for tasks that, if failed, would result in failure of the attack.
Using the current methods, access delay timelines rely on reported data from tests where possible, and on SME judgement to help fill in any blanks that exist in the testing. This data is generally reported using a single time rather than distributions, or as a triangular distribution centered around the minimum time from the test, with minimum and maximum assumed to be +/- 50% of this mean. However, these assumptions are not always realistic and can result in overly conservative timeline risk. The key driver for considering a change in ...
This paper presents a framework for calculating the value of deterrence related to countermeasures implemented to mitigate an attack by an adaptive adversary. We present a methodology for adapting Defender-Attacker Decision Trees to partition the utility of countermeasures into three components: (1) threat reduction (deterrence), (2) vulnerability reduction, and (3) consequence mitigation. The Expected Utility of Imperfect Control (EUIC) attributable to a specific implementation of the countermeasure is based on calculations from decision analysis and is defined as the difference in the expected utilities of the no-countermeasure branch and the branch representing the countermeasure variant (Johnson & Tani, 2013; McNamee & Celona, 2009). The EUIC represents the net benefit of implementing the countermeasure, including all costs associated with development, implementation, and operation. Benefits largely derive from three sources: (1) changes in attack probability (threat reduction), (2) changes in detection probability (vulnerability reduction), and (3) changes in the distribution of a...
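A minimal numeric sketch of the EUIC comparison described above, with hypothetical probabilities, losses, and costs; the sign convention is chosen so that a beneficial countermeasure yields a positive EUIC:

def expected_utility(p_attack, p_success, loss, cost=0.0):
    # Utility as negative expected cost: attack consequences plus countermeasure cost.
    return -(p_attack * p_success * loss + cost)

eu_none = expected_utility(p_attack=0.10, p_success=0.50, loss=1000.0)

# Countermeasure variant: deters some attacks (threat reduction), lowers the chance
# an attempted attack succeeds (vulnerability reduction), and costs 5 units to field.
eu_cm = expected_utility(p_attack=0.06, p_success=0.30, loss=1000.0, cost=5.0)

euic = eu_cm - eu_none   # net benefit of the (imperfect) control
print(f"EU(no CM) = {eu_none:.1f}, EU(CM) = {eu_cm:.1f}, EUIC = {euic:.1f}")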
Session Chair: Curtis Smith (curtis.smith@inl.gov)
Paper 1 CU58
Lead Author: Curtis Smith
Observations and Results from a Benchmark on External Events Hazard Frequency and Magnitude Statistical Modelling
The March 2011 accident at the Fukushima Daiichi nuclear power plant triggered discussions about the natural external events that are low-frequency but high-consequence. To address these issues and determine which events would benefit from international co-operative work, a Nuclear Energy Agency Task Group on Natural External Events (TGNEV) was established. In June 2014, this group reorganised into a Working Group on External Events (WGEV). WGEV is composed of a forum of experts for the exchange of information and experience on external events in member countries, thereby promoting co-operation and maintenance of an effective and efficient network of experts. A recent activity of this group has focused on conducting a benchmark on external events hazard frequency and magnitude statistical modelling. Modelling of these external events is a common practice in hazards and risk assessments found in many countries. Having a valid statistical approach to model these hazards is important. However, current practice indicates a wide variety of approaches being used and an under appreciation o...
Lead Author: Yasser Hamdi Co-author(s): Vincent Rebour - vincent.rebour@irsn.fr
OECD/NEA Benchmark: External Events Hazard Frequency and Magnitude Statistical Modelling - IRSN approach
A general feature of external hazards is that they produce off-normal conditions that can impact nuclear installations. Scenario- and frequency-based risk analysis allows a more reliable practice by enabling key stakeholders to make risk-informed choices rather than simply relying on traditional deterministic estimates of risk with a brief description of uncertainty. This benchmark is conducted to facilitate an exercise on the estimation of extreme external events. The aim of the benchmark is to better understand the technical aspects and processes used for the characterization of natural hazards. Data and overall objectives for the benchmarking exercise are presented for a hypothetical external event (e.g., precipitation, extreme temperatures, high winds). Our analysis steps, assumptions, and modelling results are summarized. Uncertainties are generated by the fact that the provided synthetic data is relatively uninformative. The use of this synthetic-data-based approach allows one to evaluate the proposed statistical model and to add known uncertainty to the data. Two cases are provid...
Lead Author: Beom-Jin Kim Co-author(s): Minkyu Kim, minkyu@kaeri.re.kr
Daegi Hahm, dhahm@kaeri.re.kr
Benchmark on External Events Hazard Frequency and Magnitude Statistical Modelling in KAERI
This benchmark study aims to apply statistical modeling for frequency and magnitude estimation based on external event hazard assessment data. Based on the results of this study, it is believed that an approach to the quantification of external event initiating events (IEs) can be formulated and evaluated through the application of an effective statistical model. This study's analysis was based on two cases that considered benchmarks provided by the Organization for Economic Co-operation and Development (OECD). Each case was given a magnitude according to the return period. An appropriate statistical model was applied through regression analysis for each case based on this data. Based on the results, the magnitudes for return periods of 500, 5,000, 50,000, and 500,000 years were predicted and presented.
The result of this study, statistical analysis was applied to the estimation of two cases presented by the OECD. In any statistical analysis, it is important to understand the characteristics of the data set. For the given problems here, the range of the return period was 10–10,000 years, while that o...
This paper presents a framework for calculating the value of deterrence related to countermeasures implemented to mitigate an attack by an adaptive adversary. We present a methodology for adapting Defender-Attacker Decision Trees to partition the utility of countermeasures into three components: (1) threat reduction (deterrence), (2) vulnerability reduction, and (3) consequence mitigation. The Expected Utility of Imperfect Control (EUIC) attributable to a specific implementation of the countermeasure is based on calculations from decision analysis and is defined as the difference in the expected utilities of the no-countermeasure branch and the branch representing the countermeasure variant (Johnson & Tani, 2013; McNamee & Celona, 2009). The EUIC represents the net benefit of implementing the countermeasure, including all costs associated with development, implementation, and operation. Benefits largely derive from three sources: (1) changes in attack probability (threat reduction), (2) changes in detection probability (vulnerability reduction), and (3) changes in the distribution of a...
Session W03 - Human Reliability Analysis III (HUNTER)
Session Chair: Andreas Bye (andreas.bye@ife.no)
Paper 1 RO173
Lead Author: Ronald Boring Co-author(s): Thomas Ulrich
thomas.ulrich@inl.gov
Jooyoung Park
jooyoung.park@inl.gov
Jeeyea Ahn
jeeyea.ahn@inl.gov
Yunyeong Heo
yunyeong.heo@inl.gov
The HUNTER Dynamic Human Reliability Analysis Tool: Overview of the Software Framework for Modeling Digital Human Twins
The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) was previously developed as a simplified test case for dynamic human reliability analysis (HRA). HUNTER1 combined a dynamicized version of the SPAR-H HRA method that autocalculated the effects of performance shaping factors (PSFs), an implementation of the GOMS-HRA method to compute time for modeled human tasks, and an interface between the RAVEN modeling environment and the RELAP5 thermal-hydraulics code. In this manner, a simple implementation of a virtual operator was coupled to a virtual plant model. To mature this framework, HUNTER2 has been initiated. HUNTER2 seeks to scale the earlier proof-of-concept demonstration into a software toolkit that can be deployed to support industry needs for dynamic HRA.
One challenge that has existed with many dynamic HRA approaches is the need for bespoke software implementations for each modeled scenario. To avoid this, we are creating a scalable framework that can interface with different modeling tools as modules. The crucial elements of HUNTER2 are: (1) a task-mode...
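As an illustration of the dynamicized SPAR-H calculation mentioned above, the sketch below (our construction, not the HUNTER source code) scales a nominal human error probability by context-driven PSF multipliers. The multiplier values follow published SPAR-H tables, while the scenario assignments and time-step coupling are hypothetical.

    # Illustrative sketch only: nominal HEP scaled by SPAR-H PSF multipliers.
    NOMINAL_HEP_DIAGNOSIS = 0.01

    def spar_h_hep(nominal, psf_multipliers):
        """SPAR-H HEP with the adjustment used when the composite PSF is large."""
        composite = 1.0
        for m in psf_multipliers.values():
            composite *= m
        # SPAR-H adjustment factor keeps the resulting HEP bounded below 1.0
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)

    # Hypothetical context-driven PSF assignment for one simulation time step
    psfs = {"available_time": 10.0,   # barely adequate time
            "stress": 2.0,            # high stress
            "complexity": 2.0}        # moderately complex task

    print(f"dynamic HEP = {spar_h_hep(NOMINAL_HEP_DIAGNOSIS, psfs):.3f}")

In a dynamic setting, the PSF assignments would be recomputed at each time step from the simulated plant state rather than fixed as above.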
Lead Author: Tom Ulrich Co-author(s): Ronald L Boring, ronald.boring@inl.gov
Jeeyea Ahn, jeeyea.ahn@inl.gov
Yunyeong Heo, yunyeong.heo@inl.gov
Jooyoung Park, jooyoung.park@inl.gov
Roger Lew, rogerlew@uidaho.edu
The HUNTER Dynamic Human Reliability Analysis Tool: Procedurally Driven Operator Simulation
This paper describes the software implementation of the HUNTER dynamic human reliability analysis (HRA) framework. The HUNTER software tool is a dynamic HRA simulation driven by an operator model defined through existing operating procedures. The software attempts to create a simplified approach to dynamic HRA that does not rely on complex cognitive modeling and therefore provides a more accessible tool for analysts.
Traditional HRA models human errors by building a static model of the operators’ activities surrounding the predefined human failure event. Retroactive analysis entails gathering information around a known event, modeling the event, and then extrapolating the failure to other aspects of the system in a prospective manner to determine human error opportunities in other parts of the system that could be impacted by the type of modelled event. Historically, this is a largely manual task performed by an analyst or team of analysts and, as a result, traditional HRA often suffers from a degree of variability across analyses. Dynamic HRA provides an opportunity to more objectivel...
Lead Author: Yunyeong Heo Co-author(s): Thomas A. Ulrich, thomas.ulrich@inl.gov
Ronald L. Boring, ronald.boring@inl.gov
Jooyoung Park, Jooyoung.Park@inl.gov
Jeeyea Ahn, jeeya@unist.ac.kr
The HUNTER Dynamic Human Reliability Analysis Tool: Coupling an External Plant Code
The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) is a framework to support dynamic human reliability analysis (HRA), with the aim of developing standalone software to perform dynamic HRA calculations. Within HRA, human actions in nuclear power plants (NPPs) are predicated on plant states, and human actions in turn influence the plant. In other words, plant operations are necessarily recursive, and it becomes challenging to model complex human-plant interactions. Consequently, we have linked two software simulations that complement each other's shortcomings. RELAP5-3D—the Reactor Excursion and Leak Analysis Program (RELAP; Aumiller, Tomlinson, and Bauer 2001)—is the foundational thermal-hydraulic software used to model nuclear systems. Using RELAP5-3D, we have simulated plant operations proceeding according to procedures developed to address emergent situations in NPPs. Plant operations include various actions such as the operator checking plant parameters, as well as actions that are performed continuously over time until a specific parameter reaches certain crite...
Lead Author: Jooyoung Park Co-author(s): Ronald Boring, Ronald.Boring@inl.gov
Thomas Ulrich, Thomas.Ulrich@inl.gov
Jeeyea Ahn, Jeeyea.Ahn@inl.gov
Yunyeong Heo, Yunyeong.Heo@inl.gov
The HUNTER Dynamic Human Reliability Analysis Tool: Development of a Module for Performance Shaping Factors
The Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) is a framework to support dynamic human reliability analysis (HRA) in communication with a variety of methods and tools. The HRA research team at Idaho National Laboratory has developed the HUNTER framework to meet industry HRA needs with the support of the Risk-Informed System Analysis (RISA) pathway of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program. The existing HUNTER (i.e., HUNTER 1.0) conceptually proposed how dynamic HRA could be performed using information from thermal-hydraulics codes, cognitive models, HRA methods, and procedures, while the current HUNTER project (i.e., HUNTER 2.0) aims to systematically design the HUNTER modules and their functions, then develop standalone HUNTER software to implement the dynamic HRA calculation. This paper introduces the development of one of the HUNTER modules, the performance shaping factor (PSF) module. A PSF is any factor that influences human performance, such as workload or complexity. It has b...
Assessment of Input Errors in Schedule Risk Analysis (SRA) of Multidisciplinary Projects
Schedule Risk Analysis (SRA) results for multidisciplinary projects can be influenced by multiple factors. This paper presents an assessment, with examples, of the most relevant input errors related to input data collection, to the planning and arrangement of the SRA process within the project structure, and to quantitative computations. Regarding input data collection, recommendations are provided on choosing relevant risk categories, on creating a supportive climate so that project members are ready to provide reliable input, and on how a country or region's institutions (authorities, history, tradition) might influence the decision-making process. Regarding the arrangement of the SRA process within the project, the focus is on structuring the process so that it delivers accurate results without compromising personnel performance. For the quantitative computations, the author offers thoughts on which variables are most crucial and useful as input, on how to perform the computations, and on how to compute and properly analyze the Cumulative Distribution Function (CDF) diagram. In the paper a special...
Lead Author: Diego Mandelli Co-author(s): C. Wang: Congjian.Wang@inl.gov
S. Lawrence: Svetlana.Lawrence@inl.gov
D. Morton: david.morton@northwestern.edu
I. Popova: ivilina.popova@gmail.com
S. Hess: SHess@jensenhughes.com
Bridging equipment reliability data and risk informed decisions in a plant operation context
Industry equipment reliability and asset management programs are essential elements that help ensure the safe and economical operation of nuclear power plants. The effectiveness of these programs is addressed in several industry-developed and regulatory programs.
The Risk-Informed Asset Management (RIAM) project is tasked to develop tools in support of the equipment reliability and asset management programs at nuclear power plants. These tools are designed to create a direct bridge between component health/lifecycle data and decision making (e.g., maintenance scheduling and project prioritization).
The goal of this article is to provide a guide for specific use cases that the RIAM project is targeting. We have grouped use cases into three main areas. The first area focuses on the analysis of equipment reliability data with a particular emphasis on condition-based data, such as test/surveillance reports and component monitoring data. The second area focuses on the integration of equipment reliability into system/plant reliability models to determine system/plant health and identify...
Lead Author: Ben Chen Co-author(s): David Grabaskas, dgrabaskas@anl.gov
Richard Denning, denningrs.8@gmail.com
The Regulatory Treatment of Low Frequency External Events: Initial Insights
To assist the developing advanced reactor industry in future licensing efforts, the DOE Advanced Reactor Demonstration Program (ARDP) Regulatory Development area initiated a project at Argonne National Laboratory to examine the regulatory treatment of external hazards as part of a risk-informed performance-based (RIPB) licensing framework. A RIPB licensing framework for advanced reactors built on establishing an affirmative safety case offers the benefits of increased flexibility regarding key design and licensing decisions based on a detailed assessment and understanding of plant risk.
Historically, reactor licensing addressed events of very low frequency primarily through the application of design margin and defense-in-depth (DID) philosophy. In contrast, RIPB approaches attempt to evaluate these scenarios at a level of detail commensurate with their risk, which often necessitates an explicit treatment of their frequency and associated consequence. While the detailed analysis of low frequency events provides insights that can help justify alternative treatments to past conservatis...
Lead Author: Marina Roewekamp Co-author(s): Joshua Gordon, Office for Nuclear Regulation (ONR), United Kingdom
Christian Müller, Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) gGmbH, Germany
Recent Tasks of the OECD Nuclear Energy Agency Working Group WGRISK – An Overview
The overall objective of the OECD Nuclear Energy Agency (NEA) Working Group on Risk Assessment (WGRISK) is to continuously advance and extend the understanding of probabilistic safety assessment (PSA) and to facilitate its use and application as an important tool for nuclear safety assessment. WGRISK therefore conducts a variety of activities for risk-related information exchange between member countries' experts, enhancing the use of this tool in order to improve safety.
In the recent past, WGRISK successfully completed an activity on “Comparative Application of DIGital I&C Modelling Approaches for PSA (DIGMAP)”. A follow-on activity titled “DIGMORE – A Realistic Comparative Application of DI&C Modelling Approaches for PSA” aims to support improvement of the probabilistic assessment methodology by providing guidance for PSAs with respect to digital I&C (DI&C) systems, including relevant aspects of their hardware and software.
Other currently ongoing or to be started activities are the following:
- “PSA for Reactor Facilities of a Singular Design”, where a worksh...
Session W05 - Uncertainty, Sensitivity, and Bayesian Methods
Session Chair: Daniel Clayton (djclayt@sandia.gov)
Paper 1 JA104
Lead Author: Jan Soedingrekso Co-author(s): Tanja Eraerds tanja.eraerds@grs.de
Martina Kloos martina.kloos@grs.de
Jörg Peschke joerg.peschke@grs.de
Josef Scheuer josef.scheuer@grs.de
Probabilistic Evaluation of Critical Scenarios with Adaptive Monte-Carlo Simulations Using the Software Tool SUSA
The uncertainties of an accident analysis can be addressed by performing Monte-Carlo simulations within the so-called best-estimate plus uncertainty (BEPU) approach. By varying the uncertain input parameters and running the respective simulations of a deterministic code, tolerance intervals of the safety-relevant simulation result can be calculated using, for instance, the software tool for uncertainty and sensitivity analysis, SUSA.
However, the analysis of critical scenarios, which are usually rare events, requires a large number of simulations to accurately describe the underlying parameter spaces and to quantify the probability for critical scenarios. By incorporating adaptive sampling methods in the Monte-Carlo simulation, these rare scenarios can be evaluated probabilistically with reasonable computational effort. Three adaptive sampling methods have been implemented in SUSA to determine the parameter space leading to rare critical scenarios and to estimate the probability for these scenarios. The first approach applies a support vector regression metamodel in the frame of a su...
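As a schematic illustration of the first approach named above, the sketch below (ours, not the SUSA implementation) adaptively enriches a support-vector-regression metamodel near a critical threshold and then estimates the rare-event probability from the metamodel. The "simulator", the threshold, and all numbers are hypothetical stand-ins for a deterministic code.

    # Illustrative only: adaptive enrichment of an SVR metamodel near a
    # critical threshold, then metamodel-based rare-event estimation.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(42)

    def simulator(x):
        """Hypothetical stand-in for an expensive deterministic code."""
        return x[:, 0] ** 2 + 0.5 * x[:, 1]

    THRESHOLD = 6.0                       # responses above this are "critical"
    X = rng.normal(size=(50, 2))          # initial space-filling design
    y = simulator(X)

    for _ in range(10):                   # adaptive refinement loop
        meta = SVR(C=100.0).fit(X, y)
        cand = rng.normal(size=(5000, 2))              # cheap candidate pool
        idx = np.argsort(np.abs(meta.predict(cand) - THRESHOLD))[:5]
        X = np.vstack([X, cand[idx]])     # add points nearest the threshold
        y = np.append(y, simulator(cand[idx]))

    meta = SVR(C=100.0).fit(X, y)         # final metamodel on enriched design
    sample = rng.normal(size=(200_000, 2))
    p_crit = float(np.mean(meta.predict(sample) > THRESHOLD))
    print(f"estimated critical-scenario probability: {p_crit:.2e}")

The design choice here is to spend expensive simulator calls only where the metamodel is closest to the critical limit, which is the essential economy behind all three adaptive methods the abstract mentions.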
Lead Author: Kurt Vedros Co-author(s): Robby Christian, robby.christian@inl.gov
Austin Glover, amglove@sandia.gov
Curtis Smith, curtis.smith@inl.gov
Probabilistic Risk Assessment of a Light Water Reactor Coupled with a High-Temperature Electrolysis Hydrogen Production Plant – Part 2: Hazards Assessment and PRA
Generic PRAs have been performed for the addition of a heat extraction system to a pressurized-water reactor and a boiling-water reactor. The results investigate the applicability of potential licensing approaches that might not require a full U.S. Nuclear Regulatory Commission (NRC) license amendment request (LAR) review. The PRAs draw on the design data for the heat delivery and high-temperature electrolysis facility developed by the Light Water Reactor Sustainability Program. The results of the PRAs indicate that application using the licensing approach in 10 CFR 50.59 is justified because of the minimal increase in initiating event frequencies for all design basis accidents (DBAs), none exceeding 5.6%. The PRA results for core damage frequency (CDF) and large early release frequency (LERF) support the use of Regulatory Guide 1.174 as further risk information that supports a change without a full LAR. Further insights provided through hazard analysis and sensitivity studies confirm with high confidence that the safety case for licensing an HES addition and an HTEF sited at 1.0 km from...
Uncertainty Analysis of Dynamic PRA Using Nested Monte Carlo Simulations and Multi-Fidelity Models
Uncertainty gives rise to risk. For nuclear power plants, probabilistic risk assessment (PRA) systematically organizes what is known in order to estimate uncertainty, e.g., in the form of the risk triplet. However, epistemic uncertainty exists because of lack of knowledge and model simplification. Capable of developing a definite risk profile for decision making under uncertainty, dynamic PRA widely uses more explicit modeling techniques such as simulation to generate scenarios, estimate likelihoods/probabilities, and evaluate consequences. This paper analyzes the uncertainties in PRA using iterative Monte Carlo simulation of a boiling water reactor (BWR) plant. To alleviate the computational burden of Monte Carlo simulation, multi-fidelity models are introduced into the dynamic PRA. The authors propose a multi-fidelity Monte Carlo method with adaptive model selection between a high-fidelity model (MELCOR 2.2) and a low-fidelity model (machine learning). As a result, the analysis is expected to provide uncertainty information from the perspectives of neglected accident scenarios, probabilit...
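A minimal sketch of the adaptive model-selection idea described above: a cheap low-fidelity model screens each Monte-Carlo sample, and the expensive high-fidelity model is invoked only when the low-fidelity prediction falls inside an ambiguity band around the failure threshold. Both models and all numbers below are hypothetical placeholders, not MELCOR or the authors' surrogate.

    # Illustrative only: adaptive selection between a cheap surrogate and an
    # expensive model inside a Monte-Carlo loop. Both models are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    THRESHOLD, BAND = 1.0, 0.2           # failure limit and ambiguity band

    def high_fidelity(x):                # stand-in for an expensive code run
        return np.sin(3 * x) + x

    def low_fidelity(x):                 # stand-in for a trained surrogate
        return 0.9 * np.sin(3 * x) + x + 0.05

    n, hf_calls, failures = 100_000, 0, 0
    for x in rng.normal(size=n):
        y = low_fidelity(x)
        if abs(y - THRESHOLD) < BAND:    # surrogate too close to call
            y = high_fidelity(x)         # escalate to the expensive model
            hf_calls += 1
        failures += y > THRESHOLD

    print(f"failure probability ~ {failures / n:.4f}; "
          f"high-fidelity calls: {hf_calls} of {n}")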
Lead Author: Ilkka Karanta Co-author(s): Tero Tyrväinen tero.tyrvainen@vtt.fi
Conservative methods for frequency estimation of combined external events
Initiating event frequency estimates should be conservative, so that risk estimates are not downplayed, but not so conservative that they hamper demonstrating the achievement of safety goals. We consider this problem in the case of statistical frequency estimation of combined external events, primarily for probabilistic risk assessment of nuclear facilities. An additional problem there is the scarcity of positive instances in which each event (e.g., wind and sea level) exceeds the design basis within some short time window. It is likely that measurement data at or near a site do not contain a single instance of the combined event, and therefore estimates that utilize the available data tend to be non-conservative unless this problem with the data is taken into account. Major Bayesian approaches to IE frequency estimation are reviewed. Finnish practical experience in plant PRA is described. Potential approaches to solving the problem are described; some are based on the utilization of quasi-Bayesian models, others on the generation of synthetic data. The ben...
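As a worked example of the zero-observation situation described above (our illustration, not the paper's method): with a Jeffreys prior on a Poisson rate, the posterior after observing n events in T years is Gamma(n + 0.5, T), which yields a usable, mildly conservative estimate even when n = 0. The site data below are invented.

    # Worked example with invented data: Jeffreys-prior Poisson estimate.
    from scipy.stats import gamma

    n_events, T_years = 0, 40.0          # no joint exceedance observed in 40 y
    posterior = gamma(a=n_events + 0.5, scale=1.0 / T_years)

    print(f"posterior mean frequency: {posterior.mean():.3e} per year")
    print(f"95th percentile         : {posterior.ppf(0.95):.3e} per year")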
Paper IL333
A PSAM Profile is not yet available for this author.
Session W11 - NASA I
Session Chair: Matthew Forsbacka (matthew.j.forsbacka@nasa.gov)
Paper 1 AS75
Lead Author: Ashley Coates Co-author(s): Scott L. Lawrence scott.l.lawrence@nasa.gov
Donovan L. Mathias donovan.mathias@nasa.gov
Brian J. Cantwell cantwell@stanford.edu
Numerical Investigation of Flame Propagation for Explosion Risk Modeling Development
Understanding the risk associated with uncontained rocket engine failures is critical to ensuring the safety of crew, personnel, and equipment. While there are several approaches to conducting probabilistic risk analyses for these scenarios, developing and using an engineering-level risk assessment model informed by numerical simulations and/or experimental data is the most efficient approach. The current engineering-level model used in support of NASA’s Space Launch System (SLS) program requires the flame speed as an input. Understanding the flame speed under different conditions is therefore a key aspect to determining appropriate blast overpressures and drives the need for flame propagation characteristics to be further studied.
This numerical study seeks to address the need for further flame speed characterization by simulating flame propagation through a hydrogen-oxygen mixture and comparing simulation results with experimental data and engineering model results. The simulations consider variations in initial pressure and velocity distribution to identify influencing paramete...
Lead Author: Andrei Gribok Co-author(s): Arman Seuylemezian arman.seuylemezian@jpl.nasa.gov
Performance of Shrinkage Estimators for Bioburden Density Calculations in Planetary Protection Probabilistic Risk Assessment
Planetary protection (PP) is a discipline that focuses on minimizing the biological contamination of spacecraft to ensure compliance with international policy. The National Aeronautics and Space Administration (NASA) has developed a set of requirements (NPR 8715.24), based on recommendations from the Committee on Space Research (COSPAR), with which each mission must comply regarding both forward and backward planetary protection. Biological cleanliness requirements for target bodies, such as Mars, include spacecraft assembly control and direct testing of the microbial bioburden to comply with planetary protection requirements. Constraints include a spore limit for Mars missions of 5×10^5 spores at launch, with a maximum limit of 300 spores/m2 bioburden density on flight hardware surfaces, while preventing recontamination by utilizing International Organization for Standardization (ISO) 8 or better cleanroom environments. The data for each component are collected using either swabs or wipes. For each component, a number of samples are collected on one given date or on several d...
Probabilistic Modeling of Recovery Efficiency of Sampling Devices used in Planetary Protection Bioburden Estimation
Microbial contamination has been of concern to the planetary protection (PP) discipline since the Viking missions in the 1970s. In order to mitigate this risk and ensure compliance with international policy, the PP discipline continually monitors the microbial bioburden present on spacecraft and associated surfaces. Spacecraft missions destined for other planetary bodies must abide by a set of requirements put forth by NASA based on recommendations from the Committee on Space Research (COSPAR). Compliance with these biological cleanliness requirements is demonstrated by direct sampling of spacecraft hardware and associated surfaces to enumerate the number of microorganisms present on the surface. The PP discipline has employed a variety of tools to perform direct sampling, including four different types of swabs (Cotton Puritan, Cotton Copan, Polyester, and Nylon Flocked) as well as two different types of wipes (TX3211 and TX3224), which are typically used to sample surfaces no larger than 25 cm2 and 1 m2, respectively. The sampling efficiency of these devices is a critical parameter used to generate spacecraft-level cleanliness estimates.
In this study, we investigate how recovery efficiency differs by inoculum amount and species. This is analyzed across different sampling devices using a set of microbial organisms applied to stainless steel surfaces. Two different recovery techniques were employed, the NASA standard assay and the European Space Agency (ESA) standard assay, along with two different plating techniques: the Milliflex filtration method and direct plating. Data were analyzed by first developing a probabilistic model of the end-to-end experimental process, capturing uncertainty from the inoculation of species onto the coupon through recovery and growth. The model aims to quantify the mean recovery efficiency, a key metric for understanding the probability that an individual microorganism is recovered and for predicting the number of microorganisms present on a surface. A cost function was developed to compare the recovery efficiency of various sampling devices and processes and to identify those that provide optimal bioburden estimation capability. The results suggest the nylon flocked swab and the TX3211 wipe yielded the highest recovery efficiency and optimal bioburden estimation capability. Results from this study will be integrated into the Bayesian statistical framework used to perform bioburden calculations for demonstrating PP requirements compliance.
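One minimal way to quantify a mean recovery efficiency with uncertainty, in the spirit of the probabilistic model described above (a sketch under our own assumptions, not the study's actual model): treat each recovered microorganism as a Bernoulli success and place a Beta prior on the efficiency. The counts below are invented.

    # Illustrative only: Beta posterior for recovery efficiency (invented data).
    from scipy.stats import beta

    inoculated, recovered = 1000, 620    # hypothetical coupon counts
    posterior = beta(a=recovered + 0.5, b=inoculated - recovered + 0.5)

    lo, hi = posterior.ppf(0.025), posterior.ppf(0.975)
    print(f"mean recovery efficiency: {posterior.mean():.3f}")
    print(f"95% credible interval   : ({lo:.3f}, {hi:.3f})")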
Lead Author: Anthony Diventi Co-author(s): Matt Forsbacka, Matthew.j.Forsbacka@nasa.gov
Nancy Lindsey, Nancy.j.Lindsey@nasa.gov
Steven Cornford, Steven.l.Cornford@jpl.nasa.gov
Martin Feather, Martin.s.Feather@jpl.nasa.gov
Transformative benefits emerging from NASA OSMA's evolving Data-Centric and Model-Based Policy framework
The evolution from “document-centric” to “data-centric” information leveraging structured data and model-based approaches is at the heart of digital engineering transformational efforts underway across industry and government. It is these approaches that pave the way for data lakes, Authoritative Sources of Truth (ASOTs), and systems-of-systems interoperability and the corresponding transformation benefits thereof. Such benefits include increased data availability, data access equity, data traceability, real-time analytics, batch analytics, and (most importantly) acceleration of the time-to-value and time-to-insights associated with engineering products and analyses.
For Safety, Mission Assurance (SMA), and Mission Success (SMS) activities, realization of such benefits is essential for engineers and analysts alike to provide vital information when needed to support critical decision making across the entire life cycle. Far too often, such information lags behind these decision points and/or lacks the robust, integrated knowledge needed, given inherent barriers associa...
Lead Author: Hyun Gook Kang Co-author(s): Junyung Kim, kimj42@rpi.edu
Asad Ullah Amin Shah, shaha11@rpi.edu
Concept Design, Application and Risk Assessment of New Forced Safety Injection Tank for Station Blackout Accident Scenario
In the current fleet of nuclear power plants, engineered safety systems are designed to perform fundamental safety functions. These fundamental safety functions are crucial, and failure of any one of them may lead to devastating accidents like the Fukushima Daiichi accident. The lessons learned from the Fukushima accident led to the development of advanced passive safety systems for all new reactor designs and additional safety enhancements to the current nuclear fleet, such as accident tolerant fuel and diverse and flexible coping strategies (FLEX). Safety injection tanks (SITs) are designed to refill the core in the event of medium-large or large break LOCA accidents and are an essential part of the engineered safety features. The coolant inventory inside the SITs can also provide extended time to core damage if utilized in other accidents such as a station blackout. This research presents the conceptual design of a new forced safety injection tank (FSIT). The FSIT is designed by introducing piston assemblies on top of the existing SITs. The principle of operation is...
Lead Author: Robby Christian Co-author(s): Vaibhav Yadav vaibhav.yadav@inl.gov
Steven R. Prescott steven.prescott@inl.gov
Shawn St. Germain shawn.stgermain@inl.gov
Presenter of this paper: Vaibhav Yadav (vaibhav.yadav@inl.gov)
A Dynamic Risk Framework for the Optimization of Physical Security Posture of Nuclear Power Plants
This paper describes an ongoing work within the Light Water Reactor Sustainability pathway at Idaho National Laboratory (INL) to optimize security and cost of nuclear power plants. It introduces the dynamic risk assessment tool developed at INL, Event Modeling Risk Assessment using Linked Diagrams (EMRALD). EMRALD was leveraged to optimize the security posture of a nuclear power plant by integrating force-on-force (FOF) simulations and operator mitigation actions including the dynamic and flexible coping strategies (FLEX).
To illustrate the methodology, four attack scenarios were modeled in a commercially available FOF simulation tool using a hypothetical nuclear power plant facility. The simulation results provide valuable insights into possible attack outcomes, as well as the probabilistic risk of a core damage event given these outcomes. Safety mitigation procedures were modeled in EMRALD, conditioned on the attack outcomes and accounting for human operator uncertainties.
The results demonstrate that the number of armed responders can be optimized, while still maintaining the same pro...
A PSAM Profile is not yet available for this author. Presenter Name: Vaibhav Yadav (vaibhav.yadav@inl.gov) Bio: Dr. Vaibhav Yadav is a senior scientist at Idaho National Laboratory, where he performs and leads several research efforts in the areas of risk, reliability, safety, security, and regulation of nuclear power plants. His research falls into the following domains: risk-based and risk-informed methodologies, digital twin technologies, cyber security, and physical security. He has extensive experience working with US commercial utilities to develop and implement risk-informed computational methodologies for optimizing physical security. He is currently serving as a member of the Physical and Cyber Security Subcommittee of the ANS/ASME Joint Committee on Nuclear Risk Management.
Session W13 - Human Reliability Analysis IV
Session Chair: Ronald Boring (ronald.boring@inl.gov)
Paper 1 AG183
Lead Author: Agnieszka Tubis Co-author(s): Szymon Haładyn szymon.haladyn@pwr.edu.pl
Human error risk analysis in the cargo logistics process in the airport zone
Human errors are a common cause of disruptions in the logistics service process. In order to eliminate them, enterprises introduce automated solutions for routine logistics operations. However, it is not possible to automate load handling in every system: some operations still require human intervention, or the economics do not justify autonomous solutions. In these situations, it is necessary to implement solutions that eliminate or reduce the impact of the human factor on the risk of adverse events. The article aims to analyze the risk of human errors in cargo logistics handling at an airport. Field tests were carried out at a selected airport. On their basis, adverse events were identified and analyzed using qualitative and quantitative risk assessment methods. The assessment results made it possible to develop dedicated solutions to improve the crew's competencies with the use of Virtual Reality technology....
Lead Author: Gayoung Park Co-author(s): Awwal M. Arigi, Seo-Ryong Koo, Jonghyun Kim
Dependency Analysis of Human Failures in Multi-Unit Scenarios: Types and Evaluation Method
Dependency between human failure events (HFEs) is often analyzed as part of the conventional human reliability analysis (HRA) process for nuclear power plants (NPPs). Many regulators have suggested that dependency between HFEs should be included in the probabilistic risk assessment (PRA). In this study, we investigate the characteristics of dependency evaluation results for about 800 HFE combinations in multi-unit cutsets. From 2017 to 2021, we participated in a project to develop a multi-unit PSA model for regulatory use, covering nine nuclear power plants at the Kori site in Korea. We performed the HEP quantification and dependency analysis required for the PSA model and provided the results, which were analyzed using the multi-unit HFE dependency evaluation method. We developed a set of multi-unit HFE dependency evaluation elements and their evaluation criteria based on the framework of the single-unit evaluation elements that have been utilized in HRA practice for NPPs. The result of this work will be used to modify the recently developed multi-unit HFE de...
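For context, dependency evaluations of this kind commonly adjust the conditional error probability of a subsequent HFE using the standard THERP dependency formulas, sketched below. The formulas are THERP's; the example probability and the mapping of combinations to dependency levels are hypothetical.

    # THERP conditional-HEP adjustment for five dependency levels.
    def conditional_hep(hep, level):
        """Conditional HEP of a subsequent HFE given failure of the prior one."""
        formulas = {
            "zero":     lambda p: p,
            "low":      lambda p: (1 + 19 * p) / 20,
            "moderate": lambda p: (1 + 6 * p) / 7,
            "high":     lambda p: (1 + p) / 2,
            "complete": lambda p: 1.0,
        }
        return formulas[level](hep)

    for level in ("zero", "low", "moderate", "high", "complete"):
        print(f"{level:9s}: {conditional_hep(1e-3, level):.3e}")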
Lead Author: Jeeyea Ahn Co-author(s): Jooyoung Park, jooyoung.park@inl.gov
Ronald L. Boring, ronald.boring@inl.gov
Thomas A. Ulrich, thomas.ulrich@inl.gov
Yunyeong Heo, Yunyeong.Heo@inl.gov
The HUNTER Dynamic Human Reliability Analysis Tool: Graphical User Interface
The initial version of the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) simulation software was a Python command-line application. The model defining the scenario was manually constructed within input decks that were not user friendly. For the current version of HUNTER, a graphical user interface (GUI) was designed and developed to ease model creation, editing, and simulation execution. The GUI supports creation and editing of the overall model and sub-model elements grouped under the three main simulation modules (the individual, environment, and task modules). The GUI provides distinct interface sections with visual representations for each of these simulation modules. Visual representations of model objects and their arrangement within the modules clearly convey the underlying structure of the simulation code and ease model creation and editing. For example, what was previously entered through the command line or a text editor is now implemented as an intuitive, visual interaction, such as clicking a button or turning a toggle on and off, so that analysts can focus on creating the model without having to contend with esoteric text files and their detailed syntax requirements.
Lead Author: Jooyoung Park Co-author(s): Ronald Boring, Ronald.Boring@inl.gov
Thomas Ulrich, Thomas.Ulrich@inl.gov
An Approach to Dynamic Human Reliability Analysis using EMRALD Dynamic Risk Assessment Tool
Research on dynamic human reliability analysis (HRA) (a.k.a. simulation-based or computation-based HRA) has been called for, as many researchers have emphasized the importance of dynamic approaches to probabilistic safety assessment (PSA). Transitioning from static to dynamic HRA may be beneficial for realistically modeling and evaluating human actions as they would be performed in a system. In static HRA, a human action is divided into components such as diagnosis and execution, and is then quantified by summing the error probabilities of the components. However, this approach may miss the dynamic character of actual operation, in which operators continuously diagnose the given situation and execute appropriate actions based on procedures. Furthermore, static HRA has estimated the time available for human actions based on structured interviews with knowledgeable experts such as operators. It may also be challenging to evaluate specifically whether operators can finish the actions when unexpected factors critical to system safety are considered. To treat these challenges, this study proposes an appr...
Lead Author: Sergey Galushin Co-author(s): Anders Riber Marklund anders.ribermarklund@vysusgroup.com Anders Olsson anders.olsson@vysusgroup.com Ola Bäckström ola.backstrom@lr.org Dmitry Grishchenko dmitrygr@kth.se Pavel Kudinov pkudinov@kth.se
Presenter of this paper: Anders Olsson (anders.olsson@vysusgroup.com)
Treatment of Phenomenological Uncertainties in Level 2 PSA for Nordic BWR Using Risk Oriented Accident Analysis Methodology
A comprehensive and robust assessment of phenomenological uncertainties is a challenge for current real-life PSA L2 applications, since such uncertainty is largely driven by physical phenomena and the timing of events. Typically, static PSA models are built on a pre-determined set of scenario parameters to describe the accident progression sequence and use a limited number of simulations in the underlying deterministic analysis to evaluate the consequences.
The Risk Oriented Accident Analysis Methodology (ROAAM+) has been developed to enable consistent and comprehensive treatment of both epistemic and aleatory sources of uncertainty in risk quantification. The framework comprises a set of deterministic models that simulate different stages of the accident progression and a probabilistic platform that quantifies the uncertainty in conditional containment failure probability. This information is used for enhanced modeling in the PSA-L2 for improved definition of sequences, where information from the ROAAM is used to refine PSA model resolution regarding risk ...
Name: Sergey Galushin (sergey.galushin@vysusgroup.com) Presenter Name: Anders Olsson (anders.olsson@vysusgroup.com) Bio: Anders Olsson holds a master’s degree in Mechanical Engineering and has been working in the nuclear industry since 1995. He started at ABB Atom, where he mainly performed various thermal-hydraulic analyses and worked with structural verification. Since 1999 his main focus has been Probabilistic Risk Assessment, where he now has extensive experience in PSA Level 1 and 2 for all operating modes, including Human Reliability Analysis. He also holds a position as Vice President in Vysus Group, with responsibility for the operation of the nuclear consultancy in Sweden.
Paper 2 JL66
Lead Author: James Lin
Thermal-Hydraulic Analyses in Spent Fuel Pool PSA
To support the human reliability analysis (HRA) and the development of event sequence models in the Spent Fuel Pool (SFP) Probabilistic Safety Assessment (PSA), thermal-hydraulic analyses of selected, representative event scenarios must be performed. To evaluate model parameters such as the break flow rate of an SFP loss-of-inventory initiating event, the time available for specific operator actions, or the number of equipment trains required to perform a safety-related function, these analyses can be performed based only on first-principle energy and mass balance considerations, which are adequate to determine the broad event characteristics required to perform the HRA and develop the event sequence models with the needed accuracy. The results are considered reasonable approximations intended to reveal overall SFP response behavior and insights.
This paper will describe in detail the plant and SFP design input information used in the analyses, the derivation of the SFP heat load and the various heat load cases considered, the estimate of the break flow rate of the loss of SFP inventor...
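To give a flavor of the first-principles balance the paper describes (a sketch with invented numbers, not the paper's plant data), the following computes the time for the pool bulk to reach saturation and the subsequent boil-off rate from a constant decay heat load.

    # Illustrative first-principles energy/mass balance with invented numbers.
    M_WATER = 1.5e6          # pool water mass [kg]
    CP = 4186.0              # specific heat of water [J/(kg*K)]
    H_FG = 2.257e6           # latent heat of vaporization [J/kg]
    Q_DECAY = 5.0e6          # decay heat load [W]
    T0, T_SAT = 40.0, 100.0  # initial and saturation temperatures [C]

    t_boil_h = M_WATER * CP * (T_SAT - T0) / Q_DECAY / 3600.0   # heat-up time
    boiloff_kg_h = Q_DECAY / H_FG * 3600.0                      # boil-off rate

    print(f"time to bulk boiling: {t_boil_h:.1f} h")
    print(f"boil-off rate       : {boiloff_kg_h:.0f} kg/h")

Estimates of this kind bound the time windows available for operator actions, which is exactly the input the HRA needs.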
Development and Primary Application of a Level 2 PSA Methodology in a Small Nuclear Plant
In the nuclear plant licensing process, a qualitative and quantitative analysis of the probability, progression, and consequences of transients and accident conditions must be performed to estimate the risk to public health. Probabilistic Safety Assessment (PSA) is a method widely used in the nuclear industry that numerically quantifies risk and is performed at three different levels. Level 2 PSA addresses the phenomenological and physical events that can occur from core meltdown to containment failure. The methodology of a Level 2 PSA must contain a clear definition of the steps, procedures, and reviews to be carried out in accordance with the standards and guidelines recommended by the International Atomic Energy Agency. This article describes the development of a Level 2 PSA methodology for a small nuclear power plant. Deterministic modeling of the accident progression is also considered, being essential for the construction of the sequence of events and subsequent management measures. Along with the development, a primary application of the methodology is being carried out to...
Lead Author: Jan Grobbelaar Co-author(s): Zander Mausolff, amausolff@terrapower.com
Brian Johnson, bjohnson@terrapower.com
Brandon Chisholm, BMCHISHO@SOUTHERNCO.COM
Probabilistic Risk Assessment of the Molten Chloride Reactor Experiment Conceptual Design
A probabilistic risk assessment (PRA) is being developed for the Molten Chloride Reactor Experiment (MCRE). The MCRE is a low-power fast reactor that will provide key reactor physics data to support the design and licensing of the commercial Molten Chloride Fast Reactor (MCFR). The reactor uses a sodium chloride and highly enriched uranium trichloride (NaCl-UCl3) eutectic as its fuel. The United States Department of Energy (DOE) awarded a contract to a Southern Company-led team including TerraPower (TP) in 2020 to support the design and development of the MCRE. It is proposed to site the MCRE within a testbed at Idaho National Laboratory (INL). The final operating location at INL will be determined following the NEPA review. DOE review and approval of the MCRE safety basis will be required.
The approach chosen for developing the analyses to support the MCRE safety basis is based on the Licensing Modernization Project’s (LMP) risk-informed performance-based (RIPB) approach documented in NEI 18-04. The longer-term goal in applying the LMP RIPB process on MCRE is to build experien...
Lead Author: Claire Blackett Co-author(s): Maren H. Rø Eitrheim, maren.eitrheim@ife.no
Andreas Bye, andreas.bye@ife.no
The Challenge of Assessing Human Performance and Human Reliability for First-of-a-Kind Technologies
There is growing interest worldwide in the potential of advanced reactor technologies such as Small Modular Reactors (SMRs) as a more competitive and efficient means of meeting future energy needs. SMRs represent a somewhat radical departure from the design of current nuclear power plants, with the promise of unique design attributes such as a smaller physical footprint, a smaller reactor core, simplification of the design, increased use of passive and/or inherent safety systems, and modular construction. Such attributes aim to minimize the potential for severe accidents to occur.
The development of first-of-a-kind (FOAK) technologies such as SMRs inevitably invites consideration of the impact of such designs on how these plants will be operated, as compared to current generation plants. For example, the NuScale design proposes a plant of up to 12 reactor modules operated by a minimum shift crew of two senior reactor operators (SROs) and one reactor operator (RO), from a single control room. This is a significant change from current reactor designs that typically feature a minimum c...
Lead Author: Claire Blackett Co-author(s): Maren H. Rø Eitrheim, maren.eitrheim@ife.no
Robert McDonald, robert.mcdonald@ife.no
Marten Bloch, marten.bloch@ife.no
Human Performance in Operation of Small Modular Reactors
The interest in Small Modular Reactors (SMRs) continues to grow worldwide, with even more countries and organisations exploring how SMRs could meet future energy needs. According to the International Atomic Energy Agency (IAEA), in 2020 there were over 70 SMR designs in development around the world. The interest in how SMR technology could be deployed has extended beyond energy production, and now includes using SMRs for e.g., district heating, desalination and even hydrogen production.
There have been significant advancements in recent years in the development of SMR technology, with the NuScale light water SMR being the first commercial reactor of this type to have received design approval by a regulatory body in 2020. Despite these advancements, the IAEA notes that there are still issues related to control room staffing and human factors engineering for multi-module SMR plant designs that require “considerable” attention. Due to the highly commercial nature of the SMR industry, publicly available information about plans for the conduct of operations in SMR control rooms has ...
Lead Author: Ola Bäckström Co-author(s): Pavel Krcal Pavel.Krcal@lr.org
Xuhong He Xuhong.He@lr.org
Use of PSA for Small Modular Reactors
PRA modelling approaches for nuclear installations of the current type have evolved over many years. A significant focus of the methodology has been devoted to managing the complexity of the systems within a station and to the question of how to appropriately estimate the metrics for an individual station. The acceptance framework for nuclear reactors likewise focuses on individual stations in isolation. Further assumptions include the typical mission times of 24 or 48 hours and, to a great extent, independence of failures within an accident sequence.
The booming interest in small modular reactors, driven by their cost efficiency and increased safety, may challenge the established methodology and bring new impulses in multiple respects, among them: risk metrics and safe states, component types, passive design, software (digital) control systems, and multi-unit analysis. Some of the challenges are also of interest for existing reactors, especially in the effort to modernize them.
The challenges will affect the risk assessment on different levels. This paper...
Summary of the Nuclear Risk Assessment 2019 Update for the Mars 2020 Mission Environmental Impact Statement
In the summer of 2020, the National Aeronautics and Space Administration (NASA) launched a spacecraft as part of the Mars 2020 mission. The rover on the spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA prepared a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. The SEIS provides information related to updates to the potential environmental impacts associated with the Mars 2020 mission as outlined in the Final Environmental Impact Statement (FEIS) for the Mars 2020 Mission issued in 2014 and associated Record of Decision (ROD) issued in January 2015. The Nuclear Risk Assessment (NRA) 2019 Update includes new and updated Mars 2020 mission information since the publication of the 2014 FEIS and the updates to the Launch Approval Process with the issuance of National Security Presidential Memorandum 20 (NSPM-20). The NRA 2019 Update addresses the responses of the MMRT...
Use of Preliminary PRA to Inform Decisions During Initial NASA Gateway Development
How do you use Probabilistic Risk Assessment (PRA) to support a program in development before designs are even known? Traditional PRAs require detailed design information, and early in a program life cycle such information is not available. National Aeronautics and Space Administration (NASA) Safety and Mission Assurance (SMA) developed a preliminary PRA for the Gateway Program based upon NASA reference designs, utilizing data and models from previous PRA studies as surrogates for the Gateway systems. Gateway will be a lunar outpost that supports missions to the moon and includes several elements/modules developed by NASA and International Partners. NASA SMA began supporting Gateway in fall 2017 during initial formulation and continues to support the Gateway Program today. This paper will explore how the NASA SMA-developed preliminary PRA was used to inform Gateway Program decisions, with specific examples provided....
Evaluating the uncertainty bound of the multiple scenario CDF (Core Damage Frequency) by the distribution sampling of the basic events.
Probabilistic Risk Assessment (PRA) was developed to investigate the factors that pose threats to nuclear power plants (e.g., seismic events, flooding, multiple component failures, human error) and to improve plant safety. As a result, in addition to the Core Damage Frequency (CDF) obtained by probabilistic analysis, the main contributors to core damage and their importance can also be identified. For some hazards (such as floods and fires), multiple scenarios may be developed and analyzed in the safety assessment of the nuclear power plant, and the CDFs calculated from the multiple scenarios are then summed as the plant CDF for these scenarios. However, this value is only a point estimate with unknown uncertainty bounds. In order to obtain more statistically meaningful analysis results, this paper uses the method of randomly sampling basic event values to quantify multi-scenario events and obtain the CDF with statistical uncertainty information....
A PSAM Profile is not yet available for this author.
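A minimal sketch of the sampling idea described in the abstract above (with invented numbers, not the paper's model): propagate lognormal uncertainty on several scenario CDFs and report percentiles of their sum.

    # Illustrative only: lognormal uncertainty propagated through scenario CDFs.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    # hypothetical per-scenario CDFs: (median [per year], error factor)
    scenarios = {"flood": (2e-6, 10.0), "fire": (5e-6, 5.0), "seismic": (1e-6, 15.0)}

    total = np.zeros(N)
    for median, ef in scenarios.values():
        sigma = np.log(ef) / 1.645       # EF defined as 95th/50th percentile
        total += rng.lognormal(np.log(median), sigma, N)

    p5, p50, p95 = np.percentile(total, [5, 50, 95])
    print(f"mean total CDF: {total.mean():.2e} per year")
    print(f"5th/50th/95th : {p5:.2e} / {p50:.2e} / {p95:.2e} per year")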
Paper 2 FR84
Lead Author: Francesco Di Dedda Co-author(s): Anders Olsson, anders.olsson@vysusgroup.com
Thomas Augustsson, thomas.augustsson@okg.uniper.energy
PSA implementation of the Independent Core Cooling and new EOPs/SAMGs at Oskarshamn 3
An independent core cooling function (OBH) has been installed in unit 3 at the Oskarshamn NPP as one of the post-Fukushima actions. In conjunction with the OBH project an update of the Emergency Operating Procedures (EOPs) has been made and a set of the Severe Accident Management Guidelines (SAMGs) has been developed. A complete review and validation of both level 1 and level 2 PSA sequences for all operating modes was performed to achieve the goal of implementing the new system functions and new procedures in the full scope analysis.
Given the new system configuration, requirements on some systems affected the requirements on dependent system functions. This interaction between different systems was challenging to assess in specific cases. Dedicated workshop activities with operating personnel were crucial to determine the accident progression, especially when considering the new system functions and the manual actions prescribed by the new EOPs/SAMGs. Changes in the electrical systems, as well as the adoption of a new version of the integral code MAAP and a new version of RiskSpectrum PSA...
A PSAM Profile is not yet available for this author.
Paper 3 JL65
Lead Author: James Lin
Analysis of Frequency of Aircraft Impact from Overflights
Evaluation of the frequency of aircraft impact from overflights during the in-flight phase has become more challenging in recent years due to changes in flight paths (even for itinerant flights) and difficulty in collecting flight-frequency data. In present-day aviation, airplanes can fly using the Global Positioning System (GPS) and do not always have to follow the airways. Flight paths are primarily based on the shortest routes between origin and destination navigated by GPS. Therefore, the frequency of enroute overflights in the airspace near a nuclear facility cannot be estimated simply from the air traffic along nearby airways, as specified in NUREG-0800, Section 3.5.1.6. All overflights within a certain distance of the facility should be considered. Air traffic in the airspace near a nuclear facility should be estimated using the Federal Aviation Administration (FAA) records of flights crossing specific latitude/longitude boundaries, which include not only aircraft operations into and out of a nearby airport but also overflights through the same airspac...
Lead Author: Marilia Ramos Co-author(s): Mohammad Pishahang , mh.pishahang@risksciences.ucla.edu
Enrique Andrés Lopez Droguett, eald@ucla.edu
Ali Mosleh, mosleh@ucla.edu
Andres Alfredo Ruiz-Tagle Palazuelos , aruiztag@umd.edu
Integrating Human Behavior Modeling into a Probabilistic Wildfire Egress Planning Framework
Wildland Urban Interface (WUI) can be defined as “the zone of transition between unoccupied land and human development.” The communities in these areas are particularly vulnerable to wildfires that start and propagate in wildlands. According to the U.S. Fire Administration, the U.S. has more than 70 thousand communities at risk for WUI fires, and the WUI area grows by approximately 2 million acres per year. Numerous efforts have been undertaken to address the dangers of wildfires, including building more resilient infrastructures, advancing techniques for extinguishing fires and exploring the possibilities of controlled fires. Associated with these efforts is the pressing need to ensure the safe evacuation of communities in WUI once they are threatened by wildfires. Evacuation modeling and planning is a challenging and complex problem. It involves human decisions and actions concerning if, when, and how to evacuate; directly impacting the traffic flow during the evacuation. Furthermore, the available time for a community to evacuate is a dynamic element: it changes according to t...
Lead Author: Mohammad Pishahang Co-author(s): Andres Ruiz-Tagle, aruiztag@umd.edu
Enrique Lopez Droguett, eald@g.ucla.edu
Marilia Ramos, mariliar@g.ucla.edu
Ali Mosleh, mosleh@g.ucla.edu
Presenter of this paper: Marilia Ramos (marilia.ramos@ucla.edu)
WISE: a probabilistic wildfire egress planning framework
Wildfire is a significant threat to many communities in Wildland Urban Interface (WUI) areas, and ensuring an efficient evacuation of these communities in case of wildfire is a pressing challenge. Wildfire evacuation modeling consists of three main layers: fire models, human decision-making, and traffic models. Efficient evacuation planning thus needs a comprehensive understanding of each of these layers and their mutual interactions. Numerous methods have been proposed for wildfire risk assessment focusing on each of these components, but few address the issue considering all the layers together. This paper presents a framework for probabilistic evacuation planning in the case of wildfires. The Wildfire Safe Egress (WISE) framework integrates a human decision model, a traffic model, and wildfire dynamics modeling to estimate the probability that a community safely evacuates when endangered by a wildfire. Evacuation success is calculated through a comparison between two competing variables. The Available Safe Egress Time (ASET) determines the total amount of time before the fire re...
A PSAM Profile is not yet available for this author. Presenter Name: Marilia Ramos (marilia.ramos@ucla.edu) Bio:
Paper 3 MB291
Lead Author: Matthew Bucknor
Sodium Fire Protection Systems, Mitigation Strategies, and Risk Analysis
Liquid sodium coolant at nominal sodium-cooled fast reactor (SFR) operating temperatures, approximately 350°C – 550°C, will readily ignite in air environments. The severity of the resulting fire scenario depends on several factors, including sodium temperature, the amount of available oxygen, geometric factors such as the size and configuration of the leaked sodium volume, and the type of fire (pool fires versus spray fires). To minimize the consequences associated with these fires, sodium fire protection systems (SFPSs) and mitigation strategies are utilized to provide a means of detecting, locating, containing, and suppressing the fires. The SFPSs and mitigation strategies proposed for future U.S. SFRs are based on experience and data from historical SFR facilities and testing programs which no longer exist. Software tools used to simulate sodium fires are also based on historical efforts and testing data, and their development has been fragmented over the last few decades due to sporadic interest and funding. This presentation provides a brief overview of sodium fires, SFPS...
Paper MB291
Name: Matthew Bucknor (mbucknor@anl.gov)
Paper 4 OL137
Lead Author: Ola Bäckström Co-author(s): Pavel Krcal Pavel.Krcal@lr.org
Xuhong He Xuhong.He@lr.org
Use of PSA for Small Modular Reactors
PRA modelling approaches for nuclear installations of the current type have evolved over many years. A significant focus of the methodology has been devoted to managing the complexity of the systems within a station and to the question of how to appropriately estimate the risk metrics for an individual station. The acceptance framework for nuclear reactors also focuses on individual stations in isolation. Further assumptions include the typical mission times of 24 or 48 hours and, to a great extent, independence of failures within an accident sequence.
The booming interest in small modular reactors, driven by their cost efficiency and increased safety, might challenge the established methodology and bring new impulses in multiple respects, among them: risk metric and safe state, component types, passive design, software (digital) control systems, and multi-unit analysis. Some of these challenges are also of interest to existing reactors, especially in the effort to modernize them.
The challenges will affect the risk assessment on different levels. This paper...
Lead Author: Fan Zhang
A Dynamic Cyber-Attack Analysis, Risk Assessment and Management Framework for Industrial Control Systems
As the number and quality of digital devices used to control industrial infrastructure continues to grow and evolve, assessing the cyber risk posed by these networked devices is a critical concern. Traditional security methods use a combination of intrusion prevention systems (IPSs) and intrusion detection systems (IDSs) to protect against cyber-attacks. However, these defenses do not provide real-time knowledge of the risk profile of an ICS under a cyber-attack scenario. Previous work has studied dynamic risk assessment as a way to provide near-real-time risk evaluation, with the assumption that the compromised device or component is known. However, this information is not provided by existing IDSs. The location and level of compromise in the operational technology (OT) process is crucial for decision making, since the risk analysis and management for a pump that deviates 10% from normal operation is likely to be very different than for a pump deviating by 50%. The gap in cyber-attack detection and real-time understanding of the risk profile posed must be bridged by identi...
Paper FA293
Name: Fan Zhang (fan@gatech.edu)
Paper 2 JA248
Lead Author: Jason Reinhardt Co-author(s): Ron Lafond (ronald.lafond@cisa.dhs.gov), Derek Koolman (derek.koolman@cisa.dhs.gov), Raymond Ludwig (raymond.ludwig@associates.cisa.dhs.gov), Lindsey Miles (lindsey.miles@cisa.dhs.gov), Jeffrey Munns (jeffrey.munns@associates.cisa.dhs.gov), Merideth Secor (merideth.secor@cisa.dhs.gov), Lauren Wind (lauren.wind@associates.cisa.dhs.gov)
A Risk Assessment and Reduction Approach for National Critical Infrastructure
The United States Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA) leads the National effort to understand, manage, and reduce risk to our cyber and physical infrastructure. CISA must assess risks that cover a broad range of scenarios over a complex set of interdependent critical infrastructure (CI) systems. While many models and data sets exist that provide detailed analyses of threat and hazard impacts to CI, there is no overarching analytic structure that organizes and integrates these disparate sources into a unified risk assessment. CISA is building capabilities that will address these challenges to support stakeholders across all levels of government and the private sector. First, CISA has developed a National Critical Functions (NCFs) data structure to organize and describe critical infrastructure. This data set provides a set of decompositions structured as directed graphs that break down each identified NCF into enabling sub-functions that detail the operation and interdependencies across disparate CI systems. The functional description...
A PSAM Profile is not yet available for this author.
Paper 3 ME306
Lead Author: Isaac Faber Co-author(s): Elisabeth Paté-Cornell, mep@stanford.edu
Presenter of this paper: Elisabeth Pate-Cornell (mep@stanford.edu)
Warning and management of cyber threats by a hybrid AI system (robot and operator)
This paper presents a warning system and risk management model, in which early signals of cyber threats are generated using machine learning and artificial intelligence to support the defender’s decisions. Cyber threats and attacks are modeled as a set of discrete observable steps in the “kill chain”. A hybrid AI system (a “super-agent” including a robot and human being) allows the robot, when it has acquired sufficient information, to make automatic defensive responses before losses occur. The quantitative model that supports these decisions is based on machine learning and decision analysis. The model allows the robot to call on the operator (“person in the loop”) when the situation requires it. This overall model guides decisions to open or close gates in a system, based on attack and behavior signals at the beginning of the kill chain. ...
A PSAM Profile is not yet available for this author. Presenter Name: Elisabeth Pate-Cornell (mep@stanford.edu) Bio:
Professor of Management and Engineering at Stanford. Teaching and research in engineering risk analysis. Member of the National Academy of Engineering. Co-chair of NASEM committee on risk analysis methods for nuclear weapons and terrorism. 2021 IEEE Ramo medal in systems engineering and science.
Paper 4 VA167
Lead Author: Pavan Kumar Vaddi Co-author(s): Carol Smidts, smidts.1@osu.edu
Reinforcement Learning based Autonomous Cyber Attack Response in Nuclear Power Plants.
Cyber-attacks on digital industrial control systems (ICSs) are becoming increasingly frequent. Given the rise of digitalization in nuclear power plants (NPPs) and the potentially hazardous consequences of a successful cyber-attack on NPPs and similar safety-critical systems, it is imperative that research be focused on ICS cyber-attack detection and mitigation. In this paper we explore the use of reinforcement learning (RL) to develop an autonomous cyber-attack response system for NPPs, specifically the digital feedwater control system (DFWCS) of a pressurized water reactor (PWR). The cyber-attacks are modeled as Stackelberg games between the defender (i.e., the plant operator) and the attacker, with the defender acting as the leader in the games. The system state transition probabilities are defined using probabilistic risk assessment (PRA). The optimal defender strategy is computed using multi-agent Q-learning, where the Stackelberg equilibrium over the current Q-values is used at every update. The advent of digital twins for nuclear power plants enables us to simulate a wide ...
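A minimal sketch of the Stackelberg Q-learning update named in the abstract, under heavy assumptions: the state space, payoffs, and transition dynamics below are toy stand-ins, not the DFWCS model.

```python
# Minimal sketch (assumptions, not the paper's implementation) of
# Stackelberg Q-learning: defender (leader) vs. attacker (follower).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_d, n_a = 4, 3, 3          # hypothetical sizes
Qd = np.zeros((n_states, n_d, n_a))   # defender's action-value table
Qa = np.zeros((n_states, n_d, n_a))   # attacker's action-value table
alpha, gamma, eps = 0.1, 0.95, 0.1

def stackelberg(s):
    """Leader commits to the defense whose follower best response
    maximizes the defender's Q-value."""
    best_a = Qa[s].argmax(axis=1)               # attacker best response per defense
    d = Qd[s, np.arange(n_d), best_a].argmax()  # defender anticipates it
    return d, best_a[d]

def step(s, d, a):
    """Hypothetical environment: random transitions, toy zero-sum payoff
    (the defender loses unless the defense matches the attack)."""
    s2 = rng.integers(n_states)
    rd = 0.0 if d == a else -1.0
    return s2, rd, -rd

s = 0
for _ in range(10_000):
    d, a = stackelberg(s)
    if rng.random() < eps:                      # epsilon-greedy exploration
        d, a = rng.integers(n_d), rng.integers(n_a)
    s2, rd, ra = step(s, d, a)
    d2, a2 = stackelberg(s2)                    # equilibrium value at next state
    Qd[s, d, a] += alpha * (rd + gamma * Qd[s2, d2, a2] - Qd[s, d, a])
    Qa[s, d, a] += alpha * (ra + gamma * Qa[s2, d2, a2] - Qa[s, d, a])
    s = s2
```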
Lead Author: Dohun Kwon Co-author(s): Gyunyoung Heo gheo@khu.ac.kr
Development of Operator Response Time Model for DICE (Dynamic Integrated Consequence Evaluation)
DPSA (Dynamic Probabilistic Safety Assessment), which can explicitly treat time dependency, has been actively studied worldwide. DPSA can explicitly consider dependencies among systems and components caused by plant states or operator actions on a continuous or discrete-time basis. At Kyung Hee University, DICE (Dynamic Integrated Consequence Evaluation), a dynamic reliability analysis tool using the DDET (Discrete Dynamic Event Tree) method, was developed as a supporting tool for DPSA research. The DDET method is widely used in DPSA frameworks.
The diagnosis module, one of the modules in DICE, monitors the plant status and decides the controls of the plant state by commanding components and operator actions through a logical combination of physical variables. In particular, decisions involving operator response time, including errors of omission and commission, can greatly affect the plant's dynamics, so the diagnosis module in DICE tries various approaches to operator modeling.
Although existing HRA (Human Reliability Assessment) methods do a good job for static PSA by providing HEPs ...
Lead Author: Steve Prescott Co-author(s): Zhegang Ma (zhegang.ma@inl.gov)
Svetlana Lawrence (svetlana.lawrence@inl.gov)
Robby Christian (robby.christian@inl.gov)
Daniel Nevius (daniel.nevius@inl.gov)
Using EMRALD to Simplify and Perform Dynamic Analysis with MAAP
Event Modeling Risk Assessment using Linked Diagrams (EMRALD) is a software tool developed at Idaho National Laboratory for researching the capabilities of dynamic probabilistic risk assessment (PRA). It provides a simple interface to represent the complex interactions often seen when developing dynamic models. EMRALD can also interface with other applications by modifying their inputs, running them, and using their results within EMRALD for dynamic and integrated assessment. Linking with external codes was formerly done through user-defined scripts, requiring users to be familiar with both the scripting and the details of the application they wanted to use. A recent enhancement to EMRALD provides a library capability to add custom forms for developing simple interfaces to run specific applications, so that the user does not need to write scripts or have extensive training in the tool they want to pull data from. This feature is especially useful for thermal hydraulic applications such as the Modular Accident Analysis Program (MAAP), in which, after the plant model is developed, an inhouse MAAP ex...
Lead Author: Gulcin Sarici Turkmen Co-author(s): Alper Yilmaz yilmaz.15@osu.edu
Tunc Aldemir aldemir.1@osu.edu
Use of Machine Learning Techniques to Reduce the Computational Effort for Dynamic Probabilistic Risk Assessment
Dynamic probabilistic risk assessment (DPRA) is an important approach for assessing the safety of nuclear power plant (NPP) operation. Since NPPs are highly complex systems, it is necessary to produce large amounts of data that represent different possible situations during NPP evolution following an accident in order to carry out the DPRA comprehensively. In addition to the fact that it may take months to produce such data with NPP accident analysis codes (e.g., RELAP5, MELCOR) and DPRA software developed for such a task (e.g., ADAPT), the task also requires a significant amount of computer and human resources. Models developed using Machine Learning (ML) techniques in recent years, especially those that can represent time-dependent data and make predictions for the consequences of possible initiating events, have proven useful in reducing the computational effort for such a task. Recurrent Neural Networks (RNNs) represent one efficient class of ML methods that can be used in modeling the development of accidents and predicting potential ...
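As a sketch of the surrogate idea (the abstract does not specify an architecture), an LSTM that maps simulated plant trajectories to next-step predictions might look like the following; the dimensions and training data are placeholders, not the authors' model.

```python
# Minimal sketch (assumed architecture): an LSTM surrogate trained on
# accident-code trajectories to predict next-step plant variables,
# standing in for expensive simulator runs.
import torch
import torch.nn as nn

class AccidentSurrogate(nn.Module):
    def __init__(self, n_vars=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_vars)   # next-step variable prediction

    def forward(self, x):            # x: (batch, time, n_vars)
        h, _ = self.lstm(x)
        return self.head(h)

model = AccidentSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for simulated trajectories (e.g., from RELAP5/MELCOR runs).
x = torch.randn(32, 100, 8)
target = torch.roll(x, -1, dims=1)   # next-step prediction target

for _ in range(5):                   # abbreviated training loop
    opt.zero_grad()
    loss = loss_fn(model(x)[:, :-1], target[:, :-1])
    loss.backward()
    opt.step()
```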
Name: Gulcin Sarici Turkmen (sariciturkmen.1@osu.edu)
Lead Author: John Russell Co-author(s): Carl Stern
Carl_Stern@mgtsciences.com
Advancing Intrusion Detection Sensor Performance Using Deliberate Motion Analytics
Excessive Nuisance Alarm Rates (NAR) are a major issue for all exterior intrusion detection systems. Sites with problematic sensor systems can experience an excessive number of nuisance alarms per day as a result of weather, animals, and other natural occurrences, causing security personnel to become complacent to sensor alarms and thereby undermining the sensor system's detection capability. All sites utilizing current commercial systems are believed to experience elevated NAR. In addition to being susceptible to nuisance alarms, the cost to purchase and install exterior sensors in security perimeters is very high.
Sandia National Laboratories has developed a sensor algorithm that exploits deliberate motion to differentiate alarms caused by an intruder from those caused by other natural occurrences. The Deliberate Motion Analytics (DMA) algorithm is capable of fusing multiple sensors, such as radar, lidar, buried line sensors, microwaves, and video, to provide reliable detection. Preliminary results show that DMA will significantly reduce nuisance alarm rates even when sensors are se...
Paper JL284
A PSAM Profile is not yet available for this author.
Paper 2 ES302
Lead Author: Emily Sandt Co-author(s): Adam Williams adwilli@sandia.gov
Tunc Aldemir aldemir.1@osu.edu
Technique for Managing STPA Results in Physical Security Applications
Mitigating risk at nuclear power plants and other nuclear facilities includes security of the site and material. Current processes to determine what needs to be protected, so-called vital area identification (VAI), leverage safety analytic methods and previously completed safety analyses through the use of Fault Tree (FT) based logic models. Though successful to date, traditional VAI approaches rely heavily on completed nuclear safety analyses that may not exist for novel advanced or small modular reactor facilities. In response, a newly proposed method for informing the vital area identification process incorporates Systems Theoretic Process Analysis (STPA). STPA's systems and control theory basis allows for a single analysis that incorporates multiple adversary objectives. This is an advancement over the current methodology, which considers only sabotage of material, and can lead to a broad output of potential areas for protection. A frequent criticism of STPA is the lack of prioritization of its high output volume, which can be challenging for practical impleme...
An oft-stated goal within the physical security community is to make security investment decisions within a risk context. For many, risk is defined in a traditional mathematical context of likelihood of occurrence (a probability or frequency) and consequence. Some will further multiply these values and aggregate them to obtain an annualized loss expectancy value for risk, as has been common in safety risk analysis. Others express security risk in terms of Threat, Vulnerability and Consequence, believing that risk can be computed by multiplying numerical values for these attributes. In all cases strong foundational issues exist in application of these numerical methods, including adversary definition, likelihood of attack (and its twin sibling “deterrence”), interdependence among the mathematical terms, adversary tactics, and the differences in adversary groups (known and unknown) with respect to their motivations, goal intensity, knowledge of the target, etc. Even where analysts have tried to be faithful in using these methods to compute “security risk”, the uncertainty ...
Lead Author: Tomasz Kisiel Co-author(s): Artur Kierzkowski artur.kierzkowski@pwr.edu.pl
A method for managing a security checkpoint through multi-criteria analysis with consideration of safety and process performance.
The purpose of this paper is to develop a method for configuring a security screening checkpoint at an airport. The method is based on managing personnel such that the most advantageous ratio of safety to process efficiency is achieved. The paper uses a computer simulation method, on the basis of which a multi-criteria analysis is conducted. Two criteria are taken into account: safety and process performance. Such a model has not yet been developed in the scientific literature and can be of significant interest to airport security control managers. The article is part of the work related to the project "Development of an innovative desk of the primary and supplementary training of the security control operator at the airport". ...
Session Chair: Craig Primer (craig.primer@inl.gov)
Paper 1 DI318
Lead Author: Diego Mandelli Co-author(s): Congjian Wang, congjian.wang@inl.gov
From machine learning to machine reasoning: a model-based approach to analyze equipment reliability data
In current nuclear power plants (NPPs), a large amount of condition-based data is generated that can be used to assess and monitor component health and performance. Assessing component health from such data can be performed with a large variety of methods. While the analysis of numeric data can be performed with several methods, the extraction of information from textual data remains a challenge. Currently employed natural language processing (NLP) methods do not really provide the quantitative information that might be contained in incident reports (IRs). In addition, the integration of numeric and textual data to identify possible causal relationships between data elements is still an unresolved challenge. This paper presents an approach to extract information from textual (e.g., incident or maintenance reports) and numeric data that relies on model-based systems engineering (MBSE) models. MBSE models are diagrams designed to represent system and component dependencies (from both a form and a functional point of view). In our approach, MBSE models emulate system engineers' knowledge of the component/system architecture. NLP methods ...
Paper DI318
Name: Diego Mandelli (diego.mandelli@inl.gov)
Paper 2 SC17
Lead Author: Sergio Cofre-Martel Co-author(s): Enrique Lopez Droguett eald@ucla.edu
Mohammad Modarres modarres@umd.edu
Physics-Informed Neural Networks for Remaining Useful Life Estimation for Mechanical Systems
Prognostics and health management (PHM) has become a key instrument in the reliability community. Great efforts have gone into estimating systems' remaining useful life (RUL) by taking advantage of monitoring data and data-driven models (DDMs). The latter have gained significant attention since they are model-independent and do not require previous knowledge of the system under study, a property known as black-box behavior. Nevertheless, DDMs developed for PHM frameworks are commonly tested on simulated or experimental datasets, which do not present the characteristics and intricacies of data collected from monitoring sensor networks in real systems. Furthermore, the black-box behavior hinders DDMs' interpretability, and thus they are difficult to trust in the maintenance decision-making process. In this regard, physics-informed models have been implemented through hybrid models, which present significant improvements in accuracy and interpretability. Particularly, physics-informed neural networks (PINNs) have been proposed in deep learning (DL) to either solve or discover partial different...
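A minimal sketch of the generic PINN loss construction the abstract refers to, assuming a toy damage-growth ODE da/dt = C * a**m as the physics term; the network, constants, and data below are illustrative, not the authors' model.

```python
# Minimal sketch (assumed physics and data): a physics-informed loss that
# adds the residual of a simple damage-growth ODE to the data-fit term.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
C, m = 1e-3, 1.5                      # assumed physics parameters

t_data = torch.rand(64, 1)            # stand-in sensor timestamps
a_data = 0.1 + 0.05 * t_data          # stand-in measured damage

t_phys = torch.rand(256, 1, requires_grad=True)  # collocation points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):                    # abbreviated training loop
    a_pred = net(t_data)
    data_loss = ((a_pred - a_data) ** 2).mean()

    a_phys = net(t_phys)
    dadt = torch.autograd.grad(a_phys.sum(), t_phys, create_graph=True)[0]
    physics_loss = ((dadt - C * a_phys.abs() ** m) ** 2).mean()

    loss = data_loss + physics_loss   # weighted sum in practice
    opt.zero_grad()
    loss.backward()
    opt.step()
```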
A PSAM Profile is not yet available for the presenter.
Paper 3 SA165
Lead Author: Sai Zhang Co-author(s): Fei Xu, Fei.Xu@inl.gov
Zhegang Ma, Zhegang.Ma@inl.gov
Natural Language Processing-Enhanced Common Cause Failure Data Analysis
Nuclear power plants (NPPs) have a variety of operating records, either routinely maintained or conditioned on incident occurrences. While some records are structured, others are not. Analyzing records with unstructured data (e.g., narratives) can be challenging, but not evaluating them would be a missed opportunity, since they likely contain valuable operating experience. Given modern artificial intelligence and machine learning capabilities, it could be feasible and efficient to analyze unstructured NPP operating records. This paper presents an exploratory study of using natural language processing (NLP) to enhance common cause failure (CCF) data analysis for NPPs. The NLP-enhanced CCF data analysis attempts to (a) improve understanding of the deep-level CCF failure propagation process and (b) complement the limited data pool of CCF events by analyzing non-CCF failure events and estimating their likelihood of evolving into CCF events. Both of these efforts are implemented using the same approach developed in this study. The NLP-enhanced approach to analyzing a single CCF report includes ...
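One conceivable ingredient of such an NLP screening step, sketched here with invented narratives, is TF-IDF similarity between a candidate failure report and known CCF reports. This is an assumption chosen for illustration, not the approach developed in the paper.

```python
# Minimal sketch (hypothetical data and features): scoring how similar a
# non-CCF failure narrative is to known CCF narratives via TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ccf_reports = [
    "both emergency diesel generators failed to start due to the same miscalibrated governor",
    "redundant service water pumps tripped from a shared debris blockage at the intake",
]
candidate = "diesel generator failed to start; governor found out of calibration"

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(ccf_reports + [candidate])

# Similarity of the candidate event to each known CCF narrative.
scores = cosine_similarity(X[-1], X[:-1]).ravel()
print({f"ccf_{i}": round(s, 2) for i, s in enumerate(scores)})
```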
Session Th03 - Reliability Analysis and Risk Assessment Methods II
Session Chair: Zoltan Kovacs (kovacs@relko.sk)
Paper 1 J.145
Lead Author: Jorge Luis Hernandez Co-author(s): Brinkman, Johannes. L. - brinkman@nrg.eu
McLean Rob (RR) - rob.mclean@brucepower.com
Mandelli, Diego - diego.mandelli@inl.gov
Minibaev, Ruslan - R.Minibaev@iaea.org
Jeon, Hojun - jeonhojun@khnp.co.kr
Hortal, Francisco Javier - jav.hortal@gmail.com
Guigueno, Yves - yves.guigueno@irsn.fr
Nitoi, Mirela - mirela.nitoi@nuclear.ro
Rowekamp, Marina - Marina.Roewekamp@grs.de
Schneider, Raymond - schneire@westinghouse.com
Siu, Nathan - nosiubiz@gmail.com
Presenter of this paper: Marina Roewekamp (marina.roewekamp@grs.de)
Advantages and challenges in implementing advanced Probabilistic Safety Assessment approaches and applications for Nuclear Power Plants: IAEA overview
Authors: Brinkman, J. L.; Jeon, H.; Guigueno, Y.; Hortal, J.; Luis Hernandez, J.; Mandelli, D.; McLean, R.; Minibaev, R.; Nitoi, M.; Röwekamp, M.; Schneider, R.; Siu, N.
Considerable progress has been made in recent years on enhancing and extending the probabilistic safety assessment (PSA) approach as well as in its applications.
Aiming to collect current experiences in Member States (MS) related to enhancements and developments in PSA, and to develop a related technical document which can be used to support future updates of the related IAEA Safety Guides on PSA, the IAEA started a project in 2018 supported by extrabudgetary funds from the USA. Subsequently, the IAEA held several Consultancy Meetings from 2019 to 2021 and two major Technical Meetings in 2019 and 2020 to draft the technical document compiling the current status and experiences in MS regarding new areas considered as advanced PSA approaches and applications. The advanced PSA approaches aim at expanding the traditional PSA approaches by incorporating time-related dependencies into the quasi-static Boolean logic st...
Name: Jorge Luis Hernandez (J.Luis-Hernandez@iaea.org) Presenter Name: Marina Roewekamp (marina.roewekamp@grs.de) Bio: - Diploma in Physics PhD (Dr. rer. nat. In Physical Chemistry / Materials Science) from University of Bonn
- Senior Chief Expert for Hazards and PSA at GRS – the Federal German Nuclear Technical Safety Organization – for > 33 years
- PSA work: mainly performing and/or reviewing Level 1 PSA, particularly for Internal and External Hazards (incl. hazard combinations)
- Member of the German PSA Expert Panel for > 15 years
- Former Chair and current Vice Chair of OECD/NEA/CSNI Working Group on Risk Assessment (WGRISK)
- Chair of OECD/NEA CSNI Expert Group on Fire Risk (EGFR) and of Management Board of OECD/NEA FIRE (Fire Events Records Exchange) Project
- Consultant and/or reviewer for various IAEA Guides (SSG-64, SSG-67, SSG-68, DS523 (revision of SSG-3 on Level 1 PSA), DS528 (revision of SSG-4 on Level 2 PSA), TECDOCs on MUPSA, Advanced PSA Methods, Safety Assessment of Nuclear Installations Against Combinations of External Hazards, etc.)
- IAPSAM Board of Directors member since
Paper 2 SM70
Lead Author: Eunseo So Co-author(s): Yunyeong Heo, Yunyeong.Heo@inl.gov
Mohammad Abdo, Mohammad.Abdo@inl.gov
Yong-Joon Choi, Yong-Joon.Choi@inl.gov
Development of Genetic Algorithms for Plant Reload Optimization for an Operating Pressurized Water Reactor
This paper summarizes the development, results, and enhancement activity of the Artificial Intelligence (AI) based automated nuclear power plant fuel reload optimization platform under the guidance of the United States Department of Energy, Light Water Reactor Sustainability Program, and Risk-Informed Systems Analysis Pathway. The research focuses on the optimization of the fuel arrangement to maximize the fuel cycle length.
The AI-based Genetic Algorithm works with both convex and non-convex, constrained or unconstrained problems. It can also help explain the relationship between the fuel arrangement and the fuel cycle length; in particular, the surrogate models used to reconstruct the multiphysics problem map the features/inputs of the problem to the fuel cycle length to provide such an explanation. The Genetic Algorithm is composed of several evolutionary processes: fitness evaluation, parent selection, crossover, mutation, survivor selection, and termination. Crossover and mutation are the main steps responsible for injecting randomness/heuristics to prevent the algorithm from getting s...
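The evolutionary loop named in the abstract can be sketched on a toy permutation encoding of fuel-assembly positions; the fitness function below is a crude stand-in, not the platform's multiphysics surrogate.

```python
# Minimal sketch (toy problem): fitness evaluation, parent selection,
# crossover, mutation, and survivor selection on a permutation encoding.
import random

random.seed(1)
N_ASSEMBLIES, POP, GENS = 12, 30, 50

def fitness(layout):
    # Hypothetical surrogate for cycle length: rewards placing low indices
    # (notionally fresher fuel) near the core interior.
    return -sum(abs(pos - N_ASSEMBLIES // 2) * a for pos, a in enumerate(layout))

def crossover(p1, p2):
    # Order crossover (OX) keeps the child a valid permutation.
    i, j = sorted(random.sample(range(N_ASSEMBLIES), 2))
    child = [None] * N_ASSEMBLIES
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(N_ASSEMBLIES):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(layout, rate=0.2):
    if random.random() < rate:          # swap two positions
        a, b = random.sample(range(N_ASSEMBLIES), 2)
        layout[a], layout[b] = layout[b], layout[a]
    return layout

pop = [random.sample(range(N_ASSEMBLIES), N_ASSEMBLIES) for _ in range(POP)]
for _ in range(GENS):
    parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]          # selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP)]
    pop = sorted(parents + children, key=fitness, reverse=True)[:POP]    # survivors

print("best layout:", pop[0], "fitness:", fitness(pop[0]))
```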
Paper SM70
A PSAM Profile is not yet available for this author.
Paper 3 SS329
Lead Author: Ji Suk Kim Co-author(s): Man Cheol Kim, charleskim@cau.ac.kr
Effect of the Number of Ground Motion Subintervals on the Seismic PSA
There are two approaches to quantifying a seismic probabilistic safety assessment (PSA) model. One is a simulation approach, such as Monte Carlo simulation or Latin Hypercube sampling, and the other is a discrete approach [1]. The discrete approach has several advantages, including compatibility with internal event PSA quantification, preservation of the logical links between the primary seismic event tree and secondary seismic event trees, and use of conventional PSA software [1,2]. However, in the discrete approach, the number of ground motion subintervals has typically been chosen without a technical basis for its effect on the computed seismic risk. In addition, the number of subintervals is limited in practice because the seismic PSA model must be quantified for each subinterval.
In this study, we examine the effect of the number of ground motion subintervals on the seismic risk and identify the possibility that the seismic risk may be underestimated when the number of subintervals is small. We also provide a method for finding the regions of ground motion level where the underestimation of the seismic...
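The discrete quantification at issue can be illustrated with assumed hazard and fragility curves: the risk is the sum over subintervals of the interval occurrence frequency times the conditional failure probability at a representative ground motion, and the estimate shifts with the subinterval count. All curves and numbers below are invented.

```python
# Minimal sketch (illustrative curves): discrete seismic risk quantification
# for several subinterval counts, showing the discretization effect.
import numpy as np
from scipy.stats import lognorm

def hazard_exceedance(pga):               # assumed hazard curve, per year
    return 1e-3 * pga ** -2.0

def fragility(pga, Am=0.6, beta=0.4):     # assumed lognormal fragility
    return lognorm.cdf(pga, s=beta, scale=Am)

def discrete_cdf(n_subintervals, lo=0.1, hi=2.0):
    edges = np.linspace(lo, hi, n_subintervals + 1)
    freq = hazard_exceedance(edges[:-1]) - hazard_exceedance(edges[1:])
    mid = 0.5 * (edges[:-1] + edges[1:])  # representative PGA per bin
    return float(np.sum(freq * fragility(mid)))

for n in (3, 10, 100):
    print(f"{n:>4} subintervals -> CDF ~ {discrete_cdf(n):.3e} /yr")
```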
Paper SS329
Name: Ji Suk Kim (sssuke@cau.ac.kr)
Paper 4 AH324
Lead Author: Ahmad Al Rashdan Co-author(s): Roman Shaffer, romanshaffer@yahoo.com
Edward (Ted) L. Quinn, tedquinn@cox.net
Fitness of computer vision machine learning in the regulatory framework for safety-related or risk-significant applications
With the advancements made in the field of artificial intelligence (AI) to date, significant potential exists to utilize AI capabilities for nuclear power plant (NPP) applications. AI can replicate human decision making, and AI decision making is usually faster and more accurate. For implementations that impact critical NPP applications (e.g., safety-related or non-safety systems that potentially affect overall plant risk), a deeper regulatory analysis of the AI methods is required. AI applied to NPP operations falls within the category of digital I&C (DI&C) because such applications involve digital computer hardware and custom-designed software that input plant data, execute complex software algorithms, and output the results to a system or licensed human operator to potentially provoke an action. For AI methods to be compliant with current regulatory requirements for DI&C, AI compatibility must be evaluated to identify AI gaps that may exist to prevent effective deployment of AI in NPPs. This work aims to evaluate how example AI technologies, specifically computer vision machine le...
Paper AH324
Name: Ahmad Al Rashdan (ahmad.alrashdan@inl.gov)
Session Th04 - Fire Probabilistic Risk Analysis with FRI3D
Session Chair: Kurt Vedros (kurt.vedros@inl.gov)
Paper 1 RA340
Lead Author: Ram Sampath Co-author(s): Ram Sampath (ram@centroidlab.com)
Fire Modeling & Probabilistic Risk Analysis with FRI3D
The Fire Risk Investigation in 3D (FRI3D) software automates many of the tasks for fire modeling and PRA under one system, reducing the overall maintenance costs and human errors of fire modeling. The platform also maintains all necessary information in one place, allowing the analyst to evaluate scenarios without accessing numerous plant drawings and databases, and gives the analyst direct, built-in access to the industry-approved fire modelling codes and methods. The use of FRI3D by operators, regulators and engineering consultancy firms is expected to lead to the enhancement of fire PRA, increased safety, and reduced operating costs. This workshop will walk through a sample fire analysis using FRI3D and will cover a few typical fire scenarios encountered by the industry.
Please go to
https://fri3d.centroidlab.com/guides/fri3d-2022-summer-psam16-workshop
to download the materials and an evaluation copy of FRI3D for the workshop.
This step is only for following along during the workshop and is not mandatory.
...
Paper RA340
Name: Ram Sampath (ram@centroidlab.com)
Session Th05 - Modernization Through Risk Management
Lead Author: Lana Lawrence Co-author(s): Todd Anselmi, Todd.Anselmi@inl.gov
Diego Mandelli, Diego.Mandelli@inl.gov
Curtis Smith, Curtis.Smith@inl.gov
Guidance for Risk-Informed Reliability and Integrity Management Program Development
Every nuclear power plant in the U.S. and around the world is obligated to maintain high levels of safety with measures that ensure plant reliability and integrity. These programs have become increasingly risk-informed in recent years. New reactor designs are very focused on risk-informed approaches to support all stages of development, from initial design and licensing to plant operation and retirement. The Licensing Modernization Project (LMP) initiative by the U.S. Nuclear Regulatory Commission (NRC) is just one example of a risk-informed approach being encouraged for implementation.
The LMP initiative resulted in issuance of Regulatory Guide (RG) 1.233, "Guidance for a Technology-Inclusive, Risk-Informed, and Performance-Based Methodology to Inform the Licensing Basis and Content of Applications for Licenses, Certifications, and Approvals for Non-Light Water Reactors." RG 1.233 endorses Nuclear Energy Institute (NEI) 18-04, Revision 1, “Risk-Informed Performance-Based Guidance for Non-Light Water Reactor Licensing Basis Development,” as one acceptable method for non-LWR desig...
Lead Author: Edward Chen Co-author(s): Han Bao han.bao@inl.gov
Tate Shorthill tate.shorthill@inl.gov
Carl Elks crelks@vcu.edu
Nam Dinh ntdinh@ncsu.edu
Application of Orthogonal-Defect Classification for Software Reliability Analysis
Modernization of existing and new nuclear power plants with digital instrumentation and control (DI&C) systems is a recent and highly trending topic. However, strong consensus on best-estimate risk methodologies is lacking from both the Nuclear Regulatory Commission and industry. This has resulted in hesitation toward further modernization projects until a more unified methodology is recognized. In this work, we develop an approach called Orthogonal-defect Classification for Assessing Software Reliability (ORCAS) to quantify the probabilities of various software failure modes in a DI&C system. The method utilizes accepted industry methodologies for software quality assurance that are also verified by experimental or mathematical formulations. In essence, the approach combines a semantic failure classification model with a reliability growth model to predict (and quantify) potential failure modes of a DI&C software system. The semantic classification model is used to address the question: how do latent defects in software contribute to different software failure root causes? The use of reliab...
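For the reliability-growth ingredient, one common concrete choice (assumed here, since the abstract does not name the specific model) is the Crow-AMSAA NHPP model; the defect timestamps below are synthetic.

```python
# Minimal sketch (synthetic data): maximum-likelihood fit of a Crow-AMSAA
# reliability growth model, N(t) = lam * t**beta, failure-truncated case.
import numpy as np

t = np.array([12.0, 40.0, 95.0, 180.0, 310.0, 500.0])  # cumulative test hours
n = len(t)
T = t[-1]

# Closed-form MLEs (the last failure contributes ln(T/T) = 0 to the sum).
beta = n / np.sum(np.log(T / t[:-1]))
lam = n / T ** beta
print(f"beta = {beta:.2f} (<1 indicates reliability growth), lambda = {lam:.4f}")
```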
Lead Author: Matthew Humberstone Co-author(s): Keith Compton, Keith.Compton@nrc.gov
Trey Hathaway, Alred.Hathaway@nrc.gov
Kurt Vedros, Kurt.Vedros@inl.gov
The Impact of External Hazards and FLEX Credit in the Application of LMP for Operating Reactors
As part of the Nuclear Regulatory Commission’s (NRC’s) effort to provide resources to longer-term, forward-looking research projects with potential regulatory benefits, a future-focused research (FFR) project was established to study the implementation of the Licensing Modernization Project (LMP) methodology for the operating reactors. This research effort used the LMP methodology and applied the NRC’s Level 3 probabilistic risk assessment (PRA) model results to gain feasibility insights. The initial phase of this effort used results from a limited scope of the NRC’s Level 3 PRA model to explore key risk insights of the licensing basis for reactors licensed under Title 10 of the Code of Federal Regulations (10 CFR) Part 50. The next phase of this effort, addressed in this paper, uses the expanded results from the NRC’s Level 3 PRA model, which includes model enhancements.
This paper compares the results derived from the level 1, level 2, and level 3 PRA models for internal events and internal floods with the results that include external events. Moreover, the paper uses this c...
Name: Matthew Humberstone (Matthew.Humberstone@nrc.gov)
Paper 4 ST181
Lead Author: Stanislav Hustak
PSA Applications for Dukovany NPP
The Reliability and Risk Department at UJV Rez, a. s., in the Czech Republic, has developed and currently maintains the Living PSA project for Dukovany NPP, a four-unit nuclear power plant in the Czech Republic.
RiskSpectrum® PSA software has been used for the development and quantification of the Living PSA model. It is an integrated model which comprises all initiating events, including internal and external hazards, for all plant operational modes in the same project. The PSA model is continuously updated and used extensively for various PSA applications at Dukovany NPP, such as risk monitoring, evaluation of Technical Specifications, event analysis, and analysis of plant modifications. The use of the selected PSA applications to support Dukovany NPP risk management is required by the Czech regulatory decree and supported by the Czech regulatory guidelines.
The paper describes the selected PSA applications at Dukovany NPP that have been performed recently, namely evaluation of Technical Specifications and requirements for availability of diverse and mobile (DAM) equipment. This evaluation follo...
Lead Author: Tobias Leopold
Planning and evaluation of reliability demonstration testing with uncertainties
During product development, proving product reliability is often a challenging activity. Basic planning actions have to be executed at an early design stage of the product. Two ways of proving reliability are available: testing with failures and testing without failures. Within the framework of this work, the focus is on testing without failures, also called reliability demonstration testing (RDT). The relevant planning questions are listed below, followed by a short numerical sketch of the classic relations involved.
In the planning phase, the following questions have to be analysed:
• What is the probability of a successful RDT?
• What number of specimens and what testing duration are necessary?
• How should the unlikely event of product failures during an RDT be handled?
• Which uncertainties have to be considered, and how should they enter the evaluation phase?
Uncertainties and their consequences in particular are part of this study. One example is the estimation of the shape parameter of a Weibull-distributed failure characteristic in the planning phase. Possible sources for this estimate and the consequences of its uncertainty are analysed.
...
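For context, these are the classic zero-failure ("success run") relations underlying RDT planning, in a minimal sketch; the Weibull shape parameter b entering the Lipson trade-off is exactly the uncertain planning input the abstract discusses. The paper's own treatment may differ.

```python
# Minimal sketch of the classic success-run relations: n failure-free
# survivors demonstrate reliability R at confidence C, and the Lipson
# equality trades sample size against test duration for Weibull shape b.
import math

def specimens_needed(R, C):
    """n such that n failure-free tests demonstrate R at confidence C."""
    return math.ceil(math.log(1 - C) / math.log(R))

def extended_test_specimens(n, life_ratio, b):
    """Lipson equality: testing each unit life_ratio times the target life
    (assumed Weibull shape b) reduces the required sample size."""
    return math.ceil(n / life_ratio ** b)

n = specimens_needed(R=0.9, C=0.9)                           # -> 22 units
print(n, extended_test_specimens(n, life_ratio=1.5, b=2.0))  # fewer units, longer test
```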
Name: Tobias Leopold (tobias.leopold@hs-esslingen.de)
Paper 2 AL3
Lead Author: Alexander Grundler Co-author(s): Martin Dazer, martin.dazer@ima.uni-stuttgart.de
Bernd Bertsche, bernd.bertsche@ima.uni-stuttgart.de
Efficient Reliability Demonstration using the Probability of Test Success and Bayes Theorem
In order to demonstrate the reliability of a component, the reliability engineer is often faced with multiple challenges. On the one hand, the budget for testing is limited; on the other hand, the demonstration needs to be done as quickly as possible and with the most precise statistical information possible. To address these challenges, the concept of the Probability of Test Success was developed. It enables the objective assessment of tests with regard to their chance of success, and thus the ability to directly compare tests as well as to plan expenditures and estimate costs. Secondly, a great variety of approaches have been developed which, by means of Bayes' theorem, use available prior knowledge to correct the information obtained from the tests and thus reduce expenditures. However, the combination of the Probability of Test Success and Bayes' theorem to plan efficient reliability demonstration tests has not been addressed up to now. Therefore, the aim of this paper is to do so. It is analysed how reliability demonstration tests can be planned using the Probability of Test Success ...
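As a minimal sketch of how prior knowledge can shrink a zero-failure demonstration, under an assumed beta-binomial framing (not necessarily the paper's formulation):

```python
# Minimal sketch (assumed beta-binomial framing): prior knowledge via
# Bayes' theorem reduces the required number of failure-free tests.
from scipy.stats import beta

R_target, C_target = 0.90, 0.90

# Prior over reliability, e.g., from predecessor products -- values assumed.
a0, b0 = 8.0, 1.0

# After n failure-free tests the posterior is Beta(a0 + n, b0);
# demonstrated confidence = P(R > R_target | data).
for n in range(0, 25):
    confidence = beta.sf(R_target, a0 + n, b0)
    if confidence >= C_target:
        print(f"{n} failure-free tests suffice with this prior "
              "(vs. 22 without prior knowledge)")
        break
```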
Name: Alexander Grundler (alexander.grundler@ima.uni-stuttgart.de)
Paper 3 DA21
Lead Author: Martin Dazer Co-author(s): Alexander Grundler; alexander.grundler@ima.uni-stuttgart.de
Achim Benz; achim.benz@ima.uni-stuttgart.de
Philipp Mell; philipp.mell@ima.uni-stuttgart.de
Marco Arndt; marco.arndt@ima.uni-stuttgart.de
Risk based reliability demonstration test planning for decision making under uncertainty
Reliability assurance by empirical data collected from lifetime tests is always subject to uncertainty and thus to a risk of making wrong decisions. The type-I statistical error is quantified and minimized over the generally known confidence interval to ensure that the reliability of the population in field operation is valid. The type-II statistical error quantifies the risk of a failed reliability test and thus the producer risk. A failed test generally means further iteration loops in the assurance process and should be avoided accordingly. However, in the context of reliability assurance, the type-II error is often neglected and consequently it is not known how high the probability of successful reliability demonstration is with the chosen test strategy. In this paper, a new method is presented that allows a calculation of the type-II error based on prior knowledge, which is called probability of test success (Pts). Pts enables the objective comparison of available test strategies for scenarios with a wide variety of boundary conditions such as accelerated testing, system and com...
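The Probability of Test Success can be estimated by straightforward Monte Carlo under an assumed true failure behavior; all numbers below are invented, but the result illustrates the producer's risk the abstract says is often neglected.

```python
# Minimal sketch (assumed numbers): Monte Carlo estimate of the Probability
# of Test Success (Pts), i.e., 1 minus the type-II error, for a zero-failure
# test given assumed true Weibull failure behavior.
import numpy as np

rng = np.random.default_rng(7)
b, eta_true = 2.0, 6.0          # assumed true Weibull shape / scale (lifetimes)
n, t_test = 22, 1.0             # planned test: 22 units, one target lifetime each

trials = 100_000
lifetimes = eta_true * rng.weibull(b, size=(trials, n))
pts = (lifetimes > t_test).all(axis=1).mean()   # all units must survive
print(f"Pts ~ {pts:.2f}  (true per-unit reliability ~0.97, "
      "yet the demonstration still fails often)")
```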
Lead Author: Achim Benz Co-author(s): Alexander Grundler, alexander.grundler@ima.uni-stuttgart.de
Martin Dazer, martin.dazer@ima.uni-stuttgart.de
Bernd Bertsche, bernd.bertsche@ima.uni-stuttgart.de
Reliability Demonstration Test Planning for different Distributions of Field Load Spectra
In recent decades, product development cycles have become significantly shorter, with increasing reliability requirements and simultaneously decreasing budgets for tests. Even though a lot of knowledge about the failure behavior of the product can be generated by simulations, in the end a test is needed to demonstrate the reliability of the product. In order to select a test that optimally uses the resources of time, cost and number of samples for the desired reliability level with a sufficient confidence level, a simulation is performed. Previous work by Herzig et al. and Grundler et al. applied the concept of Probability of Success (PoS) for Success Run and End of Life tests introduced by Dazer et al. to consider accelerated Success Run and End of Life tests besides different failure mechanisms. This involved Monte Carlo simulations to derive the correlations of a successful test with the costs incurred, time, number of samples and the achievable demonstrated reliability. Previous work by Benz et al. introduces the concept of PoS for field load spectra deriving the demonstrated dam...
Lead Author: Heejong Yoo Co-author(s): Gyunyoung Heo (gheo@khu.ac.kr) *Corresponding author
Lessons-learned of using Monte Carlo method with importance sampling in fault tree quantification
In the quantitative evaluation of fault trees (FTs) and event trees (ETs) during a Level 1 PSA, ETs are easily calculated as the product of the branch probabilities, while FTs need additional evaluation techniques to deal with the Boolean logic. FTs used in nuclear engineering are usually classified as large FTs, which makes it difficult to calculate the top event probability with conventional methods. Another problem is that all conventional methods use minimal cut sets, which require an additional process to obtain. Validation issues are also present, because the widely used methods all rely on minimal cut sets, leading to the need for methods that are free of minimal cut sets. While there have been some attempts to obtain the top event probability of FTs using the Monte Carlo method, which can be free of minimal cut sets given a suitable algorithm, the time and computational cost of the Monte Carlo method is always the main issue. In order to reduce computational resources, and given its strong point of variance ...
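A minimal sketch of the idea on a toy fault tree: sample component states directly (no minimal cut sets), bias failures upward, and reweight by the likelihood ratio. The tree structure and probabilities are invented, not from the paper.

```python
# Minimal sketch (toy fault tree): top-event probability by Monte Carlo
# with importance sampling -- failures are oversampled and reweighted.
import numpy as np

rng = np.random.default_rng(3)
p = np.array([1e-3, 2e-3, 5e-4])      # true basic-event probabilities
q = np.minimum(p * 50, 0.5)           # biased sampling probabilities

def top_event(x):
    # Toy structure: TOP = BE0 AND (BE1 OR BE2), evaluated on state samples.
    return x[:, 0] & (x[:, 1] | x[:, 2])

N = 200_000
x = rng.random((N, 3)) < q            # sample states under the biased measure
# Per-sample likelihood ratio of the true measure vs. the biased one.
w = np.prod(np.where(x, p / q, (1 - p) / (1 - q)), axis=1)
est = np.mean(top_event(x) * w)
print(f"IS estimate: {est:.3e}  (exact: {p[0] * (p[1] + p[2] - p[1]*p[2]):.3e})")
```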
Lead Author: Kyusik Oh Co-author(s): Sangjun Park (sangjun@kaeri.re.kr)
Gyunyoung Heo (gheo@khu.ac.kr) : corresponding author
Improving Measurement Reliability using Data Reconciliation and Digital Twin
Measurements may be inaccurate because of defects in the measuring instruments or leakages in the system, but also for unknown reasons. These uncertainties are inevitable but can be managed if they are quantified. Error with an identifiable cause, such as a defect in the measuring instrument or a leakage in the facility, is called gross error; error with no identifiable cause is called random error. To reduce or eliminate these errors, data reconciliation and gross error detection techniques are effective. They not only reduce random errors in the measurements and eliminate gross errors, but also adjust the values to satisfy the correlations between them, called physical models. These techniques have been developed in various fields, but it seems rare that both the updating of the physical model of the system and the strategy for maintaining the actual measurement system are explained simultaneously.
First, this paper introduces an in-house code (in the R language) implementing data reconciliation and gross error detection algorithms. Three case studies using power plant simulation data are ex...
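The paper's in-house code is in R; purely to illustrate the underlying mathematics, here is a Python sketch of the classical weighted least-squares reconciliation step with one linear balance constraint, using invented measurements.

```python
# Minimal sketch (illustrative numbers): data reconciliation as weighted
# least squares subject to a linear balance constraint A @ x = 0
# (here a simple flow split, f1 = f2 + f3).
import numpy as np

y = np.array([100.3, 64.8, 36.1])       # raw measurements
sigma = np.array([1.0, 0.8, 0.6])       # measurement standard deviations
V = np.diag(sigma ** 2)
A = np.array([[1.0, -1.0, -1.0]])       # mass balance residual A @ x = 0

# Closed-form reconciled estimate: x = y - V A^T (A V A^T)^-1 (A y)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
x = y - correction

print("reconciled:", x.round(2), "| balance residual:", (A @ x)[0])
```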
Lead Author: Woo Sik Jung Co-author(s): Seong Kyu Park (sparkpsa@ness.re.kr)
Balanced Fault Tree Modeling of Alternating Operating Systems in Probabilistic Safety Assessment
Nuclear power plants (NPPs) have alternating operation systems, such as the component cooling water system (CCWS), essential service water system (ESWS), essential chilled water system (ECWS), and chemical and volume control system (CVCS). Single-unit probabilistic safety assessment (SUPSA) models for NPPs include many failures of alternating systems. Furthermore, since NPPs alternate between full power and low power and shutdown (LPSD) operation, multi-unit PSA (MUPSA) models include failures of NPPs that alternate between full power and LPSD.
The failures of alternating operations are modeled using fraction or partitioning events in seismic SUPSA and MUPSA fault trees. Since partitioning events for one system are mutually exclusive, their combinations should be excluded from exact solutions. However, it is difficult to eliminate the combinations of mutually exclusive events without modifying the PSA tools that generate MCSs from a fault tree. If the combinations of mutually exclusive events are not deleted, the core damage frequency (CDF...
Session Th13 - Cyber Security II
Session Chair: Ali Ayoub (aliayoub@mit.edu)
Paper 1 ZH169
Lead Author: Yunfei Zhao Co-author(s): Linan Huang (huanglinan29@gmail.com)
Quanyan Zhu (quanyan.zhu@nyu.edu)
Carol Smidts (smidts.1@osu.edu)
Bayesian games for optimal cybersecurity investment with incomplete information of the attacker
The trend of digitization in various industrial systems has exposed these systems to increasing cyberattacks. Therefore, it is of vital importance to reduce the cybersecurity risk of industrial systems through cost-effective decisions on cybersecurity investment. In making such decisions, the defender is usually faced with the challenge that arises from incomplete information on the attacker. In this paper, we propose a Bayesian games approach to model the optimal cybersecurity investment strategy under such situations. In this approach, the defender categorizes the attacker into a finite number of types, e.g., various levels of capability, and assigns a probability distribution over the different types of attackers. Then the defender optimizes his/her cybersecurity investment based on risk assessment considering the possible attack efforts of these various types of attackers, with the objective of minimizing the expected cyberattack loss and the cybersecurity investment cost. The proposed method is demonstrated using a numerical example. We perform a sensitivity analysis for model p...
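To make the decision structure concrete, the following minimal Python sketch (illustrative only, not the paper's model) enumerates investment levels against a finite set of attacker types with prior probabilities and picks the investment minimizing expected attack loss plus investment cost; the functional form of the attack-success probability and all parameter values are hypothetical.

import numpy as np

# Hypothetical attacker types with prior beliefs (incomplete information).
types = {"low": 0.5, "medium": 0.3, "high": 0.2}          # P(type)
capability = {"low": 0.2, "medium": 0.5, "high": 0.9}     # attack effort scale
LOSS = 1.0e6   # consequence of a successful attack (hypothetical, $)

def p_success(invest, cap):
    # Success probability falls exponentially with investment and
    # rises with attacker capability (illustrative functional form).
    return cap * np.exp(-invest / 2.0e5)

investments = np.linspace(0, 1.0e6, 201)
expected_cost = [
    c + sum(p * p_success(c, capability[t]) * LOSS for t, p in types.items())
    for c in investments
]
best = investments[int(np.argmin(expected_cost))]
print(f"optimal investment: ${best:,.0f}")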
Lead Author: Pavan Kumar Vaddi Co-author(s): Michael C. Pietrykowski, pietrykowski.6@osu.edu;
Xiaoxu Diao, diao.38@osu.edu;
Yunfei Zhao, zhao.2263@osu.edu;
Carol Smidts, smidts.1@osu.edu
Dynamic Probabilistic Risk Assessment for Cyber Security Risk Analysis in Nuclear Reactors
The increasing adaptation of nuclear power plants (NPPs) to incorporate software-based components along with digital communication networks in their operation has resulted in improved control, automation, monitoring, and diagnostics, while simultaneously opening those power plants to a new dimension of risk: cyber-attacks. Additionally, attackers have become more knowledgeable about the vulnerabilities associated with such software systems and network architectures. Hence there is a need to systematically study and quantify the risks associated with cyber-attacks on NPPs and the existing cyber defenses. In this paper we present a dynamic probabilistic risk assessment (DPRA) framework for nuclear power plants in the context of cyber security. In addition to stochastic events such as component failures, the framework implements cyber-attacks along with the behaviors of the defenders (i.e., the plant operators) and the attackers, and their interactions, in a game theory-based framework. The proposed DPRA framework is demonstrated using the secondary side of a pressurized water reactor (PWR)....
Session Th14 - Machine Learning IV
Session Chair: Craig Primer (craig.primer@inl.gov)
Paper 1 VI170
Lead Author: Vivek Agarwal Co-author(s): Koushik A. Manjunatha, Andrei V. Gribok, Torrey J. Mortenson, and Harry Palas
Scalable Risk-Informed Predictive Maintenance Strategy for Operating Nuclear Power Plants
Over the years, the nuclear fleet has relied on labor-intensive, time-consuming preventive maintenance (PM) programs, driving up operation and maintenance (O&M) costs to achieve high capacity factors. The primary objective of the research presented in this paper is to develop scalable technologies deployable across plant assets and the nuclear fleet in order to achieve risk-informed predictive maintenance (PdM) strategies at commercial nuclear power plants (NPPs). A well-constructed risk-informed PdM approach for an identified plant asset was developed in this research, taking advantage of advancements in data analytics, machine learning (ML), artificial intelligence (AI), risk modeling, and visualization. These technologies would allow commercial NPPs to reliably transition from current labor-intensive PM programs to a technology-driven PdM program, eliminating unnecessary O&M costs.
The research and development approach presented in the paper is being developed as part of a collaborative research effort between Idaho National Laboratory and Public Service Enterprise Group (PSEG) Nucl...
Lead Author: Plínio Ramos Co-author(s): July B. Macedo - julybias@gmail.com
Caio B. S. Maior - caio.maior@ufpe.br
Márcio C. Moura - marcio.cmoura@ufpe.br
Isis D. Lins - isis.lins@ufpe.br
Combining BERT with numerical features to classify injury leave based on accident description
Workplace safety is a major concern in many industries. In this context, accident investigation reports provide useful knowledge to support companies in proposing preventive and mitigative measures. However, the information in accident report databases is normally large, complex, and filled with redundant data. Thus, a complete human review of the entire database is arduous, considering the numerous reports produced by a company. Therefore, natural language processing (NLP)-based techniques are suitable for analyzing a massive amount of textual information. In this paper, we adopted NLP techniques to determine whether or not an injury leave would be expected. The methodology was applied to 648 accident reports collected from an actual hydroelectric power company and focused on the accident agent categories. We employ Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art natural language processing method, to tackle the aforementioned problem. The text representations provided by the BERT model were combined with numerical and binary features extracted f...
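The combination described above follows a common pattern, sketched below in Python under hypothetical data and hyperparameters (not the paper's exact architecture): take the BERT [CLS] representation of each report, concatenate the numerical features, and fit a simple classifier.

import numpy as np, torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-batch of accident descriptions plus numeric features
# (e.g., hour of day, years of experience) and injury-leave labels.
texts = ["worker slipped on wet stairs", "minor tool drop, no contact"]
numeric = np.array([[14.0, 2.5], [9.0, 11.0]])
labels = np.array([1, 0])

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    cls = bert(**enc).last_hidden_state[:, 0, :].numpy()  # [CLS] vectors

X = np.hstack([cls, numeric])       # text representation + numeric features
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))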
A PSAM Profile is not yet available for this author.
Paper 3 AH327
Lead Author: Ahmad Al Rashdan
Mapping inspection procedure requirements into data metrics for automation-assisted inspection preparation
In the nuclear industry, regulatory compliance activities constitute an appreciable portion of a nuclear plant's data collection and analysis efforts, and therefore represent a significant portion of a nuclear utility's operations and maintenance budget, with regulatory inspections as a major component. These inspections involve many different types of information needed by the U.S. Nuclear Regulatory Commission's (NRC) Reactor Oversight Program. The Department of Energy's Light Water Reactor Sustainability (LWRS) program launched an effort to automate data collection and analysis for regulatory inspection preparation. The effort would also assess the ability of automation technologies to provide potentially more efficient means for nuclear utilities to verify and demonstrate regulatory compliance. A key aspect of automating this process is converting plant performance related to each of the inspection requirements into quantifiable data metrics. The development of those data metrics is discussed using the NRC problem identification and resolution inspection as a use case. ...
Paper AH327
Name: Ahmad Al Rashdan (ahmad.alrashdan@inl.gov)
Paper 4 AC15
Lead Author: Achim Benz Co-author(s): Alexander Grundler, alexander.grundler@ima.uni-stuttgart.de
Martin Dazer, martin.dazer@ima.uni-stuttgart.de
Bernd Bertsche, bernd.bertsche@ima.uni-stuttgart.de
Reliability Demonstration Test Planning for different Distributions of Field Load Spectra
In recent decades, product development cycles have become significantly shorter, with increasing reliability requirements and simultaneously decreasing budgets for tests. Even though a lot of knowledge about the failure behavior of the product can be generated by simulations, in the end a test is needed to demonstrate the reliability of the product. In order to select a test that optimally uses the resources of time, cost and number of samples for the desired reliability level with a sufficient confidence level, a simulation is performed. Previous work by Herzig et al. and Grundler et al. applied the concept of Probability of Success (PoS) for Success Run and End of Life tests introduced by Dazer et al. to consider accelerated Success Run and End of Life tests besides different failure mechanisms. This involved Monte Carlo simulations to derive the correlations of a successful test with the costs incurred, time, number of samples and the achievable demonstrated reliability. Previous work by Benz et al. introduces the concept of PoS for field load spectra deriving the demonstrated dam...
Name: Achim Benz (achim.benz@ima.uni-stuttgart.de)
Session Th15 - Human Reliability Analysis V
Session Chair: Ronald Boring (ronald.boring@inl.gov)
Paper 1 ME315
Lead Author: Tom Ulrich Co-author(s): Stephen Hancock, stephen.hancock@inl.gov
Roger Lew, rogerlew@uidaho.edu
Tyler Westover, tyler.westover@inl.gov
Richard Boardman, richard.boardman@inl.gov
Simulators and Operating Concepts for Hydrogen Production
Operating concepts for close-coupled nuclear and hydrogen plants require simulator development to address operator human factors and performance issues. In addition, this research supports the development of plant-to-plant control systems interfaces that may involve mixed analog and digital distributed control system logic. This work supports operational safety while enabling nuclear plants to dispatch electricity to the hydrogen plant or the grid within minutes of a request from the grid operators. The Human Systems Simulation Lab at the Idaho National Laboratory was used to evaluate the ability of nuclear power plant operators to respond to normal and potentially off-normal events when switching between the traditional fully electric generation and hybrid electric and thermal energy operation modes to support hydrogen production with a nuclear power plant....
Lead Author: Kanoko Nishiono Co-author(s): Marilia Ramos, marilia.ramos@ucla.edu
Yoshikane Hamaguchi, hamaguchi_yoshikane_b3n@nra.go.jp
Ali Mosleh, mosleh@ucla.edu
Dependency Analysis within Human Failure Events for Nuclear Power Plant: Comparison between Phoenix and SPAR-H
Robust and realistic Human Error Probability (HEP) estimation within Human Reliability Analysis (HRA) relies upon, among other factors, appropriate consideration of the dependency between human failure events (HFEs). The approach for assessing dependency varies throughout HRA methods. The reasoning and cognitive basis behind the different approaches for dependency, their quantitative rationale, and their impact on the HEP are still subject to investigation by the HRA community. This paper aims to discuss the characteristics of HRA methodologies considering dependency through a comparison between two approaches: Phoenix, developed by the University of California, Los Angeles, and SPAR-H, developed by Idaho National Laboratory. The comparison of their qualitative frameworks will be performed through three elements: HRA variables, environmental factors considered, and causal modeling methods. Additionally, the following two elements will be discussed for comparing the quantitative analysis: the dependency value estimation method and the HEP estimation method considering dependency.
SPAR-H an...
Lead Author: Roger Lew Co-author(s): Ronald L. Boring
Thomas A. Ulrich
Applications of the Gamified Rancor Microworld Simulator Model for Dynamic Human Reliability Simulation
Significant effort has been put into the development of high-fidelity thermohydraulic modeling for nuclear power and process control generally. Plants utilize full-scope simulators for engineering and training purposes. While not perfect, they tend to represent the physical configuration and control systems of plants with a high degree of fidelity. The disadvantages of such models are that they are complex, difficult to modify, and difficult to couple with other models. Full-scope simulators are also not optimized for speed, and even with modern computers, conducting Monte Carlo simulations with tens of thousands or hundreds of thousands of runs is not logistically feasible. Reduced order models (ROMs) are simplified engineering models that are validated for particular aspects (e.g., steady-state performance) against physical systems or higher-fidelity models. They can then be utilized within their validated envelopes to gain insights into engineered systems. ROMs address the complexity and coupling disadvantages of full-scope models due to their simplified nature.
In the human factors domain a...
Lead Author: Philipp Mell Co-author(s): Marco Arndt; marco.arndt@ima.uni-stuttgart.de
Martin Dazer; martin.dazer@ima.uni-stuttgart.de
Non-orthogonality in DoE: practical relevance of the theoretical concept in terms of regression quality and test plan efficiency
Whenever scientific problems aren't well understood in their physical properties or cannot be solved analytically, the approach of statistical design of experiments (DoE) is the only alternative. Yet, many DoE approaches are mathematically derived and underlie assumptions and restrictions which might be hard or even impossible to meet in practice. Therefore, numerous research gaps regarding the practical implementation of DoE test plans remain. One typical requirement is that the experimental design has to be orthogonal. This condition demands that the investigated factors be set exactly to the given factor levels. This is usually not possible, inevitably leading to deviations from the ideal condition. The literature therefore suggests a number of different metrics to measure the non-orthogonality of a test plan, which are presented and compared in this paper. The question arises how crucial the impact of a particular deviation from the ideal orthogonal design is. This can be assessed by studying two central quantities of a DoE test plan: First, the power, which shows how like...
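One simple such metric is the largest absolute pairwise correlation between the realized factor columns, which is exactly zero for an orthogonal plan. The Python sketch below (illustrative; the paper compares several metrics) contrasts an ideal two-level full factorial with the same plan after small deviations in the set factor levels.

import numpy as np

def max_offdiag_correlation(X):
    """Largest absolute pairwise correlation between factor columns of the
    realized design matrix; 0 for a perfectly orthogonal plan."""
    R = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(R, 0.0)
    return np.abs(R).max()

# Ideal 2-level full factorial in 3 factors vs. the same plan with the
# realized factor settings perturbed (levels not hit exactly).
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)],
                  dtype=float)
rng = np.random.default_rng(42)
realized = levels + rng.normal(0.0, 0.05, levels.shape)

print(max_offdiag_correlation(levels))    # 0.0 (orthogonal)
print(max_offdiag_correlation(realized))  # small but nonzero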
Name: Philipp Mell (philipp.mell@ima.uni-stuttgart.de)
Paper 2 LE81
Lead Author: Eugene Levner Co-author(s): BORIS KRIHELI borisk@hit.ac.il
Achieving reliable low-cost detection of faulty parts in cyber-physical systems using unreliable detection sensors
Consider the problem of efficient and reliable detection of faulty parts in a large-scale educational cyber-physical system (ECPS). The ECPS is a network of several hundred computers, audio and video devices, and sensor/control devices that are located at homes and in university classrooms, interact with each other via the Internet, and support online or hybrid education. To locate the faulty parts, the ECPS uses a set of unreliable sensors that test the system components one after another. For any possibly failed component, the following data are collected and used: (a) the cost and time for the component to be checked by a sensor; (b) the initial probability of component failure; (c) the probabilities of false negative and false positive ("false alarm") test results; and (d) the required safety level p0, defined as the probability of correctly detecting a faulty part; this parameter is set in advance by the decision maker and far exceeds the known reliability values of the individual sensors.
To achieve the required level of safety, we develop a new method that c...
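One ingredient of such a method can be illustrated with a back-of-the-envelope calculation: if each unreliable sensor misses a real fault with a known false-negative probability, the number of independent repeated tests needed to reach the required safety level p0 follows directly. The Python sketch below shows this under illustrative assumptions only; the paper's method additionally treats false alarms, costs, and test sequencing.

import math

def tests_needed(p0, false_negative):
    """Minimum number of independent repeated inspections of a truly
    faulty part so that P(detect at least once) >= p0, assuming each
    unreliable sensor misses the fault with the given probability."""
    # 1 - q**n >= p0  =>  n >= log(1 - p0) / log(q)
    return math.ceil(math.log(1.0 - p0) / math.log(false_negative))

# Example: a sensor misses 20% of faults, required safety level p0 = 0.999.
print(tests_needed(0.999, 0.20))   # -> 5 repeated tests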
Lead Author: Askin Guler Co-author(s): Andrew Worrall worralla@ornl.gov
George Flanagan flanagangf@ornl.gov
Development and Capabilities of the Molten Salt Reactors Reliability Database
The Molten Salt Reactors Reliability Database (MOSARD), developed at Oak Ridge National Laboratory (ORNL), aims to help molten salt reactor (MSR) system designers, researchers, and regulators evaluate the reliability of MSR systems from knowledge of component reliability. The collaboration between ORNL, the Electric Power Research Institute (EPRI), and Vanderbilt University (VU) aims at establishing the structure and contents of a component reliability database for MSRs. VU is tasked with collecting Molten-Salt Reactor Experiment (MSRE) component reliability data with the help of ORNL, and EPRI is providing insight to the project from its role as a postulated end user of the database. MOSARD is a timely input for licensing efforts of advanced MSR designs, including the risk-informed, performance-based regulations endorsed by the Nuclear Regulatory Commission. MSRE valve reliability data collected from semi-annual reports and operational summaries have been used as an initial data set to crea...
Paper YI337
Name: Askin Guler (yigitoglua@ornl.gov)
Paper 4 CR227
Lead Author: Fernando Ferrante Co-author(s): Ali Mosleh mosleh@g.ucla.edu
Enrique Lopez Droguett eald@g.ucla.edu
Justin Hiller jhiller@ameren.com
Sergio Cofré-Martel scofre@umd.edu
A Bayesian Method for Estimating Potential Impact of Increase in STI on Component Failure Rates
Extending the time interval between inspections of surveillance test intervals (STIs) for risk-informed applications, such as the surveillance frequency control program (SFCP) in the U.S., includes guidance on addressing the potential impact on a component's failure rate due to unseen and/or in-progress failure mechanisms. The STI extension methods described in the Nuclear Energy Institute (NEI) guidance for the SFCP (NEI 04-10) involve conservatively modeling STI-modified components in a probabilistic risk assessment (PRA) model to assess potential risk impacts. While the guidance in NEI 04-10 provides details in terms of addressing the overall impact, it also includes a step to account for a periodic reassessment of the overall program impact. For this step, NEI 04-10 provides two options for how a periodic reassessment may be performed in terms of incorporating revised STIs into the base PRA model. The first option is to use the original conservative data assumptions that were utilized in performing the initial STI assessment, while the second option is to utilize data collection and ...
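As background, the generic conjugate machinery for updating a failure rate as surveillance data accrue looks like the Python sketch below (a standard gamma-Poisson update, not the specific Bayesian method of the paper; all numbers hypothetical).

# Gamma-Poisson conjugate update of a component failure rate:
# prior lambda ~ Gamma(a0, b0); observing x failures in t component-years
# gives posterior Gamma(a0 + x, b0 + t).
from scipy.stats import gamma

a0, b0 = 0.5, 100.0          # Jeffreys-like prior (hypothetical values)
x, t = 1, 240.0              # failures observed over the extended STI period

a, b = a0 + x, b0 + t
post = gamma(a, scale=1.0 / b)
print(f"posterior mean rate: {post.mean():.2e} /yr")
print(f"95% interval: {post.ppf(0.025):.2e} .. {post.ppf(0.975):.2e} /yr")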
A PSAM Profile is not yet available for the presenter.
Session Th22 - Integration of Deterministic & Probabilistic Methods for Existing Plants & Advanced Reactors
Session Chair: Han Bao (han.bao@inl.gov)
Paper 1 PA39
Lead Author: Tarannom Parhizkar Co-author(s):
Gabriel San Martín. gsanmartin@g.ucla.edu
ENRIQUE LOPEZ DROGUETT eald@g.ucla.edu
Quantum-Based Fault Tree Analysis
Fault tree analysis is a technique widely used in the study of the reliability of complex systems. In a fault tree, each event has two states, failed or working, which can be represented as a binary number in the quantification process. In this study, a quantum-based method is introduced for use in fault tree analysis. In the quantum-based method, instead of a binary number representing an event state, a qubit is used that encodes a coherent superposition of the binary states, i.e., a single qubit can be described by a complex linear combination of the quantum states |0> and |1>, representing the failure and working states of an event, respectively. Advantages of the method include its capability for updating the qubits based on real-time sensor data and evaluating multiple scenarios simultaneously by leveraging quantum superposition. The updated qubits are used to calculate the failure probability of the top event accordingly. The quantum-based fault tree method could be used for real-time failure analysis of complex systems. In this study, a simple case study is presen...
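The qubit bookkeeping described above can be illustrated with a toy classical simulation in Python (not the authors' implementation, and no quantum hardware involved): each basic event is stored as a pair of amplitudes whose squared magnitude gives its failure probability, and AND/OR gates combine independent events.

import numpy as np

# Each basic event is a qubit a|0> + b|1>, with |0> = failed and
# |1> = working per the abstract's convention, so P(fail) = |a|**2.
def qubit(p_fail):
    return np.array([np.sqrt(p_fail), np.sqrt(1.0 - p_fail)], dtype=complex)

def p_fail(q):
    return abs(q[0]) ** 2

def gate_and(*events):      # top fails only if all inputs fail
    return qubit(np.prod([p_fail(q) for q in events]))

def gate_or(*events):       # top fails if any input fails
    return qubit(1.0 - np.prod([1.0 - p_fail(q) for q in events]))

# Hypothetical tree: TOP = (A AND B) OR C, with sensor-updated P(fail).
A, B, C = qubit(0.01), qubit(0.02), qubit(0.001)
top = gate_or(gate_and(A, B), C)
print(f"P(top event) = {p_fail(top):.6f}")   # 1 - (1 - 2e-4)*(1 - 1e-3)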
A PSAM Profile is not yet available for this author.
Paper 2 WE77
Lead Author: Wenjie Xia Co-author(s): Weibing Huang, huangwb@cnnp.com.cn
Zhenqi Wang, wangzq02@cnnp.com.cn
Johan Sorman, Johan.Sorman@lr.org
Yi Zou, Yi.Zou@lr.org
Presenter of this paper: Johan Sorman (johan.sorman@lr.org)
A risk monitor tool for transferring plant logs
Implementing the use of risk monitors at nuclear stations has traditionally required manual input of information regarding plant configuration. This paper outlines the findings of a project for developing and implementing a tool for mapping and transferring information from plant logs and planning tools automatically into a risk monitor at Sanmen Nuclear Power Plant in China.
The requirements on the import tool at Sanmen Nuclear Power Station were to be able to:
• import logs from the plant's work order systems.
• evaluate plans produced in a planning tool (e.g., an upcoming plan) frequently.
The tool was developed to eliminate manual effort as much as possible when importing logs from existing systems and databases into the risk monitor. It reads event log data from a desired data source, then converts and merges it with the event logs in the risk monitor database. The merged log is then validated for inconsistencies and can be either saved to an XML file or imported directly into the risk monitor. The import tool's key features include:
• Support for multiple data source types: it supports rea...
A PSAM Profile is not yet available for this author. Presenter Name: Johan Sorman (johan.sorman@lr.org) Bio: Johan Sorman holds a master's degree from the Royal Institute of Technology in Stockholm, Sweden. Between 1993 and 1999 he worked as a PSA engineer for the nuclear industry in Sweden. Since 2000 he has been responsible for global sales, marketing and training for RiskSpectrum software.
Paper 3 YI78
Lead Author: Jun Qi Co-author(s): Yi Zou yi.zou@lr.org
Johan Sörman Johan.Sorman@lr.org
Presenter of this paper: Johan Sorman (johan.sorman@lr.org)
CNNP Trip Monitor
Following the implementation of risk monitors at nuclear stations in China, the concept of a trip monitor was developed by CNNP and Lloyd's Register. This paper outlines the findings of a project for developing a trip monitor to accommodate its implementation at the Qinshan nuclear stations, mainland China.
The criteria for a trip monitor differ from those for a risk monitor, where the Probabilistic Safety Assessment (PSA) using fault tree and event tree analysis constitutes the basis and can as such be readily used for the purpose. A trip monitor requires building a new fault tree and event tree model focused on representing the availability of the systems required for production.
In China, much work has been done to reduce the frequency of unplanned shutdowns. These efforts have been carried out from a qualitative point of view, but little work has assessed the risk of unplanned shutdown from a quantitative point of view. With this background, CNNP and LR developed a trip monitor for Qinshan II, the first application in China to evaluate the risk of trip for a n...
A PSAM Profile is not yet available for this author. Presenter Name: Johan Sorman (johan.sorman@lr.org) Bio: Johan Sorman holds a master's degree from the Royal Institute of Technology in Stockholm, Sweden. Between 1993 and 1999 he worked as a PSA engineer for the nuclear industry in Sweden. Since 2000 he has been responsible for global sales, marketing and training for RiskSpectrum software.
Paper 4 TH317
Lead Author: Tate Shorthill Co-author(s): Han Bao, han.bao@inl.gov
Edward Chen, echen2@inl.gov
Heng Ban, heng.ban@pitt.edu
An Application of a Modified Beta Factor Method for the Analysis of Software Common Cause Failures
This paper presents an approach for modeling software common cause failures (CCFs) within digital instrumentation and control systems. CCFs consist of a concurrent failure between two or more components due to a shared failure cause and coupling mechanism. This work emphasizes the importance of identifying software-centric attributes related to the coupling mechanisms necessary for simultaneous failures of redundant software components. The groups of components which share coupling mechanisms are called common cause component groups (CCCGs). Most CCF models rely on operational data as the basis for establishing CCCG parameters and predicting CCFs. This work is motivated by two primary concerns: (1) a lack of operational and CCF data for defining software CCF model parameters; (2) the need to model single components as part of multiple CCCGs simultaneously. An approach was developed to account for these concerns by leveraging existing techniques; a modified beta factor method allows single components to be placed within multiple CCCGs, while a second technique provides software-speci...
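For context, the standard beta factor decomposition that the modified method builds on can be written as follows (standard CCF notation; the paper's exact multi-CCCG partition may differ):

\lambda_{\mathrm{total}} = \underbrace{(1-\beta)\,\lambda_{\mathrm{total}}}_{\text{independent}} + \underbrace{\beta\,\lambda_{\mathrm{total}}}_{\text{common cause}}, \qquad \beta = \lambda_{\mathrm{CCF}} / \lambda_{\mathrm{total}},

and, for a component belonging to several CCCGs with group-specific factors \beta_k, one natural bookkeeping is

\lambda_{\mathrm{total}} = \Big(1 - \sum_{k}\beta_k\Big)\lambda_{\mathrm{total}} + \sum_{k}\beta_k\,\lambda_{\mathrm{total}}, \qquad \sum_{k}\beta_k \le 1 .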
Lead Author: Theresa Stewart Co-author(s): Ali Mosleh, mosleh@ucla.edu
Probabilistic Physics of Failure Modeling of Non-metallic Pipelines in Oil and Gas Applications
The use of thermoplastic composite pipelines (TCP) has grown in recent years as an alternative to steel pipelines in the offshore oil and gas industry, due to thermoplastic composites having excellent corrosion resistance and greater flexibility than traditional thermoset-matrix composites. In order to understand the performance of these materials in the field, studies have derived experimental, analytical, and theoretical estimates of the mechanical and thermal properties of TCP in dry conditions, but studies in wet or acidic environments are currently restricted to the empirical domain. This study proposes a method to incorporate degradation of the mechanical properties of TCP resulting from exposure to environmental factors into a long-term mechanical analysis. To do this, the mechanical properties of the TCP will be updated at each time interval before evaluating the pipe under a range of combined pressure and bending loads using finite element analysis (FEA). ...
Paper TH35
A PSAM Profile is not yet available for this author.
Paper 2 RI146
Lead Author: Ricardo Lopez Co-author(s): Jorge Ballesio, jorge.ballesio@nasa.gov
Robert Cross, robert.cross-1@nasa.gov
Michael Worden, Mike.Worden@bsee.gov
Estimating Tropical Cyclone Threats to Floating Rigs in the Gulf of Mexico
Offshore drilling operations in the Gulf of Mexico are particularly vulnerable during hurricane season. When a weather threat arises, a decision to evacuate the rig and/or move to a safe location may need to be made. Depending on the activities in progress at the time of the threat, securing the well, evacuating, and/or moving to a safe location can take a considerable amount of time. This transition time is called T-time. T-time is not only rig dependent, but also depends on the activity being performed at the time of the threat. For these reasons it is important to assess tropical cyclone threats and the time it takes for them to reach the rig location. The objective of this study is to use the available 50 years of past cyclone history to estimate cyclone threats at any location in the Gulf of Mexico.
The cyclone threat is estimated based on the rig location as well as the start date and duration of the offshore activity. By threat, it is meant the likelihood that a specific location with an associated offshore activity would be exposed to an upcoming cyclone whose forecast...
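A minimal frequency-count version of such a threat estimate might look like the Python sketch below, assuming a hypothetical track data format of (year, day-of-year, latitude, longitude) fixes; the study's actual method and thresholds may differ.

import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    R_NM = 3440.065
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R_NM * math.asin(math.sqrt(a))

def threat_estimate(tracks, rig_lat, rig_lon, start_doy, duration_days,
                    radius_nm=200.0, years=50):
    """Fraction of historical years with at least one cyclone fix within
    radius_nm of the rig during the activity window. `tracks` is a list of
    (year, day_of_year, lat, lon) fixes (hypothetical format)."""
    hit_years = set()
    for year, doy, lat, lon in tracks:
        in_window = start_doy <= doy <= start_doy + duration_days
        if in_window and haversine_nm(lat, lon, rig_lat, rig_lon) <= radius_nm:
            hit_years.add(year)
    return len(hit_years) / years

# e.g., threat_estimate(tracks, 27.5, -90.1, start_doy=196, duration_days=21)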
Lead Author: Colin Schell Co-author(s): Austin Lewis (adlewis@umd.edu)
Andres Ruiz-Tagle (aruiztag@umd.edu)
Katrina Groth (kgroth@umd.edu)
Construction and Verification of a Bayesian Network for Third-Party Excavation Risk Assessment (BaNTERA)
According to the Pipeline and Hazardous Material Safety Administration (PHMSA), third-party damage is a leading cause of natural gas pipeline accidents. Although the risk of third-party damage has been widely studied in the literature, current models do not capture a sufficiently comprehensive set of up-to-date root cause factors and their dependencies. This limits their ability to achieve an accurate risk assessment that can be traced to meaningful elements of an excavation. This paper presents the construction, verification, and validation of a probabilistic Bayesian network model for third-party excavation risk assessment, BaNTERA. The model was constructed and its performance verified using the best available industry data and previous models from multiple sources. Historical industry data and nationwide statistics were compared with BaNTERA’s damage rate predictions to validate the model. The result of this work is a comprehensive risk model for the third-party damage problem in natural gas pipelines....
Lead Author: Ivo Häring Co-author(s): Vivek Sudheendran, vivek.sudheendran@desy.de
Roman Sankin, roman.sankin@de.bosch.com
SysML supported functional safety ISO 26262 and cybersecurity STRIDE/HEAVENS assessment for automotive model-based system engineering
To manage the increasing complexity of modern automotive systems, development companies adhere to model-based systems engineering (MBSE). Within MBSE processes, suitable modeling approaches need to be selected and combined. Modelling and simulation approaches include semi-formal modeling, software generation, engineering simulation, and software emulation. By now, even the selection, tailoring, and interfacing of modeling approaches can be supported within framing methodologies. Within such a digitalized development process context, the presentation addresses the question of how to use SysML modeling to efficiently support the functional safety as well as the cybersecurity (IT security) assessment within the early stages of the system development process in the automotive domain.
The feasibility of the approach is realized by the development of a concept for functional safety and cybersecurity analysis which supports the Software Platform Embedded Systems (SPES) framework. The concept is documented with metamodels and is backed by SysML profiles which extend the SPES profile within the IBM Rational Rhapsody environment. The profile for the safety analysis supports the ISO 26262 functional safety process on the system level. The profile for cybersecurity analysis supports assessment at the system level adhering to the guidelines of the Microsoft STRIDE-based HEAling Vulnerabilities to ENhance Software Security and Safety (HEAVENS) security model, which was specifically developed for the automotive domain.
SysML model-based prototypes, i.e., SysML system designs including their functional safety and cybersecurity assessment, are developed, which validate the approach within an automotive MBSE pilot project. A sample prototype application shows the feasibility of the approach and allows estimating the effort of SysML-supported functional safety and cybersecurity assessments within a SPES-conform environment.
Main results include the feasibility of reuse and invention of SPES-oriented SysML models (e.g., context, scenario, goal, function) intended for system design. The functional safety and cybersecurity relevant model extensions and refinements are realized within these system models. The refinements and extensions result in functional safety relevant models which support item definition, hazard and risk analysis, the functional safety concept, and the technical safety concept. Similarly, cybersecurity relevant SysML models help in Target of Evaluation (TOE) description, threat analysis and risk assessment, and cybersecurity requirement derivation according to the HEAVENS approach.
The automations imparted on these extended SysML models by using helpers enhance the usability of the models within the approach. For instance, the helpers provide automatic functional safety and cybersecurity parameter determination within models (e.g., ASIL determination, security level derivation) and filtered graphical views given sufficient inputs. Application of a model checker assists fast execution of the analyses and generation of the assessment artifacts, e.g., a tabular overview of risks and their control with safety functions, or cyber threats and related countermeasures.
Lead Author: Cesare Frepoli Co-author(s): Jarrett Valeri, jarrett.valeri@fpolisolutions.com, Robert P. Martin, rpmartin@bwxt.com
DEVELOPMENT OF AN ENTERPRISE DIGITAL PLATFORM FOR RISK-INFORMED DESIGN
Currently, many advanced reactor designs are under development in the U.S., promising sustainable solutions to the growing world energy needs. In response, the U.S. Nuclear Regulatory Commission (NRC) staff is moving forward with development of the 10 CFR Part 53 rulemaking, which will establish a new risk-informed framework for licensing and regulating such new designs. The motivation has been to develop a technology-agnostic regulatory framework; but in practice, it is important that the new rule is used and is useful, meaning that there is no unreasonable increase in regulatory burden and thus in the scope of the safety assessment. This is because the risk assessment is now folded into the design process itself, rather than being a simple confirmatory step of the design. Such a level of sophistication is only possible and practical in a highly automated and scrutable digital framework.
This paper describes a solution to this problem. An agile, generic, digital platform, called FPoliAAP, was developed to facilitate orchestration of complex workflows, taking advantage of modern soft...
A PSAM Profile is not yet available for this author.
Paper 3 AR216
Lead Author: Sascha Schmidt
Model-Based Reliability Engineering of Automotive Drivetrain Architectures With Multi-Trajectory Simulation
Key words: Reliability Analysis Methods and Tools; Dynamic Reliability and Safety; Simulation; Example application (safety-critical automotive systems)
The architecture of complex systems is often decided in an early stage of the design process. This leads to risky outcomes, as there is not yet much information available about the planned system. Non-functional properties such as reliability and safety are crucial for critical systems, yet the effect of low-level design decisions in modules on the overall system behavior is often unclear because of their emergent nature. Formal models and performability evaluation algorithms in the field of model-based systems design are useful tools to improve this situation [1-3].
In reliability and safety, classic models such as fault trees and reliability block diagrams can represent static systems, but fall short in describing dynamic processes. Such behavior is important to cover in the model for systems with dynamic fault tolerance, or when there is a significant influence of the underlying timed behavior. Dynamic models in the re...
Lead Author: Jukka Koskenranta Co-author(s): Ilkka Paavola Ilkka.Paavola@fortum.com
Rasmus Hotakainen Rasmus.Hotakainen@fortum.com
Loviisa nuclear power plant spent fuel storage risk analysis
Spent fuel from the Loviisa NPP is stored submerged in the water pools of the spent fuel storage (SFS) at the Loviisa site for at least 10 years and up to many decades. For the first 1–2 years, the spent fuel is cooled in the refuelling pools in the reactor buildings. After the SFS, the spent fuel will be moved to the final repository at Olkiluoto.
The Loviisa NPP PRA covers level 1 and level 2 PRA for the reactors and refuelling pools of both units 1 and 2, and the common spent fuel storage for both units. The Loviisa NPP PRA also covers all operating and shutdown states and all types of initiating events. The current seismic PRA is outdated and undergoing a significant update.
The Loviisa SFS PRA models the SFS with the maximum heating capacity expected under the current operating license from 2030. Mission times vary from no time to recover up to 2 months. The criterion for result evaluation is fuel exposure or mechanical break. Equipment failures have only a minor impact on the risk because of the low heating rate. Seismic initiating events cause over 50% of the fuel damage frequency (FDF) and the large release frequency. Early release frequency is also considered. The FDF of 1.9E-7/a is a...
Name: Jukka Koskenranta (jukka.koskenranta@fortum.com)
Paper 2 YO298
Lead Author: Yong-Joon Choi Co-author(s): Chris Gosdin (cgosdin@fpolisolutions.com)
Gabrielle Palamone (gabrielle.palamone@fpolisolutions.com)
Cesare Frepoli (frepolc@fpolisolutions.com)
Jason Hou (jason.hou@ncsu.edu)
Safety Analysis of Accident Tolerance Fuel (ATF) with Increased Enrichment and Extended Burnup: Simulation Tools Review
One of the main obstacles to deploying near-term accident-tolerant fuels (ATFs) with higher burnup is successfully passing regulatory-required fuel safety assessments. A phenomenon called fuel fragmentation, relocation, and dispersal (FFRD) that is observed during postulated accident events may cause fuel damage exceeding the postulated safety limits. Multiple research efforts have been dedicated to investigating the FFRD phenomenon and its consequences. However, most of them have focused on conventional fuel (i.e., non-ATF). A recent study by the U.S. Nuclear Regulatory Commission [1] indicates that fuel fragmentation can be observed starting from 55 GWd/MTU burnup (the average burnup of US nuclear power plants is 45 GWd/MTU) for standard UO2 fuel during a design-basis loss of coolant accident (LOCA). Generally, ATFs have the advantage of better mechanical strength under high-temperature accident conditions over traditional fuel (i.e., Zircaloy cladding). The increased fuel enrichment and associated increased burnup would allow extension of the refueling cycle from 18 ...
Lead Author: Eunseo So Co-author(s): Yunyeong Heo, Yunyeong.Heo@inl.gov
Mohammad Abdo, Mohammad.Abdo@inl.gov
Yong-Joon Choi, Yong-Joon.Choi@inl.gov
Development of Genetic Algorithms for Plant Reload Optimization for an Operating Pressurized Water Reactor
This paper summarizes the development, results, and enhancement activities of the Artificial Intelligence (AI)-based automated nuclear power plant fuel reload optimization platform under the guidance of the United States Department of Energy, Light Water Reactor Sustainability Program, and Risk-Informed Systems Analysis Pathway. The research focuses on the optimization of the fuel arrangement to maximize the fuel cycle length.
The AI-based Genetic Algorithm works with convex and non-convex, constrained or unconstrained problems. It can also help explain the relationship between the fuel arrangement and the fuel cycle length; in particular, the surrogate models used to reconstruct the multiphysics problem map the features/inputs of the problem to the fuel cycle length to provide such an explanation. The Genetic Algorithm is composed of several evolutionary processes: fitness evaluation, parent selection, crossover, mutation, survivor selection, and termination. Crossover and mutation are the main steps responsible for injecting randomness/heuristics to prevent the algorithm from getting s...
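For readers unfamiliar with the evolutionary steps listed above, the following toy Python skeleton shows them on a permutation-encoded arrangement with a placeholder fitness function; it illustrates the generic algorithm, not the platform's implementation or its surrogate models.

import random

def genetic_algorithm(n_positions=20, pop_size=40, generations=200,
                      p_mut=0.2, fitness=None):
    """Toy permutation GA; `fitness` maps a fuel arrangement (permutation
    of assembly ids) to a cycle-length surrogate (placeholder here)."""
    fitness = fitness or (lambda perm: -sum(abs(a - i) for i, a in enumerate(perm)))
    pop = [random.sample(range(n_positions), n_positions) for _ in range(pop_size)]
    for _ in range(generations):                      # termination: fixed budget
        scored = sorted(pop, key=fitness, reverse=True)   # fitness evaluation
        parents = scored[:pop_size // 2]                  # parent selection (truncation)
        children = []
        while len(children) < pop_size // 2:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_positions)        # ordered crossover
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if random.random() < p_mut:                   # swap mutation
                i, j = random.sample(range(n_positions), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children                          # survivor selection (elitist)
    return max(pop, key=fitness)

print(genetic_algorithm()[:5])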
A PSAM Profile is not yet available for this author.
Paper 4 FR99
Lead Author: Frida Olofsson Co-author(s): Anders Olsson, anders.olsson@vysusgroup.com
Challenges and lessons learned from a PSA on a spent fuel pool facility
Performing a PSA on a spent fuel pool storage facility provides new challenges but also new insights. Even though the nuclear fuel is the common subject of the risk to be evaluated, there are several differences compared to a traditional PSA for a nuclear power plant.
In the spent fuel pool facility, the risk is evaluated both for the process of handling the transport containers and fuel elements, and for the long-term storage in the spent fuel pools. The studied end states and sequences depend on where in the facility the spent fuel is situated.
Another of the most apparent differences from a traditional PSA is the long time windows. This issue pervades, for example, the analysis of manual actions as well as the modelling of mission times for systems and components. The facility has a low degree of automation and, therefore, the importance of human actions is even more prominent. The system mission times originate from deterministic acceptance criteria. This assumption has proven to have an important impact on the analysis.
Most methodologies used in the spent fuel poo...
A PSAM Profile is not yet available for this author.
Session F01 - Physical Security III
Session Chair: Svetlana Lawrence (svetlana.lawrence@inl.gov)
Paper 1 WG301
Lead Author: W. Gary W. Rivera Co-author(s): Emily Sandt, esandt@sandia.gov
Evaluation and Analysis of Person-Passable Openings Through Security Boundaries
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability (LWRS) Programs, have been conducting testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 36 inches in depth) and more complex three-dimensional (circular cross sections of longer lengths and changes in direction) opening configurations. The primary impact of this effort is to define the scenarios in which an adversary could successfully pass through a potentially complex opening, as well as the scenarios in which an adversary would not be expected to successfully traverse a complex opening. This data can then be used to support risk-informed decision-making. At its inception, the project intended to investigate openings that could be found to intersect security boundary layers (e.g., drainage culvert), but through c...
Lead Author: Shawn St. Germain Co-author(s): Shawn St. Germain ( shawn.stgermain@inl.gov )
Overview of the LWRS Program's Physical Security Pathway
The U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program's Physical Security Pathway helps define the direction and efforts to develop methods, tools, and technologies which will optimize, balance, and modernize a nuclear facility's security posture. This LWRS research pathway: (1) conducts research on risk-informed techniques for physical security that account for a dynamic adversary; (2) applies advanced modeling and simulation tools to better inform physical-security scenarios and reduce uncertainties in force-on-force modeling; (3) assesses benefits from proposed enhancements and novel mitigation strategies and explores changes to best practices, guides, or regulation to enable modernization; and (4) enhances and provides the technical basis for stakeholders to employ new security methods, tools, and technologies....
Lead Author: Vaibhav Yadav Co-author(s): Robby Christian, robby.christian@inl.gov
Steven Prescott, steven.prescott@inl.gov
Shawn St. Germain, shawn.stgermain@inl.gov
Risk-informed Physical Security Optimization for Nuclear Power Plants
The concept of risk-informed physical security of nuclear power plants has recently been extensively explored by stakeholders such as the nuclear industry, DOE, NEI, NRC, and researchers at national laboratories and universities. Risk-informed physical security holds promise for advanced assessment and optimization of NPP physical security postures, leading to more efficient, economical, and safer plant operations. Recently the NRC issued a revision to Regulatory Guide 5.76 that transitions from the prior prescriptive regulatory requirement for physical security to newer guidance based on reasonable assurance of protection time. The new NRC guidance paves the way for physical security performance assessment to be tied to existing risk-based approaches for plant safety and associated metrics such as time to core damage. This paper presents a novel computational framework for risk-informed physical security aimed at performing analyses such as security design optimization, armed-guard reduction, crediting plant mitigating strategies in security, and more. ...
Lead Author: Samuel Abiodun Olatubosun Co-author(s): Md Ragib Rownak (rownak.1@buckeyemail.osu.edu)
Yunfei Zhao (zhao.2263@osu.edu)
Carol Smidts (smidts.1@osu.edu)
Abdollah Shafieezadeh (shafieezadeh.1@osu.edu)
Human reliability assessment for physical security: human responses under extreme threats
To further human reliability assessment research for the purpose of physical security, a study of reported human responses during specific real and simulated extreme situations has been conducted. Responses in extreme conditions, whether agitated or depressive, are rarely consistent with the responses understood or expected by the public and the media. Rather than responding in irrational and/or self-interested ways under extreme conditions, individuals typically respond in rational and prosocial ways. Though panic behaviors do occur, research suggests this occurs only when the perception of immediate threats, closing exit routes, and a lack of help or resources are immin...
A PSAM Profile is not yet available for this author.
Session F02 - Structures
Paper 1 BE100
Lead Author: Marco Behrendt Co-author(s): Matthias G. R. Faes, matthias.faes@kuleuven.be
Marcos A. Valdebenito, marcos.valdebenito@uai.cl
Michael Beer, beer@irz.uni-hannover.de
Capturing epistemic uncertainties in the power spectral density for limited data sets
In stochastic dynamics, it is indispensable to model environmental processes in order to design structures safely or to determine the reliability of existing structures. Wind or earthquake loads are examples of these environmental processes and may be described by stochastic processes. This type of process can be characterised by the power spectral density (PSD) function in the frequency domain. With the PSD function, governing frequencies and their amplitudes can be determined. For the reliable generation of such a load model described by a PSD function, the uncertainties that occur in time signals must be taken into account. In this paper, an approach is presented to derive an imprecise PSD model from a limited amount of data. The spectral densities at each frequency are described by intervals instead of discrete values. The advantages of the imprecise PSD model are illustrated and validated with numerical examples in the field of stochastic dynamics. ...
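A minimal way to obtain such per-frequency intervals from a handful of records is sketched below in Python (illustrative only; the paper's construction of the imprecise PSD model may differ): estimate one Welch PSD per record and take the per-frequency envelope.

import numpy as np
from scipy.signal import welch

# Hypothetical ensemble: a few short records of the same process.
rng = np.random.default_rng(0)
fs, n, n_records = 100.0, 1024, 8
records = [rng.standard_normal(n) for _ in range(n_records)]

# One PSD estimate per record, then per-frequency envelopes as a simple
# interval-valued PSD (bounds instead of a single discrete value).
psds = []
for x in records:
    f, pxx = welch(x, fs=fs, nperseg=256)
    psds.append(pxx)
psds = np.array(psds)
psd_lower, psd_upper = psds.min(axis=0), psds.max(axis=0)
print(f[:5], psd_lower[:5], psd_upper[:5])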
Name: Marco Behrendt (behrendt@irz.uni-hannover.de)
Paper 2 BI52
Lead Author: Marius Bittner Co-author(s): Marco Behrendt behrendt@irz.uni-hannover.de
Jasper Behrensdorf behrensdorf@irz.uni-hannover.de
Michael Beer beer@irz.uni-hannover.de
Epistemic uncertainty quantification of localised seismic power spectral densities
The modelling and quantification of seismic loadings such as earthquakes to improve the safe design of structures is a challenging task. In particular, the unpredictable nature of earthquake characteristics like amplitude, dominant frequencies, and duration poses a great risk, especially for sensitive structures like power plants, oil rigs, high-rise buildings, or large-span structures. The analysis, understanding, and evaluation of those seismic characteristics and their influence on safe structural design is especially important for regions prone to earthquakes. The tectonic mechanisms leading to seismic underground waves are complex, but measurements of earthquakes and their mechanical effects on surfaces are widely available. A new procedure is presented herein for describing uncertainties in the power spectral density (PSD) function of seismic loadings; it utilises the novel approach of Sliced-Normal distributions to describe multivariate probability density functions over frequency and amplitude. This representation enables analysts of stochastic dynamic systems the usage of a com...
Name: Marius Bittner (bittner@irz.uni-hannover.de)
Paper 3 DY330
Lead Author: Gyunseob Song Co-author(s): Man Cheol Kim - charleskim@cau.ac.kr
An estimation method for heat pipe cascading failure frequency for micro modular reactor PSA
Initiating event analysis is one of the essential elements in probabilistic safety assessment (PSA) for estimating core damage frequency (CDF) or large early release frequency (LERF) as risk metrics. As several new reactor types, the so-called Generation IV reactors, have been developed, several types of initiating events that are not considered in traditional reactors are expected to exist. The frequency of an initiating event is generally estimated using historical data from operating reactors. However, there is no operating experience for the expected initiating events of developmental-stage reactors, and hence methodologies to estimate the frequencies of expected initiating events should be developed.
In Generation IV reactors, heat pipes are widely considered as a heat removal system because of their passive nature. Even if a heat pipe is designed to have sufficient capability in normal operating conditions, its performance may depend on the heat load. Therefore, it is possible that individual failures of heat pipes cause cascading failures of other heat pipes due to the increase ...
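The cascade mechanism can be illustrated with a small Monte Carlo sketch in Python (all parameters hypothetical; not the paper's estimation method): surviving pipes share the total heat load equally, and a pipe fails when its shared load exceeds its randomly drawn capacity.

import math, random

def cascade_given_one_failure(n=12, cap_med=1.25, cap_sigma=0.15,
                              trials=50_000, seed=1):
    """Monte Carlo sketch of heat-pipe cascading failure, conditioned on
    one pipe having already failed. Returns P(cascade propagates) and
    P(all pipes lost). Capacities are lognormal; nominal load is 1/pipe."""
    random.seed(seed)
    propagate = total_loss = 0
    for _ in range(trials):
        alive = [random.lognormvariate(math.log(cap_med), cap_sigma)
                 for _ in range(n - 1)]          # capacities of the survivors
        while alive:
            load = n / len(alive)                # survivors share total load
            still = [c for c in alive if c >= load]
            if len(still) == len(alive):
                break                            # cascade arrested
            alive = still
        propagate += len(alive) < n - 1
        total_loss += not alive
    return propagate / trials, total_loss / trials

print(cascade_given_one_failure())

Multiplying the resulting conditional probabilities by the frequency of a single heat pipe failure gives one rough route to a cascading-failure initiating event frequency.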
Paper DY330
Name: Gyunseob Song (dyrnfmxm678@cau.ac.kr)
Session F03 - COVID/Health
Session Chair: Garill Coles (garill.coles@pnnl.gov)
Paper 1 BR91
Lead Author: Stefan Bracke Co-author(s): Alicia Puls; apuls@uni-wuppertal.de
COVID-19 pandemic: Analyzing spreading behavior of different infection waves within the first two years in Germany by the use of reliability methods
Since December 2019, the world has been confronted with the COVID-19 pandemic, caused by the coronavirus SARS-CoV-2. The COVID-19 pandemic, with its incredible speed of spread, shows the vulnerability of a globalized and networked world. The first two years of the pandemic were characterized by several infection waves with different spreading behaviors, described by length, peak, and speed. The infection waves placed a heavy burden on health systems and led to severe restrictions on public life in many countries, such as educational system shutdowns, travel restrictions, limitations on public life, or comprehensive lockdowns.
The goal of the presented research study is the analysis of the development of the four dominant infection waves in Germany within the first two years of the COVID-19 pandemic (February 2020 – February 2022). The analyses focus on infection occurrence and spreading behavior, in detail on attributes like the length, peak, and speed of each wave. Furthermore, various impacts of lockdown strategies (hard, soft) or health protection measures, vaccinati...
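One way reliability methods can characterise a wave's spreading behavior (a sketch on synthetic data; the Weibull form and the parameter values are assumptions, not the authors' fitted results) is to fit a Weibull saturation curve to a wave's normalized cumulative infections: the shape parameter then reflects the wave's speed and the scale parameter its characteristic length.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(t, T, b):
    """Weibull saturation curve: fraction of the wave's total infections by day t."""
    return 1.0 - np.exp(-(t / T) ** b)

# Synthetic stand-in for one wave's normalized cumulative case counts.
rng = np.random.default_rng(2)
days = np.arange(1, 61, dtype=float)
observed = weibull_cdf(days, T=35.0, b=2.5) + rng.normal(0, 0.01, days.size)

(T_hat, b_hat), _ = curve_fit(weibull_cdf, days, observed, p0=(30.0, 2.0))
print(f"scale T ~ {T_hat:.1f} days, shape b ~ {b_hat:.2f}")
```

Comparing the fitted parameters across waves gives a compact, reliability-style description of how the spreading behavior changed over time.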
Lead Author: Karthik Sankaran Co-author(s): Bineh Ndefru (bndefru@ucla.edu), Theresa Stewart (theresa@risksciences.ucla.edu), Prof. Ali Mosleh (mosleh@g.ucla.edu), Arjun Earthperson (aarjun@ncsu.edu), Natalie Zawalick (nataliez@ucla.edu)
Risk-Informed Decision-Making Tool for COVID-19 Community Behavior and Intervention Scenario Assessment
The spread of the COVID-19 pandemic across the world has presented a unique problem to researchers and policymakers alike. In addition to uncertainty around the nature of the virus itself, the impact of rapidly changing policy decisions on the spread of the virus has been difficult to predict. Using an epidemiological SIRD model as a basis, this paper presents a methodology developed to address the wide variety of uncertain factors impacting disease spread, and ultimately to understand how a policy decision may impact the community long term. The model being presented, named the COVID-19 Decision Support (CoviDeS) tool, is an agent-based time simulation model that uses Bayesian networks to determine the state changes of each individual. The CoviDeS model offers more extensive interpretability than many existing models, allowing insights to be drawn regarding the relationships between various inputs and the transmission of the disease. Test cases will be presented for different scenarios that demonstrate relative differences in transmission resulting from different pol...
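For context, the SIRD backbone such a tool builds on can be sketched in a few lines (an illustrative Euler integration with assumed rates, not the CoviDeS implementation, which is agent-based with Bayesian-network state transitions):

```python
def sird_deaths(beta, gamma=0.10, mu=0.002, days=180, dt=0.1, i0=1e-4):
    """Euler integration of the SIRD compartments; all rates are assumed values."""
    s, i, r, d = 1.0 - i0, i0, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt       # S -> I
        new_rec = gamma * i * dt          # I -> R
        new_dead = mu * i * dt            # I -> D
        s -= new_inf
        i += new_inf - new_rec - new_dead
        r += new_rec
        d += new_dead
    return d

# Relative effect of a hypothetical policy that lowers the contact rate beta:
for label, beta in [("baseline", 0.30), ("intervention", 0.18)]:
    print(f"{label}: cumulative deaths ~ {sird_deaths(beta):.2%} of population")
```

An agent-based model replaces these aggregate rates with per-individual state changes, which is what allows policy inputs to be traced to transmission outcomes.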
Paper 3 SB270
Lead Author: Stefania Benucci Co-author(s): Enrico Casu ecasu@aurigaconsulting.it
Andrea Mancini amancini@aurigaconsulting.it
A new simplified methodology for Quantitative Risk Assessment of Carbon Capture and Storage Plant
Currently, most quantitative risk assessments performed for Carbon Capture and Storage projects neglect several known consequences, such as cryogenic effects and visibility issues, focusing only on the toxicological characteristics of carbon dioxide.
In this paper, we show how the far-from-negligible contributions of cryogenic effects and visibility issues can significantly change the results in terms of risk for personnel, the public, and assets, and we propose a new simplified methodology for performing a quantitative risk assessment capable of taking into account all facets of the phenomena resulting from carbon dioxide releases (using only the PHAST software for consequence evaluation).
In fact, contact of equipment with the solid carbon dioxide created during a release, when temperature and pressure fall below the triple point, can lead to significant domino effects due to structural embrittlement. Cold burns and visibility issues can increase the risk to people from carbon dioxide, because the damage distances associated with toxicological effects can b...
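As a trivial screening illustration of the triple-point criterion mentioned above (the property values are standard CO2 data; the release states are invented examples, not results from the paper), solid CO2 can form wherever the expanding release drops below the triple point:

```python
# CO2 triple point (standard property data): 5.18 bar(a) at -56.6 degC.
CO2_TRIPLE_P_BAR = 5.18
CO2_TRIPLE_T_C = -56.6

def solid_co2_possible(p_bar: float, t_c: float) -> bool:
    """Screening check: below the triple point, depressurising CO2 can form solids."""
    return p_bar < CO2_TRIPLE_P_BAR and t_c < CO2_TRIPLE_T_C

# Hypothetical post-expansion states along a release path:
for p, t in [(60.0, 20.0), (5.0, -60.0), (1.0, -78.5)]:
    print(f"{p:5.1f} bar, {t:6.1f} degC -> solid CO2 possible: {solid_co2_possible(p, t)}")
```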
Session F04 - Seismic
Session Chair: Zoltan Kovacs (kovacs@relko.sk)
Paper 1 FL13
Lead Author: Floris Goerlandt Co-author(s): Lauryne Rodrigues, Lauryne.Rodrigues@dal.ca
Luana Souza Almeida, Luana.Almeida@dal.ca
Ronald Pelot, Ronald.Pelot@dal.ca
Assessing risk to marine transportation assets and multi-modal community supply in a Cascadia M9.0 megathrust earthquake and tsunami event
Natural disasters such as earthquakes can severely impact multi-modal logistics networks. On the Canadian West Coast, a Cascadia Subduction Zone earthquake could lead to severe damage to critical infrastructures on a large regional scale, both due to the direct impacts of ground shaking and those caused by a subsequent tsunami. In the immediate disaster response phase, affected communities would furthermore require emergency supplies such as fuel, food, and medicine. An improved understanding of the vulnerabilities and impacts of a Cascadia earthquake disaster scenario, in particular for marine transportation assets, multi-modal logistics networks, and coastal communities, is important for improving disaster preparedness and risk management. This article presents an integrated set of modeling approaches to analyse these risks, providing a high-level overview of the models’ objectives, rationale, and outputs. Subsequently, selected results are presented for a plausible Cascadia M9.0 megathrust earthquake and tsunami, focusing on the impacts on marine assets and the maritime dimension of the ...
Lead Author: Jin Ho Lee Co-author(s): Hieu Van Nguyen, nvh10chelsea@gmail.com, Jung Han Kim, jhankim@pusan.ac.kr
Probabilistic Site Response Analysis Considering Variability of Soil Properties
Because seismic waves are influenced by variability at local soil sites, their effects on site responses are studied by means of a probabilistic site response analysis based on the random vibration theory. Monte Carlo simulations are employed to take into consideration the effects of variability of the layer thickness, low-strain shear-wave velocity, and nonlinear dependence of the shear modulus and hysteretic damping on shear strain at soil sites. The probabilistic approach is applied to generic soil sites to evaluate their soil-amplification functions. Seismic hazard curves and uniform hazard response spectra (UHRS) can be obtained using the amplification functions and the hazard curves of outcropping bedrock motions. Then, the seismic risk for structures at the sites is calculated to produce ground motion response spectra (GMRS) for a seismic design. It can be observed from the simulations that the variability in the layer thickness and the low-strain shear-wave velocity influences the soil-amplification functions and the resulting UHRS and GMRS values for the soil sites significa...
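A heavily simplified illustration of the Monte Carlo ingredient (not the paper's random-vibration-theory analysis; the square-root-impedance amplification and all site parameters below are assumptions): sampling the low-strain shear-wave velocity lognormally and propagating it through a crude amplification estimate already shows how soil variability spreads the site amplification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed site parameters: bedrock and soil densities [kg/m^3], bedrock Vs [m/s],
# and a lognormal model for the low-strain soil Vs (median 250 m/s, ln-sigma 0.3).
rho_rock, rho_soil, vs_rock = 2600.0, 1900.0, 1500.0
vs_soil = rng.lognormal(np.log(250.0), 0.3, size=100_000)

# Crude square-root-impedance (quarter-wavelength-style) amplification estimate.
amp = np.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

print(f"median amplification ~ {np.median(amp):.2f}, "
      f"16-84% range ~ [{np.percentile(amp, 16):.2f}, {np.percentile(amp, 84):.2f}]")
```

A full analysis would additionally sample layer thickness and the strain-dependent modulus and damping curves, then convolve the amplification functions with the bedrock hazard curves to obtain UHRS and GMRS.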
Lead Author: Zoltan Kovacs Co-author(s): Tibor Zold, tibor.zold@seas.sk
Seismic re-evaluation of Units 1&2 of the Mochovce NPP
The Slovak Nuclear Regulatory Authority (UJD SR) requires the licensee to keep the plant seismic capacity at a level generally accepted by the international community. The re-evaluation of the seismic capacity of an existing plant is generally due to the following reasons:
• evidence of a seismic hazard at the site that is greater than the design basis earthquake, arising from new or additional data and
• regulatory requirements stricter than those valid at the time of design and construction, taking into account the state of knowledge and the actual condition of the plant.
The original design basis earthquake of the plant, which has VVER440-type reactors, was 0.06 g. The first seismic re-evaluation project, in 1997, increased the RLE (Review Level Earthquake) to 0.10 g. A significant number of plant upgrading measures were implemented.
As a consequence of a new hazard evaluation carried out at the site, a new project was launched by the licensee, which further upgrades the seismic capacity of the plant to the RLE = 0.15 g (SL-2 value). The seismic re-evaluation is being performed ...
Theoretical comparison of models for a seismically induced joint failure probability
An earthquake simultaneously challenges multiple structures, systems, and components of nuclear power plants. Seismic probabilistic risk assessment evaluates this phenomenon with the failure condition that a component fails when its seismic response exceeds its capacity. In the literature, there are several models for a seismically induced joint failure probability: a model used in the seismic safety margins research program (Model 1), a model in SECOM2 (Model 2), and the Reed-McCann procedure (Model 3). We also discuss a model that applies the separation of independent and common variables method to response and capacity (Model 4). In Model 4, common variables among more than two components are explicitly considered. These four models are analytically compared to clarify their relations. First, it is shown that the first two models are equivalent by comparing their derivations. Next, Model 4 is shown to be a limited case of Model 1 by showing that Model 4 results in a multivariate normal distribution with nonnegative correlation coefficients. Finally, Model 3 is shown as a limited case ...
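To make the common-variable mechanism concrete (a sketch under assumed lognormal response/capacity variabilities; the numbers are illustrative, not from the paper): with the separation-of-variables approach, the correlation between two identical components' standardized margins equals the common fraction of the log-variance, and the joint failure probability becomes a bivariate normal probability rather than the product of the marginals.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Assumed log-standard deviations: a common part (shared response/capacity
# variables) and an independent part, identical for both components.
beta_common, beta_indep = 0.3, 0.4
beta_total = np.hypot(beta_common, beta_indep)
rho = beta_common**2 / beta_total**2          # margin correlation from shared variance

b = 1.5                                       # standardized seismic margin of each component
p_single = norm.cdf(-b)                       # marginal failure probability
p_joint = multivariate_normal(mean=[0.0, 0.0],
                              cov=[[1.0, rho], [rho, 1.0]]).cdf([-b, -b])

print(f"rho = {rho:.2f}")
print(f"independence assumption: {p_single**2:.2e}")
print(f"with common variables:   {p_joint:.2e}")
```

The common variables make simultaneous failure noticeably more likely than the independence assumption predicts, which is exactly why the choice among these joint-failure models matters.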
Lead Author: Thor Myklebust Co-author(s): Tor Stålhane stalhane@ntnu.no Sinuo Wu 238868@student.usn.no
Agile safety case for vehicle trial operations
In recent years, there has been an increase in the use of agile development methods for safety-critical software. This approach fits well with the incremental improvement of autonomous vehicles, the incremental expansion of the operational design domain, and new intelligent roadside units.
New trials of self-driving vehicles will be needed in the years to come due to the expected improvements in the vehicles and in intelligent roadside units. Therefore, it is essential that the process, including the evidence needed for a safety case, is both agile and standardized, to ensure confidence and trust by all parties involved.
This paper shows how the trial operator can develop an Agile safety case for vehicle trial operations to ensure frequent updates based on:
• The agile safety case
• ISO 22737:2021 Low-speed automated driving
• BSI PAS 1881:2020 "Assuring the safety of automated vehicle trials and testing - specification" standard
• The BSI PAS 1883:2020 "Operational design domain (ODD) taxonomy for an automated driving system (ADS) - Specification" standard.
The agile development...
Lead Author: Camila Correa-Jullian Co-author(s): John McCullough jmccull@ucla.edu
Marilia Ramos marilia.ramos@ucla.edu
Jiaqi Ma jiaqima@ucla.edu
Enrique Lopez Droguett eald@ucla.edu Ali Mosleh mosleh@ucla.edu
Presenter of this paper: Marilia Ramos (marilia.ramos@ucla.edu)
Safety Hazard Identification for Autonomous Driving Systems Fleet Operations in Mobility as a Service
The safe and reliable operation of Automated Driving Systems (ADS) in the context of Mobility as a Service (MaaS) depends on a multitude of factors external to the vehicle’s functionality and performance. In particular, it is expected that Level 4 ADS operations will be supported by the actions of remote operators, specifically during the initial stages of deployment. In the future, fleet operators are expected to work with one or multiple ADS developers as technology providers to transform their fleets. Therefore, fleet management of ADS vehicles involved in MaaS will play an important role in ensuring traffic safety. In this work, we consider the role of fleet operators as entities separate from ADS developers. Fleet operators’ functions comprise a fleet management center (FMC), where the ADS vehicle is monitored and supervised, and an operations center (OPC), focused on vehicle inspection, maintenance, and storage. Based on an L4 ADS MaaS system breakdown and identification of critical operational stages, we identify operational hazards through Event Sequence Diagram (ESD) and Fault Tr...
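As a generic illustration of how such FTA-style hazard logic can be quantified (the tree structure and all probabilities are invented placeholders, not the paper's model), consider a tiny fault tree where an unresolved ADS anomaly requires both a vehicle-side anomaly and a failed remote-intervention path:

```python
# Minimal fault-tree evaluation for a hypothetical top event:
# "unresolved ADS anomaly during supervised operation".
p_ads_anomaly = 1e-3        # vehicle raises a situation it cannot resolve
p_fmc_miss = 5e-3           # fleet management center fails to detect the event
p_operator_fail = 2e-2      # remote operator fails to intervene correctly

def or_gate(*ps):
    """P(at least one occurs), assuming independent basic events."""
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*ps):
    """P(all occur), assuming independent basic events."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# The intervention path fails if monitoring misses the event OR the operator fails.
p_intervention_fail = or_gate(p_fmc_miss, p_operator_fail)
p_top = and_gate(p_ads_anomaly, p_intervention_fail)
print(f"P(top event) ~ {p_top:.2e} per demand")
```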
Lead Author: Franciszek Restel Co-author(s): Lukasz Wolniewicz, lukasz.wolniewicz@pwr.edu.pl
Identification of safety relevant activities of train crews using the Functional Resonance Analysis Method (FRAM)
The classical training process for train crews uses real vehicles. This approach has two disadvantages. First, the vehicles are taken out of commercial service; this creates high costs, and to minimize them the training time is kept as short as possible. Secondly, there is no possibility to train for dangerous situations, for example a fire on board a train. Thus, the use of Virtual Reality in the training process is a key undertaking to improve the safety and efficiency of railway operation processes. The problem arises of how to choose safety-relevant situations for implementation as scenarios in the Virtual Reality environment. The paper proposes a method for determining train crew activities based on activity execution variability. The variability of activity execution is characterized by precision and timeliness. The precision and timeliness of train crew activity performance were estimated mainly on the basis of a survey of train crews, as well as operational data from the Polish Railway Network Manager.
The research problem is focused on the selection of the mo...
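A minimal sketch of the selection idea (the activities and scores are invented placeholders, not the survey results): rank activities by an aggregate variability score over the precision and timeliness dimensions and pick the most variable ones as Virtual Reality scenario candidates.

```python
# Hypothetical FRAM-style output-variability scores per activity, each in [0, 1]
# (0 = stable output, 1 = highly variable). Values are illustrative only.
activities = {
    "departure procedure":    {"precision": 0.2, "timeliness": 0.3},
    "shunting communication": {"precision": 0.7, "timeliness": 0.5},
    "fire-on-board response": {"precision": 0.9, "timeliness": 0.8},
    "door-closing check":     {"precision": 0.3, "timeliness": 0.2},
}

def variability_score(v, w_precision=0.5, w_timeliness=0.5):
    """Weighted aggregate of the two output-variability dimensions."""
    return w_precision * v["precision"] + w_timeliness * v["timeliness"]

ranked = sorted(activities.items(), key=lambda kv: variability_score(kv[1]), reverse=True)
for name, v in ranked:
    print(f"{variability_score(v):.2f}  {name}")
```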
Name: Franciszek Restel (franciszek.restel@pwr.edu.pl)
Paper 4 SU247
Lead Author: Susanna Kristensen Co-author(s): Yiliu Liu
yiliu.liu@ntnu.no
Ingrid Bouwer Utne
ingrid.b.utne@ntnu.no
Dynamic risk analysis of maritime autonomous surface ships
Background
In the future, ships may utilize different technologically advanced solutions to perform their missions, for example supervisory risk control and intelligent power management (Utne et al., 2020), alternative energy sources (Pan et al., 2021), and sensors and cameras for ocean surveillance and navigation (Pizarro & Singh, 2003). Maritime autonomous surface ships (MASS) are under development (IMO, 2021). The risks during MASS operations will be affected by dynamic factors relating to the operational environment, the technical systems, and the mission specifications. This introduces the need for dynamic risk analysis.
For MASS, sensors, actuators, and computers gradually take over the tasks previously performed by the crew (Utne et al., 2017). The implementation of autonomous functionalities on ships can have many advantages. However, it also makes the realization of necessary functions on board the vessel, such as maintaining an adequate level of situational awareness and performing safe navigation, more dependent on technical components. Better situa...