PSAM12 - Probabilistic Safety Assessment and Management
Tuesday, June 24, 2014



Sessions:
Plenary - Luncheon - T01 - T02 - T03 - T04 - T05 - T06 - T07 - T11 - T12 - T13 - T14 - T15 - T16 - T17 - T21 - T22 - T23 - T24 - T25 - T26 - T27


T00 Tuesday Plenary:

9:00 AM

Globalization 3.0 Credit Purge Cycle: Short Term Income & Long Term Wealth

John O'Donnell
Online Trading Academy, CA, USA

Abstract: This talk takes up the inflation vs. deflation debate as defined by the Austrian School of Economics, and discusses and shows examples of how to identify high-probability, low-risk trades in the global capital markets using the newly patented OTA core strategy to identify supply versus demand of institutional order flow on a price chart. John is known for his focus and thoughts on issues such as historic boom/bust business cycles and the potential coming burst of the credit bubble in the “Globalization 3.0 Era.” His background in both education and financial services gives him a unique ability to teach complex financial theories and trading skills to beginner investors and seasoned traders alike.

Bio: Mr. O’Donnell’s background in both education and financial services gives him a unique ability to teach complex financial theories and trading skills to beginner investors and seasoned traders alike. As Chief Knowledge Officer, John has been an instrumental player in making Online Trading Academy the premier trading educator in the world, with 35 physical learning centers in 8 countries. Mr. O’Donnell earned a BS in Science from Southwest Baptist University and has personally been involved in the stock market since 1968. In 1975, he founded the Economic Monetary Investment Research Society, Atlanta. He began his career as a public school teacher and later transitioned to public corporations, working as an investment banker. Mr. O’Donnell has 40+ years of successful corporate leadership experience. He was the Founder and CEO of Precious Metals Exchange. His next venture was as CEO of Penn Pacific Financial Corp, a public company. He then founded and became the CEO of Republic Resources, Inc., which was the world’s largest publicly traded grower of jojoba oil, with a $75 million market cap. Prior to joining Online Trading Academy, Mr. O’Donnell was Co-founder and CEO of Double Win Capital, Inc., a boutique investment bank. Mr. O’Donnell’s leadership and vision have not gone unnoticed: he was a Finalist two consecutive years in the “Entrepreneur of the Year” contest in Orange County, Calif., managed by Ernst & Young and Inc. magazine. Mr. O’Donnell came to Online Trading Academy in 1998. As one of the first equity partners and pioneers in developing the education division, he helped transition the company from a floor-based equities broker/dealer model with $500 million per day in day trades to one of the largest Direct Market Access franchisor trading schools operating today. Mr. O’Donnell is instrumental in Online Trading Academy’s business development initiatives with strategic partners and industry leaders like NYSE, CME, and NASDAQ. He has been a featured speaker at many major active trader/investor expos in New York, London, Paris, Rio de Janeiro, Toronto, Las Vegas, San Francisco, Miami, and Dallas. He has been interviewed on and featured in a variety of financial media such as the Wall Street Journal, CNBC, Bloomberg, Fox Business, FT.com, Equities Magazine and Traders Journal. He is also co-host of PowerTradingRadio.com. This year marks what he considers his greatest accomplishment: becoming a second-time grandparent.

T00 Tuesday Luncheon:

12:00 Noon

Which way PRA?

Woody Epstein (a), Jerzy Grynblat (b)
a) Senior Principal Consultant, Lloyd’s Register Energy - Japan, and b) Nuclear Business Director for Lloyd’s Register and Scandpower

Abstract: March 11, 2011 was a wakeup call. The events of that day, and for several months afterwards, convinced many of us
that to help society deal with disastrous events we might somehow have to change the way we do probabilistic risk assessment. It was not only the disaster at the Fukushima Daiichi Nuclear Power Station, but also the impacts on oil and gas plants, public infrastructure, business continuity, supply chain, emergency preparedness and response, medical facilities, the understanding of extreme natural events, risk communication with the public … the 3.11 list seems endless.
How can we continue to make PRA relevant in the light of March 11?
Over 50 risk professionals from the nuclear, health, oil/gas, and aerospace industries, from academia, and from government were asked to write down a couple of topics/ideas which they think have been weak points of PRA, things we must change going forward, and perhaps even tentative solutions. During this talk, we will present some of their ideas and analyze how they pertain to the future of PRA.

Woody Epstein Bio: Since 1983, Woody Epstein has been a quantitative risk assessment (QRA) consultant, manager, mathematician, and technical advisor for large organizations, both public and private. Since 2011, he has been the Manager of Risk Consulting for Lloyd’s Register Consulting, Japan; from 2001 to 2011, he was the Operations Manager and Manager of Risk Consulting for ABS Consulting, Japan. In March 2011, the Tokyo Institute of Technology invited Woody to be a visiting scientist, where he authored an independent evaluation of the accident at Fukushima Daiichi for the Ninokata Laboratory, “A PRA Practitioner Looks at the Great East Japan Earthquake and Tsunami.” In August 2012, he was the operations manager for the International Atomic Energy Agency’s mission to the Onagawa NPS, to do a damage walkdown of the station after the Great East Japan Earthquake. From March 2013 until the present, Woody has been the project manager for the active fault studies for the Japan Atomic Power Company and the Tohoku Electric Power Company at the Tsuruga and Higashidori NPPs. In August 2013, he served as the operations manager for the United Nations Scientific Committee on the Effects of Atomic Radiation mission to Fukushima Prefecture to listen to and film the Fukushima people. He is one of the founders of the Open PSA Initiative, a Core Group Member of the Resilience Engineering Group, a member of the Japan Nuclear Safety Institute’s Technical Review Committee for PRA and Seismic PRA, and a member of the Risk Technical Committee of the Atomic Energy Society of Japan.

Jerzy Grynblat Bio: Jerzy Grynblat started working in the nuclear risk management business in the mid-1970s. During his career he has been involved in several reliability and economic analyses for the power industry, including nuclear, coal, oil and solar energy. Mr. Grynblat has, among other assignments, worked on several nuclear power plant modernisation projects for the Swedish utility OKG Aktiebolag. In those projects Mr. Grynblat worked on establishing safety criteria, performing deterministic safety analyses and preparing safety-related licensing documentation to be submitted to the Swedish Nuclear Power Inspectorate. Mr. Grynblat possesses broad experience within the field of deterministic safety analysis, safety standards, norms and criteria for nuclear power plants. In 1984, Mr. Grynblat was one of the co-founders of RELCON, a risk management consulting company that joined Scandpower in the beginning of 2007 and is now itself a member of the Lloyd’s Register Group. Mr. Grynblat was the president of the company between 1995 and 2010. RELCON developed and marketed RiskSpectrum®, the software that dominates the probabilistic risk analysis market in the nuclear business. RiskSpectrum PSA is licensed for use at more than 50% of the world’s nuclear power plants. Following the acquisition of Scandpower by Lloyd’s Register, Mr. Grynblat was appointed at the beginning of February 2010 as the Nuclear Business Director for Lloyd’s Register and Scandpower.

T01 Aviation and Space I

10:30 Honolulu

Chair: Sergio Guarro, ASCA Inc.

116

Cabin Environment Physics Risk Model

Christopher J. Mattenberger (a) and Donovan L. Mathias (b)

a) Science and Technology Corporation, Moffett Field, CA, USA, b) NASA Ames Research Center, Moffett Field, CA, USA

This paper presents a Cabin Environment Physics Risk (CEPR) model that predicts the time for an initial failure of Environmental Control and Life Support System (ECLSS) functionality to propagate into a hazardous environment and trigger a loss-of-crew (LOC) event. This physics-of-failure model allows a probabilistic risk assessment of a crewed spacecraft to account for the cabin environment, which can serve as a buffer to protect the crew during an abort from orbit and ultimately enable a safe return. The results of the CEPR model replace the assumption that failure of the crew-critical ECLSS functionality causes LOC instantly, and provide a more accurate representation of the spacecraft’s risk posture. The instant-LOC assumption is shown to be excessively conservative and, moreover, can impact the relative risk drivers identified for the spacecraft. This, in turn, could lead the design team to allocate mass for equipment to reduce overly conservative risk estimates in a suboptimal configuration, which inherently increases the overall risk to the crew. For example, available mass could be poorly used to add redundant ECLSS components that have a negligible benefit but appear to make the vehicle safer due to poor assumptions about the propagation time of ECLSS failures.
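
The contrast with the instant-LOC assumption can be made concrete with a small Monte Carlo sketch (my construction; the distributions and numbers below are placeholders, not CEPR values): loss of crew occurs only when the cabin becomes hazardous before the crew can abort and return.

    import random

    def p_loc_given_eclss_failure(n=100_000):
        """P(LOC | ECLSS failure) when hazard build-up races the abort timeline."""
        loc = 0
        for _ in range(n):
            t_hazard = random.lognormvariate(2.0, 0.5)  # hours until cabin is hazardous
            t_return = random.uniform(3.0, 8.0)         # hours to abort from orbit and land
            if t_hazard < t_return:
                loc += 1
        return loc / n

    print(p_loc_given_eclss_failure())  # the instant-LOC assumption would give 1.0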

541

Quantitative Launch and Space Transport Vehicle Reliability and Safety Requirements: Useful or Problematic?

Sergio Guarro

ASCA Inc., Redondo Beach, USA

The setting of quantitative reliability or safety requirements for launch and space-transport vehicles (LVs, STVs) is an established practice in the space business, applied via related provisions in the contracts by which LV and STV development and acquisition activities are assigned to supplier organizations. At face value this is a reasonable approach to establishing a desired level of safety and reliability for space missions, a compelling need given that extremely valuable payload, or even human life in the case of crewed vehicles, may be at stake. However, serious issues presently make the approach difficult to implement successfully, ranging from ambiguities in the requirement definitions themselves, such as the often encountered separation of the definitions of LV reliability into "design" and "mission," to lack of realism in the setting of the requirement values relative to the state of advancement of current LV/STV technology and to the magnitude of the uncertainty in reliability and risk assessments relying on the currently available quantification data. These issues, and the hindrance they pose to the very reliability and safety assessment techniques that the setting of quantitative requirements is in theory designed to foster and stimulate, are identified and discussed.

470

Conception of Logistic Support Model for Controlling Passengers Streams at the Wroclaw Airport

Kierzkowski Artur and Kisiel Tomasz

Wrocław University of Technology, Wrocław, Poland

The article presents a preliminary concept of a model of logistical support for the functioning of the Wrocław Airport with regard to controlling the passenger streams in the airport terminal. The model of logistical support for the functioning of the Wrocław Airport will be a dynamic model in which the original schedule of processes may change during its execution. Dynamic reliability tools allow for a much better description of the surrounding reality by taking into account changes over time in the input values for the calculations. Thereby, dynamic reliability breaks with the traditional approach to calculations, in which, after the determination of the states and relationships of a system, time-invariant input values for the calculations are introduced (the results of calculations can show the dynamics through relationships of elements of the system). The article presents a developed preliminary concept of the functioning of a passenger terminal based on designated main characteristics of the random arrivals of passengers and random durations of individual activities. The article also presents the direction of the future development of the presented concept.
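
As a deliberately simplified illustration of such a dynamic model, the sketch below simulates a single security checkpoint with random passenger arrivals and random service durations; all rates are placeholders, not Wrocław data.

    import random

    def simulate_checkpoint(sim_s=4 * 3600, arrivals_per_h=240, servers=3, mean_service_s=40):
        """Single-queue, multi-server checkpoint; returns the mean wait in seconds."""
        t = 0.0
        free_at = [0.0] * servers                              # when each desk becomes free
        waits = []
        while True:
            t += random.expovariate(arrivals_per_h / 3600.0)   # next random arrival
            if t > sim_s:
                break
            i = min(range(servers), key=lambda k: free_at[k])  # earliest-free desk
            start = max(t, free_at[i])
            waits.append(start - t)
            free_at[i] = start + random.expovariate(1.0 / mean_service_s)
        return sum(waits) / len(waits)

    print(f"mean wait: {simulate_checkpoint():.0f} s")

Making arrivals_per_h a function of t would capture the time-varying passenger streams the authors describe.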

243

The Effects of Light Exposure on Flight Crew Alertness Levels to Enhance Fatigue Risk Management Prediction Models

L. Brown (a), A.M.C. Schoutens (b), G. Whitehurst, T. Booker, T. Davis, S. Losinski, and R. Diehl (a)

a) Western Michigan University, Kalamazoo, USA, b) FluxPlus, BV, The Netherlands

Fatigue prediction models are used as scheduling tools to manage risks associated with fatigue. They allow for proactive identification of possible hazards and the use of risk management tools to mitigate risks, key aspects of a Safety Management System (SMS). These tools provide ways for airlines to predetermine which flights and schedules may carry more risk and allow operators to intervene and proactively reduce risk when possible. This process allows the operator to identify inherent risk built into flight schedules and to maximize alertness, adding another layer to the operator's safety management system. Current fatigue prediction models do not account for the effect of light and darkness on alertness levels. The effects of light on alertness have been well established and could be built into fatigue risk management systems. Recent research conducted by Brown et al. (2014) examined whether timed ocular light exposure could mitigate fatigue, reducing physiological, perceived and cognitive fatigue, to transform aviation alertness models [4]. The availability of this information opens up a new range of possibilities, making it possible to "build" the light/dark effect into crew alertness models and scheduling tools to improve aviation safety and crew member health and to manage risk.

472

Conception of Logistic Support Model for the Functioning of a Ground Handling Agent at the Airport

Kierzkowski Artur, Kisiel Tomasz

Wrocław University of Technology, Wrocław, Poland

This article presents the concept of a model of logistical support for the functioning of a ground handling agent. The developed concept will enable the analysis and optimization of logistical processes associated with ground handling of an aircraft. In particular, the model will enable multi-criteria analysis of the aspects determining the effectiveness of the executed processes: the analysis of the functioning of the ground handling agent for projected traffic flows (the flight timetable to be executed in the next flight season; a distant time horizon), and the analysis of the functioning of the airport for projected traffic flows (the flight timetable executed in the current season, on a given day when more premises regarding the projected few-hour period are known; a short-term horizon). In addition, the projected results will include the impact of the reliability of the operational equipment: on the basis of operational data, functions characterizing reliability (the probability density function of time between failures and the probability density function of the time to restoration of operating capability) will be developed.

T02 Fire Modeling and Applications

10:30 Kahuku

Chair: Yan Gao, Westinghouse Electric Company

426

Preliminary Assessment of the Probabilistic Risk of a Nuclear Power Plant Against Aircraft Impact Loading

Daegi Hahm, Sang Shup Shin, and In-Kil Choi

Korea Atomic Energy Research Institute, Daejeon, Korea

In Korea, research to develop aircraft impact risk quantification technology was initiated in 2012 by the Korea Atomic Energy Research Institute. This paper presents the purpose and objectives of that research project and introduces the interim results from the first two years. In the first year, an aircraft impact accident scenario was developed and the reference parameters of the aircraft impact accident were determined. To determine the reference loading parameters, we performed repeated simulations for many analysis cases, considering variations of loading parameters such as mass, velocity, angle of crash, etc. A revised version of Riera's analysis method, which is appropriate for a simplified impact analysis, was applied in the simulation procedure. The target nuclear power plant is a typical PWR-type NPP. In the second year, the floor response spectra at the locations of important components were evaluated for the estimation of the failure probabilities and fragility functions of structures and equipment. Some representative floor response spectra for the containment building and the primary auxiliary building are presented in this paper. A conceptual technical procedure to assess the aircraft impact risk of an NPP using the previous results is proposed. It is expected that the aircraft impact risk to NPPs will be estimated in the near future.
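
For context, the Riera approach cited above idealizes the aircraft as a one-dimensional crushing missile. In its classic form the impact force on a rigid target is

    F(t) = P_c[x(t)] + mu[x(t)] * v(t)^2

where x(t) is the crushed length of the fuselage, P_c the static crushing strength at x, mu the mass per unit length, and v(t) the velocity of the uncrushed portion, which decelerates according to m[x(t)] dv/dt = -P_c[x(t)]. The abstract does not give the details of the revised version used in the project.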

530

Significance of Structural Integrity Assessment in the Sustenance of Nigeria’s Infrastructural Development

Olaniyi Abraham Oluseun (a), Ogunseye Olatunde David (b), Engr. ‘Wale Lagunju FNSE (c)

a) Ministry of Works, Akure, Ondo State, Nigeria, b) Federal Polytechnic, Bida, Niger State, Nigeria, c) INTECON PARTNERSHIP LTD., Ibadan, Oyo State, Nigeria.

In recent years, Nigeria has made important strides towards improving the infrastructural sector of its economy, spending billions of naira yearly on the construction of infrastructure but with little provision for operations and maintenance. Thus, there has been extensive damage to, and consequent under-utilization of, various civil engineering infrastructure: highways, buildings, pipelines, railways, refineries and so on. Structural integrity is a crucial component of the engineering field and is considered essential for engineers to learn and apply to their work. Consequently, consistent inspections must be carried out on most structures to ensure that they do indeed possess adequate structural integrity. For regular inspections, non-destructive test (NDT) methods may provide a relatively swift and inexpensive means to establish whether a structure is still in a serviceable condition, without impairing parts of or the entire structure. The results of these investigations may improve the quality of information by eliminating the prejudice associated with current visual inspection techniques. This paper examines the modes of assessing the structural integrity of concrete structures under both destructive and non-destructive approaches, considering the various tools in use and introducing preferred tools and their nondestructive applications. The case study is the structural integrity assessment of the Ijora-Apapa bridge in Lagos, southwest Nigeria, on which tests such as compressive strength, degree of deterioration and surface delamination, and reinforcement cover measurements were carried out, in the capacity of inspection and analysis of in-situ concrete structures, in an attempt to proffer solutions to the extensive damage of civil engineering infrastructure and to allow Nigeria to retain wealth as a nation and build upon a solid industrial philosophy.

591

Fire Maintenance Rule (a)(4) Implementations in US Nuclear Plants

Yan Gao (a), Victoria K Anderson (b), Anil K. Julka (c)

a) Westinghouse, Windsor, CT, b) NEI, Washington, DC, c) NextEra Energy, Juno Beach, FL

An overview and lessons learned from the latest 10 CFR 50.65(a)(4) guidance update to include fire risk evaluations, and its implementation in U.S. nuclear plants, are presented. By December 1, 2013, all US nuclear plants had implemented this new NRC requirement for the Maintenance Rule (MR) (a)(4) program to include fire risk evaluation and management actions as part of the existing at-power MR (a)(4) program. This paper introduces the background, the need, the requirement, the process and some of the implementation details of incorporating fire risk assessment within the existing MR (a)(4) program. The paper also discusses some of the applicable program interactions within a nuclear plant, such as the interactions among the Fire Protection, Appendix R, PRA and Work Control programs and activities. All of these functions/programs are required to support a successful fire MR (a)(4) implementation. Some of the technical and implementation issues, such as the use of safe shutdown analysis, qualitative and quantitative risk analysis application, equipment scoping and risk management actions, are also discussed, along with some of the lessons learned since the December 1, 2013 implementation of this new program.

235

Event Tree Methodology as Analytical Tool for Fire Events

Svante Einarsson (a), Michael Tuerschmann (b), Marina Roewekamp (a)

a) Gesellschaft für Anlagen-und Reaktorsicherheit (GRS) mbH, Köln, Germany, b) Gesellschaft für Anlagen-und Reaktorsicherheit (GRS) mbH, Berlin, Germany

A key element of performing a Fire PSA is the determination of fire-induced failure probabilities of components and cables for those fire sources identified as relevant. Such determination is usually made by means of fire event trees. The Fire PSA analyst derives specific fire event trees for all possible fire sequences, taking into account plant characteristics (e.g. on-site plant internal or only external fire brigade), the compartment-specific situation and boundary conditions (e.g. room volume and ventilation conditions), potential fire sources (e.g. location, material) and safety targets (e.g. components, cables). Generic fire event trees can be a helpful starting point for the analyst, but these generic event trees must be adapted within a plant-specific Fire PSA; e.g., the branch points have to be checked as to whether they really reflect the plant characteristics, and the branch point probabilities have to be determined by applying plant-specific data. Generic event trees can be applied for another purpose as well. A set of standardized generic event trees can be used to describe the main fire-specific characteristics of fire events observed in the operating experience. This approach is particularly convenient for the analysis of sets of fire events. Within an ongoing research and development project, a set of generic fire event trees has been developed, consisting of a time-dependent event tree which sub-divides a fire event into different phases, an event tree specifically addressing fire detection, and an event tree specifically addressing fire suppression. The set of generic fire event trees characterizes all the possibilities of the phases of fire initiation, fire development and spreading as a stochastic process. Each fire event having occurred represents a realization of this process and can be described by a corresponding sequence number. Presently, the international fire events database OECD FIRE contains more than 420 fire events from 146 nuclear power plants (PWR and BWR) in 12 countries. The above-mentioned set of generic fire event trees can be used to analyze the fire events reported to the OECD FIRE Database. In other words, for the entirety of fire events observed in the operating experience collected from nuclear power plants in these countries, the corresponding sequence numbers of the generic fire event trees can be determined. The triplet of sequence numbers represents an additional attribute of each reported fire event, which will be stored in the OECD FIRE Database as additional information. The paper presents examples of how to use this new attribute of the OECD FIRE Database to retrieve additional information on trends in the fire events observed, which may be used to solve future fire analysis tasks.

162

The Implementation Standard for Internal Fire Probabilistic Risk Assessment of Nuclear Power Plants

Toshiyuki Takagi (a), Naoyuki Murata (b)

a) Tohoku University, Aoba-ku, Sendai, Japan, b) Japan Nuclear Safety Institute, Minato-ku, Tokyo, Japan

Internal Fire PRA (IFPRA) is an analysis method which can quantitatively evaluate plant damage states, including core damage frequency. Because PRA results, analyzed systematically, can be used to identify the causes of end states, such as simultaneous multiple component failures (SMCF) of safety-significant components induced by the effects of fire, the method is utilized worldwide. Because an implementation standard for IFPRA had not yet been developed in Japan, Japanese utilities have not evaluated fire consequences sufficiently. Considering these circumstances, the Atomic Energy Society of Japan (AESJ) has been working on development of the “Implementation Standard for Internal Fire Probabilistic Risk Assessment of Nuclear Power Plants” since fiscal year 2012. This IFPRA standard prescribes the requirements and specific methods to implement Level 1 PRA for accidents initiated by internal fire at NPPs during power operation. The standard is being finalized by the Fire PRA subcommittee under the Risk Technical Committee of the Standards Committee of AESJ and is expected to be published in fiscal year 2014. The standard will give great support to PRA engineers in performing IFPRA with adequate quality, by identifying vulnerabilities associated with internal fire, and will moreover contribute to further improvement of NPP safety.

230

Technical Reliability of Active Fire Protection Features – Generic Database Derived from German Nuclear Power Plants

Burkhard Forell, Svante Einarsson, Marina Roewekamp

Gesellschaft für Anlagen-und Reaktorsicherheit (GRS) mbH, Köln, Germany

In the frame of Probabilistic Fire Safety Analysis, fire event and fault trees specific to the conditions of the nuclear power plant under consideration need to be established for estimating the corresponding branch point probabilities and end states for core or fuel damage frequency. That also requires applying technical and human reliability data for fire-specific event sequences. The technical reliability of fire detection systems, fire and smoke extraction dampers, fire doors and fire extinguishing systems and equipment, including extinguishing media supplies, has been estimated. The data have been evaluated by analyzing the documentation of periodic in-service inspections as well as additional information and reports which resulted from the inspection findings. For more complex systems, in addition to the components' reliability data, fault trees are presented to calculate the system's reliability. This type of data was already published in 2005 in the document on PSA data supplementing the German PSA Guideline and has now been extended to cover 111 plant operational years of six power reactor units of different age and type. The generic data may also be applied as a priori information for estimating the reliability of components with similar design and equivalent inspection and maintenance practice at nuclear power plants abroad.
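
The basic estimate behind such generic data can be sketched in a few lines. A minimal example (my illustration, not the authors' procedure), assuming a Jeffreys prior for a per-demand failure probability, a common convention in PSA data work:

    def failure_on_demand(failures: int, demands: int) -> float:
        """Posterior-mean per-demand failure probability under a Jeffreys
        Beta(0.5, 0.5) prior, updated with inspection results."""
        return (failures + 0.5) / (demands + 1.0)

    # e.g. a fire damper that failed 2 of 480 actuations during periodic tests
    print(f"{failure_on_demand(2, 480):.2e}")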

T03 Reliability Analysis and Risk Assessment Methods II

10:30 O'ahu

Chair: Smain Yalaoui, Canadian Nuclear Safety Commission

54

Risk Assessment and Vulnerable Path in Security Networks Based on Neyman-Pearson Criterion and Entropy

Ruimin Hu (b,a), Haitao Lv, and Jun Chen (a)

a) National Engineering Research Center for Multimedia Software, Wuhan University, Wuhan, China, b) School of Computer, Wuhan University, Wuhan, China

In this paper, the protection coverage area of a security system is considered. The protection coverage is determined by applying the protection model of security systems, which is brought forward according to the Neyman-Pearson Criterion. The protection model can be used to define the protection probability on a grid-modeled field. The security systems deployed in a guard field are regarded abstractly as a graph. On the basis of entropy theory, we propose the risk entropy, which can be used to quantitatively evaluate the risk at an arbitrary position in an area. Using a graph model of the perimeter, we use Dijkstra's shortest path algorithm to find protection breach paths. The protection probability on the vulnerable path is taken as the risk measure of a security network. Furthermore, we study the effects of some parameters on the risk and the breach protection probability and present simulations. Ultimately, we can gain insight into the risk of a security network.
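
To make the path-finding step concrete, here is a minimal sketch (my construction, not the authors' code) of a vulnerable-path search: each grid cell carries a detection probability p, edges are weighted -log(1 - p), and Dijkstra's shortest path then maximizes the product of non-detection probabilities, i.e. the breach probability.

    import heapq, math

    def vulnerable_path(p, start, goal):
        """Breach (non-detection) probability of the most vulnerable path."""
        rows, cols = len(p), len(p[0])
        dist = {start: 0.0}
        pq = [(0.0, start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == goal:
                return math.exp(-d)          # product of (1 - p) along the path
            if d > dist.get((r, c), math.inf):
                continue                     # stale queue entry
            for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= v[0] < rows and 0 <= v[1] < cols:
                    w = -math.log(max(1e-12, 1.0 - p[v[0]][v[1]]))
                    if d + w < dist.get(v, math.inf):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
        return 0.0

    grid = [[0.0, 0.8, 0.2],
            [0.3, 0.9, 0.1],
            [0.2, 0.4, 0.0]]
    print(vulnerable_path(grid, (0, 0), (2, 2)))  # follows the weakly guarded edge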

471

Dependability Evaluation of Data Center Power Infrastructures Considering Substation Switching Operations

Suellen Silva, Bruno Silva, Paulo Romero Martins Maciel (a), Armin Zimmermann (b)

a) Federal University of Pernambuco, Recife, Brazil, b) Ilmenau University of Technology, Ilmenau, Germany

Electrical power systems (EPS) are systems that include energy generation, transmission and distribution. One of the most important components of an EPS is the electrical substation, which is utilized to control, modify, distribute and direct the electricity flow. The quality level of these systems is regulated by Service Level Agreements (SLAs), which specify, for instance, the maximum downtime per year. Penalties may apply if the quality level is not satisfied. On the other hand, fault tolerance techniques employ redundant equipment to increase the availability level of systems in general, and the use of spare devices may incur additional infrastructure costs. Thus, to meet SLA requirements, electrical system designers need to evaluate the dependability level of these systems. It is important to state that the use of software tools is suitable for dependability metrics evaluation, since it is not trivial to simulate or analyze complex systems. Modeling techniques with a strong mathematical background such as Stochastic Petri Nets (SPN) and Reliability Block Diagrams (RBD) can be adopted to assess dependability in power systems. This work proposes a methodology which includes a hierarchical heterogeneous modeling technique that combines the advantages of both SPNs and RBDs to evaluate data center power infrastructures considering substation switching operations. A case study is provided to demonstrate the feasibility of the proposed methodology.
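
The RBD half of such a hierarchy composes steady-state availabilities. A minimal sketch (with invented example numbers, not the paper's models):

    def availability(mttf_h, mttr_h):
        """Steady-state availability from mean time to failure and to repair."""
        return mttf_h / (mttf_h + mttr_h)

    def series(*avails):
        out = 1.0
        for a in avails:
            out *= a                 # all blocks must be up
        return out

    def parallel(*avails):
        out = 1.0
        for a in avails:
            out *= (1.0 - a)         # all blocks must be down to fail
        return 1.0 - out

    ups = parallel(availability(50_000, 8), availability(50_000, 8))  # redundant UPS pair
    feed = series(availability(100_000, 12), ups)  # substation feed in series with UPS
    print(f"{feed:.6f}")

In the hierarchical scheme, numbers like these would come from SPN submodels of the substation switching behavior rather than fixed MTTF/MTTR values.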

57

Fukushima Accident Implications on PSA and on the Regulatory Framework in Canada

Y. Akl, S. Yalaoui

Canadian Nuclear Safety Commission, Ottawa, Canada

The Fukushima Daiichi accident has led the Canadian Nuclear Safety Commission (CNSC) to launch three major undertakings regarding the conduct of Probabilistic Safety Assessment (PSA). The first is the amendment of Regulatory Document S-294, “Probabilistic Safety Assessments for Nuclear Power Plants,” to require the extension of Level 1 and Level 2 PSA to cover irradiated fuel bay events, the inclusion of external events and their potential combinations, and multi-unit considerations. The second undertaking is the re-evaluation, by the licensees, of the site-specific external hazards to determine whether the current design protection is sufficient. The third undertaking is a new requirement directing a utility to provide a whole-site PSA, or a methodology for a whole-site PSA, as well as an update of the baseline PSA to take into account Fukushima-driven enhancements. This paper provides a brief description of the CNSC undertakings, with a focus on the technical challenges regarding the development of the whole-site PSA.

59

Strength of ZBDD Algorithm for the Post Processing of Huge Cutsets in Probabilistic Safety Assessment

Woo Sik Jung (a) and Jeff Riley (b)

a) Sejong University, Gwangjin-Gu, Seoul, South Korea, b) Electric Power Research Institute, Palo Alto, CA, USA

The Zero-suppressed Binary Decision Diagram (ZBDD) algorithm is an important variation of the BDD algorithm, since it quickly solves a very large fault tree with a very low truncation limit when performing a Probabilistic Safety Assessment (PSA) of a nuclear power plant. The ZBDD algorithm can generate huge cutsets with a very small truncation limit for an accurate core damage frequency (CDF) calculation. Furthermore, the ZBDD algorithm can perform the cutset post-processing that is called cutset recovery in a PSA. This paper explains the strength of the ZBDD algorithm for efficient cutset post-processing. The post-processing of huge cutsets was performed and compared using FTREX and QRECOVER. Test results show that the ZBDD algorithm can play an important role in performing an efficient PSA by significantly reducing cutset quantification and processing time.
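
The operation that makes this hard at scale is minimization (absorption): any cutset that is a superset of another must be dropped. A set-based sketch of that step (my illustration; a real ZBDD performs it implicitly and far more compactly on millions of cutsets):

    def minimize(cutsets):
        """Keep only minimal cutsets, dropping any superset of another cutset."""
        minimal = []
        for cs in sorted(map(frozenset, cutsets), key=len):
            if not any(m <= cs for m in minimal):  # absorbed by a smaller cutset?
                minimal.append(cs)
        return minimal

    print(minimize([{"A"}, {"A", "B"}, {"B", "C"}]))  # {"A","B"} is absorbed by {"A"}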

96

Scoping Estimates of Multiunit Accident Risk

Martin A. Stutzke

U.S. Nuclear Regulatory Commission, Rockville, Maryland, USA

Many nuclear power plants (NPPs) are co-located at a single site. Although NRC regulations recognize the potential for multiunit accidents, probabilistic risk assessments of NPPs have mainly focused on estimating the risk of a single NPP. This paper develops a scoping approach for estimating the total multiunit site risk that uses information from a single-unit Level 3 probabilistic risk assessment.
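
The scoping arithmetic can be illustrated with a toy calculation (my construction; the paper's actual formulation may differ): split the single-unit core damage frequency into events affecting one unit independently and events assumed to affect all co-located units at once.

    def site_cdf(single_unit_cdf, n_units, beta):
        """Scoping estimate of total site CDF; beta is the assumed fraction of
        events that involve all units simultaneously (rare-event approximation)."""
        independent = n_units * (1.0 - beta) * single_unit_cdf
        common = beta * single_unit_cdf
        return independent + common

    print(f"{site_cdf(1.0e-5, 2, 0.1):.2e} per site-year")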

T04 The Petro-HRA Project: Adapting SPAR-H to a Petroleum Context I

10:30 Waialua

Chair: Ronald Boring, Idaho National Laboratory

92

Analysis of Human Actions as Barriers in Major Accidents in the Petroleum Industry, Applicability of Human Reliability Analysis Methods (Petro-HRA)

Karin Laumann (a), Knut Øien (b), Claire Taylor (c), Ronald L. Boring (d), Martin Rasmussen (a)

a) Norwegian University of Science and Technology, Trondheim, Norway, b) SINTEF, Trondheim, Norway, c) Institute for Energy Technology, Halden, Norway, d) Idaho National Laboratory, Idaho Falls, US

This paper presents an ongoing project called “Analysis of human actions as barriers in major accidents in the petroleum industry, applicability of human reliability analysis methods (Petro-HRA)”. The primary objective of this project is to test, evaluate and adjust Human Reliability Analysis (HRA) methods for use in quantifying the likelihood of human error and identifying the impact of human actions on the post-initiator barriers in the main accident scenarios in the petroleum industry. The project has chosen the HRA method “Standardized Plant Analysis Risk - Human Reliability Analysis” (SPAR-H) as the main method to adjust to a petroleum industry context. SPAR-H is a quantification method and does not include descriptions of the “qualitative data collection,” “task identification” and “task analysis” parts of an HRA. This project aims at developing guidelines for all the steps in performing an HRA with SPAR-H in the petroleum industry.
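
For readers unfamiliar with SPAR-H quantification: the method multiplies a nominal human error probability (HEP) by performance shaping factor (PSF) multipliers, with an adjustment factor when three or more PSFs are negative. A minimal sketch (the multipliers in the example are illustrative, not project results):

    def spar_h_hep(nominal_hep, psf_multipliers):
        """SPAR-H-style HEP: nominal HEP times the composite PSF multiplier,
        with the adjustment factor applied when >= 3 PSFs are negative."""
        composite = 1.0
        negative = 0
        for m in psf_multipliers:
            composite *= m
            if m > 1.0:
                negative += 1
        if negative >= 3:  # adjustment keeps the result below 1.0
            return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)
        return min(1.0, nominal_hep * composite)

    # action task (nominal HEP 0.001) under high stress (x2) and poor ergonomics (x10)
    print(spar_h_hep(0.001, [2.0, 10.0]))  # -> 0.02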

181

Qualitative Data Collection for Human Reliability Analysis in the Offshore Petroleum Industry

Claire Taylor

OECD Halden Reactor Project, Institute for Energy Technology (IFE), Halden, Norway

Effective Human Reliability Analysis (HRA) requires both a qualitative analysis of potential human errors and a quantitative assessment of the likelihood of those errors. One of the main conclusions from an International HRA Empirical Study is the importance of qualitative analysis when performing HRA. Although qualitative data collection is relatively well established for HRA in the nuclear industry, there is very little written guidance available on how to perform such data collection. Most HRA methods do not provide guidance on how to do this, even when the method specifies that this activity should be performed. In addition, HRA is still a relatively new concept in the petroleum industry and so there exists little experience in this industry of qualitative data collection for the purposes of HRA and quantification. The Petro-HRA project is funded by the Research Council of Norway and includes a workpackage to evaluate methods for qualitative data collection, with the aim of developing written guidelines for HRA analysts working in the petroleum industry. This paper describes the objectives and research approach for this workpackage, and the findings to date from interviews with HRA analysts working in the petroleum industry on the Norwegian Continental Shelf.

224

Defining Human Failure Events for Petroleum Risk Analysis

Ronald L. Boring (a) and Knut Øien (b)

a) Idaho National Laboratory, Idaho Falls, Idaho, USA, b) SINTEF, Trondheim, Norway

In this paper, an identification and description of barriers and human failure events (HFEs) for human reliability analysis (HRA) is performed. The barriers, called target systems, are identified from risk significant accident scenarios represented as defined situations of hazard and accident (DSHAs). This report serves as the foundation for further work to develop petroleum HFEs compatible with the SPAR-H method and intended for reuse in future HRAs.

147

Human Reliability Assessment of Blowdown in a Gas Leakage Scenario on Offshore Production Platforms: Methodological and Practical Experiences

Sondre Øie, Koen van de Merwe, Sandra Hogenboom (a), Karin Laumann (b), and Kristian Gould (c)

a) DNV GL, Høvik, Norway, b) NTNU, Trondheim, Norway, c) Statoil, Oslo, Norway

This paper aims to share insights gained through the use of Human Reliability Assessment (HRA) in the Quantitative Risk Analysis (QRA) of offshore gas leakage accident scenarios. HRA was applied to one of the basic events in the QRA event tree, ‘failure to manually activate blowdown’. Based on a case study, the chosen approach to HRA is presented along with examples of how it was applied in practice. Using available guidelines, a set of well-established methods was selected for task analysis, human error identification (HEI), human error modelling, and human error reduction. A nuclear-specific HRA method, the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) technique, was chosen for quantification of the Human Error Probability (HEP). Some challenges were identified, for which methodological adaptations and improvements are suggested. These especially concern the accuracy of HEI, the level of detail and petroleum relevance of the Performance Shaping Factors (PSFs), and the integration of HRA results into the QRA model with respect to time. Overall, the method was found to be successful in analyzing human errors and identifying risk-reducing measures when directly applied in a petroleum context. This opens up a new type of risk-informed decision-making not previously made available by traditional QRA practices.

T05 Risk Management Methods and Applications for Asset Management

10:30 Wai'anae

Chair: Stephen Hess, Electric Power Research Institute

63

Risk Informed Margins Management as part of Risk Informed Safety Margin Characterization

Curtis Smith

Idaho National Laboratory, Idaho Falls, USA

The ability to better characterize and quantify safety margin is important to improved decision making about Light Water Reactor (LWR) design, operation, and plant life extension. A systematic approach to the characterization of safety margins and the subsequent margin management options represents a vital input to the licensee and regulatory analysis and decision making that will be involved. In addition, as research and development in the LWR Sustainability (LWRS) Program and other collaborative efforts yield new data, sensors, and improved scientific understanding of the physical processes that govern the aging and degradation of plant SSCs, needs and opportunities to better optimize plant safety and performance will become known. To support decision making related to economics, reliability, and safety, the Risk Informed Safety Margin Characterization (RISMC) Pathway provides methods and tools that enable mitigation options known as risk-informed margins management (RIMM) strategies. The purpose of the RISMC Pathway is to support plant decisions for RIMM with the aim of improving economics and reliability, and sustaining the safety, of current NPPs over periods of extended plant operation. The goals of the RISMC Pathway are twofold:

1. Develop and demonstrate a risk-assessment method, coupled to safety margin quantification, that can be used by NPP decision makers as part of risk-informed margin management strategies.
2. Create an advanced RISMC Toolkit that enables a more accurate representation of NPP safety margins.

The methods and tools provided by RISMC are essential to a comprehensive and integrated RIMM approach that supports effective preservation of margin for both active and passive SSCs. We discuss the methods and technologies behind RIMM in this paper.
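
At its core, margin characterization compares a probabilistic load against a probabilistic capacity. A minimal Monte Carlo sketch of that idea (illustrative numbers only, far simpler than the RISMC Toolkit):

    import random

    def margin_exceedance(n=100_000):
        """Estimate P(load > capacity) for normally distributed load and capacity."""
        failures = 0
        for _ in range(n):
            load = random.gauss(1000.0, 80.0)      # e.g. simulated peak temperature, C
            capacity = random.gauss(1200.0, 40.0)  # e.g. capacity limit, C
            if load > capacity:
                failures += 1
        return failures / n

    print(margin_exceedance())  # the smaller this is, the larger the safety margin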

110

IPOP, an Industrial Assets Management Tool to Support Integrated LifeCycle Management

Jérôme Lonchampt, Karine Aubert-Fessart

EDF R&D, Chatou, France

IPOP, for Investments Portfolio Optimal Planning, is a tool dedicated to industrial asset management. It features different quantification modules to support decision making regarding the maintenance of major components of a nuclear power plant and spare part purchases. This paper describes IPOP and its link with the EPRI Integrated Life Cycle Management (ILCM) software suite. The integrated use of the tool is illustrated with a test case.

211

Integrated Life Cycle Management for Nuclear Power Plant Long Term Operation

Stephen M. Hess and Charles A. Mengers

Electric Power Research Institute, West Chester, PA, USA

High capacity factors and low operating costs have contributed to making commercial nuclear power plants (NPPs) some of the most economical low carbon emission power generators in the world. As a result of both economic and environmental (e.g. climate change) imperatives, it is envisioned that operation of the current fleet of NPPs will extend significantly beyond their original period of licensed operation. However, a decision to extend NPP life involves inter-related technical, economic, regulatory and public policy issues. Due to the long timeframes involved there are large uncertainties associated with the elements that are evaluated to arrive at such a decision. This is particularly important given the potentially large capital expenditures that could be necessary to maintain high levels of safety, operational and economic performance over the intended extended period of operation. In this paper we describe development of an Integrated Life Cycle Management (ILCM) approach that provides methods to assess asset life probabilities through physics of failure based analyses that are then used to develop optimal component refurbishment and replacement strategies that can be implemented at either the plant or fleet level.

559

Asset Integrity – Process Safety Management (Techniques and Technologies)

Soliman A. Mahmoud

Engineering Specialist, Saudi Aramco Oil Company, Saudi Arabia

This paper discusses concepts and methodologies for Asset Integrity and Process Safety Management (AI-PSM) of hydrocarbon operations and elaborates on Inherently Safe Design as a predictive method to meet process safety requirements early in the design stage. Technologies that aid AI-PSM, including the Focused Asset Integrity Review, performance monitoring, and management of the integrity barriers, are also discussed in this paper.

T06 Aging Management Issues for Nuclear (Spent) Fuel and HLW Transport and Storage

10:30 Ewa

Chair: Bernhard Droste, BAM Federal Institute for Materials Research and Testing

179

Reliability of Cask Designs under Mechanical Loads in Storage Facilities

Uwe Zencker, Linan Qiao, Mike Weber, Eva-Maria Kasparek, Holger Völzke

BAM Federal Institute for Materials Research and Testing, Berlin, Germany

The storage of radioactive waste is currently managed in Germany with dual purpose metal casks designed for both transport and storage. Various cask designs are available, ranging from thick-walled cylindrical designs made of ductile cast iron to thin-walled cubical designs made of steel. The BAM Federal Institute for Materials Research and Testing evaluates the reliability of cask designs for a safe enclosure of radioactive waste, including construction, material specifications, manufacturing procedures, and quality assurance measures for production and operation according to the state of the art in science and technology. Mechanical loads arise from accidents during handling of the casks inside storage facilities during interim storage (e.g. stacking or lifting of the cask). These load scenarios are often simulated numerically; therefore, reliable numerical calculations are essential. Mostly, the cask has to withstand a drop from a given height onto a defined target. For interim storage, this target models the foundation of the storage building. A systematic investigation of the effects of small design changes or small variations of test conditions was conducted under horizontal and vertical drop test conditions for cylindrical casks as well as cubical containers. The paper gives advice for a reliable and safe cask design.

180

Considerations of Aging Mechanisms Influence on Transport Safety and Reliability of Dual Purpose Casks for Spent Nuclear Fuel or HLW

Bernhard Droste, Steffen Komann, Frank Wille, Annette Rolle, Ulrich Probst, Sven Schubert

BAM Federal Institute for Materials Research and Testing, Berlin, Germany

When spent nuclear fuel (SNF) or high-level waste (HLW) is stored in dual purpose casks (DPCs), the effects of aging on safety-relevant DPC functions and properties have to be managed in such a way that safe transport after a storage period of several decades remains possible and can be justified and certified throughout that period. The effects of aging mechanisms (e.g. radiation, different corrosion mechanisms, stress relaxation, creep, structural changes and degradation) on the transport package design safety assessment features have to be evaluated. The consideration of these issues in the DPC transport safety case will be addressed. Special attention is given to all cask components which cannot be directly inspected or changed without opening the cask cavity, namely the inner parts of the closure system and the cask internals, such as baskets or spent fuel assemblies. The design criteria of the transport safety case have to consider the operational impacts during storage. Aging is not only a matter of technical aspects but also of “intellectual” aspects, such as changing standards, the development of scientific and technical knowledge, and personnel as well as institutional changes. Those aspects are to be considered in the management system of the license holders and in appropriate design approval update processes. The paper addresses issues covered in a current IAEA TECDOC draft, “Preparation of a safety case for a dual purpose cask containing spent nuclear fuel”.

413

Development of Domestic Maritime Transportation Scenario for Nuclear Spent Fuel

Min Yoo and Hyun Gook Kang

KAIST, Daejeon, Korea

Spent fuel transportation in South Korea is to be conducted by sea in nearshore waters, because a large amount of spent fuel can be shipped far from the public compared with overland transportation. Maritime transportation is expected to increase, and its risk has to be assessed. For the risk assessment, this study utilizes the probabilistic safety assessment (PSA) method and the notion of combined events. Risk assessment of the maritime transportation of spent fuel is not as well developed as that of overland transportation. For the assessment, the transportation scenario should first be developed and categorized. Categories are sorted by accident type (routine, ship damage, cask damage) and health effect type (direct external exposure and indirect internal exposure). This scenario will be exploited in a maritime transportation risk model which includes consequences and accident probabilities.

183

The German Aging Management Approach for Dry Spent Fuel Storage in Dual Purpose Casks

Holger Völzke

Federal Institute for Materials Research and Testing (BAM), Berlin, Germany

Since the decision by the German government to phase out nuclear electricity generation, the total amount of spent nuclear fuel and high-level waste from reprocessing is limited and well determined. In addition, the siting and licensing procedure to establish a final repository was set by a new law in mid-2013, and further delays are very likely before a deep geological repository may start its operation. In the meantime, dry interim storage in dual purpose casks, permanently certified for interim storage as well as transportation, is the established technical solution. Several on-site as well as former centralized facilities have been operated successfully for many years, but storage licenses are generally limited to 40 years, and future lifetime extensions are predictable. Permanent aging management for storage facilities and casks is necessary to demonstrate compliance with safety requirements and, furthermore, to gain relevant data and information about the technical condition of the facilities and their components for future lifetime extensions. For that reason, procedures and measures are currently being improved, and the approach is explained in this paper. In addition, the current status and latest experiences concerning periodic safety inspections and aging management measures are discussed.

468

Understanding the Environment on the Surface of Spent Nuclear Fuel Interim Storage Containers

Charles R. Bryan and David G. Enos

Sandia National Laboratories, Albuquerque, NM, USA

A primary concern with dry storage of spent nuclear fuel is chloride-induced stress corrosion cracking (CISCC), caused by deliquescence of salts deposited on the stainless steel canisters. However, limited access through the ventilated overpacks and high surface radiation fields impede direct examination of cask surfaces for CISCC, or sampling of surface deposits. Predictive models for CISCC must be able to predict the occurrence of a corrosive chemical environment (a chloride-rich brine formed by dust deliquescence) at specific locations (e.g. weld zones) on the canister surface. The presence of a deliquescent brine is controlled by the relative humidity (RH), which is a function of absolute humidity and cask surface temperature. This requires a thermal model that includes the canister and overpack design, the canister-specific waste heat load, and passive cooling by ventilation. Brine compositions vary with the initially deposited salt assemblage, reactions with atmospheric gases, temperature, and the relative rates of salt deposition and reaction; predicting brine composition requires site-specific compositional data for atmospheric aerosols and acid gases. Aerosol particle transport through the overpack and deposition onto the canister must also be assessed. Initial field data show complex variability in the amount and composition of deposited salts as a function of canister surface location.
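
The humidity logic described above can be sketched in a few lines (my illustration, with assumed numbers): convert ambient absolute humidity and a canister surface temperature into RH via the Magnus saturation-vapor-pressure formula, then compare against an assumed deliquescence threshold for the deposited salt.

    import math

    def saturation_vp_hpa(t_c):
        """Magnus formula for saturation vapor pressure over water, in hPa."""
        return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

    ambient_vp = 0.70 * saturation_vp_hpa(25.0)   # 70% RH ambient air at 25 C
    for t in (80.0, 60.0, 40.0, 30.0):            # canister cools as heat load decays
        rh = 100.0 * ambient_vp / saturation_vp_hpa(t)
        wet = rh > 33.0                           # assumed MgCl2 deliquescence RH ~33%
        print(f"{t:.0f} C: RH = {rh:.0f}%  brine possible: {wet}")

The sketch reproduces the qualitative point of the abstract: a hot canister surface keeps the local RH low, and deliquescence becomes possible only as the surface cools toward ambient temperature.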

T07 Dynamic Reliability I

10:30 Kona

Chair: Cristian Rabiti, Idaho National Laboratory

68

Automatic Synthesis of Fault Trees from Process Modelling with Application in Ship Machinery Systems

Gabriele Manno (a), Alexandros S. Zymaris, and Nikolaos M.P. Kakalis (b)

a) DNV GL, Strategic Research & Innovation, Høvik, Norway, b) DNV GL, Strategic Research & Innovation, Piraeus, Greece

System safety analysis provides assurance that a system satisfies its safety constraints even in the presence of component failures. Traditionally, safety analyses are performed based on various formal and informal requirements and design documents. These analyses can often be subjective and depend on the skills and expert knowledge of the practitioner. Moreover, the construction of Fault Trees (FTs) is generally a time-consuming and tedious activity, especially when complex systems are assessed. In this paper we propose a methodology in which FTs are generated automatically from the resolution of formal system engineering models of the system under investigation. The process models are able to capture both the steady-state and dynamic behavior of components in their nominal and failure states. For the development of the process models, the DNV COSSMOS library/platform was used, the first formal process modelling platform developed for marine energy systems. Since functional dependencies are captured by the input-output relations implemented in the system model, the methodology allows for a separation of concerns when modelling different components. Moreover, a library of components can be generated for reuse in other applications. In the view of the authors, the proposed methodology not only allows for automatic synthesis of FTs and state-space exploration, but also bridges the gap between safety and process engineering analyses.

365

Ontology-based Disruption Scenario Generation for Critical Infrastructure

Paolo Trucco, Boris Petrenj and Massimiliano De Ambroggi

Politecnico di Milano, Milan, Italy

Critical Infrastructures (CIs) are exposed to a wide spectrum of threats which vary in nature and can be either internal or external. The THREVI2 project aims at assisting authorities and operators in obtaining comprehensive information on all potential disruption scenarios relevant for critical infrastructure protection (CIP) by creating a comprehensive, multi-dimensional, all-hazards catalogue for CI. It consists of two ontologies (CI systems, and Hazards & Threats affecting CI) connected through vulnerability and (inter)dependency models. Its final implementation in a software tool (PATHFINDER) is aimed at supporting analysts in specifying the overall system of systems and generating a set of relevant disruption scenarios. The paper presents the adopted ontology development process and describes the main features of the final integrated set of ontologies. The CI ontology covers the Energy, Transport, Water and Telecommunications sectors, comprising 11 subsectors in total, each described through two sub-ontologies (physical and functional) interconnected within the service delivery topology. The Hazard & Threat ontology systematically characterises different typologies of events, their attributes, types and possible effects on CI systems. The validation process of the final ontologies is also described. Finally, the progress and challenges in modeling interdependencies are discussed, as well as further developments that will take place.

227

Methodologies for a Dynamic Probabilistic Risk Assessment of the Fast Cascade Occurring in Cascading Failures Leading to Blackouts

Pierre Henneaux (a,b), Daniel Kirschen (b), and Pierre-Etienne Labeau (a)

a) Université libre de Bruxelles, Brussels, Belgium, b) University of Washington, Seattle, USA

Blackouts result from cascading failures in transmission power systems. The typical development of a cascading failure can be split into two phases. In an initial slow cascade phase, an initiating contingency triggers a thermal transient developing on characteristic times much larger than the electrical time constants. This transient significantly increases the likelihood of additional contingencies. The loss of additional elements can then trigger an electrical instability. This is the origin of a subsequent fast cascade, in which a rapid succession of events can lead the system to blackout. Based on these two phases, and because the cascading mechanisms occurring in each phase are very different, the blackout Probabilistic Risk Assessment (PRA) can be decomposed into two levels. A methodology for level I (PRA of the slow cascade) has already been developed. The level II analysis is the assessment of the fast cascade. It starts when the transmission power system becomes electrically unstable and finishes when the system reaches an electrically stable state (a blackout state or an operational state with load shedding). The aim of this paper is to discuss adequate methodologies for level II and to apply one of them to a test system.

372

RAVEN, a New Software for Dynamic Risk Analysis

C. Rabiti, A. Alfonsi, J. Cogliati, D. Mandelli, R. Kinoshita

Idaho National Laboratory, Idaho Falls, USA

RAVEN (Risk Analysis Virtual Environment) is a generic software driver to perform parametric and probabilistic analysis of codes simulating complex systems. Initially developed to provide dynamic risk analysis capabilities to the RELAP-7 code [1], RAVEN's capabilities are currently being extended by adding Application Programming Interfaces (APIs). These interfaces allow RAVEN to drive any code, as long as all the parameters that need to be perturbed are accessible via input files or directly via Python interfaces. RAVEN is capable of investigating the system response, probing the input space using Monte Carlo, grid strategies, or Latin Hypercube schemes, but its strength is its focus on system feature discovery, such as limit surfaces separating the regions of the input space leading to system failure, using dynamic supervised learning techniques. The paper presents an overview of the software capabilities and their implementation schemes, followed by some application examples.
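
To illustrate what a limit surface is (with a toy stand-in model, not RELAP-7 or RAVEN itself): for each value of one uncertain input, bisection locates the value of a second input at which the system response crosses from success to failure; the collected boundary points trace the limit surface.

    def system_fails(power_mw, delay_s):
        # stand-in physics: higher power and longer operator delay both hurt
        return power_mw * (1.0 + 0.01 * delay_s) > 130.0

    def limit_surface(delays, lo=50.0, hi=200.0, tol=1e-3):
        surface = []
        for d in delays:
            a, b = lo, hi
            while b - a > tol:              # bisection on the power axis
                mid = 0.5 * (a + b)
                if system_fails(mid, d):
                    b = mid
                else:
                    a = mid
            surface.append((d, 0.5 * (a + b)))  # boundary point for this delay
        return surface

    for d, p in limit_surface([0, 30, 60, 90]):
        print(f"delay {d:>3.0f} s -> failure above {p:.1f} MW")

RAVEN replaces the bisection with adaptive sampling plus supervised learning so the boundary can be found in many dimensions with far fewer code runs.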

375

Dynamic Methods for the Assessment of Passive System Reliability

Acacia Brunett, David Grabaskas, and Matthew Bucknor

Nuclear Engineering Division, Argonne National Laboratory, Argonne, IL, U.S.

Passive systems present certain challenges for conventional reliability assessments due to their ability to fail functionally without any physical component failures. This results in difficulties when modeling the system using fault trees. Also, the behavior of passive systems can be entirely time-dependent, meaning that their representation in conventional event trees where time is not explicitly modeled is ineffective. Dynamic methods, which utilize computer simulations to dictate the timing of events, offer a possibility to alleviate some of these issues. In this work, a methodology is presented for utilizing discrete dynamic event trees (DDETs) to characterize passive system reliability. A demonstration problem has been chosen which analyzes a long-term station blackout (SBO) in a generic advanced small modular reactor (advSMR) coupled with the reactor cavity cooling system (RCCS), a passive cooling system that relies on natural convection and radiation to reject heat from the reactor guard vessel. Uncertainties related to characteristics affecting the performance of the RCCS are identified, and the methodology for addressing uncertainties in the sequencing of events in scenario progression is presented. This work is part of an ongoing project at Argonne National Laboratory to demonstrate methodologies for assessing passive system reliability.

T11 Reliability Analysis and Risk Assessment Methods III

1:30 PM Honolulu

Chair: HyungJu Kim, NTNU, Department of Marine Technology, Norway

102

A PRA Application to Support Outage Schedule Planning at OL1 and OL2 Units

Hannu Tuulensuu

Teollisuuden Voima Oyj, Eurajoki, Finland

For the Olkiluoto 1 (OL1) and Olkiluoto 2 (OL2) nuclear power plant units, planned outages take place annually: every second year a refuelling outage (duration about one week), and every second year a maintenance outage (duration about two to three weeks). To ensure nuclear safety within such short outage times, well-planned outage schedules are required. For this reason, a PRA application to support outage schedule planning has been developed. The PRA application for outage risk management has six goals: (1) to support outage schedule planning, (2) to assess plant modifications during the outage, (3) to estimate core damage and radioactive release frequencies of the outage, (4) to identify "weak points" of the outage, (5) to teach risk-informed thinking to the outage schedule coordinators and (6) to develop the outage PRA models. The PRA application is performed hour by hour throughout the whole outage. A risk profile for the outage as a function of time is the main result of the analysis. The assessment is updated whenever the outage schedule is updated. Based on the results, the PRA application to support outage schedule planning is an efficient way to improve risk management during outages.
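
The hour-by-hour risk profile described here can be pictured with a toy Python sketch (all numbers invented, not TVO's model): each outage hour carries a conditional core damage frequency, the time integral gives the outage risk, and the peak flags a "weak point" of the schedule.

    import numpy as np

    hours = np.arange(24 * 14)                  # a two-week outage, hourly steps
    cdf_per_hour = np.full(hours.size, 1e-9)    # assumed baseline hourly CDF
    cdf_per_hour[100:148] = 5e-8                # assumed reduced-redundancy window

    outage_risk = cdf_per_hour.sum()            # integral of the risk profile
    weak_point = hours[cdf_per_hour.argmax()]   # hour to reconsider in planning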

120

Loss Of Offsite Power Frequency Calculation II

Zhiping Li

Callaway Energy Center-Ameren Missouri, Fulton, USA

The availability of alternating current (AC) power is essential for safe operation and accident recovery at commercial nuclear power plants. Normally, AC power is supplied by offsite sources via the electrical grid. Loss of offsite power (LOOP) contributes significantly to the overall risk at nuclear power plants, and reliable offsite power is one key to minimizing the probability of severe accidents. The probability of losing all offsite power is therefore an important input to nuclear power plant probabilistic safety assessments. Several studies have analyzed data on LOOP and/or offsite power restoration. However, significant differences in LOOP event description, category, duration, and applicability exist between the LOOP events used in NUREG/CR-6890 and in the EPRI LOOP Reports, and the two use different LOOP frequency calculation methods. While updating the LOOP frequency for some nuclear power plants, the author found a need to clarify how the LOOP frequency should be calculated. "Loss of Offsite Power Frequency Calculation" was presented at PSA2013 in Columbia, SC, in September 2013. A LOOP frequency calculation for an inland plant is performed, and insights about site-specific LOOP frequency calculation and some discussion of the applicability of LOOP events are presented. In addition, in "Loss of Offsite Power Frequency Calculation II", LOOP frequencies for different categories are calculated, and comparisons and discussion of the different LOOP frequency calculation methods are presented.
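
For orientation only, a standard gamma-Poisson update with a Jeffreys prior, of the kind often used for initiating event frequencies, sketched in Python; the event count and exposure time are invented, and this is not necessarily the author's method:

    from scipy import stats

    n_events = 3      # LOOP events observed (illustrative count)
    T = 25.0          # exposure, reactor-calendar years (assumed)

    alpha_post = 0.5 + n_events                   # Jeffreys prior Gamma(0.5, 0)
    mean_freq = alpha_post / T                    # posterior mean, events/year
    lo, hi = stats.gamma.ppf([0.05, 0.95], alpha_post, scale=1.0 / T)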

320

Mean Fault Time for Estimation of Average Probability of Failure on Demand PFDavg

Isshi KOYATA (a), Koichi SUYAMA (b), and Yoshinobu SATO (c)

a) The University of Marine Science and Technology Doctoral Course, Course of Applied Marine Environmental Studies, Tokyo, Japan, and Japan Automobile Research Institute, Tokyo, Japan, b) The University of Marine Science and Technology Doctoral Course, Professor, Tokyo, Japan, c) Japan Audit and Certification Organization for Environment and Quality, Tokyo, Japan

In functional safety standards, the safety integrity of a safety-related system operated in the low-demand mode of operation is defined by its average probability of dangerous failure on demand, PFDavg. In this paper, we first formulate the PFDavg resulting from undetected failures maintained by proof tests, from the viewpoints of the mean fault time, the reliability, and the risk assessment of the safety-related system. Based on this formulation, the mean fault time is derived using the proof test interval for 1-out-of-n redundant systems. The mean fault time is useful for the exact estimation of safety integrity using Markov state-transition diagrams.
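
For reference, the widely used low-demand textbook approximation for the PFDavg of a 1-out-of-n group under periodic proof testing, sketched in Python with assumed rates (the paper derives its results via the mean fault time, so this formula is shown only as a familiar point of comparison):

    lam_du = 2.0e-6   # undetected dangerous failure rate, per hour (assumed)
    tau = 8760.0      # proof test interval, hours

    def pfd_avg_1oon(n):
        # For n = 1 this reduces to lam_du * tau / 2, i.e. a mean fault
        # time of half the proof test interval.
        return (lam_du * tau) ** n / (n + 1)

    print(pfd_avg_1oon(1))   # ~8.8e-3 for a single channel
    print(pfd_avg_1oon(2))   # ~1.0e-4 for 1-out-of-2 redundancy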

164

Reliability Analysis Including External Failures for Low Demand Marine Systems

HyungJu Kim, Stein Haugen (a), and Ingrid Bouwer Utne (b)

a) Department of Production and Quality Engineering, NTNU, Trondheim, Norway, b) Department of Marine Technology, NTNU, Trondheim, Norway

Marine systems fail not only due to equipment failure, but also because of external events like fire or flooding. A fire in the engine room, for example, can damage the main engine and lead to propulsion system failure regardless of the reliability of the main engine itself. Many redundancy requirements for vessels include these external events as a cause of system failure and require physically separated redundancy in order to prevent system failure by a single fire or flooding. External events therefore need to be considered alongside equipment failure when analyzing the reliability of a marine system. The main objective of this paper is to introduce reliability analysis models for (i) equipment failure and (ii) external failure of low-demand marine systems. A Markov model is suggested to calculate the hazardous event frequency (HEF) in this study. The paper also investigates the contribution of the two types of failure (i & ii), and provides a case study of a fire pump in a passenger ship that analyses the contribution of each failure type.
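
A minimal three-state Markov sketch in the spirit of the model described, with invented per-hour rates and a single common repair rate (the paper's HEF model for the fire pump case study is more detailed):

    import numpy as np
    from scipy.linalg import expm

    lam_eq, lam_ext, mu = 1e-4, 2e-5, 1e-2   # per-hour rates (all assumed)
    # States: 0 = working, 1 = failed by equipment, 2 = failed by external event
    Q = np.array([[-(lam_eq + lam_ext), lam_eq, lam_ext],
                  [mu, -mu, 0.0],
                  [mu, 0.0, -mu]])

    p = np.array([1.0, 0.0, 0.0]) @ expm(Q * 8760.0)   # state probabilities at one year
    print(p[1], p[2])   # relative contribution of each failure type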

339

Heterogeneous Redundancy Analysis based on Component Dynamic Fault Trees

Jose Ignacio Aizpurua, Eñaut Muxika (a), Ferdinando Chiacchio (b), and Gabriele Manno (c)

a) University of Mondragon, Mondragon, Spain, b) University of Catania, Catania, Italy, c) Strategic Research & Innovation DNV GL, Høvik, Norway

The aggregation of hardware components to add recovery capabilities to a system function may result in high costs. Instead of adding redundancies of a homogeneous nature aimed at providing recovery capabilities to a predefined system function, there is room in some scenarios to take advantage of over-dimensioning design decisions and overlapping structural functions using heterogeneous redundancies: components that, besides performing their primary intended design function, restore compatible functions of other components. In this work, we present a methodology to systematically evaluate the effect on system dependability of failures, of alternative redundancy and reconfiguration strategies, and of fault detection and communication implementations. To this end, a modeling approach called the Generic Dependability Evaluation Model and its probabilistic analysis paradigm using Component Dynamic Fault Trees are presented. An application to a railway example is presented, showing tradeoffs between dependability and cost when deciding which redundancy and reconfiguration strategies to implement. Finally, details of the experimental prototype implemented using real railway communication elements are described, so as to validate the design concepts treated throughout the paper.

T12 Application of Probability and Physics for NASA Risk Assessment Applications

1:30 PM Kahuku

Chair: Jason Reinhardt, Stanford University

151

Probabilistic Analysis of Asteroid Impact Risk Mitigation Programs

Jason C. Reinhardt, Matthew Daniels, and M. Elisabeth Paté-Cornell

Stanford University, Stanford, United States of America

Encounters with near-Earth asteroids (NEAs) are rare, but can have significant consequences for humanity. Probabilistic analysis of asteroid impact risks is important to fully understand the danger they pose. This work builds on the prior development of a method and model to simulate the distribution of asteroid impact magnitudes on the Earth's surface over a 100-year period. This approach enables analysis of the full distribution of impact events, including those that are large and infrequent. Results of this approach have shown that some of the greatest risks to life and property over the next century are posed by objects in the 300-1000 meter diameter range, which impact the Earth more frequently than those greater than 1 km in diameter and can still produce impact events with global effects. This paper extends the previous work to the assessment of NEA risk mitigation efforts. We compare three types of possible space missions to alter the orbits of hazardous asteroids: kinetic impactors, standoff nuclear explosions, and gravity tractors. Each type of mission is assessed in terms of its reduction of impact risks. The analytic framework and results of this work can serve as input to a wide set of decisions, including technology investments in potential countermeasures.
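
A toy Python version of this kind of century-scale simulation, with an invented impact rate, a crude Pareto size distribution, and mitigation reduced to "a fraction of objects is deflected" (the paper's model and its mission comparisons are far richer):

    import numpy as np

    rng = np.random.default_rng(7)

    def century_worst_impact(p_deflect=0.0, trials=20_000):
        worst = np.zeros(trials)
        for t in range(trials):
            n = rng.poisson(2.0)                       # impacts per century (assumed)
            if n == 0:
                continue
            d = 50.0 * (1.0 - rng.random(n)) ** -0.5   # Pareto diameters, meters
            d = d[rng.random(n) >= p_deflect]          # deflected objects removed
            if d.size:
                worst[t] = d.max()
        return worst

    base = century_worst_impact()
    mitigated = century_worst_impact(p_deflect=0.5)
    print((base > 300).mean(), (mitigated > 300).mean())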

72

Physics-based Entry, Descent and Landing Risk Model

Ken Gee (a), Loc Huynh (b), and Ted Manning (a)

a) NASA Ames Research Center, Moffett Field, USA, b) Science and Technology Corporation, Moffett Field, USA

A physics-based risk model was developed to assess the risk associated with thermal protection system failures during the entry, descent and landing phase of a manned spacecraft mission. In the model, entry trajectories were computed using a three-degree-of-freedom trajectory tool, the aerothermodynamic heating environment was computed using an engineering-level computational tool and the thermal response of the TPS material was modeled using a one-dimensional thermal response tool. The model was capable of modeling the effect of micrometeoroid and orbital debris impact damage on the TPS thermal response. A Monte Carlo analysis was used to determine the effects of uncertainties in the vehicle state at Entry Interface, aerothermodynamic heating and material properties on the performance of the TPS design. The failure criterion was set as a temperature limit at the bondline between the TPS and the underlying structure. Both direct computation and response surface approaches were used to compute the risk. The model was applied to a generic manned space capsule design. The effect of material property uncertainty and MMOD damage on risk of failure were analyzed. A comparison of the direct computation and response surface approach was undertaken.

121

Physics-Based Fragment Acceleration Modeling for Pressurized Tank Burst Risk Assessments

Ted A. Manning, Scott L. Lawrence

NASA Ames Research Center, Moffett Field, CA, USA

As part of comprehensive efforts to develop physics-based risk assessment techniques for space systems at NASA, coupled computational fluid and rigid body dynamic simulations were carried out to investigate the flow mechanisms that accelerate tank fragments in bursting pressurized vessels. Simulations of several configurations were compared to analyses based on the industry-standard Baker explosion model, and were used to formulate an improved version of the model. The standard model, which neglects an external fluid, was found to agree best with simulation results only in configurations where the internal-to-external pressure ratio is very high and fragment curvature is small. The improved model introduces terms that accommodate an external fluid and better account for variations based on circumferential fragment count. Physics-based analysis was critical in increasing the model’s range of applicability. The improved tank burst model can be used to produce more accurate risk assessments of space vehicle failure modes that involve high-speed debris, such as exploding propellant tanks and bursting rocket engines.

192

A Failure Propagation Modeling Method for Launch Vehicle Safety Assessment

Scott Lawrence, Donovan Mathias, and Ken Gee

NASA Ames Research Center

A method has been developed with the objective of making the potentially intractable problem of launch vehicle failure propagation somewhat less intractable. The approach taken is to essentially decouple the potentially multi-stepped propagation process into a series of bi-component transition probabilities. These probabilities are then used within a simple Monte Carlo simulation process through which the more complex behavior evolves. The process is described using a simple model problem and some discussion of enhancements for real-world applications is included. The role of the model within a broader analysis process for assessing abort effectiveness from launch vehicle failure modes is also described.
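
The decoupling idea can be shown compactly: multi-step propagation emerges from repeatedly sampled bi-component transitions inside a Monte Carlo loop (a Python sketch; the transition probabilities are invented):

    import numpy as np

    P = np.array([[0.0, 0.3, 0.1],    # P[i, j]: prob. that failed i takes down j
                  [0.0, 0.0, 0.5],
                  [0.2, 0.0, 0.0]])
    rng = np.random.default_rng(1)

    def propagate(initial, trials=100_000):
        total = 0
        for _ in range(trials):
            failed = {initial}
            frontier = [initial]
            while frontier:
                i = frontier.pop()
                for j in range(P.shape[1]):
                    if j not in failed and rng.random() < P[i, j]:
                        failed.add(j)
                        frontier.append(j)
            total += len(failed)
        return total / trials     # mean number of failed components

    print(propagate(0))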

191

An Integrated Reliability and Physics-based Risk Modeling Approach for Assessing Human Spaceflight Systems

Susie Go, Donovan Mathias (a), Chris Mattenberger (b), Scott Lawrence, and Ken Gee (a)

a) NASA Ames Research Center, Moffett Field, CA, USA, b) Science and Technology Corp., Moffett Field, CA, USA

This paper presents an integrated reliability and physics-based risk modeling approach for assessing human spaceflight systems. The approach is demonstrated using an example, end-to-end risk assessment of a generic crewed space transportation system during a reference mission to the International Space Station. The behavior of the system is modeled using analysis techniques from multiple disciplines in order to properly capture the dynamic time- and state-dependent consequences of failures encountered in different mission phases. We discuss how to combine traditional reliability analyses with Monte Carlo simulation methods and physics-based engineering models to produce loss-of-mission and loss-of-crew risk estimates supporting risk-based decision-making and requirement verification. This approach facilitates risk-informed design by providing more realistic representation of system failures and interactions; identifying key risk-driving sensitivities, dependencies, and assumptions; and tracking multiple figures of merit within a single, responsive assessment framework that can readily incorporate evolving design information throughout system development.

T13 External Events Hazard/PRA Modeling for Nuclear Power Plants I

1:30 PM O'ahu

Chair: Michael Saunders, ERIN Engineering and Research, Inc.

19

Apportioning Transient Combustible Fire Frequency via Areal Factors: More Complicated Than it May Seem

Raymond H.V. Gallucci

U.S. Nuclear Regulatory Commission (USNRC), MS O-10C15, Washington, D.C. 20555

Apportioning the frequency of transient combustible fires to vary within a physical analysis unit for a fire probabilistic risk assessment (PRA) has been discussed and attempted by various analysts to date with limited success. The technique presented here illustrates the complexity involved in such a calculation, considering the constraints on preserving transient fire ignition frequencies within the Fire PRA, which may lend insight into why this has proven a difficult process. While the approach offered can be used, the goal is more to provide “food for thought” that may lead to a more straightforward, even if approximate, technique that would reasonably represent the reality of the situation without being overly complex.

267

Characterizing Fire PRA Quantitative Models: An Evaluation of the Implications of Fire PRA Conservatisms

M.B. Saunders, E.T. Burns

ERIN Engineering and Research, Inc., Walnut Creek, California, USA

Conservative bias may be present in fire PRAs due to limitations in data or methodologies. An evaluation was performed to characterize the current situation with fire PRA models and the implications that the degree of conservative bias has for perceived risk. The principal areas of fire PRA data and modeling that may be subject to such biases were identified, and the impacts these biases have on the reported point estimate CDF and its contributors were quantified. The biases were assessed using a number of sensitivity studies wherein a set of modeling approaches or assumptions considered to be conservatively biased was varied from the NUREG/CR-6850 guidance. Three point estimates were developed using NUREG/CR-6850 guidance and by incrementally removing biases through crediting more realistic approaches supported in part by revised guidance or in-progress industry and NRC efforts. The conclusion from the evaluation is that reasonable (realistic) approaches to the assessment of the fire hazard will result in a reduced estimate of the fire risk, will likely change the primary risk insights, and could greatly influence the priority that is assigned to possible plant changes resulting from a re-characterization of the causes of risk-significant fires and fire zones.

454

Approach for Integration of Initiating Events into External Event Models

Nicholas Lovelace, Matt Johnson (a), and Michael Lloyd (b)

a) Hughes Associates, Lincoln, NE, USA, b) Risk Informed Solutions Consulting Services, Ball Ground, GA, USA

Probabilistic Risk Assessments (PRAs) are increasingly being used as a tool to support risk-informed applications. As a result, the scope and quality of these PRA models have expanded to account for the risk associated with external events, such as fires, seismic events, external floods, and high winds. Improved methods developed to create a PRA model for one of these external events can often be used to improve the process used to develop PRA models for other external events. This paper explores one such method. It describes an improvement in the method used to incorporate fire initiating events into a Fire PRA model. The improved method reduces the potential for mapping external event failures to multiple induced initiating events. Such mapping can have the undesirable effect of generating duplicate cutsets, subsuming issues, etc., during the quantification process. This is especially an issue when the quantification engine applies the "rare event" approximation and cutset basic event probabilities are relatively high (i.e., greater than 0.1). The improved method allows multiple external event induced initiating event mappings, while addressing the limitations of the quantification engine.
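
To make that limitation concrete, a small numeric illustration in Python: with cutset probabilities near 0.1, the rare-event approximation visibly overstates the top event probability relative to the min-cut upper bound (numbers invented):

    probs = [0.10, 0.12, 0.15]       # cutset probabilities (illustrative)

    rare_event = sum(probs)          # rare-event approximation: 0.37
    mcub = 1.0
    for p in probs:
        mcub *= 1.0 - p
    mcub = 1.0 - mcub                # min-cut upper bound: ~0.327

    print(rare_event, mcub)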

44

Development of Margin Assessment Methodology of Decay Heat Removal Function Against External Hazards − Project Overview and Preliminary Risk Assessment Against Snow

Hidemasa Yamano, Hiroyuki Nishino, Kenichi Kurisaka, and Takaaki Sakai (a), Takahiro Yamamoto, Yoshihiro Ishizuka, Nobuo Geshi, Ryuta Furukawa, and Futoshi Nanayama (b), and Takashi Takata (c)

a) Japan Atomic Energy Agency, Ibaraki, Japan, b) National Institute of Advanced Industrial Science and Technology, Ibaraki, Japan, c) Osaka University, Osaka, Japan

This paper describes mainly a preliminary risk assessment against snow, in addition to an overview of the project. The snow hazard indexes are the annual maximum snow depth and the annual maximum daily snowfall depth. Snow hazard curves for the two indexes were developed using 50 years of weather data at a typical sodium-cooled fast reactor site in Japan. Snow hazard categories were obtained from combinations of the daily snowfall depth (snowfall speed) and the snowfall duration, which can be calculated by dividing the snow depth by the snowfall speed. For each snow hazard category, accident sequences were evaluated by producing event trees consisting of several headings representing the loss of decay heat removal. Snow removal operation and manual operation of the air cooler dampers were introduced into the event trees as accident management measures. The snow risk assessment showed a core damage frequency of less than 10^-6 per reactor-year. The dominant snow hazard category was a combination of 1-2 m/day of snowfall speed and 0.75-1.0 day of snowfall duration. Sensitivity analyses identified important human actions, namely improving the speed of snow removal and the awareness of the necessity of snow removal.

590

Screening of Seismic-Induced Fires

James C. Lin, Donald J. Wakefield (a), and John Reddington (b)

a) ABSG Consulting Inc., Irvine, California, United States, b) First Energy Nuclear Operating Company, Akron, Ohio, United States

Seismic-induced fire is an issue that has not been addressed quantitatively in either nuclear plant seismic PRAs or fire PRAs, mainly because of the lack of data and of a method to estimate the likelihood of a seismic-induced fire. One approach to identifying seismic-induced fire scenarios and evaluating their occurrence frequencies is to perform a screening analysis based on both the likelihood and the impact of such scenarios. Based on the frequency of seismically induced fire initiation, there are two aspects to screening fire scenarios: (1) to assess the subset of seismic failure modes that may contribute to fires, the fragility for the structural failure modes, including support/anchorage failures, conservatively bounds the seismic failure potential; (2) the other factor that can be considered is the conditional probability of potential fire ignition. The seismic screening capacity can be determined by identifying an assumed fragility whose convolution with the seismic hazard exceedance curves yields a frequency of SSC failure, integrated over the entire seismic hazard acceleration range, below an acceptable screening value. For the remaining SSCs that survive the seismic capacity screening, additional screens based on fire consequences can be performed to reduce the number of scenarios to a minimal set for further detailed, quantitative evaluation.
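
The screening convolution can be sketched as follows in Python: an assumed lognormal fragility is integrated against the density of an assumed seismic hazard exceedance curve to yield an annual failure frequency for comparison with a screening value (all parameters invented):

    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    a = np.linspace(0.01, 3.0, 600)                     # peak ground accel., g
    hazard = 1e-4 * (a / 0.1) ** -2.0                   # assumed exceedance curve H(a)
    fragility = stats.lognorm.cdf(a, s=0.4, scale=1.0)  # Am = 1 g, beta = 0.4

    # Annual failure frequency: -integral of fragility(a) * dH/da over a.
    failure_freq = -trapezoid(fragility * np.gradient(hazard, a), a)
    print(failure_freq)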

T14 Cyber Security and Digital I&C

1:30 PM Waialua

Chair: Jose Emmanuel Ramirez-Marquez, Stevens Institute of Technology

28

Minimization of Vulnerability for a Network under Diverse Attacks

Jose Emmanuel Ramirez-Marquez (a) and Claudio Rocco (b)

a) School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ, USA, b) Facultad de Ingeniería, Universidad Central de Venezuela, Caracas, Venezuela

This paper describes an approach to minimize the vulnerability of a network in a defender-attacker context. To do so, vulnerability is defined in the context of a resilience-building framework and the corresponding mathematical formulations are provided. The solution to the network optimization model is based on a three-phase approach consisting of identifying Pareto optimal defense strategies with respect to cost and vulnerability for a known set of network attacks. These solutions are then utilized to identify the network defense strategy that offers the best protection against any of the attacks. Examples are used to illustrate the approach.
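
The first phase can be sketched minimally in Python, extracting the Pareto optimal defense strategies over (cost, vulnerability); the strategy data below are placeholders:

    strategies = [("s1", 10.0, 0.30), ("s2", 14.0, 0.22), ("s3", 9.0, 0.35),
                  ("s4", 16.0, 0.21), ("s5", 15.0, 0.31)]  # (name, cost, vuln.)

    def pareto_front(candidates):
        front = []
        for name, cost, vul in candidates:
            dominated = any(c <= cost and v <= vul and (c, v) != (cost, vul)
                            for _, c, v in candidates)
            if not dominated:
                front.append((name, cost, vul))
        return front

    print(pareto_front(strategies))   # s5 is dominated by s2 and drops out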

90

Applications of Bayesian Networks for Evaluating Nuclear I&C Systems

Jinsoo Shin, Rahman Khalil Ur (a), Hanseong Son (b), and Gyunyoung Heo (a)

a) Kyung Hee University, Yongin-si, Gyeonggi-do, Korea, b) Joongbu University, Geumsan-gun, Chungnam, Korea

The research presented in this article has been performed under the Korean research reactor project, an ongoing program to develop an optimized instrumentation & control (I&C) architecture and a cyber security assessment for research reactors. The optimization of instrumentation and control systems and cyber security issues have been emphasized due to business competitiveness (i.e., cost), and these issues became more significant with the introduction of digital I&C systems. In this article, we present research activities performed for I&C architecture analysis and cyber security assessment for a reactor protection system (RPS). In the I&C part, the architecture formulation, reliability feature analysis, cost estimation and cost-availability optimization of I&C architectures are presented. In the cyber security part, a cyber security risk evaluation model is developed by integrating an architecture model with an activity-quality evaluation model, and an analysis for cyber security evaluation of the I&C system is presented. A probabilistic Bayesian network approach has been applied for both the I&C and the cyber security analysis.

367

Portfolio Analysis of Layered Security Measures

Samrat Chatterjee, Stephen C. Hora, Heather Rosoff

CREATE, University of Southern California

Layered defenses are necessary for protecting the public from terrorist attacks. Designing a system of such defensive measures requires consideration of the interaction of these countermeasures. In this article, we present an analysis of a layered security system within the lower Manhattan area. It shows how portfolios of security measures can be evaluated through portfolio decision analysis. Consideration is given to the total benefits and costs of the system. Portfolio diagrams are created that help communicate alternatives among stakeholders who have differing views on the trade-offs between security and economic activity.

32

Cyber security: the Risk of Supply Chain Vulnerabilities in an Enterprise Firewall

Marshall A. Kuypers, Greg Heon, Philip Martin, Jack Smith, Katie Ward, and Elisabeth Paté-Cornell

Stanford University, Stanford, CA

Cyber security is a critical concern for many organizations. One defense approach is to install firewalls, but their effectiveness is uncertain and the cheapest model may not be the best. One may try to inspect them for vulnerabilities that may have been introduced in the product's supply chain. Most existing models that quantify cyber risk do not address that issue, and the risk that corrupted components could be successfully inserted into a secure network is not directly considered other than by characterizing the supplier. We present a probabilistic risk analysis model for a firewall, linking its parts to the different stages of production. We then evaluate the tradeoff between cost (system and inspection) and security by comparing two firewalls. We base our analysis on expert opinions, which we aggregate using the Delphi method. The model shows that in the illustrative case presented here, information about the effectiveness of a firewall is actually worth little to a risk-neutral decision maker. Therefore, inspecting firewalls for vulnerabilities may not be the most effective way to address the system's security. Gathering information by monitoring for warning signals of a cyber attack could be a beneficial alternative or complement.

489

Security Informed Safety Assessment of Industrial FPGA-Based Systems

Vyacheslav Kharchenko (a,b), Oleg Illiashenko (a), Eugene Brezhnev (a,b), Artem Boyarchuk (a), Vladimir Golovanevskiy (c)

a) National Aerospace University KhAI, Kharkiv, Ukraine, b) Centre for Safety Infrastructure Oriented Research and Analysis, Kharkiv, Ukraine, c) Western Australian School of Mines, Curtin University, Australia

The strong interconnection and interrelation of the safety and security properties of industrial systems based on programmable logic (field programmable gate arrays, FPGA) is reviewed. Information security, i.e. a system's ability to protect information and data from unauthorized access and modification, is a subordinate property with respect to the safety of many instrumentation and control systems (I&Cs), primarily NPP reactor trip systems. Such subordination may be taken into account by implementing a security informed safety (SIS) approach. The methodology for the safety assessment of FPGA-based systems, which are widely used in industrial critical systems, is described. It is based on the joint use of security analysis techniques (gap analysis and intrusion modes, effects and criticality analysis, IMECA) and their reflection in the final safety assessment picture of a two-channel system. This methodology forms the so-called security informed safety approach. Additional aspects of the safety assessment of diverse FPGA-based instrumentation and control systems for safety-critical applications are described.

T15 Reliability of Passive Systems I

1:30 PM Wai'anae

Chair: David Grabaskas, Argonne National Laboratory

47

Uncertainty of the Thermal-Hydraulic Model Analysis

Yu YU, Yingqiu HU, Junchi CAI, Shengfei WANG, Fenglei NIU

School of Nuclear Science and Engineering, Beijing Key Laboratory of Passive Nuclear Safety Technology, North China Electric Power University, Beijing, China

The passive containment cooling system is an innovation used in the AP1000 to improve the safety of the nuclear power plant. Through this system, the steam produced in the containment can be condensed by natural circulation, independently of outside power. However, since the system is a new design, uncertainty exists in the thermal-hydraulic (T-H) model, especially in the correlations for heat and mass transfer. In this paper, the effect of the uncertainties of such correlations on the output of the T-H model is analyzed. Since the uncertainty of the correlations is within 20% based on experiments, at different operating conditions (such as different air temperatures) we run the T-H model with the heat and mass transfer coefficients 5%, 10%, 15% and 20% higher and lower than their calculated values, and compare the results with those obtained using the exact correlation values. The amplitude of variation of the T-H model output and the safety margin of the system can then be obtained for the different operating conditions. The results illustrate that the uncertainties of the heat and mass transfer correlations may have an important effect on system reliability in some operating conditions and should be considered in the system reliability model.
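
The perturbation study amounts to a coefficient sweep; schematically, in Python, with a stand-in function in place of the T-H model that is run at each multiplier in the real analysis:

    def th_model(h_mult):
        # Stand-in for the T-H code: containment pressure as a decreasing
        # function of the heat/mass transfer coefficient multiplier.
        return 0.50 - 0.12 * (h_mult - 1.0)   # MPa, purely illustrative

    base = th_model(1.0)
    for m in (0.80, 0.85, 0.90, 0.95, 1.05, 1.10, 1.15, 1.20):
        print(f"multiplier {m:.2f}: pressure change {th_model(m) - base:+.4f} MPa")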

239

Sensitivity Analysis and Failure Damage Domain Identification of the Passive Containment Cooling System of an AP1000 Nuclear Reactor

Francesco Di Maio, Giancarlo Nicola (a), Yu Yu (b) and Enrico Zio (a,c)

a) Energy Department, Politecnico di Milano, Milano, Italy, b) North China Electric Power University, Beijing, China, c) Chair on System Science and Energetic Challenge, European Foundation for New Energy, Electricite de France, Ecole Centrale, Paris, and Supelec, Paris, France

The paper presents an application of a variance decomposition method for the sensitivity analysis of the thermal-hydraulic (TH) model of the Passive Containment Cooling System (PCCS) of an Advanced Pressurized Reactor (AP1000). The Loss Of Coolant Accident (LOCA) is considered the most representative accident for identifying the Failure Damage Domain (FDD) of the PCCS with respect to the individual and grouped inputs most affecting the final pressure at the end of the accidental transient.

374

The Development of a Demonstration Passive System Reliability Assessment

Matthew Bucknor, David Grabaskas, and Acacia Brunett

Nuclear Engineering Division, Argonne National Laboratory, Argonne, IL U.S.

In this paper, the details of the development of a demonstration problem to assess the reliability of a passive safety system are presented. An advanced small modular reactor (advSMR) design, which is a pool-type sodium fast reactor (SFR) coupled with a passive reactor cavity cooling system (RCCS), is described. The RELAP5-3D models of the advSMR and RCCS that will be used to simulate a long-term station blackout (SBO) accident scenario are presented. Proposed benchmarking techniques for both the reactor and the RCCS are discussed, which includes utilization of experimental results from the Natural convection Shutdown heat removal Test Facility (NSTF) at the Argonne National Laboratory. Details of how mechanistic methods, specifically the Reliability Method for Passive Systems (RMPS) approach, will be utilized to determine passive system reliability are presented. The results of this mechanistic analysis will ultimately be compared to results from dynamic methods in future work. This work is part of an ongoing project at Argonne to demonstrate methodologies for assessing passive system reliability.

T16 Human Reliability Analysis II

1:30 PM Ewa

Chair: Jeffrey C. Joe, Idaho National Laboratory

153

Visual Monitoring Path Forecasting for Digital Human-Computer Interface in Nuclear Power Plant and its Application

Hu Hong, Zhang Li (a), Jiang Jian-Jun (b), Yi Can-Nan (a), Dai Li-Cao (b), Chen Qin-Qin (a)

a) Ergonomics and Safety Management Institute, Hunan Institute of Technology, Hengyang, China, b) Human Factors Institute, University of South China, Hengyang, China

When monitoring parameter information on the digital human-computer interface (DHCI) of a nuclear power plant (NPP), operators sometimes cannot judge the next object to monitor, which can lead to monitoring delay or transfer error. For this purpose, a Markov process based forecasting path dynamic plan (FPDP) method is proposed, comprising a forecasting path model, a forecasting path planning algorithm and a method for calculating the transfer path success probability. The monitoring transfer behavior of operators during a Steam Generator Tube Rupture (SGTR) accident that occurs abruptly is then analyzed with the proposed method. Taking the DHCI as the source node of the monitoring task at time t, the transfer path to the next monitoring object is obtained, improving monitoring efficiency and minimizing the risk of monitoring error. This also contributes to analyzing the driving mechanism of operators' monitoring activities, to simulator training for monitoring behavior, and to optimizing the digital man-machine interface.
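
A toy Markov-chain sketch of the forecasting idea, with an invented transition matrix: the next monitoring object is the most probable transition from the current one, and a candidate path is scored by the product of its step probabilities:

    import numpy as np

    objects = ["SG level", "RCS pressure", "PZR level", "Radiation"]
    P = np.array([[0.0, 0.5, 0.3, 0.2],     # rows: current object
                  [0.4, 0.0, 0.4, 0.2],     # columns: next object
                  [0.3, 0.5, 0.0, 0.2],
                  [0.3, 0.4, 0.3, 0.0]])

    def forecast_next(current):
        return objects[int(np.argmax(P[objects.index(current)]))]

    def path_success_probability(path):
        idx = [objects.index(o) for o in path]
        return float(np.prod([P[i, j] for i, j in zip(idx, idx[1:])]))

    print(forecast_next("SG level"))
    print(path_success_probability(["SG level", "RCS pressure", "PZR level"]))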

13

Individual Differences in Human Reliability Analysis

Jeffrey C. Joe and Ronald L. Boring

Idaho National Laboratory, Idaho Falls, ID, USA

While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity is one factor that can produce differences in performance according to individual operators’ understanding and decision-making, but psychological research has shown that individual differences in behavior can also result from a number of known antecedents (i.e., attributable causes) that are also systematically measurable (i.e., not random). This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

281

Cultural Profiles of Non-MCR Operators Working in Domestic NPPs

Jinkyun Park, and Wondea Jung

Korea Atomic Energy Research Institute, Daejeon, Rep. of Korea

Traditionally, the safety of NPPs has been evaluated by the PSA (Probabilistic Safety Assessment) or PRA (Probabilistic Risk Assessment) technique, which quantifies the integrated safety of a whole system. In this regard, HRA (Human Reliability Analysis) plays an important role because it quantifies the possibility of HFEs (Human Failure Events) affecting the safety of NPPs. Therefore, the provision of sufficient data that help in understanding the nature of HFEs under a given accident sequence is indispensable for producing more realistic HRA results. One of the technical obstacles here is the cultural effect on the performance of human operators: it is questionable whether an HRA practitioner can use HRA data collected from another country or organization without sufficiently understanding the nature of the cultural differences. In this study, as a practical approach to unraveling this question, the cultural profiles of non-MCR operators are investigated in detail with respect to their operational experience.

219

Improving Scenario Analysis for HRA

Claire Taylor

OECD Halden Reactor Project, Institute for Energy Technology (IFE), Halden, Norway

The International Human Reliability Analysis (HRA) Empirical Study [1, 2, 3, 4] concluded that variability in predictions of human error probabilities is in part due to deficiencies in the qualitative scenario analysis of some HRA methods. The study showed that it can be difficult for HRA analysts to gain a good understanding of how a scenario is likely to unfold, what challenges it may present to operators, how operators are likely to respond, and where performance problems may occur. Although some HRA methods include guidance on qualitative scenario analysis, most methods state only that it should precede quantification, without specifying methods for it or the depth to which the scenario analysis should go. A study is underway at the Halden Reactor Project in Norway to investigate scenario analysis and why it is considered difficult. The study focuses on the experience of HRA analysts in their everyday work, with the goal of understanding the challenges they face. The aim of the study is to develop a practical guidance handbook for use when performing scenario analysis. The results will include good practices implemented by analysts and further recommendations for improvement. This paper describes the plan for this study, the findings to date, and how these findings will inform a further proposed study on the development of a database to support HRA.

308

Can we Quantify Human Reliability in Level 2 PSA?

Lavinia Raganelli (a,b), Barry Kirwan (c)

a) Imperial College, London, United Kingdom, b) Corporate Risk Associate, London, United Kingdom, c) Eurocontrol, Brétigny-sur-Orge, France

In current safety practice in the nuclear power domain, Level Two PSA has become a regulatory requirement, and it has received greater priority since the Fukushima-Daiichi accident in Japan in March 2011. However, there are many challenges in performing a Level Two PSA, most of them related to uncertainties in the plant state in such accident scenarios. Even assuming that it is possible to know the exact extent of damage in a selected scenario, a key question remains: "What level of detail is required for describing the human response?" In reality, damage to equipment and the exact plant status are not predictable; therefore Severe Accident Management Guidelines (SAMGs) and Emergency Operating Procedures (EOPs) offer guidelines for operator behaviour rather than specifying the procedural details of actions. In this paper the appropriate level of detail for the analysis of operator actions in Level Two PSA models is discussed, as are the difficulties in conducting Human Reliability Assessment (HRA) for vaguely defined actions. It is found that most current HRA approaches for Level 2 PSA rely heavily on expert judgment, but is such expertise valid? This paper explores potential ways forward for HRA in Level 2 PSA.

T17 Integrated Deterministic and Probabilistic Safety Assessment I

1:30 PM Kona

Chair: Robert Youngblood, Idaho National Laboratory

50

A Perspective on the Use of Risk Informed Safety Margin Characterization to Support Nuclear Power Plant Long Term Operation

Stephen M. Hess

Electric Power Research Institute, West Chester, PA, USA

In this paper we describe application of the Risk Informed Safety Margin Characterization (RISMC) approach to enhancements of nuclear power plants that are important to decisions associated with their long term operation. The RISMC approach was used to assess changes in safety margins that would occur due to hypothetical extended power uprates for a PWR loss of feedwater event and a BWR station blackout. For each of these applications, the key parameters that impact core damage probability were identified and representative probability distributions were constructed to represent the associated uncertainties. The distributions were sampled using a Latin Hypercube Sampling technique to generate sets of sample cases to simulate plant response using the EPRI MAAP accident analysis code. In each scenario, changes to the thermal-hydraulic safety margins which would occur due to the uprated power conditions were compared to those for the plant operating at its current nominal full power. Additionally, the impacts on conditional core damage probability and core damage frequency were assessed. As a result of these pilot studies, it was concluded that the RISMC framework can provide a potentially powerful approach to obtain technically robust assessments of safety margins to support critical plant operational and investment decisions.
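
The sampling step described here can be sketched as follows, with assumed parameter names and distributions rather than the actual MAAP inputs used in the study:

    import numpy as np
    from scipy.stats import qmc, norm

    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n=200)                         # uniform LHS in [0, 1)^2

    # Map the unit hypercube to the parameter distributions (illustrative):
    battery_life_h = qmc.scale(u[:, [0]], [4.0], [8.0]).ravel()   # uniform 4-8 h
    relief_setpoint = norm.ppf(u[:, 1], loc=7.0, scale=0.2)       # MPa

    # Each row defines one sample case for the accident analysis code.
    cases = np.column_stack([battery_life_h, relief_setpoint])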

277

Application of Gaussian Process Modeling to Analysis of Functional Unreliability

R. W. Youngblood

Idaho National Laboratory, Idaho Falls, ID, USA

Gaussian Process Models (GPMs) have been used widely in many ways [1]. The present application uses a GPM for emulation of a system simulation code. Such an emulator can be applied in several distinct ways, discussed below. Most applications illustrated in this paper have precedents in the literature; the present paper is an application of GPM technology to analysis of the functional unreliability of a passive containment cooling system, which was previously analyzed [2] using an artificial neural network (ANN), and later [3, 4] by a method called “Alternating Conditional Expectations” (ACE). The present exercise enables a comparison of both the processes and the results. In this paper, (1) the original quantification of functional unreliability using ANN [2], and the later work using ACE [3], is reprised using GPM; (2) additional information provided by the GPM about uncertainty in the limit surface, generally unavailable in other representations, is discussed briefly; (3) a simple forensic exercise is performed, analogous to the inverse problem of code calibration, but with an accident management spin: given an observation about containment pressure, what can we say about the system variables?
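
A minimal GPM emulator sketch, using scikit-learn in place of whatever GP implementation the paper used, with an analytic stand-in for the simulation code; the predictive standard deviation is what supplies the limit surface uncertainty information mentioned above:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(3)
    X = rng.uniform(0, 1, (60, 2))            # sampled code inputs
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2    # stand-in code output

    gpm = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                   normalize_y=True).fit(X, y)

    X_new = rng.uniform(0, 1, (5, 2))
    mean, std = gpm.predict(X_new, return_std=True)   # emulator + uncertainty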

330

An Approach to Grouping and Classification of Scenarios in Integrated Deterministic-Probabilistic Safety Analysis

Sergey Galushin and Pavel Kudinov

Royal Institute of Technology (KTH), Stockholm, Sweden

Integrated Deterministic Probabilistic Safety Assessment (IDPSA) methodologies aim to achieve completeness and consistency of the analysis. However, for the purpose of risk-informed decision making it is often insufficient to merely calculate a quantitative value for the risk and its associated uncertainties. IDPSA combines a deterministic model of a nuclear power plant with a method for exploring the uncertainty space, and a huge amount of data is usually generated in the process of such exploration. It is very difficult to process such data "manually" and extract from it information that a decision maker can use for risk-informed characterization and, eventually, improvement of system safety and performance. Such understanding requires an approach to the interpretation of the data, the grouping of similar scenario evolutions, and the classification of the principal characteristics of the events that contribute to the risk. In this work we develop an approach for the classification and characterization of failure domains (domains of the uncertain parameters where critical system parameters exceed safety thresholds). The method is based on scenario grouping and clustering, with decision trees applied to characterize the influence of the timing and order of events.
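
The characterization step can be sketched compactly: label scenario outcomes, then fit a shallow decision tree so a failure domain can be read off as rules on event timing and order (synthetic data, not the authors' plant model):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(5)
    t_valve = rng.uniform(0, 60, 3000)     # timing of one recovery event, min
    t_pump = rng.uniform(0, 60, 3000)      # timing of another event, min
    failed = (t_valve > 30) & (t_pump > t_valve)   # synthetic failure domain

    tree = DecisionTreeClassifier(max_depth=3).fit(
        np.column_stack([t_valve, t_pump]), failed)
    print(export_text(tree, feature_names=["t_valve", "t_pump"]))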

382

Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks

Allan Benjamin (a), Homayoon Dezfuli (b), Chris Everett (c)

a) Independent Consultant, Albuquerque, NM, USA, b) Office of Safety & Mission Assurance, NASA Headquarters, Washington, DC, USA, c) Information Systems Laboratories, Inc., Rockville, MD, USA

Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low probability potential for high consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.

435

Modeling Operator Actions in Integrated Deterministic-Probabilistic Safety Assessment

Vinh N. Dang and Durga Rao Karanki

Paul Scherrer Institute, Villigen PSI, Switzerland

The Dynamic Event Tree (DET) framework is one of the main approaches for coupling simulations of physical models and stochastic processes. In applications related to nuclear power plant safety, the physical models include plant transient models, while some of the principal stochastic events of interest relate to equipment failures and operator responses. The number and/or timing of these failures affect accident sequence outcomes; conversely, the plant state and its dynamic evolution may affect the distributions of the stochastic events. The capability to explicitly model these dynamic interactions motivates the application of DETs. In DET modeling, the operator response time is discretized. This paper presents an analysis of Medium Break Loss of Coolant Accident (MLOCA) scenarios. The results show the impact that alternative discretizations of the operator response times for the actions in this scenario have on the estimated core damage frequency. It concludes with recommendations on a discretization approach that balances the representation of the sequence dynamics against excessive conservatism in the risk estimates.
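
The discretization step can be sketched as follows, assuming for illustration a lognormal operator response time split into DET branches, each branch carrying the probability mass of its time window:

    import numpy as np
    from scipy import stats

    resp = stats.lognorm(s=0.5, scale=15.0)            # response time, minutes
    edges = np.array([0.0, 10.0, 20.0, 30.0, np.inf])  # branch boundaries

    branch_prob = np.diff(resp.cdf(edges))      # probability of each branch
    branch_time = [5.0, 15.0, 25.0, 40.0]       # representative action times

    # Finer edges better represent the sequence dynamics at the cost of
    # more simulations; coarse edges risk conservatism in the risk estimate.
    for t, p in zip(branch_time, branch_prob):
        print(f"branch at {t} min with probability {p:.3f}")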

T21 Reliability Analysis and Risk Assessment Methods IV

3:30 PM Honolulu

Chair: Royce Francis, George Washington University

220

Reliability Analysis and Experimental Reliability Parameter Determination of Nuclear Reactor Equipments

Gheorghe Vieru

AREN, Bucharest, ROMANIA

This paper describes the experimental tests performed to determine reliability parameters for certain equipment manufactured at INR Pitesti for NPP Cernavoda. The tests were defined by Technical Specifications and test procedures, and the laboratory tests were performed under running and environmental conditions corresponding to the real operating conditions at the NPP. Special attention was paid to accelerated (intensive) tests, in which the applied stress level is above the level established by the reference conditions stated by the design; accelerating the operating conditions in the sense of compressing time may be considered for a product whose reliability parameters depend mainly on the number of operating cycles. Reliability improvement measures taken to ensure that the equipment would operate within the design specification are also presented, together with the test results and conclusions. This paper is a partial account of the results of a Scientific Research Contract concluded with IAEA Vienna, for which the author was the Chief Scientific Investigator (CSI).

236

Multi Units Probabilistic Safety Assessment: Methodological elements suggested by EDF R&D

Tu Duong Le Duy, Dominique Vasseur, and Emmanuel Serdet

Industrial Risk Management Department, EDF R&D

Most nuclear generation sites worldwide have more than one reactor in operation. This should be taken into account when assessing the risk related to these installations, in particular when assessing the consequences in terms of impacts on the health of the population and on the environment. Generally speaking, to date mainly models relating to a single unit have been developed by operators. The purpose of this paper is to present possible solutions or methodological options, suggested by the EDF R&D division, for switching from a risk assessment for the unit to a risk assessment for the site. The case of a site with two units is addressed here. A review of practices and standards showed that the specific aim of a PSA at site level is to deal with the dependencies existing between the units on that site. The risk calculation for the site is therefore proposed for six configurations resulting from the combination of two types of scenarios and three types of systems, which are defined. The treatment of CCF events and the adaptation of the assessment of Human Error Probabilities to the case of multiple units are also addressed in this paper. The proposed approach is illustrated using a simplified case inspired by the EDF 900MWe units level 1 PSA model.

369

Automated Selection of Number of Clusters for Determining Proliferation Resistance Measures

Daniya Zamalieva (a), Zachary Jankovsky (b), Alper Yilmaz (a), Tunc Aldemir (b), and Richard Denning (b)

a) Photogrammetric Computer Vision Laboratory, The Ohio State University, Columbus, OH, USA, b) Department of Mechanical and Aerospace Engineering, The Ohio State University, Columbus, OH, USA

Analyzing possible proliferation scenarios can provide insights into the most vulnerable stages of a nuclear system. With a large number of scenarios, manual examination becomes infeasible. One possibility for reducing the complexity of the data and discovering possible trends is automated grouping of scenarios. The k-means clustering algorithm is widely used to group large amounts of data. This algorithm is very efficient; however, it requires the number of clusters to be known a priori. In this paper, we aim to overcome this issue by investigating several goodness-of-fit measures. Namely, using a set of proliferation scenarios modeled by PRCALC, we implement and compare the Bayesian Information Criterion, the Akaike Information Criterion, the Cluster Cohesion Coefficient and the Anderson-Darling Normality Test to estimate the optimal number of clusters k for the k-means clustering algorithm. Experiments show that the examined measures can provide insights into the structure of the data.
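
A hand-rolled BIC score for k-means under a spherical Gaussian assumption gives the flavor of such goodness-of-fit measures (an illustration only; the paper's implementations and the PRCALC scenario data are not reproduced here):

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_bic(X, labels, centers):
        n, d = X.shape
        k = centers.shape[0]
        sse = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
        sigma2 = sse / (n - k)                 # pooled spherical variance
        log_lik = -0.5 * n * d * np.log(2 * np.pi * sigma2) - 0.5 * sse / sigma2
        return -2 * log_lik + (k * d + 1) * np.log(n)

    X = np.random.rand(500, 4)                 # placeholder scenario features
    scores = {}
    for k in range(2, 11):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        scores[k] = kmeans_bic(X, km.labels_, km.cluster_centers_)
    best_k = min(scores, key=scores.get)       # smallest BIC wins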

266

Analysis Of The Main Challenges With The Current Risk Model For Collisions Between Ships and Offshore Installations on The Norwegian Continental Shelf

Martin Hassel, Ingrid Bouwer Utne and Jan Erik Vinnem

Department of Marine Technology, NTNU, Trondheim, Norway

The last decade has seen the implementation of numerous navigation aids, safety barriers and various tools to ensure the safe navigation and operation of ships. Still, a significant number of ship collisions and groundings occur every year. The COLLIDE risk model became the industry standard about 20 years ago for calculating the risk of ship collisions with offshore installations on the Norwegian Continental Shelf (NCS). The risk model is currently being revised in order to take account of the new technology that has entered the arena. Technological advances have significantly changed the way seafarers operate and navigate: during the last decade, navigators have had to learn new skills and adapt to a new working environment, and this affects safety in many ways. Human and organizational factors (HOFs) have a large impact on complex systems, such as ship operations, and should be given equal and appropriate attention when risk is investigated and assessed. Too many risk models apply research on HOFs of questionable quality, using parameter values that may no longer be valid. This paper presents challenges with the current industry-standard COLLIDE methodology and highlights areas where improvements and alternatives to the current model are needed. Relevant issues regarding future research in this area are also discussed.

215

Information-based Reliability Weighting for Failure Mode Prioritization in Photovoltaic (PV) Module Design

Royce Francis (a) and Alessandra Colli (b)

a) The George Washington University, Washington, DC, USA, b) Brookhaven National Laboratory, Upton, NY, USA

Electric utilities and grid operators face major challenges from an accelerated evolution of grids towards an extensive integration of variable renewable energy sources, such as solar photovoltaic (PV). An opportunity exists to incorporate probabilistic risk analysis into the design and operation of photovoltaic systems to deal with rapidly evolving design and configuration techniques. This could potentially achieve greater design reliability through prediction and remediation of failure modes during design and testing project phases, before project implementation or construction. However, because these systems are novel, detailed component level reliability models are difficult to characterize. In this paper, an approach to the prioritization of PV failure modes extending Colli [1], [2] using a Shannon information-weighted reliability approach is demonstrated. We call this information-weight the “surprise index.” The surprise index approach facilitates the prioritization of failure modes by weighting the consequence of their failures by the information in the failure generation model. The surprise index may potentially aid in systematic evaluation of deep uncertainties in PV module design, as failure modes that might be overlooked using traditional PRA may be addressed using the information-based approach.
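
One plausible reading of the surprise index (an assumption on our part, not the authors' exact formulation) weights each failure mode's consequence by its Shannon information content, -log2 p, so that low-probability, easily overlooked modes are boosted in the ranking:

    import math

    failure_modes = {            # mode: (probability, consequence score)
        "cell crack": (0.05, 2.0),
        "bypass diode short": (0.01, 5.0),
        "connector corrosion": (0.20, 1.0),
    }

    priority = {m: -math.log2(p) * c for m, (p, c) in failure_modes.items()}
    for mode, score in sorted(priority.items(), key=lambda kv: -kv[1]):
        print(mode, round(score, 2))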

T22 Dependent Failure Modeling II

3:30 PM Kahuku

Chair: Ashraf El-Shanawany, Corporate Risk Associates Limited

311

UK Experience of Developing Alpha Factors for Use in Nuclear PRA Models

Garth Rowlands (a) and Ashraf El-Shanawany (a,b)

a) Corporate Risk Associates, Warrington, United Kingdom, b) Imperial College London, London, United Kingdom

Modelling Common Cause Failures (CCFs) is an essential part of Probabilistic Risk Assessment (PRA). In the UK, the normal approach for the Advanced Gas-cooled Reactors (AGRs) is to use the beta factor approach with these parameters determined using the Unified Partial Method (UPM). However, there has been recent impetus to consider the feasibility of using a more detailed CCF approach for the AGRs such as the Alpha Factor method. The AGRs share some component types with water cooled reactors. For these it is possible to obtain alpha factors from international databases (such as the US Nuclear Regulatory Commission (NRC) CCF Parameter Estimates and the International Common-Cause Data Exchange (ICDE)). However, AGRs contain many unique components which are not listed in these databases. An additional difficulty is the small AGR fleet size and consequently a potential lack of operating experience. This paper presents the experience to date in deriving alpha factors for AGR components, and presents a Bayesian method which can be used in cases where comparative prior data is sparse. Insights and experiences from the process are discussed.

172

On the Use of Qualitative Methods for Common Cause Analysis: Zonal and Common Mode Analysis

Cristina Johansson (a,b), Johan Tengroth, and Jan Hjelmstedt (a)

a) Saab Aeronautics, Linköping, Sweden, b) Linköping University, Department of Machine Design, Linköping, Sweden

While system safety analyses are mostly conducted on the basis of system schematics, this approach does not sufficiently cover the implications of the physical installation of the hardware, especially when the space inside an aircraft is very limited. Additional analyses focused on common causes are necessary, and some of the methods used are common practice. This paper presents an approach that combines the techniques for considering the interactions of logically unrelated systems in the same physical part (zone) of an aircraft with those able to identify failures that occur when multiple instances of a redundant system fail almost simultaneously, generally due to a single cause. Zonal Safety Analysis (ZSA) is employed for identifying failures due to location in the same zone, while Common Mode Analysis (CMA) is used to verify the redundancy/independence of failures assumed in other analyses such as FTA, or independently of other analyses. First an overview of the methodology used is presented. Some of the findings from both ZSA and CMA are presented, as well as lessons learned. Reflections on the implementation of these qualitative methods are also provided in the paper with regard to advantages, limitations and difficulties.

302

A Computer Program for Evaluating the Alpha Factor Model Parameters Using the Bayesian Operation

Baehyeuk Kwon, Moosung Jae (a), and Dong Wook Jerng (b)

a) Department of Nuclear Engineering, Hanyang University, Seoul, Korea, b) Department of Energy Systems Engineering, Chung-Ang University, Dongjak-Gu, Seoul, Korea

The assessment of common cause failure (CCF) is necessary for reducing the uncertainty during the process of probabilistic safety assessment. A basic unavailability assessment method is an approach for the quantitative analysis of CCF modeling using Bayesian probability, in which the estimation of parameters is made more accurate by combining failure information from the system, component and cause levels. This study describes a CCF evaluation program which has been developed for assessing the α-factor common cause failure parameters. Examples are presented to demonstrate the calculation process, and the necessary databases are presented. As a result, the posterior distributions for the α-factor model parameters are obtained using conjugate-family distributions as well as general distributions for numerical estimation. Because CCF is one of the significant contributors to both core damage frequency and large early release frequency, appropriate evaluation of the relevant parameters is essential, even though CCF data are rare. In a previous study, the Multiple Greek Letter (MGL) model had been used for modeling the common cause failures in the OPR 1000 reactors. In future modeling for these reactors, the α-factor approach might be employed for simulating the common cause failures, quantified using the computer program developed here in C#. The main operation for quantifying the α-factor parameters is the Bayesian update, which combines the prior distribution and the likelihood function to produce the posterior distribution. It is expected that this program might contribute to enhancing the quality of probabilistic safety assessment and to reducing common cause failure uncertainty.
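
As a conjugate-family illustration of the Bayesian operation described (a minimal sketch, not the authors' C# program): if the α-factor vector is given a Dirichlet prior and the observed CCF event counts are treated as multinomial, the posterior is again Dirichlet and the posterior means follow directly; all parameter values below are hypothetical.

    # Minimal sketch: Dirichlet prior on the alpha-factor vector, multinomial
    # likelihood for the observed CCF event counts, Dirichlet posterior.
    prior = [8.0, 1.5, 0.5]   # assumed Dirichlet parameters for (alpha_1, alpha_2, alpha_3)
    counts = [40, 3, 1]       # assumed events failing exactly 1, 2 and 3 components

    posterior = [a + n for a, n in zip(prior, counts)]
    total = sum(posterior)
    print([round(a / total, 3) for a in posterior])   # posterior mean alpha factors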

134

A General Cause Based Methodology for Analysis of Common Cause and Dependent Failures in System Risk and Reliability Assessments

Andrew O’Connor, Ali Mosleh

Center for Risk and Reliability, University of Maryland, College Park, United States

Traditional Probabilistic Risk Assessments (PRAs) model dependency through deterministic relationships in fault trees and event trees, or through empirical ratio common cause failure (CCF) models. However, popular CCF models do not recognize system-specific defenses against dependencies and are restricted to identical components in redundant configuration. While this has allowed prediction of system reliability with little or no data, it is a limiting factor in many applications, such as modeling the characteristics of a system design or incorporating the characteristics of failure when assessing a failure's risk significance or degraded performance events (known as an event assessment). This paper proposes the General Dependency Model (GDM), which uses a Bayesian network to model the probabilistic dependencies between components. This is done through the introduction of three parameters for each failure cause, which relate to physical attributes of the system being modelled: component fragility, cause condition probability, and coupling factor strength. Finally, this paper demonstrates the development and use of the GDM for new system PSA applications and event assessments of existing systems. Examples of the quantification of the GDM model in the presence of uncertain evidence are provided.
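
Reading the three cause-level parameters at face value, one hedged way to sketch a cause-based dependent failure probability (an illustration only, not the authors' GDM quantification) is: the cause condition probability gives the chance the cause is present, the coupling factor strength gives the chance it acts on multiple components at once, and the fragility gives each exposed component's failure probability.

    # Hypothetical sketch for one failure cause acting on a two-component group.
    p_cause = 0.02      # cause condition probability: cause present during mission
    coupling = 0.8      # coupling factor strength: cause reaches both components
    fragility = 0.5     # component fragility: P(fail | exposed to cause)

    # Both components fail from this cause if it occurs, couples to both, and each
    # exposed component fails (assumed conditionally independent given exposure).
    p_ccf = p_cause * coupling * fragility ** 2
    print(f"P(both fail from this cause) = {p_ccf:.2e}")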

T23 Risk and Hazard Analyses II

3:30 PM O'ahu

Chair: James Knudsen, Idaho National Laboratory

178

Copulas applied to Flight Data Analysis

Lukas Höhndorf, Javensius Sembiring, and Florian Holzapfel

Institute of Flight System Dynamics, Technische Universität München, Munich, Germany

During flight, civil aircraft record data by a device called the Quick Access Recorder. These data can be used to evaluate the current safety level of an airline. Available Flight Data Monitoring software does not use the full potential of the data. Within this paper, we describe an advanced statistical method of data analysis that makes it possible to quantify which factors influence incidents, and to what extent. Furthermore, the proposed method allows calculating the incident probability. To achieve these goals, we use the mathematical concept of copulas, a mathematical structure for the description of dependencies. Copulas can be estimated from data, in this case flight data, and have advantages over alternative statistical tools for describing dependencies, such as correlation coefficients. Although the main mathematical theorem in the area of copulas dates from 1959, finding new algorithms for the estimation of copulas remains an active research topic within statistics. After preparing the data suitably, the copula can be estimated and used for interpretation, for calculating incident probabilities and for sampling virtual flights.
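
A minimal sketch of the general workflow (hypothetical data; a Gaussian copula stands in here for whatever family the authors estimate): transform each flight parameter to pseudo-observations by ranking, fit the copula dependence parameter, then sample "virtual flights".

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical flight data: approach speed and touchdown distance for 500 flights.
    speed = rng.normal(140, 5, 500)
    distance = 300 + 4 * speed + rng.normal(0, 30, 500)

    # Pseudo-observations (empirical marginal CDF values).
    u = stats.rankdata(speed) / (len(speed) + 1)
    v = stats.rankdata(distance) / (len(distance) + 1)

    # Fit a Gaussian copula: correlation of the normal scores.
    rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

    # Sample virtual flights in copula (uniform) space.
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000)
    virtual_u, virtual_v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
    print(f"fitted Gaussian copula rho = {rho:.2f}")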

193

Application of Severity Indices Developed for Adiabatic Compression Testing

Barry Newton (a) and Theodore Steinberg (b)

a) Wendell Hull and Associates, Inc, Las Cruces, NM USA, b) Queensland Univ. of Technology, Brisbane, Qld, AU

Gaseous fluid impact (adiabatic compression) testing is widely used for ranking and qualifying a nonmetallic material for its sensitivity to ignition in high-pressure gaseous oxygen. This test method is also used for qualifying flow control equipment (valves, regulators, flexible hoses, etc.) for use in high-pressure gaseous oxygen. As with many ignition tests, gaseous fluid impact ignition testing is inherently probabilistic and subject to variations in results. One common gaseous fluid impact test is ASTM G74. When originally published in 1982, this standard considered a "passing" result to be 0 ignitions of a material out of 20 samples tested. A flow control component was considered to have passed the test by surviving 20 successive pressure surges without signs of ignition. Researchers familiar with this test method have recognized that statistical problems exist with the prescribed methodology and have reported that an analysis of the cumulative binomial probabilities for ASTM G74 produces a 36% confidence for a 20-cycle "passing" result. As a result, the lack of reliability of the historical ASTM G74 test logic could be potentially misleading or even catastrophic when results are used to qualify materials or components for oxygen service. This paper presents a summary of research performed to specify the severity of the ASTM G74 test so that the statistical variations can be incorporated into the test methodology. The severity of the test, as compared to service conditions, was considered crucial to the specification of a suitable approach for passing a material or qualifying a component. Logically, the more severe the test approach, as compared to service conditions, the more confidence that can be placed in a passing result. This research demonstrated that the gaseous fluid impact test commonly conducted is more severe than the service conditions, but not by a large margin. Therefore, the statistical aspects of the test, based on a suitable understanding of the actual severity, are shown to be crucial to an understanding and correct application of the data obtained.
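
The quoted figure is consistent with a cumulative binomial calculation: under an assumed true per-cycle ignition probability of 5%, the chance of observing 0 ignitions in 20 trials, i.e. a "passing" result, is 0.95^20 ≈ 0.36. A minimal sketch (the 5% threshold is an assumption for illustration):

    from math import comb

    def p_pass(p_ignite, n=20, max_ignitions=0):
        # Cumulative binomial: probability of at most max_ignitions in n cycles.
        return sum(comb(n, k) * p_ignite**k * (1 - p_ignite)**(n - k)
                   for k in range(max_ignitions + 1))

    # A material igniting 5% of the time still "passes" (0/20) about 36% of the time.
    print(f"P(pass | p=0.05) = {p_pass(0.05):.2f}")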

459

Method for Analysing Extreme Events

J. Sörman, O. Bäckström (a), Luo Yang (a), I. Kuzmina, A. Lyubarskiy (b) and M. El-Shanawany (c)

a) Lloyd’s Register Consulting – Energy AB, Stockholm, Sweden, b) IAEA, Vienna, Austria, c) Lloyd's Register, Energy, London, UK

PSA models for Nuclear Power Plants (NPPs) of today include comprehensive and detailed information related to plant safety, both quantitative and qualitative; the latter includes e.g. safety functions and system dependencies at nuclear power plants. The detailed information in the PSA model can also be used to analyse extreme events and their impact on safety functions, and the ability of the nuclear power plant to withstand extreme events with regard to core integrity. Lloyd's Register Consulting, in cooperation with the IAEA, has further developed the method described in [1], utilizing an internal initiating events PSA model for assessing the impact of extreme events. A number of extreme events (including credible combinations) can be postulated, for example seismic events, water levels, ice storms, etc.; then the associated initiating events, as well as the structures, components and buildings and their susceptibility to the extreme events, are defined. The extreme events analysis is linked to the PSA model directly to assure that the whole model is included in the evaluation of the impact of the event, or combinations of events. The outcomes of the analysis are to identify sensitive scenarios for extreme events, to analyse simultaneous extreme events, and to demonstrate the robustness of the plant design, for individual components and for buildings.

477

Modeling Common Cause Failures of Thrusters on ISS Visiting Vehicles

Megan Haught and Gary Duncan

ARES Technical Services, Houston, TX, USA

This paper discusses the methodology used to model common cause failures of thrusters on the International Space Station (ISS) Visiting Vehicles. The ISS Visiting Vehicles each have as many as 32 thrusters, whose redundancy and similar design make them susceptible to common cause failures. The Global Alpha Model (as described in NUREG/CR-5485) can be used to represent the system common cause contribution, but NUREG/CR-5496 supplies global alpha parameters for groups only up to size six. Because of the large number of redundant thrusters on each vehicle, regression is used to determine parameter values for groups of size larger than six. An additional challenge is that Visiting Vehicle thruster failures must occur in specific combinations in order to fail the propulsion system; not all failure groups of a certain size are critical.
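
One way to sketch the extrapolation step described above (all parameter values below are invented; NUREG/CR-5496 tabulates global alpha parameters only up to group size six) is to fit a trend to the tabulated values against group size and evaluate it for the larger thruster groups.

    import numpy as np

    # Hypothetical global alpha parameter values for common cause group sizes 2..6.
    sizes = np.array([2, 3, 4, 5, 6])
    alphas = np.array([0.045, 0.033, 0.026, 0.021, 0.018])

    # Fit a log-linear trend and extrapolate to a 32-thruster group.
    slope, intercept = np.polyfit(sizes, np.log(alphas), 1)
    alpha_32 = np.exp(intercept + slope * 32)
    print(f"extrapolated alpha for group size 32 = {alpha_32:.4f}")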

T24 Risk Governance and Societal Safety II

3:30 PM Waialua

Chair: Vicki Bier, University of Wisconsin-Madison

171

Challenges with Risk and Vulnerability Analyses: Strategies for Integration in Risk and Crisis Management

Kirsti Russell Vastveit (a), and Kerstin Eriksson (b,c)

a) University of Stavanger, Stavanger, Norway, b) Division of Risk Management and Societal Safety, Lund University, Lund, Sweden, c) Lund University Centre for Risk Assessment and Management and Centre for Societal Resilience, Lund University, Sweden

Today risk analyses are commonly used by authorities at national, regional and municipal levels in their work to prevent and prepare for crises. The analyses provide information about types of incidents that can occur and the potential consequences of these events. In Sweden and Norway it is mandatory for local authorities to conduct risk and vulnerability analyses and use them as an input in their crisis management work. Research on public risk and vulnerability analyses often focuses on methodological issues and only to a limited extent on how findings are used by those who undertook or ordered the analysis. For the analyses to become inputs in work to improve societal safety and respond to crises it is important that findings are of sufficient quality and that they are applied or implemented. This paper focuses on strategies used by Norwegian and Swedish municipalities to integrate risk and vulnerability analyses in their current policy and planning processes. Examination of five Norwegian and four Swedish municipalities showed that municipalities vary in their focus on integration of risk and vulnerability analyses in long term planning processes, in their daily activities as well as in crisis management planning. They chose three main strategies to ensure integration of the analyses; some relied on formalized and highly structured procedures, whereas others emphasized that each leader and planner was responsible for taking risk and vulnerability analysis into consideration where necessary.

199

Development of an Updated Societal-Risk Goal for Nuclear Power Safety

Vicki Bier, Michael Corradini (a), Robert Youngblood (b), Caleb Roh, Shuji Liu (a)

a) University of Wisconsin-Madison, Madison, Wisconsin, U.S., b) Idaho National Laboratory (INL), Department of Energy (DOE), Idaho Falls, Idaho, U.S.

The safety-goal policy of the U.S. Nuclear Regulatory Commission (NRC) has never included a true societal-risk goal. In particular, safety goals have focused primarily on radiation-related fatalities, while experience with actual nuclear accidents has shown that societal disruption can be significant even in accidents that yield only small numbers of fatalities. We have evaluated the social disruption from severe reactor accidents as a basis to develop a societal-risk goal for nuclear plants, focusing on population relocation. Our analysis considers several different accident scenarios at five nuclear-plant sites in the U.S. The corresponding source terms were used as input to calculate offsite consequences using actual weather data for each of the five plant sites over a two-year period. The resulting plumes were then compared to population data to determine the population that would need to be relocated to meet current protective-action guidelines. Our results suggest that the number of people relocated is a good proxy for societal disruption, and relatively straightforward to calculate. Safety goals taking into account societal disruption could in principle be applied to the current generation of nuclear plants, but could also be useful in evaluating and siting new technologies.

240

The Effect of Including Societal Consequences for Decisions on Critical Infrastructure Vulnerability Reductions

J. Johansson (a,c), L. Svegrup, and H. Hassel (a,b)

a) Lund University Centre for Risk Assessment and Management (LUCRAM) and Centre for Societal Resilience (CSR), Lund, Sweden, b) Division of Risk Management and Societal Safety, Lund University, Lund, Sweden, c) Division of Industrial Electrical Engineering and Automation, Lund University, Lund, Sweden

Critical infrastructures provide society with services that are essential for its functioning, and extensive disruptions of these give rise to large societal consequences. Vulnerability analysis provides important decision information for improving their ability to withstand strains. To analyze vulnerabilities in infrastructures, models for estimating the consequences of failures are needed. Consequences arising from a critical infrastructure disruption can be estimated from an infrastructural or a societal viewpoint. Most risk and vulnerability related studies of critical infrastructures, however, focus rather narrowly on the direct infrastructural consequences, e.g. expressed as services not supplied. An integrated model is used, consisting of a physical model of a critical infrastructure (the Swedish electric transmission system) and an inoperability input-output model to estimate societal consequences. The paper analyzes and contrasts how the two viewpoints may affect the decision of which vulnerability reducing measures to implement. Vulnerability reducing measures are implemented as additions of branches to the existing power system. The results show a relatively large difference in the estimated effectiveness of the measures, although their rankings are, to some extent, congruent; it is nevertheless concluded that accounting for societal consequences in the decision-making process, when prioritizing between different vulnerability reducing measures, is important.
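
For readers unfamiliar with the societal-consequence side, the inoperability input-output model referenced here solves q = (I - A*)^-1 c*, where q is the equilibrium inoperability of each economic sector, A* the interdependency matrix, and c* the direct perturbation from the infrastructure disruption; the sketch below uses a hypothetical three-sector economy, not the paper's Swedish data.

    import numpy as np

    # Hypothetical interdependency matrix A* for three economic sectors.
    A_star = np.array([[0.0, 0.2, 0.1],
                       [0.1, 0.0, 0.3],
                       [0.2, 0.1, 0.0]])
    # Direct inoperability caused by the power disruption (assumed).
    c_star = np.array([0.05, 0.0, 0.0])

    # Equilibrium inoperability: q = (I - A*)^-1 c*.
    q = np.linalg.solve(np.eye(3) - A_star, c_star)
    print(q)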

437

Validation of Proxy Random Utility Models for Adaptive Adversaries

Richard S. John (a) and Heather Rosoff (b)

a) Department of Psychology, University of Southern California, Los Angeles, California, USA, b) Sol Price School of Public Policy, University of Southern California, Los Angeles, California, USA

We report two validation studies comparing multi-attribute utility (MAU) models for two different politically active non-profit organizations that utilize civil disobedience to achieve political objectives. In both cases, we constructed an objectives hierarchy and MAU model using adversary values experts (AVEs) who have access to publicly available information about the organizations' motives, objectives, and beliefs, but no direct contact with organization stakeholders or representatives. We then independently compare these MAU model parameters and constructed preferences to those based on direct assessment from a representative of the organization. The proxy MAU models provide an "averaged" utility model across a diverse organization with varying perspectives. We compare these "averaged" representations of the organizations' objectives, trade-offs, risk attitudes, and beliefs about consequence impacts with those of individual organization representatives with a particular perspective. In both cases, we demonstrate moderate convergence between the proxy model and the model assessed by direct contact with a representative of the organization. Overall, we find moderate agreement between the proxy model and the stakeholder model, with some notable discrepancies. Most of these discrepancies can be attributed to unstated or understated objectives in the published materials of the groups.

T25 Risk Informed Applications II

3:30 PM Wai'anae

Chair: Katrina Groth, Sandia National Laboratories

104

Risk-Informed Review of Actual Maintenance Strategy at Paks NPP

Tibor Kiss (a), Zoltan Karsa (b)

a) Paks NPP, Paks, Hungary, b) NUBIKI, Budapest, Hungary

A common pilot project was launched in April 2010 by the Hungarian Atomic Energy Authority (HAEA) and the Paks Nuclear Power Plant (Paks NPP), with technical support from the NUBIKI Nuclear Safety Research Institute, to enhance existing and implement new Risk Informed Decision Making (RIDM) practices. In the framework of the project a Risk Monitor (RM) was utilized, and a risk-informed review of maintenance at Paks NPP was performed. Based on information from the operators' electronic logs and using the Risk Monitor tool, the annual risk profile of the historical performance of the units could be visualized. Altogether, 16 reactor-years of risk profiles have been created, covering both power operation and shutdown modes. These risk profiles then served as a basis for further assessment of the recent maintenance strategy and for formulating findings and recommendations. According to the existing regulation, no preventive maintenance of safety-related SSCs is allowed during power operation. The investigation proceeded in two directions. The first was a risk-informed examination of the online maintenance of emergency diesel generators (DGs). As a result of this investigation, it could be demonstrated that online maintenance of the DGs would reduce the annual cumulative risk and, at the same time, may yield an economic benefit due to the potential reduction of the outage duration. The aim of the second direction of investigation was to reduce risk by changing the actual maintenance strategy. By assessing the annual risk profile, risk areas with an unavailable safety train could be identified during power operation due to the twin-unit outage. Such a risk area can be explained by the design of the service water system, which has parts common to two units. The outcome of this investigation was a recommendation to use the given unavailability timeframe to perform maintenance of the components already unavailable, including the related DG as well. At the end of the pilot assessment, the above-mentioned activities could be harmonized and a new complex maintenance approach could be formulated, motivating the licensee to operate both more safely and more economically.

538

“Smart Procedures”: Using Dynamic PRA to Develop Dynamic, Context-Specific Severe Accident Management Guidelines (SAMGs)

Katrina M. Groth, Matthew R. Denman, Jeffrey N. Cardoni, Timothy A. Wheeler

Sandia National Laboratories, Albuquerque, NM, USA

Developing a big-picture understanding of a severe accident is extremely challenging. Operating crews and emergency response teams are faced with rapidly evolving circumstances, uncertain information, distributed expertise, and a large number of conflicting goals and priorities. Severe accident management guidelines (SAMGs) provide support for collecting information and assessing the state of a nuclear power plant during severe accidents. However, SAMG developers cannot anticipate every possible accident scenario. Advanced Probabilistic Risk Assessment (PRA) methods can be used to explore an extensive space of possible accident sequences and consequences. Using this advanced PRA to develop a decision support system can provide expanded support for diagnosis and response. In this paper, we present an approach that uses dynamic PRA to develop risk-informed "Smart SAMGs". Bayesian networks form the basis of the faster-than-real-time decision support system. The approach leverages best-available information from plant physics simulation codes (e.g., MELCOR). Discrete Dynamic Event Trees (DDETs) are used to provide comprehensive coverage of the potential accident scenario space. This paper presents a methodology to develop Smart procedures and provides an example model created for diagnosing the status of the ECCS valves in a generic iPWR design.
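
As a loose illustration of the kind of faster-than-real-time inference intended (a single Bayes-rule update rather than a full Bayesian network trained on MELCOR/DDET results; every probability below is assumed for illustration):

    # Diagnose a hypothetical ECCS valve state from one noisy plant indication.
    p_stuck = 0.05                  # prior on "valve stuck closed" (assumed)
    p_lowflow_given_stuck = 0.90    # likelihood of low-flow indication (assumed)
    p_lowflow_given_ok = 0.10

    evidence = (p_lowflow_given_stuck * p_stuck
                + p_lowflow_given_ok * (1 - p_stuck))
    posterior = p_lowflow_given_stuck * p_stuck / evidence
    print(f"P(valve stuck | low flow observed) = {posterior:.2f}")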

567

Application of PRA in Risk-informed Risk Management

Jie Wu

Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences

This technical paper presents the concept of plant configuration risk management and the role of probabilistic risk assessment (PRA) in a risk-informed, performance-based integrated decision-making process during the plant design, licensing and operation stages. It also provides an overview of PRA and the technical adequacy commensurate with its applications, regulatory requirements for monitoring maintenance effectiveness, and industry practice at operating nuclear power plants in the US.

571

PRA Insights Used for SBO Mitigation in Barakah Nuclear Power Plant – Lessons Learned from the Fukushima Accident

Yu Shen, Abdullah Al Yafei, Mohamed Abdulla Sabaan Al Breiki

ENEC, Abu Dhabi, UAE

After the Fukushima Daiichi accident in March 2011, the Emirates Nuclear Energy Corporation (ENEC) organized a Safety Review Task Force (SRTF), led by its Deputy Chief Nuclear Officer, to obtain, evaluate and recommend the implementation of applicable lessons learned to enhance the safety features of the Barakah Nuclear Power Plant (BNPP). The safety review focuses on beyond design basis accidents (BDBA) and adopts both deterministic and probabilistic approaches. The review results show a high level of plant robustness for BNPP. No design deficiencies have been identified regarding provisions currently provided to accommodate potential natural hazards, loss of electrical power and loss of ultimate heat sink, as well as severe accident management. However, design features for further enhancement of the robustness, based on the lessons learned from the Fukushima accident, have been identified and will be implemented in BNPP. In this paper, a specific Probabilistic Risk Assessment (PRA) study performed to obtain risk insights into a prolonged station blackout (SBO) event is described. The general approach of how to use the PRA insights to improve nuclear safety is also discussed in the paper. Design improvements based on the PRA application result in a 34% risk reduction in terms of Core Damage Frequency (CDF).

T26 Fire and Combustibles Analysis

3:30 PM Ewa

Chair: James Lin, ABSG Consulting Inc.

508

Insights from the Risk Analysis of a Nearby Propane Tank Farm

James C. Lin

ABSG Consulting Inc., Irvine, California, United States

Due to the flammability and explosion characteristics of the propane vapor cloud, a nearby propane tank farm could present a significant risk to the operation of nuclear power plants. Accidental releases of propane may occur in the propane supply pipeline, at the propane storage tanks, or on the propane transportation routes due to propane transport trailer truck accidents. Loss of containment could result in a toxic vapor cloud affecting the nearby nuclear plant's control room habitability, Vapor Cloud Explosion (VCE) overpressure due to both deflagration and detonation, Boiling Liquid Expanding Vapor Explosion (BLEVE) missile impacts, and thermal radiation from BLEVE fireballs, flash fires, pool fires, and jet fires. The thermal radiation impact ranges of a pool fire, jet fire, flash fire, and BLEVE fireball are typically less than the shortest distance between the propane tank farm and the nearby nuclear plant. The explosion overpressure and missiles are the most critical hazards to the nearby nuclear plants. Only VCE overpressure greater than 1 psi and BLEVE missiles can potentially impact the PRA-related structures and equipment at the nearby nuclear plant. The total frequency of unacceptable damage at the nearby nuclear plant resulting from accidents at the propane tank farm and from propane transport truck accidents is estimated to be less than 10^-7 per year. This paper discusses the insights gained from the analysis of the risk impact on a nearby nuclear plant due to accidental release of propane from the propane terminal related operations, including the propane storage/distribution facility and the propane transport trucks. The important considerations used in the analysis of the risk scenarios resulting from a propane transport truck accident, large and small releases at the propane distribution facility, and exploding missile hazards are presented.

321

Composite-bonded Steel Substrate with Silyl-modified Polymer Exposed to Thermal Distress

Yail J. Kim (a), Seung Won Hyun (b), Isamu Yoshitake (c), Jae-Yoon Kang (d), and Junwon Seo (e)

a) University of Colorado Denver, Denver, CO, USA, b) North Dakota State University, Fargo, ND, USA, c) Yamaguchi University, Ube, Japan, d) Korea Institute of Construction Technology, Ilsan, Korea, e) South Dakota State University, Brookings, SD, USA

This paper discusses a research program examining the residual performance of the carbon fiber reinforced polymer (CFRP)-steel interface bonded with an emerging adhesive called silyl-modified polymer (SMP) when exposed to elevated temperatures from 25°C to 200°C. Double-lap tension specimens are prepared and conditioned at predefined temperatures for three hours. Test results reveal that interfacial capacity is preserved up to a temperature of 100°C. Thermally-induced capacity degradation is, however, observed for the specimens exposed to temperatures beyond 100°C. A phase transition is noticed in adhesive morphology during heating at temperatures higher than 175°C, which affects the adhesion properties of the SMP. The development of CFRP strain is influenced by geometric discontinuities along the interface. Fiber disintegration dominates the failure of the interface exposed up to 150°C, including local fiber dislocation and partial CFRP pull-out. CFRP debonding is, however, the primary failure mode for the specimens exposed to a temperature higher than 175°C. The Bayesian updating method is used to probabilistically infer the response of the CFRP-steel interface.

18

Statistical Characterization of Cable Electrical Failure Temperatures Due to Fire, with Simulation of Failure Probabilities

Raymond H.V. Gallucci

U.S. Nuclear Regulatory Commission (USNRC), MS O-10C15, Washington, D.C. 20555

Single-value failure temperatures for loss of electrical cable functionality due to fire have been the norm for Fire Probabilistic Risk Assessments (PRAs) since the publication of the landmark state-of-the-art report NUREG/CR-6850 / EPRI 1011989 in 2005. Electrical cable fire tests conducted by the USNRC since then have added a significant amount of failure data that can be used to examine the feasibility of now assigning probability distributions to these failure temperatures. This paper analyzes these data to develop probability distributions for different generic cable types (based on insulation). Then, building on recent work to investigate the sensitivity of fire phenomenological models to variations in input parameters, simulation techniques are employed to show potential refinement in predicting the probability of fire-induced electrical cable failure based on these temperature distributions. Results indicate the potential for relaxation in conservatism in Fire PRA through adoption of a probabilistic/statistical approach in conjunction with fire phenomenological modeling. Examples are presented along with suggestions for future enhancements.
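
A minimal sketch of the simulation step (both distributions below are hypothetical stand-ins, not the fitted distributions from the USNRC test data): sample a cable failure temperature and a fire-model exposure temperature, and estimate the failure probability as the fraction of draws in which exposure exceeds capacity.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    # Assumed distributions: cable failure temperature (degC) and the
    # fire-model-predicted exposure temperature at the cable location.
    failure_temp = rng.normal(330, 25, n)
    exposure_temp = rng.lognormal(mean=np.log(250), sigma=0.2, size=n)

    p_fail = np.mean(exposure_temp > failure_temp)
    print(f"estimated P(cable electrical failure) = {p_fail:.3f}")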

550

OECD FIRE Database Applications and Challenges – A Recent Perspective

Marina Roewekamp (a), Matti Lehto (b), Heinz-Peter Berg (c), Nicholas Melly (d), Wolfgang Werner (e)

a) Gesellschaft für Anlagen-und Reaktorsicherheit (GRS) mbH, Köln, Germany, b) Radiation and Nuclear Safety Authority (STUK), Helsinki, Finland, c) Bundesamt für Strahlenschutz (BfS), Salzgitter, Germany, d) United States Nuclear Regulatory Commission (NRC) Office of Research, Rockville, MD, United States of America, e) SAC, Breitbrunn, Germany

As one of the OECD NEA databases, the FIRE Database has been upgraded and extended to be applicable as a source of generic event data for Fire PRA. The updated database structure facilitates the statistical analysis needed for providing generic fire frequencies for nuclear power plants. Valuable queries can be made based on reactor type, plant operational state, selection of countries from which events are reported (depending on reporting criteria and thresholds), etc. Moreover, for a given generic fire event tree, various branch point probabilities can be calculated based on the plant-specific operating experience, which may not be statistically meaningful on its own, and on generic probabilities derived from the FIRE Database. Meanwhile, twelve member countries are involved in the collection of fire event data from almost 400 reactor units. For the time being, a few thousand reactor years are covered and more than 420 fire events have been reported in total. In addition, the FIRE Database has provided first insights on causally related combinations of fire events and other anticipated events. One of the lessons learned from the Fukushima Dai-ichi reactor accidents was that such event combinations have to be adequately addressed in PRA. The most recent analyses of event combinations with fires support the ongoing PSA improvements in this direction.

T27 The Petro-HRA Project: Adapting SPAR-H to a Petroleum Context II

3:30 PM Kona

Chair: Martin Rasmussen, Norwegian University of Science and Technology

93

Suggestions for Improvements to the Definitions of SPAR-H Performance Shaping Factors, to the Definitions of the Levels, and Suggestions for Changes in the Multipliers

Karin Laumann and Martin Rasmussen

Norwegian University of Science and Technology, Trondheim, Norway

In this paper the definitions and content of six of the SPAR-H performance shaping factors (PSFs) are discussed. The six factors discussed are "Available time", "Stress/Stressors", "Experience/Training", "Procedures", "Fitness for Duty" and "Work Processes". The discussion is based on a literature study of performance shaping factors, on interviews with consultants who have performed SPAR-H analyses in the petroleum industry, and on a reading of Human Reliability Analysis reports where SPAR-H has been used. The conclusions in this paper are: 1) New descriptions of the SPAR-H PSFs should be developed in which the descriptions of the individual PSFs overlap less. 2) The guidelines should give more advice to help the analyst select PSF levels when multiple PSFs might have a positive or negative impact on error probabilities. 3) New multipliers should be developed by expert judgment, based on a review of the existing literature on PSFs and on knowledge of the work in control rooms today.
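
For context on why the multipliers matter, SPAR-H computes a human error probability as the nominal HEP times the product of the PSF multipliers, with an adjustment applied when three or more negative PSFs are assigned; the sketch below uses the standard SPAR-H nominal diagnosis HEP of 0.01, but the multiplier assignments are invented for illustration.

    def spar_h_hep(nhep, multipliers):
        # Nominal HEP times the composite PSF multiplier, with the SPAR-H
        # adjustment when three or more negative (>1) PSFs are assigned.
        composite = 1.0
        for m in multipliers:
            composite *= m
        hep = nhep * composite
        if sum(1 for m in multipliers if m > 1) >= 3:
            hep = (nhep * composite) / (nhep * (composite - 1) + 1)
        return min(hep, 1.0)

    # Illustrative diagnosis task: degraded procedures (x5), high stress (x2),
    # poor ergonomics (x10) -- example multiplier values only.
    print(spar_h_hep(0.01, [5, 2, 10]))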

141

Expert Judgment in Human Reliability Analysis: Development of User Guidelines

Nicola Paltrinieri and Knut Øien

SINTEF Technology and Society, Trondheim, Norway

Human error probabilities (HEPs) have often been required as input to risk assessments within various industries, such as Probabilistic Safety Assessments (PSAs) in the nuclear power industry. In the offshore petroleum industry, however, HRA and the corresponding HEPs have only recently been included in some Quantitative Risk Analyses (QRAs). The goal of the Petro-HRA project, of which the work in this paper is part, is to develop an HRA method for use in the petroleum industry. This will be achieved first of all by using the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H) method as a starting point. For the adaptation of the SPAR-H method, expert judgment may be needed in the validation and evaluation of (a) the nominal HEP values and (b) the multiplier values of the Performance Shaping Factors (PSFs), if the validation cannot be supported by objective data. For the regular use of the SPAR-H method, expert judgment will be needed for (c) the assignment of the PSF multiplier values. Each of these three foreseen applications of expert judgment requires the development of specific user guidelines within the Petro-HRA project. The development of the expert judgment user guidelines is presented in this paper.

175

The Suitability of the SPAR-H “Ergonomics/HMI” PSF in a Computerized Control Room in the Petroleum Industry

Martin Rasmussen and Karin Laumann

Norwegian University of Science and Technology, Trondheim, Norway

This paper suggests changes to the "Ergonomics/HMI" PSF based on a review of current research on the HEP influence of ergonomic and HMI issues, an evaluation of the suitability of the SPAR-H "Ergonomics/HMI" PSF guidelines for the petroleum industry context, and interviews with HRA analysts. We recommend that the SPAR-H PSF "Ergonomics/HMI" should not be included in the Petro-HRA method as it stands today. We suggest that the PSF description should be changed to suit the computerized control rooms in the petroleum industry. We suggest that the PSF should include a level that corresponds to situations where the HMI is so poor that it is not reasonable to expect the operator to succeed at the task. We also suggest that at least one more PSF level is added to provide more nuance.