PSAM12 - Probabilistic Safety Assessment and Management
Thursday, June 26, 2014



Sessions:
Plenary - Th01 - Th02 - Th03 - Th04 - Th05 - Th06 - Th07 - Th12 - Th13 - Th14 - Th15 - Th16 - Th17 - Th21 - Th22 - Th23 - Th24 - Th25 - Th26 - Th27


Th00 Plenary:

9:00 AM

Important Lessons Learned from the Severe Accident at Fukushima Daiichi

Shunsuke Kondo, Dr.

President, Nuclear Waste Management Organization of Japan (NUMO)

Abstract: The current on-site and off-site situation at Fukushima Daiichi, as well as the status of Japanese nuclear energy utilization, will be reported, together with his view on the weaknesses in defense against natural hazards, regulatory oversight, accident management, and emergency response that allowed the accident to unfold as it did.

Bio: Dr. Shunsuke Kondo is currently an independent consultant. He retired from the post of Chairman of the Atomic Energy Commission, Cabinet Office, on March 31, 2014, after serving for more than ten years. He joined the Department of Nuclear Engineering, School of Engineering, the University of Tokyo (UT) as a lecturer in 1970, after receiving BE, ME, and DE degrees in nuclear engineering from the UT in 1965, 1967, and 1970, respectively. He then dedicated himself to research and teaching in nuclear engineering, being promoted to Associate Professor in 1971 and Professor in 1984. He retired from the UT in 2004, when he was appointed Chairman of the Atomic Energy Commission by the Prime Minister, and was made Professor Emeritus of the UT the same year. His research interests lie in nuclear reactor design and accident analysis, the development and application of probabilistic safety assessment (PSA) methodology, human interface design and analysis, and the analysis of nuclear energy utilization and safety regulation policies. He was a Board member of the International Association for Probabilistic Safety Assessment and Management (IAPSAM) from 1994 to 2004, served as its President during 2001-2002, and organized the PSAM 5 Conference in Osaka, Japan, in 2000.

Th01 Nuclear Engineering II

10:30 Honolulu

Chair: Pamela Nelson, UNAM

322

Effects of Source Term on Off-site Consequence in LOCA Sequence in a Typical PWR

Seok-Jung Han, Tae-Woon Kim, and Kwang-Il Ahn

Korea Atomic Energy Research Institute, Daejeon, South Korea

Since the Fukushima accident, the assessment of source term effects on the environment has been a key concern of nuclear safety. As an effort to take the current knowledge of the source term into account in off-site consequence analysis, the effects of the source term according to the containment response simulated by the MELCOR code have been examined. From the consequence point of view, the containment response directly affects the key features shaping plume behavior in the atmospheric dispersion estimate, namely the release time, the release duration, and the related source term characteristics. The source term features for a large-break LOCA sequence of a typical PWR plant were investigated as a function of the containment response (failure pressure and break size). For the containment failure pressure, it was observed that the release time varied from 17.4 hours to 52.2 hours as the containment failure pressure ranged from 4.4 bar to 14.6 bar. This result potentially affects radiological emergency strategies such as public evacuation. Moreover, the amount of released source term varied considerably, resulting in roughly a factor-of-two difference in radiation exposure dose among the simulation cases. For the break size, the released source term varied relatively little, but the release features used to model the plume behavior varied with the break size. In particular, the radiation exposure dose was reduced by about 50% depending on the plume modeling approach (one-plume versus two-plume model) accounting for the source term release features in this simulation. The insights obtained on source term features will be utilized in an off-site consequence analysis.

373

A Review of U.S. Sodium Fast Reactor PRA Experience

David Grabaskas

Nuclear Engineering Division, Argonne National Laboratory, Argonne, IL, U.S.

The U.S. has a long history of sodium fast reactor (SFR) development. From the almost 30 years of successful EBR-II operation to the competing designs of the Advanced Liquid Metal Reactor (ALMR) project of the early 1990s, much work has been conducted related to SFR safety analysis. Part of this work has involved the creation of PRAs for both operational reactors and those in the conceptual design phase. A review of four of the past U.S. SFR PRAs was conducted, and their strengths and weaknesses were assessed. As part of this review, the past SFR PRAs were compared to the newly issued ASME/ANS Advanced Non-LWR PRA standard, which for the first time offers guidance on the criteria needed for a “complete” advanced reactor PRA. The results of this comparison offer direction for future analyses concerning what methods can be used from the past SFR PRAs, and what new techniques will need to be developed.

527

Applicability of PSA Level 2 in the Design of Nuclear Power Plants

Estelle C. Sauvage (a), Gerben Dirksen (b), and Thierry Coye de Brunellis (c)

a) AREVA-NP SAS, Paris, France, b) AREVA-NP GmbH, Erlangen, Germany, c) AREVA-NP SAS, Lyon, France

In the nuclear industry, until recently, the licensing and design of new Nuclear Power Plants (NPPs) were based on a deterministic approach. Probabilistic Safety Assessments (PSAs) only supported the safety demonstration, mostly through the evaluation of the risks to the population and the environment. Feedback into the design of the NPP, where it existed, was limited. Nowadays the use of PSA has become more systematic and has been extended to the design phase of the new generation of NPPs. In this frame, the first step was to develop the concepts of risk-based and risk-informed decision making, to avoid unnecessary burden being placed on the NPP design by strong deterministic prescriptions on low-probability events. Following the development of a new generation of plants, such as the AP1000 or the EPR, which consider severe accidents in their design, PSA Level 2 contributes more and more to the design of new NPPs. The accident at the Fukushima Daiichi NPPs has further led to an extended consideration of severe accidents in the design of nuclear plants and of emergency organization structures. The interaction between PSA Level 2 development and the NPP design phase has become obvious and is now part of the safety standards recommended by safety authorities and organizations. This paper assesses how PSA Level 2 has become a high-visibility topic of the NPP design phase. The current safety requirement expectations regarding the use of PSA Level 2 in the design phase result from this evolution. Indeed, several technical areas can use the insights of a PSA Level 2 to improve the NPP design. This includes the design of hardware and systems (e.g., pipes, valves, and tanks, but also instrumentation and control and civil engineering). It also includes the analysis of human factors, which covers procedures and guidelines, the Human Machine Interface (HMI), the emergency organization, training, and layout (access to buildings, survivability of the control room, etc.). Two examples of the use of PSA Level 2 for EPR design improvement are provided and reviewed: first, the modification of the severe accident spraying system, and second, the HMI evaluation for severe accidents. The use of PSA Level 2 in the design phase depends on the model and the level of detail of the developed probabilistic analysis. Areas of improvement regarding the use of PSA Level 2 in the development of a new NPP are discussed.

547

Use of Corrective Action Programs at Nuclear Plants for Knowledge Management

Pamela F. Nelson (a), Teresa Ruiz-Sánchez (b), and Cecilia Martín del Campo (a)

a) Universidad Nacional Autonoma de Mexico, Mexico City, Mexico, b) Universidad Autonoma de Tamaulipas, Reynosa, Mexico

Due to the uncertainty of many of the factors that influence the performance of the humans in nuclear power plant maintenance activities, we propose using Bayesian networks to model this kind of system. In this study, several models are built from the information contained in the Condition Reports from the Corrective Action Program at a nuclear power plant. This first study, using actual nuclear power plant data, includes a method for data processing and highlights some potential uses of Bayesian networks for improving organizational effectiveness in the nuclear power industry. The tool described in this paper is designed to provide a systematic approach to assist in managing an organization’s knowledge base and support improvements in organizational performance. This paper describes the utilization of cause codes recorded in the Corrective Action Program for determining their effect on consequential events.

583

AES-2006 PSA Level 1: Preliminary Results at the PSAR Stage

A. Kalinkin, A. Solodovnikov, S. Semashko

JSC "VNIPIET", Saint-Petersburg, Russian Federation

This report presents PSA Level 1 results for the AES-2006 project in the LAES-2 first unit configuration at the PSAR stage. The report contains a short description and basic composition of the LAES-2 project, the composition and basic requirements of the normative documents regulating the PSA Level 1 process, the list of operating conditions considered in the PSA, the list of initiating events selected for analysis, a brief description of the most significant accident sequences leading to fuel damage, characteristics of the input data used, and the results of the fuel damage frequency assessment.

Th02 Reliability Analysis and Risk Assessment Methods VI

10:30 Kahuku

Chair: Federico Gabriele, Gran Sasso National Laboratory - INFN

262

Quantitative Risk Assessment for DarkSide 50, a Nuclear Physics Experimental Apparatus Installed at Gran Sasso Nat’l Lab: Results and Technical Solutions Applied

Federico Gabriele (a), Andrea Ianni, Augusto Goretti (b), Michele Montuschi (a), and Paolo Cavalcante (a) on behalf of DarkSide Collaboration

a) Gran Sasso National Laboratory, L’Aquila, Italy, b) Princeton University, Princeton, USA

DarkSide 50 (DS50) is a two-phase argon Time Projection Chamber designed to search for dark matter at the Gran Sasso National Laboratory (LNGS). As in most rare event experiments hosted at the LNGS, the challenge of DS50 is to reduce the background due to natural radioactivity. To meet this challenge, DS50 has the unique feature of underground depleted argon and uses an active veto to account for neutrons, the most important source of background for WIMP searches. In this paper we report the Quantitative Risk Analysis (QRA) of the whole apparatus, which we implemented and developed in order to bring the failure rates into the range of the LNGS risk acceptance matrix. Due to the complexity of the experimental apparatus, the analysis takes a variety of accident scenarios into account. Even though QRA is widely used internationally for many purposes, the peculiarity of this application makes the involved issues, interpretation, and results extremely interesting for risk assessment of low-background experiments in a confined underground space.

270

Safety Analysis and Quantitative Risk Assessment of a Deep Underground Large Scale Cryogenic Installation

Effie Marcoulaki and Ioannis Papazoglou

National Centre for Scientific Research “Demokritos”, Athens, Greece

This work considers the safety analysis and quantitative risk assessment of a deep underground cryogenic installation intended for neutrino physics. The neutrino detector equipment will be submerged in 50ktons fiducial mass of purified liquid argon, stored in a specially designed heat insulated tank located inside a deep underground cavern. The conditions inside the tank and the cavern, and the purity of argon will be maintained using appropriate systems for cooling, heating, pressurization and filtration. Smaller adjacent caverns will host the process unit equipment (process unit caverns). The caverns for the tank and the process units are planned to be excavated inside a mine at about 1400 meters underground. The quantitative results presented here provide incentives for improvements on the current process design of the installation that can reduce significantly the expected frequencies of accidental argon release due to tank overpressure.

295

Centrifugal Pump Mechanical Seal and Bearing Reliability Optimization

Peymaan Makarachi, and Mohammad Pourgol-Mohammad

Sahand University of Technology, Tabriz, Iran

Centrifugal pumps are used in a wide range of field and industrial applications and, as significant rotating equipment, incur high life-cycle costs. Earlier research shows that the main cost is borne by the seals and bearings, the critical components of the pump, and most pump maintenance work is initiated by the failure of a mechanical seal or bearing. Reliability allocation is performed at the early design stage of a system to apportion the system reliability requirement to its individual subsystems. This article examines possible approaches to allocating reliability values to the mechanical seal and bearing components such that the total cost is minimized. The cost of increasing the reliability of these components is modeled as an exponential function of four parameters (component reliability, feasibility factor, maximum achievable reliability, and minimum reliability), which is estimated by Monte Carlo simulation. Genetic Algorithm (GA) optimization is applied to the reliability allocation problem for typical mechanical seal and bearing components. The optimization process yields optimum values of the component reliabilities, with the cost function as the objective in the GA.
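As an illustration of the kind of cost-minimizing allocation the abstract describes, the sketch below uses an exponential reliability-cost function of the commonly cited Mettas form and an evolutionary search; the parameter values, the two-component series structure, and the use of SciPy's differential evolution in place of a hand-coded GA are all assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical parameters for two components: mechanical seal, bearing.
f      = np.array([0.6, 0.5])      # feasibility factors
R_min  = np.array([0.90, 0.92])    # minimum (current) reliabilities
R_max  = np.array([0.999, 0.995])  # maximum achievable reliabilities
R_goal = 0.97                      # target system reliability (series logic)

def cost(R):
    """Exponential cost of raising each component reliability from R_min to R."""
    return np.sum(np.exp((1.0 - f) * (R - R_min) / (R_max - R)))

def objective(R):
    # Penalize allocations that miss the series-system reliability goal.
    penalty = 1e4 * max(0.0, R_goal - np.prod(R))
    return cost(R) + penalty

bounds = list(zip(R_min + 1e-6, R_max - 1e-6))
result = differential_evolution(objective, bounds, seed=1)
print("allocated reliabilities:", result.x, " total cost:", cost(result.x))
```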

463

A Science-Based Theory of Reliability Founded on Thermodynamic Entropy

Anahita Imanian, Mohammad Modarres

Center for Risk and Reliability, University of Maryland, College Park, USA

Failure data-driven stochastic and probabilistic techniques that underlie reliability analysis of components and structures have remained unchanged for decades. The present study relies on a science-based explanation of damage as the source of material failure, and develops an alternative approach to reliability assessment based on the second law of thermodynamics. The common definition of damage, which is widely used to measure reliability over time, is somewhat abstract, and varies at different geometric scales and with the observable field variables describing the damage. For example, fatigue damage in metals has been described in several ways, including reduction of elastic modulus, variation of hardness, cumulative cycle ratio, reduction of load-carrying capacity, crack length, and energy dissipation. These descriptions are typically based on observable changes in physical or spatial properties, and exclude unobservable and highly localized damage. Therefore, the definition and measurement of damage is subjective and dependent on the choice of observable variables. However, all damage mechanisms share a common feature at a far deeper level, namely energy dissipation. Dissipation is a fundamental measure of irreversibility that, in a thermodynamic treatment of non-equilibrium processes, is quantified by entropy generation. Using a theorem relating entropy generation to energy dissipation via generalized thermodynamic forces and thermodynamic fluxes, this paper presents a model that formally describes the resulting damage. The model also covers cases where there is a synergy between different irreversible fluxes, such as corrosion-fatigue damage, where the mechanical deformation rate leading to fatigue is coupled with the electrochemical reaction rate leading to corrosion. Employing thermodynamic forces and fluxes to model the damage process not only enables us to express the entropy generation in terms of physically measurable quantities, including stress, diffusion, and electrochemical affinities, but also provides a powerful technique for studying the complex synergistic effect of multiple irreversible processes. Having developed the proposed damage model over time, one could determine the time at which damage accumulates to a level where the component or structure can no longer endure and fails. Uncertainties about the parameters and independent variables in this thermodynamics-based damage model lead to a time-to-failure distribution. Accordingly, such a distribution can be derived from the thermodynamic laws rather than estimated from observed failure histories.
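A compact way to see the argument is the generic entropic-damage bookkeeping sketched below; the notation is mine, and the exact functional form in the paper may differ.

```latex
\begin{align}
  \dot{s}_{\mathrm{gen}}(t) &= \sum_k J_k(t)\, X_k(t) \;\ge\; 0
    && \text{entropy generation rate from fluxes } J_k \text{ and conjugate forces } X_k,\\
  D(t) &= \frac{\int_0^t \dot{s}_{\mathrm{gen}}(\tau)\, \mathrm{d}\tau}{s_F}
    && \text{damage as accumulated entropy, normalized by the entropy-to-failure } s_F,\\
  T_F &= \inf\{\, t : D(t) \ge 1 \,\}
    && \text{time to failure; uncertainty in } J_k,\, X_k,\, s_F \text{ yields a distribution of } T_F.
\end{align}
```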

534

Quick Quantitative Calculation of DFT for NPP’s Repairable Systems Based on Minimal Cut Sequence Set

Daochuan Ge (a,b), Qiang Chou, Ruoxing Zhang (b), Yanhua Yang (a)

a) School of Nuclear Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; b) Software Development Center, State Nuclear Power Technology Corporation, Beijing, China

The quantitative calculation of a Nuclear Power Plant (NPP) repairable system is mainly based on the Markov model. However, as the system's size increases, the system's state space grows exponentially, which makes the problem hard or even impossible to solve. This paper proposes a method for quick quantification of the Dynamic Fault Tree (DFT) of an NPP repairable system based on the Minimal Cut Sequence Set (MCSS), which divides a complex DFT into the individual failure chains defined by the MCSS. For each failure chain, a Markov model is applied, and the system unavailability is then obtained by synthesizing the results of the failure chains. This approach reduces the growth of the model size from exponential to linear in the system size and reduces the computational complexity. For NPP dynamic systems with low failure rates and high repair rates, this approach gives a high-precision, conservative result and has practical value.
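The sketch below illustrates the per-chain quantification idea under stated assumptions: each minimal cut sequence is treated as its own small continuous-time Markov chain, its steady-state unavailability is computed, and the chain contributions are summed as a rare-event approximation. The chain structure and rates are hypothetical, not taken from the paper.

```python
import numpy as np

def steady_state_unavailability(Q, failed_states):
    """Solve pi @ Q = 0 with sum(pi) = 1, then sum pi over the failed states."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization constraint
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[list(failed_states)].sum()

lam_a, lam_b, mu = 1e-4, 2e-4, 1e-1    # failure and repair rates (per hour)

# Failure chain "A fails, then B fails" (e.g., primary before its standby):
# states: 0 = both up, 1 = A down, 2 = A and B down (chain failed).
Q_chain = np.array([
    [-lam_a,          lam_a,    0.0],
    [    mu, -(mu + lam_b),   lam_b],
    [   0.0,            mu,     -mu],
])

chains = [(Q_chain, {2})]              # a single chain, for illustration
system_unavailability = sum(steady_state_unavailability(Q, F) for Q, F in chains)
print("approximate system unavailability:", system_unavailability)
```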

Th03 Human Reliability Analysis V

10:30 O'ahu

Chair: Ronald Boring, Idaho National Laboratory

415

Air Traffic Controllers' Workload on the Period of ATC Paradigm Shift

Kakuichi Shiomi

Electronic Navigation Research Institute, Tokyo, Japan

A real-time simulation was performed in order to investigate the influence of the introduction of CPDLC into Japanese domestic ATC operations. The simulation was carried out with the participation of retired air traffic controllers. Based on the results, the introduction of CPDLC is considered effective in reducing the total communication time for ATC instructions, but it was also confirmed that the ATC workload does not simply depend on the length of communication time. An increase in workload during the transitional situation is unavoidable, and from the viewpoint of controller workload it is not desirable for the introductory situation, in which only 30% of aircraft are CPDLC-capable, to continue for a long term.

416

Quantification of Bayesian Belief Net Relationships for HRA from Operational Event Analyses

Luca Podofillini, Lusine Mkrtchyan, Vinh N. Dang

Paul Scherrer Institute, Villigen PSI, Switzerland

Bayesian Belief Nets (BBNs) represent factor relationships in the form of conditional probability distributions (CPDs). The transparency of the CPD assessment is an important element for the acceptability of BBNs. This is especially the case when expert judgment is dominant in CPD assessment, which is often the case in risk analysis and in particular HRA. Unfortunately, research and applications on BBNs have frequently focused on their modeling potential as opposed to the process of building BBNs. This paper deals with this process and examines it for a BBN developed to quantify Errors of Commission (EOCs). The derivation of CPDs is based on introducing weighted functions among the nodes, an approach from the literature. The approach builds the CPDs automatically (i.e., by an algorithm) from high-level assumptions on the effect of the factors; this contrasts with approaches in which CPDs for each child node are separately elicited. The assumptions concerning the effects of the factors were determined from operational event analyses in the database of the Commission Error Search and Assessment (CESA) quantification method (CESA-Q). The application shows the feasibility of systematically building a BBN from limited information and identifies some of the research needs related to BBN building and verification.
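The flavor of such automatic CPD construction can be sketched as below, where a weighted "badness" score over the parent factors is mapped onto anchor probabilities; the factor names, weights, anchors, and the log-interpolation rule are illustrative assumptions, not the CESA-Q relationships derived in the paper.

```python
import itertools
import numpy as np

factors  = {"procedure_quality": 2, "time_pressure": 2, "training": 2}  # states per factor
weights  = np.array([0.5, 0.3, 0.2])   # relative influence of each factor (sums to 1)
p_best, p_worst = 1e-3, 1e-1           # anchor EOC probabilities for best/worst contexts

def p_eoc(state_indices, n_states):
    """Map a combination of parent states to P(EOC) by log-interpolating
    between the anchors with the weighted 'badness' score of the parents."""
    badness = sum(w * s / (n - 1) for w, s, n in zip(weights, state_indices, n_states))
    return p_best * (p_worst / p_best) ** badness   # badness lies in [0, 1]

n_states = list(factors.values())
cpd = {combo: p_eoc(combo, n_states)
       for combo in itertools.product(*[range(n) for n in n_states])}
for combo, p in sorted(cpd.items()):
    print(combo, round(p, 5))
```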

544

Task Decomposition in Human Reliability Analysis

Ronald L. Boring and Jeffrey C. Joe

Idaho National Laboratory, Idaho Falls, Idaho, USA

In the probabilistic safety assessments (PSAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditionally human factors driven approaches would tend to look at opportunities for human errors first in a task analysis and then identify which of those errors is risk significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question remains central as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PSAs tend to be top-down— defined as a subset of the PSA—whereas the HFEs used in petroleum quantitative risk assessments (QRAs) are more likely to be bottom-up—derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.

421

A Comparison of Two Cognition-driven Human Reliability Analysis Processes - CREAM and IDHEAS

Kejin Chen, Zhizhong Li (a), Yongping Qiu and Jiandong He (b)

a) Department of Industrial Engineering, Tsinghua University, Beijing, P. R. China, b) Shanghai Nuclear Engineering Research & Design Institute, Shanghai, P. R. China

Years of technology development have brought increasing reliability and robustness of instrumentation in modern complex systems, while humans still constitute the major contributor to incidents. This article proposes a new taxonomy of HRA methods based on how the basic probability is determined. Focusing on the cognition-driven HRA methods, the article then summarizes the general quantification model of the cognition-driven category. CREAM and IDHEAS, two representative HRA methods, are compared in terms of their analysis processes against the general qualification process. A simpler two-phase response model for new HRA is suggested and discussed.

203

Human Reliability in Spacecraft Development: Assessing and Mitigating Human Error in Electronics Assembly

Obibobi K. Ndu (a), Monifa Vaughn-Cooke (b)

a) Space Mission Assurance Group, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA, b) Department of Mechanical Engineering, Reliability Engineering Program, University of Maryland, College Park, MD, USA

Integral to the proper functioning and reliability of any spacecraft are the proper design, fabrication, assembly, and integration of its electrical and electronic systems. A critical look at the composition of such spacecraft systems reveals a preponderance of circuits built on printed wiring assemblies (PWAs). Considering the highly complex process of spacecraft development and the stringent reliability and performance requirements imposed on operational spacecraft, it is apparent that human reliability during the development process is a factor in the quantification of overall spacecraft system reliability. This paper presents the development and application of Human Reliability Analysis (HRA) methods to specifically analyze the human error associated with the task of applying a polymeric coat onto a printed wiring assembly, a process also known as conformal coating. The polymeric coat serves to protect electronic circuitry against moisture, chemical contaminants and corrosion, extremes of temperature, and dust particles. The conformal coating is typically specified to protect against the particular space environment to which the spacecraft will be subjected; consequently, improper application potentially leads to loss of electrical functionality. Yet a closer look at the development process shows the high degree of interface and contact between the human fabricator and the system in development; however, traditional system reliability analysis does not address the issue of human error introduced during the manufacturing and integration phase. It is assumed that quality assurance and quality control standards and processes will eliminate these workmanship-related defects. Acknowledging that no system or process is perfectly capable of arresting quality escapes in manufacturing, fabrication, and integration, how then can one account for human error in the absence of more modern and precise machine-controlled processes? In the context of polymeric application, we will need to establish the boundaries of human error by defining the scope and then decide on the methods and tools relevant to the analysis of human error. Human error in this process shall be defined as an action, or omission of action, by a technician that detracts from reaching a specific target end state; in this case, the perfect application of the appropriate polymeric coat on a PWA.

Th04 Shipping and Offshore Oil & Gas I

10:30 Waialua

Chair: Christopher Jablonowski, Shell E&P Company

112

Use of Bayesian Network to Support Risk-Based Analysis of LNG Carrier Loading Operation

Arthur Henrique de Andrade Melani, Dennis Wilfredo Roldán Silva, Gilberto Francisco Martha Souza

University of São Paulo, São Paulo, Brazil

This paper presents a methodology for risk analysis of LNG carrier operations aimed at identifying the most critical pieces of equipment for avoiding LNG leakage during loading and unloading operations. The pieces of equipment considered critical for loading and unloading operations are identified and the Cause-Consequence diagram is built. The probability of occurrence of each event listed in the diagram is calculated with a Bayesian network method. The consequences associated with those scenarios are estimated based on a literature review. Based on the calculated risk profile, maintenance and operational recommendations are presented aimed at reducing the probability of occurrence of the critical failure scenarios.

14

Probabilistic Analysis of Geological Properties to Support Equipment Selection for a Deepwater Subsea Oil Project

Christopher J. Jablonowski, Edward E. Shumilak, Kenneth F. Tyler (a), Arash Haghshenas (b)

a) Shell Exploration and Production Company, Houston, TX, U.S.A., b) Boots & Coots Services LLC, Houston, TX, U.S.A.

This paper describes the method and results of a probabilistic risk analysis that was used to provide a quantitative basis for a complex and high-stakes design decision for a deepwater subsea oil project. The analysis specified probabilistic simulations of geologic properties based on information from a small number of exploration and appraisal wells. Each iteration of the simulated data was then fed into a deterministic engineering model to simulate various operational scenarios. Conventional probabilistic sampling and a more efficient experimental design approach were both employed. The key results are cumulative density functions for critical operational variables that drive design decisions.
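A minimal sketch of this two-stage workflow, under assumptions of my own (the input distributions, the placeholder deterministic model, and the output variable are illustrative, not the project's), is:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Probabilistic simulation of uncertain geologic properties (illustrative):
permeability = rng.lognormal(mean=np.log(200.0), sigma=0.5, size=n)   # mD
reservoir_pressure = rng.normal(loc=9000.0, scale=300.0, size=n)      # psi

def deterministic_model(perm, pres):
    """Placeholder deterministic engineering model returning a proxy for a
    design-driving operational variable (e.g., a flowing pressure)."""
    return 0.6 * pres - 2.0e3 / np.sqrt(perm)

# Feed each sampled realization through the deterministic model and build
# the empirical cumulative distribution of the operational variable.
output = np.sort(deterministic_model(permeability, reservoir_pressure))
cdf = np.arange(1, n + 1) / n
p10, p50, p90 = np.interp([0.1, 0.5, 0.9], cdf, output)
print("P10 / P50 / P90 of the operational variable:", p10, p50, p90)
```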

85

Gas Detection for Offshore Applications

Peter Okoh

Norwegian University of Science and Technology, Trondheim, Norway

Release of hazardous and flammable gas is a significant contributor to risk in the offshore oil and gas industry, and various types of automatic systems for rapid detection of gas are therefore installed to support the elimination or reduction of dangerous releases. There are different types of gases that may be released, and gas may be released in different environments and under different conditions. Several principles for detecting gas are therefore applied and a variety of types of gas detectors are in use. However, a significant percentage of gas releases remain undetected by the dedicated detectors and hence unaccounted for and uncontrolled. The objectives of this paper are: (1) to present a state-of-the-art overview of gas detection in relation to offshore applications, (2) to present an overview of requirements for gas detection in the Norwegian offshore industry, and (3) to provide a comparative study of performance standards for gas detection worldwide. The paper builds on a review of literature, standards, and guidelines related to gas detection offshore.

165

BOP Risk Model Development and Applications

Xuhong He, Johan Sörman (a), Inge A. Alme (b), and Scotty Roper (c)

a) Lloyd’s Register Consulting, Stockholm, Sweden, b) Lloyd’s Register Consulting, Kjeller, Norway, c) Lloyd's Register Drilling Integrity Services Inc., Houston, USA

Deepwater drilling operations typically involve a critical safety system called the BOP (Blowout Preventer), which is latched onto a wellhead and situated on the seabed. It is the final and ultimate line of defense in protecting life and the environment throughout drilling operations, so it is important to make sure the BOP will function when it is required. When a failure is detected in a certain system or component on the submerged BOP, the industry's typical response is to analyze the possible consequences and perform a risk assessment in order to define risk levels. If the risk level rises above a certain level, drilling has to be stopped and the BOP needs to be pulled to the surface to fix the problem. Stopping drilling and pulling the BOP to the surface for manual inspection is a very costly and time-consuming operation. To support the pull or no-pull decision, there is a need for a risk-informed model that quickly quantifies the change in operational risk based on the BOP status. This model must be transparent, verifiable, and with the subjectivity removed. The BOP Risk Model is realized using traditional risk and reliability modeling methods. The risk model is made available to drilling rig staff through a risk monitor interface that can be used to visualize operational risks. The paper describes the model development process, involving: (1) identifying key BOP functions; (2) establishing block diagrams for each function; (3) performing FMEA; (4) establishing fault trees based on the logic block diagrams and FMEA; and (5) integrating the fault tree model into the risk monitor. A case study of using the BOP Risk Model for decision making is given.

411

Determination of the Design Load for Structural Safety Assessment against Gas Explosion in Offshore Topside

Migyeong Kim, Gyusung Kim, Jongjin Jung and Wooseung Sim

Advanced Technology Institute, Hyundai Heavy Industries, Ulsan, Republic of Korea

The possibility of gas explosion accidents always exists at offshore facilities in the oil and gas industry. In the design of those structures, a structural safety assessment against explosion loads is necessary to prevent loss of lives or catastrophic failure of structures. One of the essential parts of the structural assessment is to determine the design explosion loads. Nowadays, the explosion loads are calculated by a probabilistic rather than a deterministic approach. In the recommended probabilistic load calculation procedure for offshore explosion accidents, for instance the NORSOK standard, the design load is established from the relation between the predicted overpressure and its duration over numerous scenarios, and from the exceedance frequency of the loads relative to the risk acceptance criteria. In most offshore projects, however, a conservative approach has been used, which derives only the design overpressure from the probabilistic load scenarios, or considers overpressure and duration independently, because there is no efficient application method for the suggested procedure. In this paper, a practical method to determine the design explosion load, considering both overpressure and duration, is presented using a response surface model with the joint probability distribution, and it is compared with present industrial practice.
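As background for the exceedance-based selection the abstract refers to, the sketch below ranks synthetic scenario overpressures by annual frequency and reads off the load at an assumed 1e-4 per year acceptance criterion; both the data and the criterion are illustrative, and the paper's joint overpressure-duration treatment via a response surface is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scenarios = 200
overpressure = rng.lognormal(mean=np.log(0.3), sigma=0.8, size=n_scenarios)   # barg
frequency    = rng.lognormal(mean=np.log(2e-6), sigma=1.0, size=n_scenarios)  # per year

order = np.argsort(overpressure)[::-1]      # rank loads from highest to lowest
exceedance = np.cumsum(frequency[order])    # frequency of exceeding each ranked load

acceptance = 1e-4                            # assumed risk acceptance criterion (per year)
design_idx = np.argmax(exceedance >= acceptance)
design_overpressure = overpressure[order][design_idx]
print("design overpressure [barg]:", design_overpressure)
```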

Th05 Uncertainty, Sensitivity, and Bayesian Methods III

10:30 Wai'anae

Chair: Sergio Guarro, ASCA Inc.

309

Propagating Uncertainty in Phenomenological Analysis into Probabilistic Safety Analysis

A. El-Shanawany (a,b)

a) Imperial College London, London, United Kingdom, b) Corporate Risk Associates, London, United Kingdom

The operation of nuclear power plants is supported by numerous analyses, both computational and experimental. Probabilistic risk analysis models attempt to quantify the risk of power plants, and implicitly use the supporting analyses during this process. The way in which these analyses are used in risk models is usually conservative, but could instead be represented as an uncertainty distribution. The conservatisms are often hidden, but affect every aspect of risk models; for example in the definition of success criteria. This paper uses operator reliability as an example to quantitatively demonstrate how conservative interpretations of supporting analyses can affect risk model predictions. The influence of human factors is recognised to be crucially important to risk models for nuclear power plants. Human error probability quantification is a key aspect in determining the relative risk importance of human actions in the context of a holistic probabilistic safety analysis model. However, there are large degrees of uncertainty in numerous aspects of human factors analysis and in the resulting quantification, many of which can be traced back to supporting transient analyses, such as thermal hydraulic and neutronic analyses. Risk models have historically used conservative judgements resulting from these analyses as an input into human reliability assessment. This paper presents a method for incorporating uncertainty distributions arising from phenomenological analyses into human reliability quantification. The method is illustrated using uncertainty in the timescale available to the operator for performing specified actions. This paper shows how to include uncertainty distributions over the time available to the operator and provides updated quantitative analysis. An illustrative example of operator-initiated long-term hold-down of reactivity is presented.
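A minimal sketch of the propagation idea, assuming a normal uncertainty distribution for the available time and a lognormal time-required curve (numbers and distribution shapes are mine, not the paper's), is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000

# Time available (min) from phenomenological analyses, with uncertainty:
t_available = rng.normal(loc=45.0, scale=8.0, size=n).clip(min=1.0)

# Time required by the crew, modeled as lognormal (median 25 min, error factor 2):
t_req_median, error_factor = 25.0, 2.0
sigma = np.log(error_factor) / 1.645

# HEP for each sampled available time = P(time required > time available).
hep = 1.0 - stats.lognorm.cdf(t_available, s=sigma, scale=t_req_median)
print("mean HEP:", hep.mean(), " 95th percentile HEP:", np.percentile(hep, 95))
```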

349

A Procedure Estimating and Smoothing Earthquake Rate in a Region with the Bayesian Approach

J.P. Wang

The Hong Kong University of Science and Technology, Kowloon, Hong Kong

Reliable instrumental earthquake data are limited compared to the long return periods of earthquakes, especially major events. As a result, earthquake rate estimates based on the limited earthquake observations can become unrealistic when classical statistical algorithms are used. For example, given that no M ≥ 6.0 earthquakes were recorded in the past 50 years, the best estimate of the earthquake rate around the region would be zero, and such a zero estimate is unconvincing owing to the short observation period and the long earthquake return periods. In this paper, a Bayesian calculation is proposed for earthquake rate estimation and smoothing given a reliable, but relatively short, earthquake catalog compiled from instrumental data recorded since the last century. The key to this Bayesian application in engineering seismology is to utilize the observed rates in neighboring zones as the prior information, which is then updated with the likelihood function governed by the earthquake observations in the target zone.
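The conjugate Poisson-Gamma form of such an update can be sketched as below; the neighboring-zone rates, observation window, and moment-matched prior are illustrative assumptions rather than the paper's data.

```python
import numpy as np

neighbor_rates = np.array([0.04, 0.07, 0.02, 0.05])   # M >= 6.0 events per year
prior_mean, prior_var = neighbor_rates.mean(), neighbor_rates.var(ddof=1)

# Gamma(alpha, beta) prior by moment matching: mean = alpha/beta, var = alpha/beta**2.
beta0  = prior_mean / prior_var
alpha0 = prior_mean * beta0

# Target-zone observation: zero M >= 6.0 events in 50 years.
n_events, t_years = 0, 50.0

alpha_post = alpha0 + n_events      # conjugate Poisson-Gamma update
beta_post  = beta0 + t_years
print("posterior mean rate:", alpha_post / beta_post, "events per year")
```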

512

Open Conceptual Questions in the Application of Uncertainty Analysis in PRA Logic Model Quantification

Sergio Guarro

ASCA Inc., Redondo Beach, USA

The final stage of quantification of Probabilistic Risk Assessment (PRA) or reliability logic models is usually carried out via processes that first obtain estimates of key component-level reliability and risk parameter values, such as the failure rate in the time domain, or the probability of failure (PoF) for a specific function or mission duration, and then propagate such values from the component to the subsystem and system levels according to the component logic arrangements reflected in system reliability and failure logic models. When applying uncertainty analysis techniques to the estimation of reliability or PoF parameters of components belonging to a given system, a conceptual problem arises as to whether the same bottom-up process may be applied to the definition of the prior distributions of the parameters of interest, or whether better state-of-knowledge consistency and coherence may be achieved by a top-down process that proceeds from the initial construction of a system-level prior distribution for the parameter of concern. This paper examines and discusses this and related issues that arise in the application of Bayesian analyses to a system PRA or reliability assessment.

565

System Initiating Event Frequency Estimation using Uncertain Data

Kurt G. Vedros

NuScale Power, LLC, Corvallis, Oregon, United States

Presented is an application of Bayesian inference methods for quantifying a prior distribution used for a system initiating event with no directly applicable system information available. New safety systems in development require the use of uncertain data and estimations whose applicability must be determined by the analyst. An approach is presented for utilizing available, germane data that both captures the appropriateness of the contributing data and quantifies and maintains the uncertainty of the data.

584

SUnCISTT - A Generic Code Interface for Uncertainty and Sensitivity Analysis

Matthias Behler (a), Matthias Bock (b), Florian Rowold, and Maik Stuke (a)

a) Gesellschaft für Anlagen und Reaktorsicherheit GRS mbH, Garching n. Munich, Germany, b) STEAG Energy Services GmbH, Essen, Germany

The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible, abstract interface program designed to perform Monte Carlo sampling based uncertainty and sensitivity analyses. In the field of criticality safety analyses it couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool for sensitivity and uncertainty analyses, SUSA. SUSA provides the necessary statistical methods, whereas SUnCISTT handles the complex bookkeeping that arises in the transfer of the generated samples into valid models of a given problem for a specific code. It generates and steers the calculation of the sample input files for the codes used. The computed results are collected, evaluated, and prepared for the statistical analysis in SUSA. In this paper we describe the underlying methods in SUnCISTT and present examples of major applications in the field of nuclear criticality safety assessment: uncertainty and sensitivity analyses applied in criticality calculations; Monte Carlo sampling techniques in nuclear fuel depletion calculations; uncertainty and sensitivity analyses of burnup credit analysis; and analysis of correlations between different experimental setups sharing uncertain parameters. The examples and results are shown for SUnCISTT coupling SUSA to different SCALE sequences from Oak Ridge National Laboratory and to OREST from GRS.

Th06 Nuclear Engineering III

10:30 Ewa

Chair: Jeffrey Brewer, Sandia National Laboratories

312

BWR-club PSA Benchmarking – Bottom LOCA during Outage, Reactor Level Measurement and Dominating Initiating Events

Anders Karlsson (a), Maria Frisk (b), and Göran Hultqvist (c)

a) Forsmarks Kraftgrupp AB, Östhammar, Sweden, b) Risk Pilot AB, Stockholm, Sweden, c) Havsbrus Consulting, Öregrund, Sweden

Benchmarking is an important activity in order to eliminate unjustified differences between PSA models and enable harmonisation. It can also be used to understand plant differences. As part of the BWR-club PSA activities, benchmarking of bottom LOCA during outage, reactor level measurement, and dominating initiating events has been performed. Modelling of bottom LOCA during outage varies between the BWR-club members, and work performed within the BWR-club aims at compiling and understanding these differences. When it comes to reactor level measurement, modelling varies from a more detailed modelling to more of a "black box" approach. Information has also been collected from the BWR-club members regarding dominating initiating events in their PSA studies. The initiating event frequencies, the scope of the PSA studies, and the risk importance of different initiating events vary between the BWR-club members, and work has been performed compiling and understanding these differences. BWR-club reports have been issued for bottom LOCA during outage and reactor level measurement, while the benchmarking of dominating initiating events is yet to be finalised.

371

Effects of an Advanced Reactor’s Design, Use of Automation, and Mission on Human Operators

Jeffrey C. Joe and Johanna H. Oxstrand

Idaho National Laboratory, Idaho Falls, USA

The roles, functions, and tasks of the human operator in existing light water nuclear power plants (NPPs) are based on sound nuclear and human factors engineering (HFE) principles, are well defined by the plant’s conduct of operations, and have been validated by years of operating experience. However, advanced NPPs whose engineering designs differ from existing light-water reactors (LWRs) will impose changes on the roles, functions, and tasks of the human operators. The plans to increase the use of automation, reduce staffing levels, and add to the mission of these advanced NPPs will also affect the operator’s roles, functions, and tasks. We assert that these factors, which do not appear to have received a lot of attention by the design engineers of advanced NPPs relative to the attention given to conceptual design of these reactors, can have significant risk implications for the operators and overall plant safety if not mitigated appropriately. This paper presents a high-level analysis of a specific advanced NPP and how its engineered design, its plan to use greater levels of automation, and its expanded mission have risk significant implications on operator performance and overall plant safety.

391

For the Completeness of the PRA Implementation Standard

Yoshiyuki Narumiya (a), Akira Yamaguchi (b), Takayuki Ota, Haruhiro Nomura (a)

a) The Kansai Electric Power Co., Inc, Osaka, Japan, b) Osaka University, Suita, Osaka, Japan

The Risk Technical Committee under the Standards Committee of the Atomic Energy Society of Japan has formulated standards related to PRA procedures, data, and their utilization. It provides standards for internal events in every operating condition, usable for risk assessment up to environmental effects (Level 3). For external events PRA, the standards have also been expanded to earthquake, internal flooding, and tsunami. Fire PRA and complex events PRA, in particular events induced by earthquake, are also being examined for standardization. As the formulated content of the standards accumulates, usage experience and issues noted with the PRA standards are to be fed back into the process of standard formulation.

497

Nuclear Safety Design Principles & the Concept of Independence: Insights from Nuclear Weapon Safety for Other High-Consequence Applications

Jeffrey D. Brewer

Sandia National Laboratories, Albuquerque, NM, USA

Insights developed within the U.S. nuclear weapon system safety community may benefit system safety design, assessment, and management activities in other high-consequence domains. An approach to assured nuclear weapon safety has been developed that uses the Nuclear Safety Design Principles (NSDPs) of incompatibility, isolation, and inoperability to design safety features, organized into subsystems such that each subsystem contributes to safe system responses in independent and predictable ways across a wide range of environmental contexts. The central aim of the approach is to provide a robust technical basis for asserting that a system can meet quantitative safety requirements in the widest context of possible adverse or accident environments, while using the most concise arrangement of safety design features and the fewest number of specific adverse or accident environment assumptions. Rigor in understanding and applying the concept of independence is crucial for the success of the approach. This paper provides a basic description of the assured nuclear weapon safety approach, in a manner that illustrates potential application to other domains. There is also a strong emphasis on describing the process for developing a defensible technical basis for the independence assertions between integrated safety subsystems.

498

Advancing Human Reliability Analysis Methods for External Events with a Focus on Seismic

Jeffrey A. Julius, Jan Grobbelaar, and Kaydee Kohlhepp

Scientech, a Curtiss-Wright Flow Control Company, Tukwila, WA, U.S.A.

The reliability of operator actions following an external initiating event is a topic that has increased importance following the 2011 seismic-induced tsunami at the Fukushima Daiichi site in Japan. This event has prompted licensees in the U.S.A., and internationally, to reexamine their plant’s risk profile and the plant’s ability to prevent and/or mitigate damage following external initiating events (external hazards). In support of the industry initiatives to evaluate and prepare for external initiating events, the Electric Power Research Institute and Scientech have developed a preliminary approach to analyze the reliability of operator actions following external initiating events, with a specific focus on seismic events. The preliminary approach has been published in EPRI 1025294, A Preliminary Approach to Human Reliability Analysis for External Events with a Focus on Seismic, in December 2012. Since the development of the 2012 report, the approach and methods suggested in the report have been applied in the development and in the review of seismic PRAs that are currently in development. This paper summarizes the development of the current external events human reliability analysis (HRA) methods and guidance, and summarizes recent insights from applying this approach to seismic PRAs.

Th07 Maintenance and Availability Modeling

10:30 Kona

Chair: Yail Jimmy Kim, University of Colorado Denver

572

Expected Maintenance Costs Model for Time-Delayed Technical Systems in Various Reliability Structures

Anna Jodejko-Pietruczuk, Sylwia Werbińska-Wojciechowska

Wroclaw University of Technology, Wroclaw, Poland

This article presents a mathematical model of the expected maintenance costs of two- and multi-unit systems in a single operation cycle (between the (i-1)th and ith inspection times), provided that at the beginning of the inspection cycle the system elements are of the same age and show no signs of forthcoming failure. The mathematical modelling of maintenance decisions for such a system is based on delay-time analysis. Moreover, the compatibility of the developed analytical model with a simulation model is examined. Directions for further research work are defined.
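For readers unfamiliar with delay-time analysis, the standard single-unit expected-cost-per-cycle bookkeeping is sketched below; the paper's two- and multi-unit model generalizes this, and the notation here is mine.

```latex
\begin{align}
  E[N_f(T)] &= \lambda \int_0^T F_h(T-u)\, \mathrm{d}u
    && \text{expected failures within an inspection interval } T,\\
  E[N_d(T)] &= \lambda T - E[N_f(T)]
    && \text{expected defects found and repaired at the inspection},\\
  C(T) &= c_i + c_f\, E[N_f(T)] + c_r\, E[N_d(T)]
    && \text{expected cost per cycle, with cost rate } C(T)/T,
\end{align}
where $\lambda$ is the defect arrival rate, $F_h$ the delay-time distribution, and
$c_i$, $c_f$, $c_r$ the inspection, failure, and defect-repair costs.
```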

394

Modeling the Reliability and the Performance of a Wind Farm Using Cyclic Non-Homogenous Markov Chains

Theodoros V. Tzioutzias, Agapios N. Platis, Vasilis P. Koutras

University of the Aegean Department of Financial and Management Engineering, Chios, Greece

Reliability issues concerning wind power installations are of prime interest due to the increasing production of electricity from wind energy around the world. Attaining the highest performance of a wind farm depends on two factors: wind intensity and mechanical failures. Wind intensity is related to the geographical characteristics of the farm, while mechanical failures are related to the reliability of the wind turbines. The latter issue is the one studied in this paper. In order to achieve higher performance levels when assuming the wind capacity as constant for a certain installation and a certain period, we can increase reliability by reducing mechanical failures and increasing repair rates. The objective of this paper is to model the impact of reliability on the overall performance of the wind farm. To this end, a Continuous Time Markov Chain (CTMC) and a Cyclic Non-Homogeneous Markov Chain (CNHMC) are used to model the system. CNHMCs are adopted in order to capture the periodicity of the wind intensity. The results of the two models are compared in order to identify which one better fits the characteristics of the system.
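The sketch below shows how a cyclic non-homogeneous chain of this kind can be integrated numerically for a single turbine, with a failure rate modulated over a one-year wind cycle; the two-state model, the sinusoidal modulation, and all rates are assumptions for illustration only.

```python
import numpy as np

mu = 1.0 / 72.0                       # repair rate (per hour), ~3 days mean repair time
lam0, lam_amp = 1.0 / 4000.0, 0.5     # base failure rate and seasonal amplitude
period_h = 8760.0                     # one-year cycle in hours

def Q(t):
    """Periodic generator matrix: state 0 = turbine up, state 1 = turbine down."""
    lam = lam0 * (1.0 + lam_amp * np.sin(2.0 * np.pi * t / period_h))
    return np.array([[-lam, lam],
                     [  mu, -mu]])

pi = np.array([1.0, 0.0])             # start with the turbine available
dt, availability = 1.0, []
for step in range(int(period_h)):
    pi = pi + dt * (pi @ Q(step * dt))   # forward-Euler Chapman-Kolmogorov step
    pi = np.clip(pi, 0.0, None); pi /= pi.sum()
    availability.append(pi[0])

print("mean annual availability:", np.mean(availability))
```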

273

Performance and Reliability of Bridge Girders Upgraded with Posttensioned Near-surface-mounted Composite Strips

Yail J. Kim (a), Jae-Yoon Kang, and Jong-Sup Park (b)

a) University of Colorado Denver, Denver, CO, USA, b) Korea Institute of Construction Technology, Ilsan, Korea

This paper deals with a research program concerning the performance and reliability of bridge girders strengthened with post-tensioned near-surface mounted (NSM) carbon fiber reinforced polymer (CFRP) composite strips. The advantages of CFRP application include non-corrosive characteristics, prompt execution on site, reduced maintenance expenses, favorable strength-to-weight ratio, and good chemical or fatigue resistance. NSM CFRP technologies are emerging in the infrastructure rehabilitation community because of several benefits such as enhanced bond performance and durability. For the present study, computational and analytical approaches are employed to examine the behavior of CFRP-strengthened girders, including 51 finite element models. Preliminary reliability analysis is conducted for evaluating the level of safety associated with the strengthened girders.

393

A Quantitative Method for Assessing the Resilience of Infrastructure Systems

Cen Nan (a,b), Giovanni Sansavini (b,c), Wolfgang Kröger (c) and Hans Rudolf Heinimann (a,c)

a) Land Using Group, ETH Zürich, Switzerland, b) Reliability and Risk Engineering, ETH Zürich, Switzerland, c) ETH Risk Center, ETH Zürich, Switzerland

Resilience is a dynamic, multi-faceted term that complements other terms commonly used in risk analysis, e.g., reliability, availability, and vulnerability. The importance of fully understanding system resilience and identifying ways to enhance it, especially for the infrastructure systems our daily life depends on, has been recognized not only by researchers but also by the public. During recent years, several methods and frameworks have been proposed and developed to explore applicable ways to assess and analyse system resilience. However, they are either tailored to specific disruptive hazards/events, developed mainly for systems other than technological ones, or fail to properly include all the phases, e.g., mitigation, adaptation, and recovery. In this paper, after defining the term, a generic quantitative method for the assessment of system resilience is proposed, which consists of two components: a hybrid modelling approach and an integrated metric for resilience quantification. The feasibility and applicability of the proposed method are tested using an electric power supply system as the exemplary system.
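One common way to turn such a metric into a number is to integrate a measure of performance (MOP) over the disruption-and-recovery window and normalize it by the target, as in the sketch below; the synthetic MOP trajectory and the specific ratio are assumptions, not the paper's integrated metric.

```python
import numpy as np

t = np.arange(0.0, 100.0, 1.0)            # hours
mop_target = np.ones_like(t)              # normalized target performance

# Synthetic disruption at t = 20 h with a linear recovery completed by t = 60 h.
mop = np.ones_like(t)
drop, t_hit, t_rec = 0.6, 20.0, 60.0
ramp = (t >= t_hit) & (t < t_rec)
mop[ramp] = 1.0 - drop * (1.0 - (t[ramp] - t_hit) / (t_rec - t_hit))

# Integrated resilience: delivered performance over target on the uniform grid.
resilience = np.sum(mop) / np.sum(mop_target)
print("integrated resilience over the window:", resilience)
```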

113

Use of Reliability Concepts to Support PAS 55 Standard Application to Improve Hydro Power Generator Availability

Gilberto F. M. de Souza (a), Erick M.P. Hidalgo (a), Claudio C. Spanó (b), and Juliano N. Torres (c)

a) University of São Paulo, São Paulo, Brazil, b) ReliaSoft Brasil, São Paulo, Brazil, c) AES Tietê, Bauru, Brazil

The electric power generation industry seeks high equipment availability to fulfill the requirements of regulatory agencies, contracts, and demand. The availability of electric power generation equipment is affected not only by equipment design characteristics but also by maintenance policies. The recently developed British Standard PAS 55 presents requirements to improve asset integrity, aiming at increasing availability and reducing safety risk. The present paper discusses the advantages of applying PAS 55 requirements to increase the operational availability of hydro power generators and their relation to the traditional reliability techniques used to develop equipment maintenance policies. An assessment is presented in which operational information, standards requirements, and the associated reliability analysis techniques are linked to the requirements of PAS 55. An application example is presented for a 30 MW hydro power generator, showing the advantages of applying reliability requirements to improve equipment availability.

Th12 Safety Assessment Software and Tools II

01:30 Kahuku

Chair: Rongxiang Hu, Chinese Academy of Sciences

341

Processing of Switching Events Sets in Reliability and Probabilistic Safety Assessment Program RiskA

Shanqi Chen, Jin Wang (a,b), Fang Wang (b), Liqin Hu, Yican Wu (a,b), FDS Team

a) University of Science and Technology of China, Hefei, Anhui, China, b) Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, China

Fault tree analysis and event tree analysis are the general methods for reliability and probabilistic safety assessment (PSA). In PSA programs, boundary conditions for fault tree and event tree analysis provide convenience in the modeling and analysis process. However, handling the logical value conflicts of the same events under different boundary conditions is one of the difficulties in PSA programs. A new event type named the Switching Event and a processing method for Switching Event Sets are introduced in this paper. The methods were implemented in RiskA, a reliability and probabilistic safety assessment program developed by the FDS Team, and their application to a real power plant model is also presented and discussed.

342

A New Reliability Allocation Method Based on FTA and AHP for Nuclear Power Plant

Boyuan Li (a,b), Rongxiang Hu (b), Jin Wang (a,b), Fang Wang (b), Shanqi Chen, Jiawen Xu (a,b), FDS Team

a) University of Science and Technology of China, Hefei, Anhui, China, b) Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, China

The reliability of Nuclear Power Plants (NPPs) has become an even greater concern in recent years. Protecting the public from the risk of NPPs by increasing reliability is the main goal of nuclear reliability study. In this paper, a new reliability allocation method combining Fault Tree Analysis (FTA) with the Analytic Hierarchy Process (AHP) is proposed to improve existing methods, most of which are not reasonable or efficient enough to allocate reliability for complex systems. With this method, objective computed results from FTA and subjective evaluations from experts are combined to make the allocation result more accurate and more reasonable.

208

Preventive Maintenance Optimization for Slovak Power Grid Using EOOS Risk Monitor

Pavol Hlavac, Zoltan Kovacs

RELKO Ltd., Bratislava, Slovak Republic

In recent years, deregulation of the electric industry has resulted in changes to the approach to generation and distribution of power. Previously, one power company would both generate and distribute power; now these functions are often performed by separate entities. The market has resulted in electric grids being operated in ways other than those for which they were originally designed, including various configurations and load levels. The deregulated electrical power market has already contributed to conditions that challenge the stability of the grid. The utilization of existing assets has increased, resulting in greater power transfers over longer distances. This has increased the loading of the transmission grid and also made local reliability more dependent on distant events. On the other hand, customer expectations of reliability are increasing and the consequences of power outages have never been greater. Even small weak points in the power transmission system might eventually lead to costly outages or trigger cascading failures that affect large regions. The traditional approach to electrical grid reliability is based on deterministic analyses. However, under the changed conditions this approach is not sufficient. A probabilistic approach (PSA) should be used, which can help to identify and correct potential weak points in the power system long before they trigger costly failures. Once a PSA model of the grid has been constructed, a risk monitor can be developed. This is a specific real-time analysis tool of the grid which can be used to determine the instantaneous risk based on the actual status of its systems and components. The paper describes the risk monitor developed for the Slovak electrical power system.

395

Supporting Tool for Cooperative Work Analysis Based on Distributed Cognition

Satoru Inoue (a), Stuart Moran (b), and Keiichi Nakata (c)

a) ATM Department, Electronic Navigation Research Institute, Tokyo, Japan, b) Mixed Reality Lab, University of Nottingham, Nottingham, UK, c) BISA, Henley Business School, University of Reading, Reading, UK

This paper aims to use a distributed cognition-inspired approach in an analysis of how teams work. The intention is to extract tacit knowledge from observations of a cognitive system, which can be derived from the identification of information trajectories and the grouping of agent actions/activities into specific abstract processes. For this purpose, we have developed a tool to support analysis from a distributed cognitive perspective, and here we present a prototype of the tool for assisting the analysis of cooperative work. This work is part of a study into the applicability of distributed cognition to cooperative tasks, with the objective of developing a systematic framework to represent relevant knowledge and expertise.

Th13 External Events Hazard/PRA Modeling for Nuclear Power Plants III

01:30 O'ahu

Chair: Takahiro Kuramoto, Nuclear Engineering, Ltd.

403

Seismic PRA for Kashiwazaki-Kariwa NPP

Keiichiro Saito, Masanori Takeuchi, Takashi Uemura, Yasunori Yamanaka

Tokyo Electric Power Company Inc, Tokyo, Japan

A seismic PRA was carried out in order to confirm the effectiveness of safety improvement measures at the Kashiwazaki-Kariwa NPS (KK-NPS), which are founded upon lessons learned from the Niigata-Chuetsu-Oki Earthquake (NCO) in 2007 and the 2011 off the Pacific coast of Tohoku Earthquake (the Tohoku Earthquake), as well as upon an understanding of our plant's vulnerability to earthquakes. The lessons learned from the Fukushima Daiichi (F1) accident and findings gathered from the Great East Japan Earthquake were reflected in both the hazard evaluation and the sequence evaluation of the seismic PRA. In this evaluation, we were able to confirm the effectiveness of the safety measures taken against plant vulnerabilities that had been identified before these measures were implemented. Additionally, our objective is to continually improve the level of safety by utilizing risk information, which also accounts for results from seismic and other PRAs, in order to assess effective countermeasures. Here, we also evaluate the findings extracted from this seismic PRA in studying how to "improve the accuracy of fragility evaluations of portable equipment" and "methods for implementing evaluation conditions for redundancy".

298

Development of Implementation Standard Concerning the Risk Evaluation Methodology Selection for the External Hazards

Takahiro KURAMOTO (a), Akira YAMAGUCHI (b), Yoshiyuki NARUMIYA (c), Takayuki OTA (d), Yutaka MAMIZUKA (a)

a) Nuclear Engineering, Ltd., Osaka, Japan, b) Osaka University, Osaka, Japan, c) The Kansai Electric Power Company, Osaka, Japan, d) The Kansai Electric Power Company, Fukui, Japan

Since the accident at the Fukushima Daiichi Nuclear Power Plant caused by the Great East Japan Earthquake in March 2011, there have been growing demands for assessing the effects of external hazards, including natural events such as earthquakes and tsunamis as well as external human-induced events, and for taking actions to address those hazards. The newly established Japanese regulatory requirements call for design considerations associated with external hazards. The primary objective of risk assessment for external hazards is to establish countermeasures against such hazards rather than merely to obtain risk figures. Therefore, applying detailed risk assessment methods, such as probabilistic risk assessment (PRA), to all external hazards is not always the most appropriate approach. Risk assessment methods can vary in type, including quantitative evaluation, hazard analysis (analyzing hazard frequencies or their influence), and margin assessment. The Risk Technical Committee under the Atomic Energy Society of Japan comprehensively identified the external hazards with the potential to lead to core damage, and has held discussion meetings to establish an implementation standard for the selection of assessment methods for risks associated with external hazards. The implementation standard is expected to be published in 2014. It will help in understanding plant safety against all external hazards and in establishing appropriate countermeasures against individual hazards. This paper describes the contents of the implementation standard concerning the selection of risk evaluation methodologies for external hazards, which is still under discussion, as well as the process of discussion by the Risk Technical Committee.

333

Technical Approach for Safety Assessment of Multi-Unit NPP Sites Subject To External Events

Sujit Samaddar, Kenta Hibino, Ovidiu Coman

International Atomic Energy Agency

This paper presents a framework and technical approach for conducting a probabilistic safety assessment of multi-unit sites against external events. The treatment of multiple hazards on a unit, interactions between units, the implementation of severe accident measures, human reliability, environmental conditions, risk metrics for both reactor and non-reactor sources, the integration of risks and responses, and many other important factors need to be addressed within the context of this framework. The framework facilitates the establishment of a comprehensive methodology that can be applied internationally to the peer review of safety assessments of multi-unit sites under the impact of multiple external hazards.

520

An Approach to Estimate the Compartment Fire Ignition Frequency for HTGR NPP Based on LWR Generic Data

Wei Wang, Jiejuan Tong, Chuan Li, Jun Zhao, and Tao Liu (a,b)

a) Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing, P.R. China, b) Key Laboratory of Advanced Reactor Engineering and Safety, Ministry of Education, Beijing, P.R. China

Task 6 of NUREG/CR-6850 provides a detailed approach for estimating the fire frequencies of compartments and scenarios in a Fire PRA. The approach is based on the historical data of light water reactor nuclear power plants and is suitable for operating plants. However, the lack of operational experience is an obvious issue for a new type of reactor such as the High Temperature Gas-cooled Reactor (HTGR), although NUREG/CR-6850 can still be used as the methodological reference. In this research, key issues for ignition frequency evaluation, which arose during the fire PRA development of the first HTGR nuclear power plant demonstration project in China (HTR-PM), are discussed first. Two methods are then developed to estimate compartment fire frequency for an HTGR plant: one based on plant-level information and the other based on component-level information. A real and typical compartment of HTR-PM (a switchgear room) is subsequently used as a pilot case to verify the two proposals.
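
To make the partitioning idea concrete, the following Python sketch (not from the paper) apportions generic plant-level ignition frequencies to a single compartment with weighting factors, in the spirit of the NUREG/CR-6850 Task 6 approach; the bins, weights and frequencies are hypothetical.

    # Illustrative sketch (not from the paper): partitioning generic plant-level
    # ignition frequencies to one compartment with weighting factors.
    # All bins, weights and frequencies below are hypothetical.

    generic_bins = {            # plant-level frequency per reactor-year
        "electrical_cabinets": 3.0e-2,
        "transients":          2.0e-2,
        "pumps":               1.0e-2,
    }
    location_weight = {         # fraction of each bin's sources in this compartment
        "electrical_cabinets": 0.10,
        "transients":          0.05,
        "pumps":               0.00,
    }

    compartment_freq = sum(generic_bins[b] * location_weight[b] for b in generic_bins)
    print(f"Compartment ignition frequency: {compartment_freq:.2e} per reactor-year")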

474

Study on Next Generation Seismic PRA Methodology (1): Program Plan and Proposal of New Mathematical Framework (Presentation Only)

Ken Muramatsu (a), Tsuyoshi Takada (b), Akemi Nishida (c), Tomoaki Uchiyama (d), Hitoshi Muta, Osamu Furuya, Sigeru Fujimoto (a), Tatsuya Itoi (b)

a) Tokyo City University, b) The University of Tokyo, c) Japan Atomic Energy Agency, d) CSA of Japan

This is the first of a series of papers on a study of next generation seismic probabilistic risk assessment (SPRA) methodology being conducted by the authors as a three-year project, "Reliability Enhancement of Seismic Risk Assessment of Nuclear Power Plants as Risk Management Fundamentals", which was started in 2012 and is funded by the Ministry of Education, Culture, Sports, Science & Technology (MEXT) of Japan. This paper gives an overview of the program plan and proposes a new mathematical framework for SPRA.

1. Program Plan. 1.1 Objective. Focusing on the uncertainty assessment framework and the utilization of expertise, and ultimately on developing relevant computer codes to improve the reliability of SPRA and to promote its further use, this project develops a methodology for quantifying the uncertainty associated with the final results of SPRA within the framework of risk management of NPP facilities. The following scope is set: 1) development of a framework of probabilistic models for uncertainty quantification and of computer codes; 2) aggregation of expert opinion on structure/equipment fragility estimation and development of implementation guidance on epistemic (modeling) uncertainty; 3) study of the applicability of SPRA to a model plant.

1.2 Expected goals. (1) Study on probabilistic models and treatment of epistemic uncertainty. a. Study on probabilistic models: reviewing the current status of procedures for assessing accident sequence occurrence probabilities in SPRA, this subprogram will develop a mathematical framework for estimating uncertainty in SPRA results in a more comprehensive way, taking into account uncertainty related to the correlation of component failures and to system modeling such as ET/FT (event tree/fault tree), uncertainties that have so far been difficult to quantify. b. Development of a computer code: the proposed mathematical framework will be built into SECOM2-DQFM, developed by JAEA, in order to make it possible to estimate accident sequence occurrence probabilities and their uncertainty. (2) Study on quantification of epistemic uncertainty in fragility assessment. a. Elicitation of expert opinion: as a first step, a survey of the literature on elicitation procedures and on the organization of multi-expert groups is performed. There are two different groups: experts in the field of buildings and soil/ground, and experts in the field of piping and equipment. These groups will conduct a pilot study on the use of expert judgment elicitation for the identification and quantification of parameters to be used in fragility analyses. b. Analytical study on buildings and soil/ground: a model plant is selected for seismic response analysis and sensitivity studies. The model plant is based on a hypothetical plant located on an actual ground site. In the first year of the project (FY2012), available plant information and observed records will be surveyed to construct a standardized light water reactor building model, from which CAD data for 3-D analysis, mesh data, and boundary condition data can be generated. This model will then be used in the second and third years for various calculations providing supporting information to the above expert groups, including comparisons of code predictions with observed building responses and sensitivity studies on the effects of uncertainty factors on the building response. The model will also be used to provide a set of building response data for use in component fragility analysis in the trial SPRA for a model plant.

Th14 Safety Management and Decision Making II

01:30 Waialua

Chair: Hossein Nourbakhsh, U.S. Nuclear Regulatory Commission

198

Oil & Gas Projects Alternative Selection using Analytic Hierarchy Process - A Case Study

Stefania Benucci (a) and Fabrizio Tallone (b)

a) Auriga Consulting s.r.l., Rome, Italy, b) Saipem, Fano, Italy

During the feasibility phase of Oil & Gas projects, several solutions are developed and, before the Board of Directors' sanction, Top Management needs to be informed about the most promising solution that could be selected to achieve the project goals. Since the design cannot be fully developed for all of the solutions, the alternative selection is normally carried out through a qualitative and relatively subjective analysis: typically the criticality ranking resulting from the Hazard Identification (HAZID). To overcome the uncertainties of this analysis, a more detailed and objective approach can be used: the Analytic Hierarchy Process (AHP). This paper describes how the best alternative among several offshore/onshore pipeline routes has been selected. The AHP has been applied just after the HAZID session to take advantage of the knowledge of design specialists belonging to different disciplines. This methodology allows different solutions to be compared comprehensively and the best project alternative to be selected mathematically from all technical points of view, providing clear justifications for the choice. Finally, through a simple "benefits to costs ratios" analysis, the costs, deliberately set aside until the benefits of the alternatives have been assessed, are also included in the analysis and the most promising solution can be identified.

434

The Role of Safety Professionals in Organizations – Developing and Testing a Framework of Competing Safety Management Principles

Teemu Reiman (a), and Elina Pietikäinen (b)

a) VTT Technical Research Centre of Finland, Espoo, Finland, b) VTT Technical Research Centre of Finland, Tampere, Finland

Safety professionals have a key role in influencing the safety of an industrial organization, yet relatively little research attention has been paid to this professional group. Many safety professionals apply the principles that underlie their field of technical expertise or refer to lay theories and folk models of human behavior. Recent conceptualizations of organizations as complex adaptive systems have posed further challenges to our understanding of safety professionals' work. What is the role of a safety professional in a system that is inherently unpredictable, as complex adaptive systems theories proclaim? In light of our increased understanding of the complexity and dynamics of safety-critical organizations, is there a need to rethink the role of safety professionals? The paper focuses on the underlying principles that safety professionals rely on in their work. The study design is a longitudinal study of nine safety professionals in three different safety-critical organizations. A model of eight distinct management principles is tested, and mechanisms that influence the formation of each professional's role are identified. The potential tensions between the different principles are discussed, as well as the mechanisms that influence which principles are emphasized and which are not.

475

Dealing with Beyond-Design-Basis Accidents in Nuclear Safety Decisions

Hossein P. Nourbakhsh

Office of Advisory Committee on Reactor Safeguards (ACRS), Nuclear Regulatory Commission, Washington, DC, USA

This paper presents an approach for dealing with beyond-design-basis accidents in nuclear safety decisions. The proposed approach integrates traditional deterministic and risk considerations and is based on the following key principles: (a) limiting the radiological consequences of higher frequency accident sequences by defining an extended design-basis accident category or a design enhancement category within the traditional beyond-design-basis regime; (b) controlling the total risk by risk-informed, cost-beneficial enhancements to safety; and (c) consideration of uncertainty and decision stakes (e.g., consequences) in addressing the necessity and sufficiency that must be imposed on the application of defense in depth. The feasibility of implementing the proposed approach is also discussed in this paper.

490

Dynamic Context Quantification for Design Basis Accidents List Extension and Timely Severe Accident Management

Emil Kostov (a,b) and Gueorgui Petkov (a)

a) Technical University, Sofia, Bulgaria, b) WorleyParsons, Sofia, Bulgaria

The complexity of nuclear power stations (NPS) makes them more difficult to control and increases the need for their comprehension and automation. Severe accidents (SA), besides manifesting deficiencies in the design basis, emergency plan preparedness, accident management, organization and safety culture, showed that a more elusive kind of dependence between expected and unexpected events exists. Safety investigations of complex installations are unavailing without a comprehensive evaluation of the human, organization and technology (HOT) context and without the development and improvement of methods and concepts for effective decision-making and for keeping the HOT balance. The paper presents the capabilities of the Performance Evaluation of Teamwork procedure for HOT context quantification during accidents, for retrospective monitoring and event analysis, which reveal the SA mechanisms and assess their probabilities, as well as the prospects for a systematic understanding of effect-cause relationships. SA context quantification is proposed as an additional criterion for extending the detailed list of credible postulated initiating events and for increasing the effectiveness of timely SA management. The context quantification is exemplified by the situations at the Fukushima Daiichi NPS and the Onagawa NPS after the Great East Japan Earthquake and by two SA simulations on a VVER-1000 MELCOR 1.8.5 model.

519

Incident Investigation on the basis of Business Process Model of Plant Lifecycle Engineering Activities for Process Safety Leading Metrics

Tetsuo Fuchino (a), Kazuhiro Takeda (b), and Yukiyasu Shimada (c)

a) Chemical Engineering Dept., Tokyo Institute of Technology, Tokyo, Japan, b) Applied Chemistry and Biochemical Engineering, Shimizu University, Hamamatsu, Japan, c) Chemical Safety Research Gr., National Institute of Occupational Safety and Health, Tokyo, Japan

Process safety incidents are directly caused by defects in protection layers, and a process safety management (PSM) system maintains the soundness of those protection layers. In general, it is said that weaknesses in the PSM system are identified from incident cases and that the performance of the PSM is improved through a PDCA cycle using process safety metrics. However, the PSM business process is embedded in the plant lifecycle engineering business process, so that even when a weakness of the PSM system is identified, the key engineering business process behind that weakness, and the corresponding metrics, have so far not been recognizable. To overcome this problem with process safety metrics, we propose a business process model based process safety incident investigation for process safety metrics.

Th15 Shipping and Offshore Oil & Gas II

01:30 Wai'anae

Chair: Luiz Fernando Oliveira, DNV GL Oil & Gas

182

What Inter-Organizational Factors are Related to Risk of Major Accidents in Offshore Operations?

Vibeke Milch, Karin Laumann and Gunhild B. Sætren

NTNU, Trondheim, Norway

The purpose of this paper is to present the ongoing research project "Inter-organizational complexity and risk of major accidents". In the petroleum industry, increasingly complex drilling conditions have resulted in a demand for more specialized services and an increase in the outsourcing of operations to external actors. The result is complex organizational systems consisting of multiple organizations in close collaboration, and operations that span several organizational boundaries. Inter-organizational complexity is an emerging trend in the industry; however, there has to date been little research on its effect on safety and the risk of major accidents. The main objective of this project is to expand our knowledge and understanding of safety issues in drilling and well processes that are characterized by high inter-organizational complexity. The project aims to investigate in what way safety issues are related to the formal and informal coordination of work in inter-organizational systems, the conditions for and safety effects of knowledge sharing within and between organizations, and the management and safety effects of inter-organizational industrial relations. Our hope is that the research will contribute new knowledge and new methods that can reduce risk and increase resilience in the petroleum industry.

484

PRA Application to Offshore Drilling Critical Systems

S. Massoud (Mike) Azizi

Reliability, System Safety and Specialty Engineering, Aerojet Rocketdyne – Extreme Engineering

The systems engineering approach used in the nuclear and aerospace industries typically incorporates Probabilistic Risk Assessment (PRA) to estimate risk with quantitative methods: what can go wrong, the likelihood of such events occurring, and the probable consequences. PRA thus provides insight into the strengths and weaknesses of a system's design, operation and maintenance strategy. For instance, in the nuclear industry PRA is traditionally used to estimate core damage and the potential consequences for the reactor, facilities, power grid, environment and public. A Space Shuttle and launch vehicle operations PRA would provide an estimated risk for operations on the ground and during the launch, on-orbit, re-entry and landing phases. A similar discipline to that applied in the nuclear and aerospace industries can also be applied to various "mission critical" onshore and offshore oil and gas drilling, exploration and production systems. This paper describes PRA methods as they could be applied to these systems and how the outcomes of such a discipline can benefit the system's design, operations and stakeholder interests.

487

Evolution of Offshore Safety in Brazil – Comparison with International Data

Luiz Fernando Oliveira, Flávio Luiz Diniz, and Jaime Eduardo Lima

DNV GL, Rio de Janeiro, Brazil

Offshore oil E&P activities in Brazil are undergoing a very fast expansion with the development of the pre-salt oil province, announced in 2007, together with other offshore oil fields in various parts of the Brazilian coast. In such a situation it is of paramount importance that safety conditions are given very high priority and that operational safety be conducted on par with the most advanced recommended practices exercised in other countries. The main objectives of this paper are to generate a picture of the evolution of safety in offshore exploration and production in Brazilian waters from its beginnings in the 1970s to the present day, and to compare the evolution of offshore safety in Brazil with that of other parts of the world. The results indicate that offshore safety conditions in Brazil have improved significantly within the last 15 years. Among other reasons, the creation of the ANP and its role as offshore safety regulator represents one of the most important factors behind this improvement. Some measures are proposed to further improve the offshore safety situation, especially in light of the predicted even faster expansion of offshore activities in the upcoming years.

150

Study on the Assessment Method for Results of Ship Maneuvering Training with the Simulator

Nobuo Mitomo (a), Fumiaki Takedomi, and Tadatsugi Okazaki (b)

a) Nihon University, Chiba, Japan, b) Tokyo University of Marine Science and Technology, Tokyo, Japan

This study addresses the evaluation of ship simulator training results. It proposes a method that clarifies the factors behind errors in ship simulator training for bay pilot trainees. In this method, an event tree is constructed from the behaviors of trainees during ship simulator training, and the branch points that lead to errors are then identified. The training results of 52 subjects were analyzed with the proposed method, and the causes of failure for 16 subjects were clarified.

222

Challenges and New Developments in Maritime Risk Assessment

Di Zhang (a,b)

a) Intelligent Transport Systems Research Center, Wuhan University of Technology, Wuhan, P.R. China, b) Engineering Research Center for Transportation Safety (Ministry of Education), Wuhan University of Technology, Wuhan, P.R. China

Concerns have been raised about navigational safety in China because throughput and the number of passing ships have increased rapidly during the past few years, while accidents such as collisions, groundings, capsizings, oil spills and fires have occurred repeatedly, causing serious consequences. Although risk assessment techniques such as Formal Safety Assessment (FSA) are acknowledged as a possible way to eliminate or reduce the number of accidents and are broadly used in the shipping industry around the world, certain challenges exist when an effective risk assessment is carried out. This paper addresses some of the obstacles in maritime risk studies, e.g. lack of data and fuzziness, by introducing a few recent case studies on the Yangtze River, China's largest and the world's busiest inland waterway. The paper aims to demonstrate how advanced methodologies can facilitate maritime risk assessment under high uncertainty.

Th16 SOARCA Uncertainty Analyses

01:30 Ewa

Chair: S. Tina Ghosh, U.S. Nuclear Regulatory Commission

438

SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Overview

S. Tina Ghosh (a), Patrick D. Mattie, Randall O. Gauntt, Nathan E. Bixler, Kyle W. Ross, Cedric J. Sallaberry, and Douglas M. Osborn (b)

a) Nuclear Regulatory Commission, Washington, DC, USA, b) Sandia National Laboratories, Albuquerque, NM, USA

This paper provides an overview of the uncertainty analysis of the accident progression, radiological releases, and offsite consequences for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. The SOARCA project (NUREG-1935) estimated the outcomes of postulated severe accident scenarios that could result in a release of radioactive material from a nuclear power plant into the environment. The SOARCA model was based on best practices for estimating the offsite consequences of important classes of events. SOARCA coupled deterministic 'best estimate' modeling of accident progression (i.e., reactor and containment thermal-hydraulic response and fission product transport), embodied in the MELCOR code, with modeling of offsite consequences in MACCS2. This uncertainty analysis presents the results of an integrated analysis of the epistemic parameter uncertainty associated with the accident progression and offsite consequence modeling. It supported the overall conclusions of the SOARCA project and provided some new insights.

439

SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Knowledge Advancement

Patrick D. Mattie, Nathan E. Bixler, Kyle W. Ross, Randall O. Gauntt, Douglas M. Osborn, Cedric J. Sallaberry, Jeffrey N. Cardoni, Donald A. Kalinich (a), and S. Tina Ghosh (b)

a) Sandia National Laboratories, Albuquerque, USA, b) U.S. Nuclear Regulatory Commission, Washington DC, USA

This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected inputs. Investigation of the important uncertain parameters in turn yields insights into the important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the effect of the cesium chemical form being dependent on the accident progression.

441

SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Convergence of the Uncertainty Results

Cedric J. Sallaberry, Douglas M. Osborn, Nathan E. Bixler, Aubrey C. Eckert-Gallup, Patrick D. Mattie (a), and S. Tina Ghosh (b)

a) Sandia National Laboratories, Albuquerque, USA, b) U.S. Nuclear Regulatory Commission, Washington DC, USA

This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
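
For readers less familiar with the two sampling schemes, the Python sketch below (not from the paper) contrasts Latin Hypercube Sampling with Simple Random Sampling on a toy two-parameter model, using three seeded replicates as in the convergence check; the model and sample sizes are purely illustrative.

    # Illustrative sketch (not from the paper): LHS vs. SRS on a toy model,
    # with three replicates per method as in the convergence study.
    import random

    def srs(n, rng):
        return [(rng.random(), rng.random()) for _ in range(n)]

    def lhs(n, rng):
        # One random point per equal-probability stratum in each dimension,
        # with the strata independently permuted per dimension.
        cols = []
        for _ in range(2):
            strata = [(i + rng.random()) / n for i in range(n)]
            rng.shuffle(strata)
            cols.append(strata)
        return list(zip(*cols))

    def toy_model(x, y):
        return x + 2.0 * y            # hypothetical consequence model

    for method in (srs, lhs):
        means = []
        for seed in (1, 2, 3):        # three replicates, different seeds
            rng = random.Random(seed)
            sample = method(1000, rng)
            means.append(sum(toy_model(x, y) for x, y in sample) / 1000)
        print(method.__name__, [round(m, 4) for m in means])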

443

SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Contributions to Overall Uncertainty

Nathan E. Bixler, Douglas M. Osborn, Joseph A. Jones, Cedric J. Sallaberry, Patrick D. Mattie (a), and S. Tina Ghosh (b)

a) Sandia National Laboratories, Albuquerque, NM, USA, b) Nuclear Regulatory Commission, Washington, DC, USA

This paper describes an uncertainty analysis based on a MELCOR Accident Consequence Code System (MACCS) evaluation of the offsite consequences for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. Four types of uncertainty are characterized in this analysis: that from the source term itself (radiological release, or Level-2 epistemic uncertainty); that from the influence of source term on offsite health risk (the influence of Level-2 epistemic uncertainties on Level-3 risk); that from a set of offsite consequence parameters reflecting state-of-knowledge uncertainties (Level-3 epistemic uncertainty); and that from the stochastic variability related to weather (Level-3 aleatory uncertainty). Each of these uncertainties contributes to the overall uncertainty in the estimation of health risk to the population surrounding the nuclear power plant. An important question is how much of the overall uncertainty comes from each of these sources. A second question is how the individual sources of uncertainty combine to form the whole. The answers to these two questions are evaluated in this paper. The paper also discusses the most important of the uncertain input parameters in terms of their influence on offsite health risk.

446

SOARCA Surry Power Station Uncertainty Analysis: Parameter Methodology and Insights

Joseph Jones, Douglas M. Osborn, Kyle W. Ross, Jeffrey N. Cardoni (a), S. Tina Ghosh (b)

a) Sandia National Laboratories, Albuquerque, NM, USA, b) U.S. Nuclear Regulatory Commission, Washington, DC, USA

The State-of-the-Art Reactor Consequence Analyses (SOARCA) project for the Peach Bottom Atomic Power Station (the pilot boiling-water reactor) and Surry Power Station (the pilot pressurized-water reactor) represents the most complex deterministic MELCOR analyses performed to date. Uncertainty analyses focusing on input parameter uncertainty are now under way for one scenario at each pilot plant. Analyzing the uncertainty in parameters requires technical justification for the selection of each parameter to include in the analyses and defensible rationale for the associated distributions. This paper describes the methodology employed in the selection of parameters and corresponding distributions for the Surry uncertainty analysis, and insights from applying the methodology to the MELCOR parameters.

Th17 Nuclear Engineering IV

01:30 Kona

Chair: Pavel Kudinov, Royal Institute of Technology (KTH)

21

Verification of PRA Results by Applications in Full Scale Simulators

Cilla Andersson (a) and Antanas Romas (b)

a) Ringhals AB, Väröbacka, Sweden, b) GSE Power Systems AB, Nyköping, Sweden

In this paper a new method to verify the results of PRA models and to find possible improvements that make the models more realistic is described. The main idea of the method, denoted MCS2SIM, is to apply the PRA results of a nuclear power plant as input to the full scale simulator used for the training of the operators at that plant. The PRA results, in the form of Minimal Cut Sets (MCS), are in most cases easy to translate into the equivalent malfunctions used in the simulator, since the level of detail and realism in PRA models and full scale simulators is often similar. In the simulator, the actual differential equations that model the physical systems and phenomena are solved, which provides detailed information about the effect of different failure combinations (MCS) on the plant. By comparing the assumed consequences in the PRA models with the consequences calculated in the simulator, and finding explanations for the observed differences, a comprehensive verification of both the PRA models and the simulators can be achieved. The method can also provide insights about possible improvements of the models and be used to design advanced training scenarios for the operators. At Ringhals units 1 (BWR) and 3 (PWR) a couple of pilot studies have been conducted by Ringhals AB in cooperation with KSU AB and GSE Power Systems AB. Ringhals provided results generated by the plant-specific PRA models, KSU supplied the full scale simulators and experienced simulator engineers who performed the tests, while GSE developed an approach to make the application of the MCS2SIM idea possible and organized the pilot studies. Both system failures based on fault tree analysis in the PRA models and more complex scenarios based on event tree models have been translated to malfunctions and studied in the simulators. The results, observations, conclusions and recommendations based on the outcomes of these pilot studies are presented in this paper. The main conclusion is that the pilot studies prove that the MCS2SIM method can give valuable insights about the PRA models that are otherwise difficult to find. The limitations of the method are the lack of automated tools for translating the MCS results into simulator malfunctions and for effective analysis of the simulation results, as well as the simulation time needed to run long-term scenarios, since the simulators are normally run in real time. However, these limitations can be resolved using modern technologies, as is further discussed in the paper.
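
As a purely illustrative aside, the kind of translation table the MCS2SIM idea relies on can be sketched in a few lines of Python; the basic event and malfunction identifiers below are hypothetical and are not taken from the Ringhals models or simulators.

    # Illustrative sketch (not from the paper): mapping PRA basic events in a
    # minimal cut set to simulator malfunction identifiers. All IDs hypothetical.

    event_to_malfunction = {
        "EDG-A-FAIL-TO-START":  "MF_EDG_A_FTS",
        "EDG-B-FAIL-TO-START":  "MF_EDG_B_FTS",
        "SW-PUMP-1-FAIL-TO-RUN": "MF_SWP1_TRIP",
    }

    def translate_cut_set(cut_set):
        """Return the simulator malfunctions for one minimal cut set and report
        any basic events without a known simulator equivalent."""
        malfunctions, unmapped = [], []
        for basic_event in cut_set:
            if basic_event in event_to_malfunction:
                malfunctions.append(event_to_malfunction[basic_event])
            else:
                unmapped.append(basic_event)
        return malfunctions, unmapped

    print(translate_cut_set(["EDG-A-FAIL-TO-START", "EDG-B-FAIL-TO-START"]))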

154

A Framework for Assessment of Severe Accident Management Effectiveness in Nordic BWR Plants

Pavel Kudinov, Sergey Galushin (a), Sergey Yakush (b), Walter Villanueva, Viet-Anh Phung, Dmitry Grishchenko (a), Nam Dinh (c)

a) Division of Nuclear Power Safety, Royal Institute of Technology (KTH), Stockholm, Sweden, b) Institute for Problems in Mechanics of the Russian Academy of Sciences, Moscow, Russia, c) North Carolina State University, Raleigh, NC, USA

In the case of a severe accident in a Nordic boiling water reactor (BWR), core melt is released into a deep pool of water located under the reactor. The severe accident management (SAM) strategy involves complex and coupled physical phenomena of melt-coolant-structure interactions that are sensitive to the transient accident scenario. The success of the strategy is contingent upon the melt release conditions from the vessel, which determine (i) whether the corium debris bed is coolable and (ii) the potential for energetic steam explosion. The goal of this work is to develop a risk-oriented accident analysis framework for quantifying conditional threats to containment integrity for a Nordic-type BWR. The focus is on the process of refining the treatment and components of the framework to achieve (i) completeness, (ii) consistency, and (iii) transparency in the review of the analysis and its results. A two-level coarse-fine iterative refinement process is proposed. First, fine-resolution but computationally expensive methods are used to develop computationally efficient surrogate models. Second, a coupled modular framework is developed connecting initial plant damage states with the respective containment failure modes. Systematic statistical analysis is carried out to identify the needs for refinement of the detailed methods, surrogate models, data and structure of the framework in order to reduce uncertainty and to increase confidence and transparency in the risk assessment results.
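
The coarse-fine idea can be illustrated with the following Python sketch (not from the paper): a cheap surrogate is fitted to a handful of runs of an "expensive" stand-in model and then exercised in a large scenario sweep; the model, its input variable and all numbers are assumptions made for illustration only.

    # Illustrative sketch (not from the paper): fit a cheap linear surrogate to a
    # few runs of an expensive fine-resolution model, then sample the surrogate.
    import random

    def expensive_model(melt_superheat):
        # Hypothetical stand-in for a fine-resolution melt-coolant calculation.
        return 1.5 + 0.02 * melt_superheat + 1.0e-4 * melt_superheat ** 2

    # Fit a linear surrogate y = a*x + b by least squares to four fine runs.
    xs = [50.0, 100.0, 150.0, 200.0]
    ys = [expensive_model(x) for x in xs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx

    # The surrogate is cheap enough for an exhaustive Monte Carlo sweep.
    rng = random.Random(0)
    loads = [a * rng.uniform(50.0, 200.0) + b for _ in range(100_000)]
    print("mean surrogate load:", sum(loads) / len(loads))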

156

A Plant’s Perspective on a Full Scope PSA Update

E.P. Roose, H.A. Schoonakker (a), J.L. Brinkman (b), and M.D. Quilici (c)

a) EPZ, Borssele, Netherlands, b) NRG, Arnhem, Netherlands, c) Scientech, Seattle, USA

NPP Borssele has had a full scope Living PSA model since 1995, covering internal events and internal and external hazards for Level 1, Level 2 and Level 3. After 20 years of working with and on this Living PSA, an IAEA IPSART mission was conducted on the PSA. This paper elaborates on the model changes made after the IPSART mission, their impact on the model, and how the changes relate to the use of the PSA.

166

Risk of Sloshing in the Primary System of a Lead-Cooled Fast Reactor

Marti Jeltsov, Walter Villanueva, and Pavel Kudinov

KTH Royal Institute of Technology, Stockholm, Sweden

Pool-type designs of the Lead-cooled Fast Reactor (LFR) aim for commercial viability through simplified engineering solutions and passive safety systems. However, such designs carry risks related to heavy-coolant sloshing in case of a seismic event. Sloshing can cause (i) structural damage due to fluid-structure interaction (FSI) and (ii) core damage due to void-induced reactivity insertion or local heat transfer deterioration. The main goal of this study is to identify the domain of seismic excitation characteristics at the reactor vessel level that can lead to exceedance of the safety limits for structural integrity and core damage. The reference pool-type LFR design used in this study is the European Lead-cooled SYstem (ELSY). Liquid lead sloshing is analyzed with Computational Fluid Dynamics (CFD). The outcome of the analysis is divided into two parts. First, different modes of sloshing depending on the seismic excitation are identified; these modes are characterized by wave shapes, loads on structures and entrapped void. In the second part we capitalize on the framework of Integrated Deterministic-Probabilistic Safety Assessment (IDPSA) to quantify the risk. Specifically, statistical parameters pertaining to mechanical loads and void transport are quantified and combined with the deterministically obtained data about consequences.

560

Analyzing Importance Measure Methodologies for Integrated Probabilistic Risk Assessment in Nuclear Power Plants

Tatsuya Sakurahara, Seyed Reihani, Mehmet Ertem, Zahra Mohaghegh (a), and Ernie Kee (b)

a) Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, IL, USA, b) YK.Risk, LLC, TX, USA

Importance Measures (IMs) are used to rank the risk-contributing factors in Probabilistic Risk Assessment (PRA). In this paper, existing IM methodologies are analyzed in order to select the most suitable IM for an Integrated PRA (IPRA) of nuclear power plants. In IPRA, the classical PRA of the plant is used, but specific areas of concern (e.g., fire, GSI-191, organizational factors, and seismic) are modeled in a simulation-based module, separate from the PRA, which is then linked to the classical plant PRA. With respect to modeling techniques, IPRA bridges the classical PRA and simulation-based/dynamic PRA. This paper compares local and Global Importance Measure (GIM) methodologies and explains the importance of GIMs for IPRA. It also demonstrates the application of GIM methodologies to illustrative examples and, after comparing the results, selects the CDF-based sensitivity indicator, Si(CDF), as an appropriate moment-independent GIM for IPRA. The results demonstrate that, because of the complexity and nonlinearity of IPRA frameworks, Si(CDF) is the best method to accurately rank the risk contributors. Si(CDF) can capture three key features: (1) the distribution of input parameters, (2) interactions among input parameters, and (3) the distribution of the model output.
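
For contrast with the global, moment-independent measures discussed in the paper, the short Python sketch below (not from the paper) computes two classical local importance measures, Birnbaum and Fussell-Vesely, for a toy two-out-of-three system with hypothetical failure probabilities.

    # Illustrative sketch (not from the paper): classical *local* importance
    # measures for a toy 2-out-of-3 system. All probabilities hypothetical.

    p = {"A": 1e-2, "B": 2e-2, "C": 3e-2}

    def system_failure(q):
        # Minimal cut sets of a 2-out-of-3 system: {A,B}, {A,C}, {B,C};
        # quantified with the rare-event approximation.
        return q["A"] * q["B"] + q["A"] * q["C"] + q["B"] * q["C"]

    base = system_failure(p)
    for comp in p:
        hi = system_failure({**p, comp: 1.0})   # component failed
        lo = system_failure({**p, comp: 0.0})   # component perfect
        birnbaum = hi - lo
        fussell_vesely = (base - lo) / base
        print(comp, round(birnbaum, 4), round(fussell_vesely, 3))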

Th21 External Events Hazard/PRA Modeling for Nuclear Power Plants IV

03:30 Honolulu

Chair: Curtis L. Smith, Idaho National Laboratory

539

Study on Next Generation Seismic PRA Methodology Part II: Quantifying Effects of Epistemic Uncertainty on Fragility Assessment

Akemi Nishida (a), Tsuyoshi Takada, Tatsuya Itoi (b), Osamu Furuya, and Ken Muramatsu (c)

a) Japan Atomic Energy Agency, Tokyo, Japan, b) University of Tokyo, Tokyo, Japan, c) Tokyo City University, Tokyo, Japan

This study, focusing on the uncertainty-assessment framework, the utilization of expertise, and the development of relevant software to improve the reliability of Seismic Probabilistic Risk Assessment (SPRA) and to promote its further use, develops a methodology for quantifying the uncertainty associated with the final results of SPRA within the framework of risk management of Nuclear Power Plant (NPP) facilities. The research aims to contribute to (1) the development of probabilistic models and software for uncertainty quantification; (2) the aggregation of expert opinions on structure/equipment fragility estimation and the development of implementation guidance on epistemic uncertainty; and (3) the study of the applicability of the newly proposed SPRA models to plant models. This paper focuses on the second goal. Two different groups of experts were used: those in the field of civil engineering and those in the field of mechanical engineering. With these groups, we conducted a pilot study on the use of expert-opinion elicitation for the identification and quantification of fragility assessment parameters. Sensitivity analysis was performed using a reactor-building model, and the results were provided to the experts for the expert-opinion elicitation.

582

Analyses of Severe Accident Sequences During Shutdown and Caused by External Hazards

Michael Kowalik, Horst Löffler, Oliver Mildenberger, Thomas Steinrötter

Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Köln, Germany

According to the German regulations for periodic safety reviews, each nuclear power plant is obliged to perform a Level 1 PSA for full power and shutdown operating conditions and for events caused by plant-external hazards. In contrast, a Level 2 PSA has to be performed only for full power operating conditions. The German regulatory body therefore supports a project with the objective of closing this gap in knowledge. First, a limited set of scenarios covering most of the relevant scenarios has to be identified with respect to the time scale of the expected physical effects, the pressure buildup in the containment and the source term. In order to calculate this set of scenarios with the computer code MELCOR, the plant has been modeled in a plant-specific input deck and some scenario-specific settings need to be defined. The scenarios will then be calculated with MELCOR and analyzed in detail with regard to the relevant physical effects, including core melting and the release of radionuclides. The results of the deterministic analyses will support the development of a probabilistic event tree approach and recommendations for the prevention and mitigation of such accidents.

258

Lessons Learned from the New Fire PRA Methodology (NUREG/CR-6850) Application in Korea under Fire Ignition Frequency Perspectives

Sung-Hyun Kim (a), Kwang-Nam Lee (b), and Hak-Kyu Lim (a)

a) KEPCO-E&C, Integrated Engineering Department, Korea, b) KEPCO-E&C, Power Engineering Research Institute, Korea

The objectives of a Fire Probabilistic Risk Assessment (PRA) are to estimate the contribution of in-plant fires to the overall plant Core Damage Frequency (CDF) and Large Early Release Frequency (LERF), to identify vulnerabilities, and to provide recommendations for reducing fire-induced plant risk. Risk due to internal fires has been one of the major concerns in the design and operation of nuclear power plants. So far, Korea has applied the Fire PRA Implementation Guide (EPRI TR-105928: FPRAIG) to conduct Fire PRA. In the meantime, NUREG/CR-6850, a current state-of-the-art method developed in a joint activity between the Electric Power Research Institute (EPRI) and the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES), was issued in August 2005. This paper compares fire ignition frequency analyses conducted separately with FPRAIG and NUREG/CR-6850 and presents lessons learned, from a fire ignition frequency perspective, from applying the newly developed Fire PRA methodology, NUREG/CR-6850. As a result, when applying NUREG/CR-6850 instead of the previous methodology, FPRAIG, the fire frequency for fixed ignition sources decreased, while the fire frequency for transient ignition sources increased.

462

Revision of the AESJ Standard for Seismic Probabilistic Risk Assessment (4): Accident Sequence Evaluation (Presentation Only)

Yasuhiro Iwaya (a), Ken Muramatsu (b), Katsunori Ogura (c)

a) CHUBU Electric Power Co., Inc., b) Tokyo City University, c) Japan Nuclear Energy Safety Organization

This paper is one of a series of four papers on the ongoing revision of the Japan Atomic Energy Society (AESJ) standard for the procedure of seismic probabilistic risk assessment (S-PRA) submitted to PSAM12, and it describes revisions of the chapter on accident sequence evaluation to reflect new insights and lessons learned from the experiences at the Fukushima Daiichi nuclear power station and other plants in Japan in the last five years.

1. Composition of the Chapter on Accident Sequence Evaluation. An overall and qualitative evaluation procedure for potential accident scenarios induced by seismic motion, as well as the identification of potential initiating events, is developed in another chapter. The chapter on "Accident Sequence Evaluation", which follows the overall and qualitative evaluation, is composed of the following six sections: (1) evaluation process, (2) setting of initiating events, (3) modeling of accident sequences, (4) modeling of systems, (5) quantitative evaluation of accident sequences, and (6) analysis of loss-of-containment-function scenarios. It has many appendices that provide more detailed descriptions of the requirements as part of the standard, as well as other supporting information such as examples of application of the required methodologies or suggested approaches for areas where methodologies are not yet well established.

2. Revision of the Chapter on Accident Sequence Evaluation. Important elements of the revision, including responses to new technical issues, are summarized in the following.
- Fuel damage in the spent fuel pool: a requirement to consider fuel damage in the spent fuel pool is added, and an example approach is provided for the identification and screening of potential fuel damage scenarios.
- Treatment of initiating events (IE) with failure of multiple systems: regarding the use of hierarchy event trees to simplify the classification of initiating events, the treatment of multiple initiating events caused by a seismic motion is discussed, as well as approaches to avoid overlooking important combinations of simultaneous system failures.
- Treatment of severe accident management procedures: requirements on the treatment of failure probabilities of severe accident management procedures are strengthened in order to properly consider the risk reduction potential of the severe accident management measures implemented after the Fukushima Daiichi accident.
- Requirements for sensitivity analysis on new technical issues: for new technical issues raised in recent years for which well-established methods are not available, sensitivity analyses are required in order to better understand the effects of such issues on the core damage frequency (CDF). Potential approaches are suggested in the appendices based on a survey of research on such issues. The issues explicitly required to be considered in the sensitivity analysis include the effect of aftershocks, the consideration of various multi-unit site effects (including the effect of correlation of seismic failures), and the effect of fault displacement at the plant site.

Th22 Crisis and Emergency Management

03:30 Kahuku

Chair: Stephen Hora, Center for Risk and Economic Analysis of Terrorism Events, USC

84

Crisis Organization and Severe Accident Management: Contribution of Ergonomic Considerations in the Definition of Severe Accident Management Guidelines (SAMG)

Violaine Bringaud, Jean-Paul Labarthe

Department of Industrial Risk Management, EDF Lab Clamart, France

This document presents an ergonomic study conducted in the framework of defining the organizational response to a crisis and the associated technical and documentary supports for the design of a new nuclear plant. In the event of an incidental or accidental situation occurring in an operating plant, the risk management approach is based on a local and corporate crisis management organization, the objective being to ensure that the situation at the plant is brought under control and to protect the people endangered by it. The more specific case of a severe accident is defined as a state of functioning with deterioration and possible loss of the plant; the probability of this type of accident occurring is extremely low. Nevertheless, these cases are taken into account at the design stage, and emergency simulation drills are organized to help prepare the staff to manage such situations. The first part of this document presents the crisis organization and the associated documentary supports (the SAMG) designed to manage a severe accident. The second part describes the ergonomic approach to the design of the SAMG and concludes with the value of such an approach in preparing the teams to manage a crisis in a complex and high-risk socio-technical system.

160

Disaster Context Modeling for the Creation of Exercise Scenarios

Taro KANNO, Wataru ONO, Shengxin HONG, and Kazuo FURUTA

The University of Tokyo, Tokyo, Japan

Disaster training and exercises are widely employed to improve preparedness for, and the ability to respond to, unprecedented natural and man-made disasters. While various types of drills and exercises, such as Serious Game, Disaster Imagination Game, and Cross Road, have been proposed, less attention has been paid to how to create effective exercise scenarios efficiently. This study develops a disaster context model that provides a foundation for creating new exercise scenarios and for describing what happened in actual past disasters. The study also develops a method for semi-automatically creating a new imaginary disaster context to be used as an assumption in an exercise scenario.

566

Bayesian Networks as a Decision Making Tool to Plan and Assess Maritime Safety Management Indicators

Osiris A. Valdez Banda (a), Maria Hänninen, Floris Goerlandt and Pentti Kujala (b)

a) Aalto University, Department of Applied Mechanics, Kotka Maritime Research Centre, Kotka, Finland, b) Aalto University, Department of Applied Mechanics, Espoo, Finland

Today, maritime safety management norms, self-assessment guides and frameworks demand and/or recommend the collection, reporting, and analysis of indicators to measure the safety performance of shipping companies. However, classic indicators only provide information about the specific activity being evaluated. In this paper, a new quantitative and qualitative option for jointly analyzing the performance of individual and collective indicators of a maritime safety management system is proposed. For this purpose, the dependencies between the quality of the most representative components of maritime safety management and the levels of their designated indicators are probabilistically estimated using a Bayesian network model and two expert views. Each component has one or more designated indicators which aim to identify practical values for the performance of that component. Based on the findings of this study, the implementation of the Bayesian network model seems to provide a unique decision support tool for planning and setting indicators, and also for evaluating the indicators' performance and their effect on the designated components. Furthermore, the use of the indicators in the model enables detection of their repercussions on other components of the evaluated safety management system, even when those components do not appear to be directly related.
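
The underlying inference step can be illustrated with a minimal two-node Bayesian network in Python (not the authors' model): the quality of one safety management component is linked to the level of its designated indicator, and the posterior on quality is obtained by enumeration; all probabilities are hypothetical rather than elicited expert values.

    # Illustrative sketch (not from the paper): a two-node Bayesian network with
    # posterior inference by enumeration. All probabilities are hypothetical.

    p_quality = {"good": 0.7, "poor": 0.3}
    # P(indicator level | component quality)
    p_indicator = {
        "good": {"high": 0.8, "low": 0.2},
        "poor": {"high": 0.3, "low": 0.7},
    }

    def posterior_quality(observed_level):
        joint = {q: p_quality[q] * p_indicator[q][observed_level] for q in p_quality}
        z = sum(joint.values())
        return {q: v / z for q, v in joint.items()}

    # P(component quality | its indicator is observed at a low level)
    print(posterior_quality("low"))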

Th23 Risk and Hazard Analyses III

03:30 O'ahu

Chair: James Lin, ABSG Consulting Inc.

217

Addressing Off-site Consequence Criteria Using PSA Level 3 -Enhanced Scoping Study

Anders Olsson, Andrew Caldwell, Malin Nordqvist (a), Gunnar Johansson (b), Carl Sunde, Jan-Erik Holmberg (c), and Ilkka Karanta (d)

a) Lloyd's Register Consulting, Stockholm, Sweden, b) ES-Konsult, Stockholm, Sweden, c) Risk Pilot, Stockholm, Sweden, d) VTT, Helsinki, Finland

Based on an inquiry from the Nordic PSA Group (NPSAG) and the Nordic Nuclear Safety Research group (NKS), a consortium of Swedish nuclear risk consultancies (Lloyd's Register Consulting, ES-Konsult and Risk Pilot) and the Finnish research institute VTT has begun a multi-year study of Probabilistic Off-site Consequences Analysis, commonly referred to as Level 3 Probabilistic Safety Assessment (Level 3 PSA). Level 3 PSA is infrequently performed and generally regarded as a less developed analysis when compared to Level 1 and Level 2 PSA. Interest in the Nordic countries has been spurred by new nuclear construction projects and plans. These activities have raised interest in objective, risk-based siting analyses for new nuclear reactors in order to better understand the risks of off-site consequences in the wake of the multi-unit disaster at the Fukushima Daiichi site. The objective of this study is to further develop understanding within the Nordic countries in the field of Level 3 PSA, in order to determine the scope of its application, its limitations, the appropriate risk metrics, and the overall need and requirements for performing a Level 3 PSA. The project's first year focused on the development and analysis of an industrial survey about Level 3 PSA, which included several workshops and meetings with Nordic utilities, regulators, and safety experts. Level 3 PSA risk metrics, including health, environmental, and economic effects, have been researched and discussed in the first year's project report. The project has generated significant interest internationally and has interfaced with international organizations including the IAEA and the American Nuclear Society. The long-term objective of the work is to set the foundation for performing a "state-of-the-art" Level 3 PSA for Nordic conditions.

568

A Unified Approach to PSA Accident Sequence Model Quantification

Donald J. Wakefield and James C. Lin

ABSG Consulting Inc. (ABS Consulting), Irvine, CA, USA

Existing fault tree linking models and large event tree linking models for nuclear power plants are so large that they challenge computer memory limits and/or require excessive run times to fully quantify at the frequency cut-offs required for convergence. In some software quantification tools the amount of frequency lost to the cut-off is not known, and in others the sheer size of the models becomes unwieldy. The conceptual approach described here is to make use of Monte Carlo simulation. The simulation treats a series of initiating event challenges to the logic model as given, and each challenge assesses whether the logic model end states are true or not. The logic model may be a single fault tree, a single event tree with branch probabilities, or a combination of fault trees and event trees. The outcomes of the challenges are tallied at the end of the simulation to obtain conditional end state probabilities, which are then combined with the initiating event frequencies to obtain accident sequence frequencies. Quantification cut-offs are not used in this approach and there are no restrictions on the use of NOT gates. Convergence of the Monte Carlo simulation would be the main issue.
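
A minimal Python sketch of the flavor of such a scheme is given below; it is not the authors' implementation, and the logic model, failure probabilities and initiating event frequency are hypothetical. Basic-event states are sampled per trial, the logic is evaluated without any cut-off, and end-state tallies are converted to conditional probabilities and multiplied by the initiating event frequency.

    # Illustrative sketch (not from the paper): Monte Carlo quantification of a
    # tiny accident sequence model. All probabilities and logic are hypothetical.
    import random

    p_fail = {"SYS_A": 1e-2, "SYS_B": 5e-2}
    ie_frequency = 1e-3            # initiating event frequency per year

    def end_state(states):
        # Hypothetical logic: core damage if SYS_A fails AND SYS_B fails
        # (NOT gates would pose no difficulty for this kind of evaluation).
        return "CD" if states["SYS_A"] and states["SYS_B"] else "OK"

    rng = random.Random(42)
    trials, tally = 200_000, {}
    for _ in range(trials):
        states = {be: rng.random() < q for be, q in p_fail.items()}
        es = end_state(states)
        tally[es] = tally.get(es, 0) + 1

    for es, count in tally.items():
        print(es, ie_frequency * count / trials, "per year")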

445

Earthquake Risk Perception: The Case of Mexico City

Tatiana Gouzeva, Galdino Santos-Reyes, and Jaime Santos-Reyes

SARACS Research Group, SEPI-ESIME, IPN, Mexico City, Mexico

Given the concerns of society in relation to natural hazards, the analysis of risk perception and communication nowadays plays an important role in the decision making of those in charge of, for example, Civil Protection. The analysis of risk perception and communication may be regarded not only as a presentation of the scientific calculations of risk, but also as a forum for discussion of broader ethical and moral concerns. The paper presents some preliminary findings of an ongoing research project on earthquake risk perception among the population of Mexico City. It is hoped that the results of the research project may help to understand, to some extent, the degree of knowledge of the study population in terms of earthquake risk perception and preparedness, so that the impact of earthquakes can be mitigated.

Th24 Digital I&C and Software Reliability IV

03:30 Waialua

Chair: James K. Knudsen, Idaho National Laboratory

574

Reliability Analysis of Core Protection Calculator System using Petri Net

Hyejin Kim (a), Jonghyun Kim (b)

a) KEPCO Nuclear Fuel, Daejeon-si, Korea, b) KEPCO International Nuclear Graduate School, Ulsan-si, Korea

As digital systems are introduced to nuclear power plants, issues related to the reliability analysis of these digital systems are being raised. One of these issues is that the static Fault Tree (FT) and Event Tree (ET) approach cannot properly account for dynamic interactions in digital systems, such as multiple top events, logic loops, and time delays. This study proposes an approach to analyzing the reliability of the Core Protection Calculator System (CPCS) using Petri Net (PN) modeling. The PN, one of the dynamic methodologies, allows modeling of event dependencies and interactions to represent the time sequence and delay times of dynamic events. This study applies the approach to the reliability analysis of the CPCS. Further studies of dynamic modeling methods and of the software in the digital system are required to analyze digital system modeling more fully. Models of digital systems should be realistic, accounting for the system characteristics, and able to predict system behavior.
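
As an illustration only (a two-place availability model with hypothetical failure and repair parameters, not the CPCS model itself), a timed Petri net can be simulated by playing the token game with stochastic and deterministic firing delays:

    # Minimal sketch of a timed Petri net simulation (illustrative, not the CPCS model).
    # Two places (Up, Down) and two timed transitions (fail, repair) are simulated to
    # estimate unavailability; the delays show how PNs capture time-dependent behavior.
    import random

    def simulate_pn(t_end=1e6, mttf=5000.0, repair_time=24.0, seed=1):
        rng = random.Random(seed)
        marking = {"Up": 1, "Down": 0}                  # initial marking: one token in "Up"
        t, downtime = 0.0, 0.0
        while t < t_end:
            if marking["Up"]:
                t += rng.expovariate(1.0 / mttf)        # stochastic firing delay of "fail"
                marking = {"Up": 0, "Down": 1}          # fire "fail": move the token
            else:
                t += repair_time                        # deterministic delay of "repair"
                downtime += repair_time
                marking = {"Up": 1, "Down": 0}          # fire "repair"
        return downtime / t

    if __name__ == "__main__":
        print("estimated unavailability ~", simulate_pn())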

492

Degradation Modeling and Algorithm for On-line System Health Management using Dynamic Hybrid Bayesian Network

Chonlagarn Iamsumang, Ali Mosleh, Mohammad Modarres

The Center for Risk and Reliability, University of Maryland College Park, Maryland, USA

This paper presents a new modeling method and computational algorithm for reliability inference with dynamic hybrid Bayesian networks. It features a component-based algorithm and structure to represent complex engineering systems characterized by discrete functional states (including degraded states) and by underlying physics-of-failure models with continuous variables. The methodology is designed to be flexible, intuitive, and scalable from small localized functionality to large complex dynamic systems. In System Health Management applications, this method introduces a well-defined interface between continuous system component status and discrete system functionality within the network model. Markov Chain Monte Carlo (MCMC) inference is optimized using pre-computation and dynamic programming for real-time monitoring of system health. The scope of this research includes a new modeling approach, a computational algorithm, and an example application for on-line System Health Management.
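
As a rough illustration of the hybrid idea (a continuous degradation variable driving a discrete functional state), the sketch below uses a plain forward Monte Carlo pass with assumed growth-rate and threshold parameters; it is not the authors' optimized MCMC inference:

    # Hybrid continuous/discrete degradation sketch (forward Monte Carlo only; assumed parameters).
    import math
    import random

    def p_degraded(x, threshold=1.0, sharpness=8.0):
        # Discrete functional state conditioned on the continuous degradation level x
        return 1.0 / (1.0 + math.exp(-sharpness * (x - threshold)))

    def forward_monte_carlo(n_steps=50, n_samples=20000, rate=0.02, noise=0.01, seed=0):
        rng = random.Random(seed)
        prob_degraded_over_time = []
        samples = [0.0] * n_samples                      # initial degradation level
        for _ in range(n_steps):
            # Physics-like degradation growth with stochastic noise
            samples = [x + rate + rng.gauss(0.0, noise) for x in samples]
            prob_degraded_over_time.append(sum(p_degraded(x) for x in samples) / n_samples)
        return prob_degraded_over_time

    if __name__ == "__main__":
        probs = forward_monte_carlo()
        print("P(degraded state) at final step ~", round(probs[-1], 3))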

453

Survivability Evaluation of Disaster Tolerant Cloud Computing Systems

Bruno Silva, Paulo Romero Martins Maciel (a), Armin Zimmermann (b) and Jonathan Brilhante (a)

a) Federal University of Pernambuco, Recife, Brazil, b) Ilmenau University of Technology, Ilmenau, Germany

A prominent type of cloud service is Infrastructure-as-a-Service (IaaS), which delivers computing resources on demand in the form of virtual machines (VMs) satisfying user needs. In such systems, penalties may be applied if the quality level defined in the service level agreement (SLA) is not satisfied. Therefore, high availability is a critical requirement of these systems. One strategy to protect such systems from natural or man-made disasters is to use multiple data centers located in different geographical locations to provide the service. In such systems, redundancy mechanisms can be adopted to receive copies of VM images during data center operation. Hence, whenever a disaster makes one data center unavailable, the VMs can be re-instantiated in another operational data center. Modeling techniques with a strong mathematical foundation, such as Stochastic Petri Nets (SPN), can be adopted to evaluate survivability in these complex infrastructures. This work presents SPN models to evaluate survivability metrics of IaaS systems deployed across geographically distributed data centers, taking disaster occurrences into account. Using the proposed models, IaaS providers can evaluate the impact of the VM transmission time and the VM backup period on survivability metrics. A case study is provided to illustrate the effectiveness of the proposed work.
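
The sketch below is a simple Monte Carlo stand-in (not an SPN model) showing how the VM backup period and VM transmission time could enter such survivability metrics; all rates and durations are assumed:

    # Illustrative Monte Carlo sketch: effect of backup period and VM transmission time
    # on recovery after a data-center disaster (assumed rates; not the paper's SPN models).
    import random

    def survivability_sketch(backup_period_h, vm_transmission_h, disaster_rate_per_h=1e-5,
                             n_disasters=5000, seed=7):
        rng = random.Random(seed)
        downtime, data_loss = [], []
        for _ in range(n_disasters):
            t_disaster = rng.expovariate(disaster_rate_per_h)   # time of a disaster at the primary site
            downtime.append(vm_transmission_h)                  # gap while VMs are re-instantiated elsewhere
            data_loss.append(t_disaster % backup_period_h)      # age of the most recent VM image copy
        return sum(downtime) / n_disasters, sum(data_loss) / n_disasters

    if __name__ == "__main__":
        for period in (1.0, 6.0, 24.0):                         # assumed backup periods (hours)
            mean_down, mean_loss = survivability_sketch(period, vm_transmission_h=0.5)
            print(f"{period:5.1f} h backups: downtime ~{mean_down:.2f} h, data-loss window ~{mean_loss:.2f} h")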

Th25 Nuclear Fuel Analysis

03:30 Wai'anae

Chair: Allan Hedin, Swedish Nuclear Fuel and Waste Management Co., SKB

98

Probability of Adventitious Fuel Pin Failures in Fast Breeder Reactors and Event Tree Analysis on Damage Propagation up to Severe Accident in Monju

Yoshitaka Fukano (a), Kenichi Naruto (b), Kenichi Kurisaka, and Masahiro Nishimura (a)

a) Japan Atomic Energy Agency, Tsuruga, Japan, b) NESI Inc., O-arai, Japan

Experimental studies, deterministic safety analyses, and probabilistic risk assessments (PRAs) of local fault (LF) propagation in sodium-cooled fast reactors (SFRs) have been performed in many countries, because LFs have historically been considered one of the possible causes of severe accidents. Adventitious fuel pin failures were considered the most dominant initiators of LFs in these PRAs because of their high frequency of occurrence during reactor operation and the possibility of subsequent pin-to-pin failure propagation. Therefore, an event tree analysis (ETA) of fuel element failure propagation initiated from an adventitious fuel pin failure (FEFPA) in the Japanese prototype fast breeder reactor Monju was performed in this study, based on state-of-the-art knowledge from experimental and analytical studies of FEFPA and reflecting the latest emergency operating procedures at Monju. The probability of adventitious fuel pin failures in SFRs, which is the initiating event of this ETA, was also updated in this study. The probability of FEFPA propagating to the peripheral sub-assemblies in Monju was quantified as 1.7×10⁻¹² based on this ETA. It was clarified that FEFPA in Monju is negligible and could be subsumed in the core damage fraction of anticipated transient without scram and protected loss of heat sink, from the viewpoint of both probability and consequence.
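
As a generic illustration of how an event tree quantification of this kind works (the numbers below are invented and are not the Monju values), a sequence frequency is the initiating event frequency multiplied by the failure probabilities of the successive event tree headings along the path:

    # Generic event tree sequence quantification (invented numbers, for illustration only).
    import math

    def sequence_frequency(initiating_event_freq, branch_failure_probs):
        # frequency of one end state = IE frequency x product of branch probabilities along the path
        return initiating_event_freq * math.prod(branch_failure_probs)

    if __name__ == "__main__":
        ie_freq = 1.0e-2                      # assumed initiating event frequency (per reactor-year)
        branches = [1.0e-3, 1.0e-4, 1.0e-3]   # assumed failure probabilities of successive headings
        print("sequence frequency ~", sequence_frequency(ie_freq, branches), "per reactor-year")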

91

License Application for a Spent Nuclear Fuel Repository in Sweden

Allan Hedin

Swedish Nuclear Fuel and Waste Management Co. (SKB), Stockholm, Sweden

The Swedish Nuclear Fuel and Waste Management Co., SKB, has applied for a license to build a final geological repository for spent nuclear fuel at the Forsmark site, situated around 70 miles north of Stockholm, Sweden. A key component in the license application is an assessment of the long-term safety of the repository. Probabilistic radionuclide transport and dose calculations are at the core of the analysis. The license application is currently (Spring 2014) under review by the Swedish Radiation Safety Authority, SSM, and a report from SSM to the Swedish Government is expected in 2015. This paper i) gives an overview of the probabilistic dose calculations of the safety assessment in the license application and ii) presents some new results related to the probabilistic calculations obtained after the completion of the assessment.

517

Current Research in Storage and Transportation of Used Nuclear Fuel and High-Level Radioactive Waste

Sylvia J. Saltzstein

Sandia National Laboratories, Albuquerque, New Mexico, USA

Through the Department of Energy (DOE) Office of Nuclear Energy (NE) Used Fuel Disposition Campaign (UFDC), numerous institutions are working to address issues associated with the extended storage and transportation of used nuclear fuel. In 2012, this group published a technical analysis that identified technical gaps that could be addressed to better support the technical basis for the extended storage and transportation of used nuclear fuel. This paper summarizes some of the current work being performed to close some of those high-priority gaps. The areas discussed include: (1) developing thermal profiles of waste storage packages, (2) investigating the stresses experienced by fuel cladding and how they might affect cladding integrity, (3) understanding the real environmental conditions that could lead to cask stress corrosion cracking, (4) quantifying the stress and strain fuel assemblies experience during normal truck transport, and (5) performing a full-scale, ten-year confirmatory demonstration of dry cask storage. Data from these R&D activities will help close important technical gaps and allow us to better assess the risks associated with extended storage and transportation of used nuclear fuel.

Th26 Safety Integrity Level (SIL)

03:30 Ewa

Chair: Mohammad Pourgol-Mohammad, Sahand University of Technology

418

Modified-LOPA; a Pre-Processing Approach for Nuclear Power Plants Safety Assessment

Seyed Mohsen Gheyasi, Mohammad Pourgol-Mohammad

Sahand University of Technology, Tabriz, Iran

Risk and safety assessment are important subjects in modern industries, and different methods have been proposed for the safety and risk evaluation of highly hazardous facilities. Risk assessment methods are classified into three main groups: qualitative, semi-quantitative, and quantitative. The methodology is selected depending on the scope and objective, the level of detail, and the requirements. Nuclear facility regulations require a more detailed assessment of system safety: the regulatory body requires the use of probabilistic risk assessment (PRA) for the appraisal of the design, modifications, and operation of nuclear power plants. This method is usually very complicated, expensive, and time-consuming, and the significant resources needed to complete a PRA project are in some cases not justified for a preliminary safety evaluation. Simpler methods can be used for preliminary evaluation, as a pre-processor, to quickly assess the situation (especially in an operating nuclear power plant). Layer of Protection Analysis (LOPA) is one of the more powerful risk analysis methods. It is a semi-quantitative approach widely used in the chemical process industry. It is not a competitive alternative to full quantitative risk analysis methods for nuclear facilities such as PRA; however, it is a simpler and less expensive methodology than full probabilistic risk assessment. It evaluates the probability of failure on demand of the safety systems and the resulting consequences, and it is a practical technique for early and quick risk assessment in many other industries. If LOPA is to serve as a risk-evaluation pre-processor for nuclear systems, however, it requires some modifications to its methodological structure. This research examines the use of the LOPA method for nuclear systems as an order-of-magnitude evaluation of the safety status. The conventional LOPA method requires some essential modifications, especially in its scenario development and quantitative calculations, to make it a suitable approach for nuclear systems. The resulting modified layer of protection analysis (Modified-LOPA) methodology is based on improving several features of conventional LOPA. Changes are made to the classic LOPA method by using the event tree method and Bayesian logic. Since both LOPA and the event tree method use defined scenarios to represent accident paths, scenario development in the modified method is completed using the event tree method. Estimates of the initiating event frequency and of the probability of failure on demand (PFD) of the independent protection layers (IPLs) are then updated by a Bayesian approach, which increases the reliability of the results by combining plant-specific data with generic data from similar industries. In this paper, the "Modified-LOPA" method is proposed as a primary tool for quick hazard analysis, risk assessment, and risk-based decision making in nuclear systems. It is more accurate than conventional LOPA, but it is not a complete substitute for a full PRA of nuclear systems. A simple example of a fire protection system demonstrates the application of the method, and the results are compared with those of a PRA approach.
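
A minimal sketch of the kind of conjugate Bayesian updating commonly used for this purpose (gamma-Poisson for an initiating event frequency, beta-binomial for an IPL's PFD) is shown below; the priors and plant data are assumed for illustration and are not taken from the paper:

    # Conjugate Bayesian updating sketch (illustrative priors and data; not the authors' numbers).
    def update_frequency(prior_alpha, prior_beta, n_events, exposure_time):
        """Gamma-Poisson update for an initiating event frequency (events per year)."""
        alpha, beta = prior_alpha + n_events, prior_beta + exposure_time
        return alpha / beta                       # posterior mean frequency

    def update_pfd(prior_a, prior_b, n_failures, n_demands):
        """Beta-binomial update for the probability of failure on demand of an IPL."""
        a, b = prior_a + n_failures, prior_b + (n_demands - n_failures)
        return a / (a + b)                        # posterior mean PFD

    if __name__ == "__main__":
        # Generic-industry prior of ~0.1/yr (alpha=1, beta=10), plus 1 plant event in 30 years:
        print("posterior IE frequency ~", round(update_frequency(1.0, 10.0, 1, 30.0), 3), "per year")
        # Generic PFD prior of ~1e-2 (a=1, b=99), plus 0 failures in 200 plant demands:
        print("posterior PFD ~", round(update_pfd(1.0, 99.0, 0, 200), 4))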

388

Uncertainty Analysis for Target SIL Determination in the Offshore Industry

Sungteak Kim, Kwangpil Chang, Younghun Kim, and Eunhyun Park

Hyundai Heavy Industries, Yongin, Korea

Requirements for the design of Safety Instrumented Systems (SIS) based on Safety Integrity Level (SIL) have been developed continuously in the offshore industry. In particular, IEC 61508 and IEC 61511 describe various methodologies, such as the risk graph and the hazard matrix, for determining a target SIL for a specified safety function. These methods can yield different target SILs for the same safety function; model uncertainty is likely the main cause of this result. In addition, since the various methods require many input parameters, parameter uncertainties also contribute variance to the target SIL. In the offshore industry, engineers usually apply two or more methods simultaneously to assess the target SIL for the same function and adopt the more conservative value from the results. This conservatism keeps the system safe, but it can sometimes be overly conservative, leading to the installation of excessive safety systems. To support better decision-making, this article identifies the uncertainty factors in determining target SILs and evaluates the effects of these uncertainties on the target SILs. Case studies have been performed for practical systems used in the offshore industry.
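
As an illustration of how parameter uncertainty can spread the outcome across SIL bands, the sketch below samples an assumed demand frequency, derives the required PFD against an assumed tolerable hazard frequency, and tallies the resulting target SIL; the numbers are hypothetical and the mapping uses the IEC 61508 low-demand bands:

    # Illustrative propagation of parameter uncertainty into a target SIL (all numbers assumed).
    import math
    import random

    def target_sil(required_pfd):
        # IEC 61508 low-demand SIL bands for the average PFD the safety function must achieve
        if required_pfd >= 1e-1: return "no SIL required"
        if required_pfd >= 1e-2: return "SIL 1"
        if required_pfd >= 1e-3: return "SIL 2"
        if required_pfd >= 1e-4: return "SIL 3"
        return "SIL 4"

    def sil_distribution(n=20000, seed=3):
        rng = random.Random(seed)
        counts = {}
        for _ in range(n):
            demand_freq = rng.lognormvariate(math.log(0.1), 0.5)  # unmitigated demand frequency (/yr), assumed
            tolerable = 1e-4                                       # tolerable hazard frequency (/yr), assumed
            band = target_sil(tolerable / demand_freq)             # required PFD -> target SIL band
            counts[band] = counts.get(band, 0) + 1
        return {band: round(c / n, 3) for band, c in counts.items()}

    if __name__ == "__main__":
        # Typically shows a split between adjacent SIL bands driven by the sampled demand frequency
        print(sil_distribution())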

581

Using Fault Trees to Analyze Safety-Instrumented Systems

Joseph R. Belland

Isograph, Inc., Irvine, USA

Safety-instrumented systems are protection functions frequently seen in automotive, chemical processing, and oil and gas refining systems. These functions are designed to engage in case a hazardous condition arises and mitigate any potentially catastrophic consequences. Because of the potential for loss of life or other safety-related risks related to these systems, safety-instrumented systems usually have a very strict reliability requirement. Fault Tree analysis is a method of analyzing a system to determine its reliability and identify weak points. This method uses a qualitative and quantitative approach that graphically shows how component failures logically combine to create system failures, and quantifies the system failure probability using failure rate data from component failures. Due to its powerful and flexible nature, Fault Tree analysis is an ideal method for analyzing safety-instrumented systems to determine if they are meeting their reliability goals, to find weak points in the design, or for focusing maintenance efforts. Fault Trees may also be used to determine the spurious trip rate of the safety system, that is, how frequently the safety system will engage unnecessarily. This paper will provide a guide to using Fault Tree analysis software for these purposes.
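
A minimal sketch of the underlying quantification (a rare-event approximation over hypothetical minimal cut sets, with invented failure data) might look like this:

    # Minimal cut set quantification sketch; the structure and rates are hypothetical,
    # not taken from any specific safety-instrumented system design.
    def pfd_from_cut_sets(cut_sets, basic_event_pfd):
        """Sum of products over minimal cut sets (rare-event approximation)."""
        total = 0.0
        for cs in cut_sets:
            product = 1.0
            for event in cs:
                product *= basic_event_pfd[event]
            total += product
        return total

    if __name__ == "__main__":
        # TOP = {sensor} OR {logic_solver} OR {valve_a AND valve_b}  (hypothetical 1oo2 valves)
        cut_sets = [("sensor",), ("logic_solver",), ("valve_a", "valve_b")]
        pfd = {"sensor": 5e-3, "logic_solver": 1e-4, "valve_a": 2e-2, "valve_b": 2e-2}
        print("system PFD ~", pfd_from_cut_sets(cut_sets, pfd))
        # A spurious trip rate can be estimated the same way from a 'safe failure' fault tree,
        # summing safe-failure rates over its (mostly single-event) cut sets:
        safe_rates_per_year = {"sensor": 0.1, "logic_solver": 0.02, "valve_a": 0.05, "valve_b": 0.05}
        print("spurious trip rate ~", sum(safe_rates_per_year.values()), "per year")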

Th27 Safety Culture and Human & Organizational Factors

03:30 Kona

Chair: Justin Pence, Argonne National Laboratory

55

Are Cognitive and Organizational Human Factors Missing From the Blunt End in the Oil and Gas Industry?

Stig O. Johnsen

SINTEF, Trondheim, Norway

The area of human factors (HF) has been a focus for increasing safety and quality of operation in industry. HF covers three main domains: physical ergonomics, cognitive factors, and organizational factors. HF has often been identified among the root causes of accidents, accounting for 40-90% in different industries. The International Association of Oil & Gas Producers (OGP) prioritized "more attention paid to HF" as one of four issues after the Macondo disaster in 2010. However, in several projects the HF focus on cognitive factors and organizational factors is missing at the blunt end (i.e., the early phase). HF is conceptualized as physical ergonomics (work environment and layout). In addition, layout is often based on prior experience and not on new ways of operation. There is poor systematic HF education, few HF experts in user organizations, and limited HF research to improve safety. We propose that HF must focus on cognitive factors and organizational factors early, from the blunt end. Organizations should have local HF competence. External validation and verification by certified HF experts should be performed early. A simple set of HF guidelines and standards should be used to ensure HF focus, in addition to continued regulatory focus and review.

326

Achieving a Total Safety Culture Through Behavior Based Safety, Establishing and Maintaining an Injury Free Culture

NJF van Loggerenberg

University of South Africa, Pretoria, South Africa

Historically, the focus in industry has been on improving safety by addressing the work environment and identifying, eliminating, and mitigating hazards and risks. Industry has, however, reached a plateau regarding safety and safe procedures. It is therefore essential to focus on directives to establish a total safety culture through behavior-based safety. The existence of a safety culture is evident when employees believe that safety is a value in life and the employer enhances employee safety ownership. Positive and negative reinforcement can encourage safe behaviors. Behavior-based safety aims at increasing safety in industry by positively influencing the behavior of all stakeholders involved. In focusing on a total safety culture, it is important also to focus on behavior and to understand the ABC model. Behavior-based safety includes the four steps of the improvement process (define, observe, intervene, and test), which empower a shift in the employee safety culture from bad to good behavior. An injury-free culture requires the reconsideration of safety activities and the engagement of employees. Creating a workplace with an overall injury-free culture that includes safety and empowers employees can only be done by establishing and maintaining a total safety culture.

401

Organising Human and Organisational Reliability

Pierre Le Bot and Hélène Pesme

EDF Lab, Clamart, France

Human Reliability Analysis (HRA) and the High Reliability Organising (HRO) approach are two major trends theorising the design, monitoring and improvement of safety in high-risk industries such as nuclear power generation. Human Reliability is increasingly requested in current design projects for new reactors or for the renovation of existing reactors, in order to incorporate human factors and technical constraints for safety. Based on our observations on simulators and on accident analyses, we used the MERMOS method to illustrate how the human failures in reactor operation assessed by Human Reliability for the Probabilistic Safety Assessments (PSA) need to be analysed at an organisational level. An absence of robustness (execution errors), a lack of anticipation (design flaws) or a failure in organisational resilience (lack of reconfiguration based on a new context) generates situations in which safety is threatened. Failure is sure to arise where an organisation is not sufficiently adapted to these situations (lack of recovery). We modelled this logic with the Model of Resilience in Situation (MRS), which underpins MERMOS. In this paper, we will show how the MRS can be linked to the HRO mindset and how the resulting Human Reliability approach can contribute to High Reliability Organising at the human and organisational level.

432

On the Relation Between Culture, Safety Culture and Safety Management

Teemu Reiman (a), Carl Rollenhagen (b) and Kaupo Viitanen (a)

a) VTT Technical Research Centre of Finland, Espoo, Finland, b) Royal Institute of Technology, Stockholm, Sweden

Safety can be considered an emergent phenomenon, making a systems view imperative if the aim is to evaluate or develop the safety of an entire sociotechnical system. This paper deals with one important component of the systems view – the relation between culture and management. Specifically, we will inspect how the concepts of culture and safety culture can be used in conjunction with the concept of safety management in facilitating a more dynamic systems view on safety. The paper proposes a model of eight cultural archetypes and illustrates how these relate to both safety culture and safety management in organizations.

549

Toward Monitoring Organizational Safety Indicators by Integrating Probabilistic Risk Assessment, Socio-Technical Systems Theory, and Big Data Analytics

Justin Pence (a), Zahra Mohaghegh (a), Cheri Ostroff (b), Ernie Kee (c), Fatma Yilmaz (d), Rick Grantom (e), and David Johnson (f)

a) Department of Nuclear, Plasma, and Radiological Engineering, University of Illinois at Urbana-Champaign, Urbana, USA, b) University of South Australia, Adelaide, Australia, c) YK.risk, LLC, Bay City, USA, d) South Texas Project Nuclear Operating Company, Bay City, USA, e) C.R. Grantom PE & Assoc. LLC, West Columbia, USA, f) ABS Consulting, Irvine, USA

Many catastrophic accidents have organizational factors as key contributors; however, current generations of Probabilistic Risk Assessment (PRA) do not include a comprehensive representation of the underlying organizational failure mechanisms. This paper reports on the current status of new research with the idealistic goal of monitoring organizational safety indicators. Because of the evolving nature of computational power and information-sharing technologies, ‘Internet of Things’ has been adopted as a metaphor to describe the authors’ vision for combining multiple levels of organizational analysis into a real-time application for monitoring the changing landscape of risk. The short-term objectives are: (1) identifying the organizational root causes of failure and their paths of influence on technical system performance, utilizing theoretical models of social phenomena, (2) quantifying the models using advanced measurement and predictive methodologies, big data analytics, and uncertainty analysis, and (3) proposing preventive approaches. Socio-Technical risk analysis deals with wide-ranging, incomplete, and unstructured data. Therefore, this research focuses on developing hybrid predictive technologies for PRA that are not only grounded on Socio-Technical Systems theory, but also serve to expand the classical approach of data extraction and execution for risk analysis by incorporating techniques such as text mining, network data analytics, and data curation.