Search results for: hazard function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5410

5350 A Compressor Map Optimizing Tool for Prediction of Compressor Off-Design Performance

Authors: Zhongzhi Hu, Jie Shen, Jiqiang Wang

Abstract:

A high precision aeroengine model is needed when developing the engine control system. Compared with the other main components, the axial compressor is the most challenging component to simulate. In this paper, a compressor map optimizing tool based on the introduction of a modifiable β function is developed for FWorks (FADEC Works). Three parameters (d, density; f, fitting coefficient; k₀, slope of the line β=0) are introduced to the β function to make it modifiable. The traditional β function and the modifiable β function are compared for a certain type of compressor. The interpolation errors show that both methods meet the modeling requirements, while the modifiable β function can predict compressor performance more accurately in the areas of the compressor map that are of particular interest to users.
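The purpose of a β coordinate is to make map interpolation single-valued where speed lines are nearly vertical. A minimal sketch of β-line interpolation (with a hypothetical smooth map surface, not the FWorks tool or the paper's modifiable β parameterization) might look like:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy compressor map tabulated on a regular (speed, beta) grid.
# beta is an auxiliary coordinate: each (N, beta) pair maps to a
# unique (mass flow, pressure ratio) point, which keeps interpolation
# single-valued even where speed lines are nearly vertical.
N = np.linspace(0.7, 1.0, 4)          # corrected speed (normalized)
beta = np.linspace(0.0, 1.0, 5)       # auxiliary beta coordinate
NN, BB = np.meshgrid(N, beta, indexing="ij")

# Hypothetical smooth map surfaces (placeholders for real rig data).
wc = 20.0 * NN + 5.0 * BB             # corrected mass flow
pr = 1.0 + 3.0 * NN**2 + 0.8 * BB     # pressure ratio

f_wc = RegularGridInterpolator((N, beta), wc)
f_pr = RegularGridInterpolator((N, beta), pr)

# Off-design query: interpolate at a speed/beta pair not in the table.
point = np.array([[0.85, 0.4]])
wc_q = f_wc(point)[0]
pr_q = f_pr(point)[0]
```

The interpolation error the abstract refers to is the difference between such interpolated values and the true map surface between tabulated points.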

Keywords: beta function, compressor map, interpolation error, map optimization tool

Procedia PDF Downloads 242
5349 Evaluation of Patients’ Quality of Life After Lumbar Disc Surgery and Movement Limitations

Authors: Shirin Jalili, Ramin Ghasemi

Abstract:

Lumbar microdiscectomy is the most commonly performed spinal surgery; it is typically performed to relieve the signs and symptoms of sciatica in the lower back and leg caused by a lumbar disc herniation. This surgery aims to improve leg pain, restore function, and enable a return to ordinary daily activities. Rates of lumbar disc surgery show significant geographic variation, suggesting differing treatment criteria among operating surgeons. Few population-based studies have investigated the hazard of reoperation after disc surgery, and regional or inter-specialty variations in reoperation rates are unknown. The conventional approach to recovering from lumbar microdiscectomy has been to restrict bending, lifting, or twisting for at least 6 weeks in order to prevent the disc from herniating again. Traditionally, patients were advised to limit post-operative activity, which was believed to decrease the hazard of recurrent disc herniation and progressive instability. In modern practice, many surgeons do not restrict patients' postoperative activity, as this practice is perceived to be unnecessary. There is a lack of studies reporting outcomes, by means of different scores or parameters, after surgery for recurrent disc herniations of the lumbar spine at the initial herniation site. This study will evaluate the quality of life after surgical treatment of recurrent herniations using different standardized, validated outcome instruments.

Keywords: post-operative activity, disc, quality of life, treatment, movements

Procedia PDF Downloads 62
5348 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study

Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy

Abstract:

Life expectancy in the United Kingdom (UK) has been near constant since 2010, particularly for individuals aged 65 years and older. This trend has also been noted in several other countries. This slowdown in the increase of life expectancy was concurrent with an increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, as well as differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate the all-cause mortality hazards and related life expectancies among type 2 diabetes (T2DM) patients in the UK using the time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 (30.8% T2DM, 57.6% males) individuals aged 50 years and above, born between 1930 and 1960, inclusive, and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) and 1.38 (1.307-1.457) when diagnosed between 50 to 59 years and 60 to 74 years, respectively. The estimated life expectancies among T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort) and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. However, the 1950-1960 birth cohort had a steeper hazard function compared to the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics.
The steeper mortality hazard slope for the 1950-1960 birth cohort might indicate the sub-population contributing to a slowdown in the growth of the life expectancy.
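A minimal numerical sketch of how a proportional-hazards Gompertz model translates a hazard ratio into a life-expectancy gap (toy baseline parameters, not the paper's fitted THIN estimates):

```python
import numpy as np

# Sketch of a proportional-hazards Gompertz survival model. Under a
# Gompertz baseline hazard h0(t) = a*exp(b*t), the cumulative hazard
# is H0(t) = (a/b)*(exp(b*t) - 1); a Cox hazard ratio HR scales it.
a, b = 0.01, 0.09          # hypothetical baseline parameters at age 60
HR = 1.467                 # reported T2DM hazard ratio (50-59 diagnosis)

t = np.linspace(0.0, 60.0, 6001)      # years after age 60
H0 = (a / b) * (np.exp(b * t) - 1.0)  # baseline cumulative hazard

S_ctrl = np.exp(-H0)        # non-diabetic survival curve
S_t2dm = np.exp(-HR * H0)   # T2DM survival under proportional hazards

# Residual life expectancy = area under the survival curve
# (trapezoidal rule, written out to stay NumPy-version agnostic).
le_ctrl = float(np.sum(0.5 * (S_ctrl[1:] + S_ctrl[:-1]) * np.diff(t)))
le_t2dm = float(np.sum(0.5 * (S_t2dm[1:] + S_t2dm[:-1]) * np.diff(t)))
le_gap = le_ctrl - le_t2dm
```

With these illustrative parameters the gap comes out on the order of a few years, consistent in magnitude with the 2.4-3.3 year deficits the abstract reports.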

Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy

Procedia PDF Downloads 104
5347 Seismic Hazard Assessment of Offshore Platforms

Authors: F. D. Konstandakopoulou, G. A. Papagiannopoulos, N. G. Pnevmatikos, G. D. Hatzigeorgiou

Abstract:

This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platform models are investigated: one with completely fixed supports and one with piles clamped into deformable layered soil. The soil deformability for the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of the fault mechanism in the platforms’ response is additionally investigated, while the study also examines the effects of different angles of incidence of the seismic records on the maximum response of each platform.

Keywords: hazard analysis, offshore platforms, earthquakes, safety

Procedia PDF Downloads 121
5346 Formulation of a Rapid Earthquake Risk Ranking Criteria for National Bridges in the National Capital Region Affected by the West Valley Fault Using GIS Data Integration

Authors: George Mariano Soriano

Abstract:

In this study, a Rapid Earthquake Risk Ranking Criteria was formulated by integrating various existing maps and databases from the Department of Public Works and Highways (DPWH) and the Philippine Institute of Volcanology and Seismology (PHIVOLCS). Utilizing Geographic Information System (GIS) software, the above-mentioned maps and databases were used to extract seismic hazard parameters and bridge vulnerability characteristics in order to rank the seismic damage risk rating of bridges in the National Capital Region.

Keywords: bridge, earthquake, GIS, hazard, risk, vulnerability

Procedia PDF Downloads 383
5345 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression model and the Cox regression model (proportional hazards model) are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases. A theoretical relationship between the two models has, however, been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their applications in different areas. In this work, the analysis is performed on secondary data, sourced from SPSS sample data on breast cancer, with a sample size of 1121 women; the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (e.g., on lymph node status), and SPSS software was used to analyze the remaining data. This study found an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. Their measures of association also differ: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that both are applicable to predicting the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories.
In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models can be applied in many other studies since they are suitable methods for analyzing data, but the Cox regression model is the more recommended of the two.
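The rate-versus-proportion distinction can be made concrete with a toy cohort (hypothetical counts, unrelated to the breast cancer data): the hazard (rate) ratio uses person-time of follow-up, while the odds ratio uses only the binary outcome.

```python
# Hypothetical cohort: exposed vs. unexposed groups.
deaths_exp, persontime_exp = 30, 1000.0     # events, person-years
deaths_unexp, persontime_unexp = 15, 1200.0
n_exp, n_unexp = 200, 220                   # cohort sizes

# Rate (hazard) ratio under a constant-hazard assumption:
# events per unit of follow-up time, compared between groups.
rate_ratio = (deaths_exp / persontime_exp) / (deaths_unexp / persontime_unexp)

# Odds ratio ignores follow-up time entirely:
# odds of the binary outcome, compared between groups.
odds_exp = deaths_exp / (n_exp - deaths_exp)
odds_unexp = deaths_unexp / (n_unexp - deaths_unexp)
odds_ratio = odds_exp / odds_unexp
```

The two measures generally disagree on the same data, because only the rate ratio is sensitive to how long each group was observed.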

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 429
5344 Closed Forms of Trigonometric Series in Terms of Riemann’s ζ Function and Dirichlet η, λ, β Functions or the Hurwitz Zeta Function and Harmonic Numbers

Authors: Slobodan B. Tričković

Abstract:

We present results concerning trigonometric series that include sine and cosine functions with a parameter appearing in the denominator. We derive two types of closed-form formulas for such trigonometric series. First, for some integer values of the parameter, since Riemann’s ζ function and Dirichlet’s η and λ functions equal zero at negative even integers, whereas Dirichlet’s β function equals zero at negative odd integers, the series terminates after a certain number of terms. Thus, the trigonometric series becomes a polynomial with coefficients involving Riemann’s ζ function and the Dirichlet η, λ, β functions. On the other hand, in some cases one cannot immediately replace the parameter with an arbitrary positive integer because singularities arise. It is then necessary to take a limit; in the process, we apply L’Hospital’s rule and, after a series of rearrangements, bring the trigonometric series to a form suitable for the application of the Choi-Srivastava theorem dealing with Hurwitz’s zeta function and harmonic numbers. In this way, we express the trigonometric series as a polynomial over the derivative of Hurwitz’s zeta function.
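As an illustrative instance of the first type of closed form (a classical example, not one of the paper's derived formulas), a cosine series with a squared denominator reduces to a quadratic polynomial whose constant term is ζ(2):

```latex
\sum_{n=1}^{\infty}\frac{\cos nx}{n^{2}}
  = \zeta(2) - \frac{\pi x}{2} + \frac{x^{2}}{4}
  = \frac{\pi^{2}}{6} - \frac{\pi x}{2} + \frac{x^{2}}{4},
  \qquad 0 \le x \le 2\pi .
```

Here the series collapses to a polynomial in x, with the zeta value entering as a coefficient, exactly the structure described above.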

Keywords: Dirichlet eta lambda beta functions, Riemann's zeta function, Hurwitz zeta function, Harmonic numbers

Procedia PDF Downloads 71
5343 Planning a European Policy for Increasing Graduate Population: The Conditions That Count

Authors: Alice Civera, Mattia Cattaneo, Michele Meoli, Stefano Paleari

Abstract:

Despite the fact that more equal access to higher education has been an objective of public policy for several decades, little is known about the effectiveness of alternative means for achieving this goal. Indeed, high levels of graduate population can nowadays be observed in countries with both high and low levels of fees, and with both high and low levels of public expenditure on higher education. This paper surveys the extant literature, providing some background on the economic concepts of the higher education market, and reviews key determinants of demand and supply. A theoretical model of aggregate demand and supply of higher education is derived, with the aim of facilitating the understanding of the challenges in today’s higher education systems, as well as the opportunities for development. The model is validated on some exemplary case studies describing the different relationships between the level of public investment and the level of graduate population, and helps to derive general implications. In addition, using a two-stage least squares model, we build a macroeconomic model of supply and demand for European higher education. The model allows interpreting policies shifting either the supply or the demand for higher education, and allows taking contextual conditions into consideration, with the aim of comparing divergent policies under a common framework. Results show that the same policy objective (i.e., increasing graduate population) can be obtained by shifting either the demand function (i.e., by strengthening student aid) or the supply function (i.e., by directly supporting higher education institutions). Under this theoretical perspective, the level of tuition fees is irrelevant, and empirically we can observe high levels of graduate population both in countries with high (i.e., the UK) and low (i.e., Germany) levels of tuition fees.
In practice, this model provides a conceptual framework to help better understand which external conditions need to be considered when planning a policy for increasing graduate population. Extrapolating a policy from results in different countries, under this perspective, is a poor solution when contingent factors are not addressed. The second implication of this conceptual framework is that policies addressing the supply or the demand function need to address different contingencies. In other words, a government aiming at increasing graduate population needs to implement complementary policies, designing them according to the side of the market that is targeted. For example, a ‘supply-driven’ intervention, through direct financial support of higher education institutions, needs to address the issue of institutions’ moral hazard by creating incentives to supply higher education services under efficient conditions. By contrast, a ‘demand-driven’ policy providing student aid needs to tackle students’ moral hazard by creating incentives for responsible behavior.
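A minimal sketch of the two-stage least squares idea on synthetic data (illustrative only; the variable names and data-generating process are hypothetical, not the paper's macroeconomic specification):

```python
import numpy as np

# Synthetic demand equation with an endogenous "price" regressor;
# an exogenous supply shifter z serves as the instrument.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                  # instrument (supply shifter)
u = rng.normal(size=n)                  # structural error
price = 1.0 + 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous
y = 2.0 - 1.5 * price + u + rng.normal(size=n)        # true slope: -1.5

X = np.column_stack([np.ones(n), price])
Z = np.column_stack([np.ones(n), z])

# Stage 1: project the endogenous regressor on the instruments.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: regress the outcome on the fitted values.
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Naive OLS for comparison -- biased because price correlates with u.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

The 2SLS slope estimate recovers the true structural coefficient, while naive OLS is pulled toward zero bias by the correlated error, which is why an instrumented specification is needed when demand and supply are jointly determined.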

Keywords: graduates, higher education, higher education policies, tuition fees

Procedia PDF Downloads 141
5342 Stability Analysis of SEIR Epidemic Model with Treatment Function

Authors: Sasiporn Rattanasupha, Settapat Chinviriyasit

Abstract:

The treatment function is a continuous and differentiable function that describes the effect of delayed treatment when the number of infected individuals increases and medical resources are limited. In this paper, the SEIR epidemic model with a treatment function is studied to investigate the dynamics of the model under the effect of treatment. It is assumed that the treatment rate is proportional to the number of infective patients. The stability of the model is analyzed. The model is simulated to illustrate the analytical results and to investigate the effects of treatment on the spread of infection.
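A minimal simulation sketch of an SEIR system with a treatment term (illustrative parameters; the saturating form T(I) = rI/(1+αI) is one common choice for modeling limited medical capacity, while the paper assumes treatment proportional to the number of infectives, i.e., the α → 0 case):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_c, sigma, gamma = 0.5, 0.2, 0.1   # contact, incubation, recovery rates
r, alpha = 0.08, 0.05                  # treatment rate and saturation

def treatment(I):
    # Roughly proportional to I when I is small, saturating when the
    # medical capacity is strained.
    return r * I / (1.0 + alpha * I)

def seir(t, y):
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta_c * S * I / N
    dE = beta_c * S * I / N - sigma * E
    dI = sigma * E - gamma * I - treatment(I)
    dR = gamma * I + treatment(I)
    return [dS, dE, dI, dR]

y0 = [990.0, 10.0, 0.0, 0.0]           # S, E, I, R at t = 0
sol = solve_ivp(seir, (0.0, 400.0), y0, max_step=1.0)
S_end, E_end, I_end, R_end = sol.y[:, -1]
```

Since the treatment term moves individuals directly from I to R, the total population is conserved and the epidemic burns out once susceptibles are depleted.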

Keywords: basic reproduction number, local stability, SEIR epidemic model, treatment function

Procedia PDF Downloads 493
5341 Integration of Quality Function Deployment and Modular Function Deployment in Product Development

Authors: Naga Velamakuri, Jyothi K. Reddy

Abstract:

‘Quality must be designed into a product, not inspected into it’ has become the main motto of companies globally. Due to rapidly advancing technology over the past few decades, the nature of consumer demands has become more sophisticated. To sustain this global revolution of innovation in production systems, companies have to take steps to accommodate this technology growth. In the process of understanding customers' expectations, firms globally take steps to deliver a perfect output. Most of these techniques also concentrate on the consistent development and optimization of the product to exceed expectations. Quality Function Deployment (QFD) and Modular Function Deployment (MFD) are such techniques, which rely on the voice of the customer and help deliver on customer needs. In this paper, the Quality Function Deployment and Modular Function Deployment techniques, which help convert qualitative descriptions into quantitative outcomes, are discussed. The area of interest is to understand the scope of each technique and their range of application in product development when they are applied together to a problem. The research question is mainly aimed at comprehending the limitations of using modularity in product development.

Keywords: quality function deployment, modular function deployment, house of quality, methodology

Procedia PDF Downloads 296
5340 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA approaches that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; in low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes larger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France), where the fault is assumed to have a low strain rate.
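The slip-rate-to-earthquake-rate conversion rests on a moment balance; a minimal sketch of that step (standard Hanks-Kanamori scaling with illustrative fault dimensions, not the SHERIFS implementation):

```python
MU = 3.0e10          # crustal shear modulus, Pa (a common assumed value)

def seismic_moment(mw):
    """Hanks-Kanamori relation: seismic moment M0 in N*m for Mw."""
    return 10.0 ** (1.5 * mw + 9.05)

def characteristic_rate(area_m2, slip_rate_m_per_yr, mw):
    """Annual rate of Mw events that consumes the full moment budget.

    The annual moment budget of the fault is MU * area * slip rate;
    dividing by the moment of one event of magnitude Mw gives the
    number of such events per year that balances the budget.
    """
    moment_budget = MU * area_m2 * slip_rate_m_per_yr   # N*m per year
    return moment_budget / seismic_moment(mw)

# 60 km x 15 km fault slipping at 1 mm/yr, released entirely as
# Mw 7.0 events (a deliberately simple single-magnitude budget):
rate = characteristic_rate(60e3 * 15e3, 1.0e-3, 7.0)
recurrence = 1.0 / rate   # years between Mw 7.0 ruptures
```

Tools like SHERIFS generalize this single-magnitude budget by spreading the same moment budget over many candidate single-fault and fault-to-fault ruptures.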

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 37
5339 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunami, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings and other property is a collateral episode. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must precede development, with a view to safeguarding people’s welfare, infrastructure and other property. The resulting information can assist in minimizing risk from earthquakes and can also foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and Remote Sensing were utilized to evaluate and assess earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors that were assessed and integrated within the GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude and earthquake depth, to prepare liquefaction potential zones (LPZ), culminating in earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology and geomorphology. These site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface.
As it propagates, it passes through certain geological or geomorphological and specific soil features, and these features, according to their strength, stiffness and moisture content, amplify or attenuate the wave on its way to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty’s Analytic Hierarchy Process (AHP) was adopted for this study. This GIS technique involves the integration of several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weightage and ranking assigned to each factor are normalized with the AHP technique. The spatial analysis tools, i.e., raster calculator, reclassify and overlay analysis in ArcGIS 10 software, were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into ‘Very High’, ‘High’, ‘Moderate’, ‘Low’ and ‘Very Low’ to indicate levels of hazard within the study region.
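The AHP normalization step mentioned above can be sketched as follows (a hypothetical 3-factor pairwise comparison matrix, not the study's actual judgments):

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 scale: entry A[i, j] is
# how much more factor i contributes to liquefaction than factor j.
# Hypothetical factors: geology, geomorphology, PGA.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])

# Normalized principal eigenvector gives the factor weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency ratio: CR < 0.1 is the usual acceptance threshold.
n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58          # Saaty's random index for n = 3
```

The resulting weights are then applied to the reclassified thematic layers in the raster-calculator overlay.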

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 246
5338 Seismotectonics of Southern Haiti: A Faulting Model for the 12 January 2010 M7 Earthquake

Authors: Newdeskarl Saint Fleur, Nathalie Feuillet, Raphaël Grandin, Éric Jacques, Jennifer Weil-Accardo, Yann Klinger

Abstract:

The prevailing consensus is that the 2010 Mw7.0 Haiti earthquake left the Enriquillo–Plantain Garden strike-slip Fault (EPGF) unruptured but broke unmapped blind north-dipping thrusts. Using high-resolution topography, aerial images, bathymetry and geology, we identified previously unrecognized south-dipping NW-SE-striking active thrusts in southern Haiti. One of them, the Lamentin thrust (LT), cuts across the crowded city of Carrefour, extends offshore into Port-au-Prince Bay and connects at depth with the EPGF. We propose that both faults broke in 2010. The rupture likely initiated on the thrust and propagated further along the EPGF due to unclamping. This scenario is consistent with geodetic, seismological and field data. The 2010 earthquake increased the stress toward failure on the unruptured segments of the EPGF and on neighboring thrusts, significantly increasing the seismic hazard in the Port-au-Prince urban area. The numerous active thrusts recognized in that area must be considered in future evaluations of the seismic hazard.

Keywords: active faulting, enriquillo-plantain garden fault, Haiti earthquake, seismic hazard

Procedia PDF Downloads 1212
5337 Process Safety Evaluation of a Nuclear Power Plant through Virtual Process Hazard Analysis (PHA) using the What-If Technique

Authors: Lormaine Anne Branzuela, Elysa Largo, Julie Marisol Pagalilauan, Neil Concibido, Monet Concepcion Detras

Abstract:

Energy is a necessity both for the people and for the country. The demand for energy is continually increasing, but the supply is not keeping pace. The reopening of the Bataan Nuclear Power Plant (BNPP) in the Philippines has recently been circulating in the media. The general public has been hesitant to accept the inclusion of nuclear energy in the Philippine energy mix due to perceived unsafe conditions of the plant. This study evaluated the possible operations of a nuclear power plant of the same type as the BNPP, considering the safety of the workers, the public, and the environment, using a Process Hazard Analysis (PHA) method. The What-If Technique was utilized to identify the hazards and consequences of the operations of the plant, together with the level of risk each entails. Through the brainstorming sessions of the PHA team, it was found that the most critical system in the plant is the primary system. Possible leakages in pipes and equipment due to weakened seals and welds, and blockages of the coolant path due to fouling, were the most common scenarios identified; these could further cause the most critical scenarios: radioactive leak through sump contamination, nuclear meltdown, and equipment damage and explosion, which could result in multiple injuries and fatalities as well as environmental impacts.

Keywords: process safety management, process hazard analysis, what-If technique, nuclear power plant

Procedia PDF Downloads 191
5336 Study of Natural Radioactive and Radiation Hazard Index of Soil from Sembrong Catchment Area, Johor, Malaysia

Authors: M. I. A. Adziz, J. Sharib Sarip, M. T. Ishak, D. N. A. Tugi

Abstract:

Radiation exposure to humans and the environment is caused by natural radioactive material sources. Given that exposure of people and communities can occur through several pathways, it is necessary to pay attention to increases in naturally radioactive material, particularly in soil. Continuous research into, and monitoring of, the distribution and activity of these natural radionuclides as a guide and reference is beneficial, especially in the event of accidental exposure. Surface soil/sediment samples from several locations identified around the Sembrong catchment area were taken for the study. After 30 days of secular equilibrium with their daughters, the activity concentrations of the naturally occurring radioactive material (NORM) members, i.e. ²²⁶Ra, ²²⁸Ra, ²³⁸U, ²³²Th, and ⁴⁰K, were measured using a high purity germanium (HPGe) gamma spectrometer. The results showed that the radioactivity concentration of ²³⁸U ranged between 17.13 - 30.13 Bq/kg, ²³²Th between 22.90 - 40.05 Bq/kg, ²²⁶Ra between 19.19 - 32.10 Bq/kg, ²²⁸Ra between 21.08 - 39.11 Bq/kg and ⁴⁰K between 9.22 - 51.07 Bq/kg, with average values of 20.98 Bq/kg, 27.39 Bq/kg, 23.55 Bq/kg, 26.93 Bq/kg and 23.55 Bq/kg, respectively. The values obtained in this study were low or equivalent compared to those reported in previous studies. It was also found that the mean values obtained for the four radiation hazard index parameters, namely radium equivalent activity (Raeq), external dose rate (D), annual effective dose and external hazard index (Hₑₓ), were 65.40 Bq/kg, 29.33 nGy/h, 19.18 × 10⁻⁶ Sv and 0.19, respectively. These values are low compared to the world average values and globally applied standards. Comparison with previous studies (dry season) also found that the values for all four parameters were low and equivalent. This indicates that the level of radiation hazard in the area around the study is safe for the public.
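The hazard indices follow standard closed-form expressions; a sketch evaluated at the reported mean concentrations (the coefficients are the commonly used UNSCEAR-style values; small differences from the paper's figures are expected, e.g. if the authors averaged per-sample indices rather than evaluating at mean concentrations):

```python
# Mean activity concentrations from the study (Bq/kg).
c_ra, c_th, c_k = 23.55, 27.39, 23.55   # 226Ra, 232Th, 40K

# Radium equivalent activity (Bq/kg); 370 Bq/kg is the usual limit.
ra_eq = c_ra + 1.43 * c_th + 0.077 * c_k

# Outdoor absorbed dose rate in air (nGy/h).
dose = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

# External hazard index (dimensionless; should stay below 1).
h_ex = c_ra / 370.0 + c_th / 259.0 + c_k / 4810.0
```

These come out near the reported 65.40 Bq/kg, 29.33 nGy/h and 0.19, and well below the world-average thresholds, consistent with the abstract's conclusion.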

Keywords: catchment area, gamma spectrometry, naturally occurring radioactive material (NORM), soil

Procedia PDF Downloads 76
5335 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (slope inclination, aspect, elevation, and curvature) and land use/land cover. The landslide-triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas that are susceptible to landslides. No published study on landslides has been obtained for this area. Thus, digital datasets of the above spatial parameters were conveniently acquired, stored, manipulated and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in the ArcGIS 10.2.2 environment). Landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid together to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total surface of 3200 km² of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazards, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazards. This result confirms the importance of steep slope angles, lithology, vegetation land cover and slope orientation (aspect) as the major determining factors of slope failures.
The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
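The weighted grid-overlay step can be sketched on a toy raster (hypothetical reclassified layers and weights, not the study's datasets):

```python
import numpy as np

# Each thematic layer is first reclassified to a common 1-5
# susceptibility scale; 2x2 toy rasters stand in for full grids.
slope_cls = np.array([[5, 4], [2, 1]])      # reclassified slope angle
litho_cls = np.array([[3, 3], [2, 4]])      # reclassified lithology
cover_cls = np.array([[4, 2], [1, 3]])      # reclassified land cover

# Relative-contribution weights (hypothetical; must sum to 1).
weights = {"slope": 0.5, "litho": 0.3, "cover": 0.2}

# Weighted overlay: cell-by-cell weighted sum of the layers.
lhz = (weights["slope"] * slope_cls
       + weights["litho"] * litho_cls
       + weights["cover"] * cover_cls)

# Reclassify the continuous score into hazard classes:
# 0 = low, 1 = moderate, 2 = high, 3 = very high.
labels = np.digitize(lhz, bins=[2.0, 3.0, 4.0])
```

This is the same raster-calculator operation performed in the GIS, just expressed on arrays.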

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 178
5334 A Transfer Function Representation of Thermo-Acoustic Dynamics for Combustors

Authors: Myunggon Yoon, Jung-Ho Moon

Abstract:

In this paper, we present a transfer function representation of a general one-dimensional combustor. The input of the transfer function is a heat rate perturbation at a burner and the output is a flow velocity perturbation at the burner. The paper considers a general combustor model composed of multiple cans with different cross-sectional areas, along with a non-zero flow rate.

Keywords: combustor, dynamics, thermoacoustics, transfer function

Procedia PDF Downloads 361
5333 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Every year, on average, Japan experiences more than 10 tropical cyclones that come within damaging reach, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) a stochastic event set representing the statistics of past events; 2) hazard attenuation functions modeling the local intensity; 3) vulnerability functions addressing the repair need of local buildings exposed to the hazard; and 4) a financial module addressing policy conditions and estimating the resulting losses. The events module is comprised of events (faults or tracks) with different intensities and corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions relating the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that yields events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc.
The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (Steel-Reinforced Concrete), and high-rise buildings.
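The relationship between event occurrence rates and the annual exceedance probabilities mentioned above (0.01, 0.004, etc.) can be illustrated with a toy exceedance curve. The event rates and losses below are entirely hypothetical, not KCC model output:

```python
import numpy as np

# Hypothetical stochastic event set: annual occurrence rates and the
# loss each event would produce, sorted by ascending loss.
rates  = np.array([0.20, 0.05, 0.02, 0.008, 0.002])
losses = np.array([1e6, 1e7, 5e7, 2e8, 1e9])

# Annual exceedance rate of each loss level: the summed rate of all
# events producing at least that loss (reverse cumulative sum).
exceed_rate = rates[::-1].cumsum()[::-1]

def loss_at_probability(p):
    """Smallest modeled loss whose annual exceedance rate is <= p."""
    idx = np.searchsorted(-exceed_rate, -p)  # exceed_rate is decreasing
    return losses[min(idx, len(losses) - 1)]
```

For example, a target annual probability of 0.004 (a 250-year return period) picks out the most extreme modeled event in this toy set.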

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 242
5332 Geometric Properties of Some q-Bessel Functions

Authors: İbrahim Aktaş, Árpád Baricz

Abstract:

In this paper, the radii of starlikeness of the Jackson and Hahn-Exton q-Bessel functions are considered, and for each of them three different normalizations are applied. By applying Euler-Rayleigh inequalities for the first positive zeros of these functions, tight lower and upper bounds for the radii of starlikeness of these functions are obtained. The Laguerre-Pólya class of real entire functions plays an important role in this study. In particular, we obtain some new bounds for the first positive zero of the derivative of the classical Bessel function of the first kind.

Keywords: Bessel function, Lommel function, radius of starlikeness and convexity, Struve function

Procedia PDF Downloads 254
5331 On the Fractional Integration of Generalized Mittag-Leffler Type Functions

Authors: Christian Lavault

Abstract:

In this paper, the generalized fractional integral operators of two generalized Mittag-Leffler type functions are investigated. The special cases of interest involve the generalized M-series and K-function, both introduced by Sharma. The two pairs of theorems established herein generalize recent results about left- and right-sided generalized fractional integration operators applied here to the M-series and the K-function. The results also have important applications in physics and mathematical engineering.

Keywords: Fox–Wright Psi function, generalized hypergeometric function, generalized Riemann–Liouville and Erdélyi–Kober fractional integral operators, Saigo's generalized fractional calculus, Sharma's M-series and K-function

Procedia PDF Downloads 411
5330 Particle Swarm Optimization and Quantum Particle Swarm Optimization to Multidimensional Function Approximation

Authors: Diogo Silva, Fadul Rodor, Carlos Moraes

Abstract:

This work compares the results of multidimensional function approximation using two algorithms: the classical Particle Swarm Optimization (PSO) and the Quantum Particle Swarm Optimization (QPSO). Both algorithms were tested on three benchmark functions with different characteristics (the Rosenbrock, Rastrigin, and sphere functions) while increasing their number of dimensions. This study shows that the larger the function dimension, the more evident the advantages of the QPSO method over the PSO method become, both in performance and in the number of iterations needed to reach the stopping criterion.
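A minimal sketch of the classical PSO update loop applied to the sphere benchmark; the parameter values (inertia w, acceleration coefficients c1, c2, swarm size) are common textbook defaults, not necessarily those used in this study:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=300, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Classical PSO: each particle tracks its personal best (pbest)
    and the swarm tracks a global best (g); velocities blend inertia,
    cognitive and social pulls."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
best, best_val = pso_minimize(sphere, dim=2)
```

QPSO replaces the velocity update with a position sampled around an attractor using a quantum potential-well model; the surrounding loop structure is the same.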

Keywords: PSO, QPSO, function approximation, AI, optimization, multidimensional functions

Procedia PDF Downloads 557
5329 Probabilistic-Based Design of Bridges under Multiple Hazards: Floods and Earthquakes

Authors: Kuo-Wei Liao, Jessica Gitomarsono

Abstract:

Bridge reliability against natural hazards such as floods or earthquakes is an interdisciplinary problem that involves a wide range of knowledge. Moreover, due to global climate change, engineers have to design structures against multi-hazard threats, and few practical design guidelines currently include such a concept. Bridge foundations in Taiwan often do not have a uniform width, yet little research has focused on the safety evaluation of a bridge with a complex pier, and investigation of the scour depth in such situations is very important. Thus, this study first focuses on investigating and improving the scour prediction formula for a bridge with a complicated foundation via experiments and artificial intelligence. Secondly, a probabilistic design procedure using the established prediction formula is proposed for practicing engineers facing multi-hazard threats.
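The probabilistic design idea, comparing a random scour demand against the foundation's embedment capacity, can be sketched as a Monte Carlo failure-probability estimate. The distributions and parameters below are hypothetical illustrations, not the formula developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical lognormal scour-depth demand (m) and normally
# distributed pier embedment capacity (m).
scour_depth = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)
embedment   = rng.normal(loc=5.0, scale=0.5, size=n)

# Failure occurs when the scour demand exceeds the embedment capacity.
p_failure = np.mean(scour_depth > embedment)
```

In a real design check, the demand distribution would come from the calibrated scour prediction formula and its uncertainty, and the target would be verified against an acceptable failure probability.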

Keywords: bridge, reliability, multi-hazards, scour

Procedia PDF Downloads 348
5328 Climate Change and Landslide Risk Assessment in Thailand

Authors: Shotiros Protong

Abstract:

Sudden landslides in Thailand have occurred more frequently and more severely during the past decade. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences increase rapidly during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. Geotechnical and hydrological data are combined to determine permeability, conductivity, bedding orientation, overburden and the presence of loose blocks. The regional landslide hazard mapping is developed using the Slope Stability Index model SINMAP, run in ArcGIS 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data indicate the shear strength and angle-of-friction values for soils above given rock types, which supports the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall in millimetres per 24 hours is used for the rain-triggered landslide hazard analysis in slope stability mapping. The 1954-2012 period is used as the baseline rainfall record for the present calibration. For climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales.
To predict the precipitation impact under future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to simulate future change between latitudes 16°26' and 18°37' north and longitudes 98°52' and 103°05' east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard mapping is produced under present-day climatic conditions (1954-2012) and under climate-change simulations based on GCM scenarios A2 and B2 (2013-2099), related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard maps for the present and for the future climate simulation scenarios A2 and B2 in Uttaradit province will be compared by area (km²).

Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand

Procedia PDF Downloads 534
5327 Measurement of 238U, 232Th and 40K in Soil Samples Collected from Coal City Dhanbad, India

Authors: Zubair Ahmad

Abstract:

Specific activities of the natural radionuclides 238U, 232Th and 40K were measured by γ-ray spectrometry in soil samples collected from the city of Dhanbad, which is located near coal mines. Mean activity values for 238U, 232Th and 40K were found to be 60.29 Bq/kg, 64.50 Bq/kg and 481.0 Bq/kg, respectively. The mean radium equivalent activity, absorbed dose rate, outdoor dose, external hazard index and internal hazard index for the area under study were determined as 189.53 Bq/kg, 87.21 nGy/h, 0.37 mSv/y, 0.52 and 0.64, respectively. The annual effective dose to the general public was found to be 0.44 mSv/y. This value lies well below the limit of 1 mSv/y recommended by the International Commission on Radiological Protection. The measured values indicate no radiological hazard to the environment or public health.
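The reported radium equivalent activity is consistent with the standard weighting formula Raeq = A_Ra + 1.43 A_Th + 0.077 A_K (with the 238U activity taken as the radium term, as is common practice). A quick check using the mean activities reported above:

```python
# Mean specific activities from the study (Bq/kg).
a_u, a_th, a_k = 60.29, 64.50, 481.0

# Standard radium equivalent activity formula.
ra_eq = a_u + 1.43 * a_th + 0.077 * a_k
print(round(ra_eq, 2))  # 189.56, matching the reported 189.53 Bq/kg
```

The small difference from the published 189.53 Bq/kg is within rounding of the quoted mean activities.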

Keywords: coal city Dhanbad, gamma-ray spectroscopy, natural radioactivity, soil samples

Procedia PDF Downloads 244
5326 Seismic Hazard Assessment of Tehran

Authors: Dorna Kargar, Mehrasa Masih

Abstract:

Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties; it is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas are necessary and significant. In the present study, a seismic hazard assessment by probabilistic and deterministic methods has been carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan seismotectonic province. The seismicity study covers a radius of 200 km around northern Tehran (latitude 35.74°, longitude 51.37°) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at the scale of 1:250,000 are used. We used Kijko-Sellevoll's method (1992) to estimate seismicity parameters: the maximum likelihood estimation of earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur with a specified probability during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies and assigning proper weights to the attenuation relationships used, maximum horizontal and vertical accelerations for return periods of 50, 475, 950 and 2475 years are calculated.
The horizontal peak ground acceleration on the seismic bedrock for the 50-, 475-, 950- and 2475-year return periods is 0.12g, 0.30g, 0.37g and 0.50g, respectively, and the vertical peak ground acceleration for the same return periods is 0.08g, 0.21g, 0.27g and 0.36g.
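The return periods quoted above map onto exceedance probabilities over a design life through the Poisson occurrence assumption standard in probabilistic seismic hazard analysis; for example, the 475-year return period corresponds to the familiar 10% probability of exceedance in 50 years:

```python
import math

def exceedance_probability(return_period_yr, life_yr):
    """Poisson probability of at least one exceedance in life_yr years
    for a hazard level with the given mean return period."""
    rate = 1.0 / return_period_yr
    return 1.0 - math.exp(-rate * life_yr)

p_475 = exceedance_probability(475, 50)    # ~0.10 (10% in 50 years)
p_2475 = exceedance_probability(2475, 50)  # ~0.02 (2% in 50 years)
```

The 50-year return period similarly corresponds to the frequent "serviceability" hazard level with roughly a 63% chance of exceedance over a 50-year life.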

Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters

Procedia PDF Downloads 48
5325 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning

Authors: Rajkumar Ghosh

Abstract:

Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need for revising building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This study sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.

Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery

Procedia PDF Downloads 64
5324 A Bathtub Curve from Nonparametric Model

Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos

Abstract:

This paper presents a nonparametric method to obtain the hazard rate "bathtub curve" for power system components. The model is a mixture of the three known phases of a component's life, the decreasing failure rate (DFR), the constant failure rate (CFR) and the increasing failure rate (IFR), represented by three parametric Weibull models. The parameters are obtained from a simultaneous fitting of the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and failure rate curves, the useful lifetime and the characteristic lifetime are defined. To demonstrate the model, the historic time-to-failure data of distribution transformers are used as an example. The resulting bathtub curve shows the failure rate over the equipment lifetime and can be applied in economic and replacement decision models.
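The three-phase structure can be sketched as a superposition of three Weibull hazard rates, one per phase (shape < 1 decreasing, shape = 1 constant, shape > 1 increasing). The parameter values below are hypothetical, not fitted to real transformer data:

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Weibull hazard rate h(t) = (k/lam) * (t/lam)**(k-1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    """Illustrative bathtub curve: infant mortality (DFR) + useful
    life (CFR) + wear-out (IFR), with hypothetical parameters."""
    return (weibull_hazard(t, 0.5, 2.0)      # DFR: shape < 1
            + weibull_hazard(t, 1.0, 20.0)   # CFR: shape = 1
            + weibull_hazard(t, 5.0, 30.0))  # IFR: shape > 1

t = np.linspace(0.1, 40, 400)  # time in hypothetical years
h = bathtub_hazard(t)
```

In the paper's method the three parameter sets would instead be fitted simultaneously so that this sum tracks the kernel-smoothed nonparametric hazard estimate.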

Keywords: bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution

Procedia PDF Downloads 421
5323 Characteristic Function in Estimation of Probability Distribution Moments

Authors: Vladimir S. Timofeev

Abstract:

In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using statistical simulation, the author shows that the new approach has certain robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The results obtained confirm that the idea works efficiently and can be recommended for statistical applications.
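The idea of recovering raw moments from numerical derivatives of the empirical characteristic function rests on E[X^k] = (-i)^k φ^(k)(0). A minimal sketch using central finite differences (the step size h and the sample are arbitrary choices, not the author's implementation):

```python
import numpy as np
from math import comb

def empirical_cf(x, t):
    """Empirical characteristic function: mean of exp(i*t*X)."""
    return np.mean(np.exp(1j * t * x))

def moment_from_cf(x, k, h=1e-2):
    """k-th raw moment via the k-th central finite difference of the
    empirical characteristic function at 0: E[X^k] = (-i)^k phi^(k)(0)."""
    deriv = sum((-1) ** j * comb(k, j) * empirical_cf(x, (k / 2 - j) * h)
                for j in range(k + 1)) / h ** k
    return ((-1j) ** k * deriv).real

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100_000)
m1 = moment_from_cf(x, 1)  # close to the mean, 2.0
m2 = moment_from_cf(x, 2)  # close to E[X^2] = 1 + 2^2 = 5.0
```

Because the empirical characteristic function is an exact characteristic function of the sample, its derivatives at zero reproduce the sample moments up to finite-difference truncation error.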

Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation

Procedia PDF Downloads 481
5322 Continuous-Time and Discrete-Time Singular Value Decomposition of an Impulse Response Function

Authors: Rogelio Luck, Yucheng Liu

Abstract:

This paper proposes the continuous-time singular value decomposition (SVD) for the impulse response function, a special kind of Green's function, e⁻⁽ᵗ⁻ᵀ⁾, in order to find a set of singular functions and singular values such that the convolutions of this function with the singular functions on a specified domain are the solutions of the inhomogeneous differential equations for those singular functions. A numerical example is given to verify the proposed method. Besides the continuous-time SVD, a discrete-time SVD is also presented for the impulse response function, which is modeled using a Toeplitz matrix in the discrete system. The proposed method has broad applications in signal processing, dynamic system analysis, acoustic analysis, thermal analysis, and macroeconomic modeling.
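A minimal discrete-time sketch of the Toeplitz construction and its SVD, assuming the sampled kernel e⁻ᵗ on a hypothetical grid (the grid size and step are illustrative choices):

```python
import numpy as np

n, dt = 200, 0.05
t = np.arange(n) * dt
h = np.exp(-t)  # sampled impulse response e^{-t}, t >= 0

# Lower-triangular Toeplitz (convolution) matrix: y = H @ u approximates
# the convolution integral y(t) = integral of e^{-(t - tau)} u(tau) dtau.
i, j = np.indices((n, n))
H = np.where(i >= j, np.exp(-(i - j) * dt), 0.0) * dt

# Discrete-time SVD of the convolution operator.
U, s, Vt = np.linalg.svd(H)

# Rank-k truncation is the best rank-k approximation of the operator
# in the spectral norm (Eckart-Young theorem).
k = 20
H_k = (U[:, :k] * s[:k]) @ Vt[:k]
```

The columns of U and rows of Vt play the role of the discrete singular functions, and the truncated expansion H_k gives a low-rank surrogate of the impulse response operator.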

Keywords: singular value decomposition, impulse response function, Green's function, Toeplitz matrix, Hankel matrix

Procedia PDF Downloads 134
5321 Subclasses of Bi-Univalent Functions Associated with Hohlov Operator

Authors: Rashidah Omar, Suzeini Abdul Halim, Aini Janteng

Abstract:

The coefficient estimate problem for the Taylor-Maclaurin series is still open, especially for functions in the subclasses of bi-univalent functions. A function f ∈ A is said to be bi-univalent in the open unit disk D if both f and f⁻¹ are univalent in D; such functions form the class of bi-univalent functions. The symbol A denotes the class of all analytic functions f in D, normalized by the conditions f(0) = f'(0) - 1 = 0. The subordination concept is used in determining the second and third Taylor-Maclaurin coefficients. Upper bounds for the second and third coefficients are estimated for functions in subclasses of bi-univalent functions subordinated to a function φ. An analytic function f is subordinate to an analytic function g if there is an analytic function w defined on D with w(0) = 0 and |w(z)| < 1 satisfying f(z) = g[w(z)]. In this paper, two subclasses of bi-univalent functions associated with the Hohlov operator are introduced. Bounds for the second and third coefficients of functions in these subclasses are determined using subordination. The findings generalize previous related works of several earlier authors.

Keywords: analytic functions, bi-univalent functions, Hohlov operator, subordination

Procedia PDF Downloads 270