Search results for: probability estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2982

2262 Games behind Bars: A Longitudinal Study of Inmates Pro-Social Preferences

Authors: Mario A. Maggioni, Domenico Rossignoli, Simona Beretta, Sara Balestri

Abstract:

The paper presents the results of a Longitudinal Randomized Control Trial implemented in 2016 in two State Prisons in California (USA). The subjects were randomly assigned to a 10-month program (GRIP, Guiding Rage Into Power) aimed at undoing the destructive behavioral patterns that lead to criminal actions by raising the individual's 'mindfulness'. This study tests whether participation in this program (the treatment), which is based on strong relationships and mutual help, affects the pro-social behavior of participants, in particular with reference to trust and inequality aversion. The research protocol entails the administration of two questionnaires including a set of behavioral situations ('games'), widely used in the relevant literature in the field, to 80 inmates: 42 treated (enrolled in the program) and 38 controls. The first questionnaire was administered before treatment and randomization took place; the second at the end of the program. The results of a Difference-in-Differences estimation procedure show that trust increases significantly among GRIP participants compared to the control group. The result is robust to alternative estimation techniques and to the inclusion of a set of covariates that further control for idiosyncratic characteristics of the prisoners.
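
As a minimal sketch of the difference-in-differences estimate described above (the data layout and column names, including subject_id, are illustrative, not taken from the study):

```python
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame):
    """df: one row per inmate per wave, with columns
    trust      - outcome from the behavioral games
    treated    - 1 if enrolled in GRIP, 0 for controls
    post       - 0 for the pre-treatment wave, 1 for the end-of-program wave
    subject_id - inmate identifier, used to cluster standard errors."""
    # The coefficient on treated:post is the difference-in-differences
    # estimate of the program's effect on trust.
    model = smf.ols("trust ~ treated + post + treated:post", data=df)
    fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})
    return fit.params["treated:post"], fit
```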

Keywords: behavioral economics, difference in differences, longitudinal study, pro-social preferences

Procedia PDF Downloads 375
2261 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement

Authors: Chao Xu

Abstract:

Selecting ground motion intensity measures reasonably is one of the most important issues affecting the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize ground motion damage potential. The inelastic spectral displacement is calculated based on modal pushover analysis, and an incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with those of elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact of frequency components with periods larger than the fundamental period on inelastic structural response. The damage potential of ground motion on structures whose fundamental period elongates due to structural softening can be captured by inelastic spectral displacement. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, which reduces the uncertainty of vulnerability analysis and the impact of input ground motion selection on the vulnerability analysis results.
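
A minimal sketch of the cloud-analysis regression behind such sufficiency/efficiency checks, assuming the standard log-linear demand model ln(EDP) = ln(a) + b ln(IM); the data arrays are placeholders, not the paper's records:

```python
import numpy as np

def cloud_analysis(im, edp):
    """im:  intensity measure per record (e.g. inelastic spectral displacement)
    edp: peak demand per record (e.g. maximum interstory drift).
    Fits ln(EDP) = ln(a) + b*ln(IM) and returns (a, b, beta), where beta is
    the log-space residual dispersion: a smaller beta means a more efficient
    intensity measure."""
    x, y = np.log(im), np.log(edp)
    b, ln_a = np.polyfit(x, y, 1)
    beta = np.std(y - (ln_a + b * x), ddof=2)  # two fitted parameters
    return np.exp(ln_a), b, beta
```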

Keywords: vulnerability, probabilistic seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis

Procedia PDF Downloads 337
2260 Enhancing the Pricing Expertise of an Online Distribution Channel

Authors: Luis N. Pereira, Marco P. Carrasco

Abstract:

Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible and different prices for their services for different potential customers, considering the profile of e-consumers and market demand and supply. This means that the fundamentals of dynamic pricing are based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer's profile in order to improve the number of reservations of an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of each market segment was studied, considering the characteristics of the e-consumers and the probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how it is possible to improve the intelligence of an online distribution channel system through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate market segmentation policy in online reservation systems benefits service suppliers because it brings a higher probability of reservation and generates more profit than fixed pricing.
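
A rough sketch of the two building blocks described above (non-hierarchical segmentation followed by a per-segment elasticity estimate); the features, the log-log elasticity regression and the four segments are assumptions for illustration only:

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_and_elasticity(features, prices, bookings, n_segments=4):
    """features: (n, k) array describing each e-consumer;
    prices, bookings: positive per-observation prices and booking counts."""
    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(features)
    elasticity = {}
    for s in range(n_segments):
        m = labels == s
        # Slope of the log-log regression approximates price elasticity
        # of demand within the segment.
        slope, _ = np.polyfit(np.log(prices[m]), np.log(bookings[m]), 1)
        elasticity[s] = slope
    return labels, elasticity
```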

Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics

Procedia PDF Downloads 220
2259 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error grows over smaller periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause proximity-dependent distortions. As positions correlate to areas of distortion, existing methods of magnetometer localization include producing a spatial map of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
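
For illustration, a minimal sketch of the conventional static calibration mentioned above: samples collected while rotating the magnetometer in place ideally trace a sphere, and a least-squares fit of the sphere's centre recovers the hard-iron offset. This is the generic textbook formulation, not the paper's exact procedure:

```python
import numpy as np

def hard_iron_offset(m):
    """m: (n, 3) magnetometer samples from rotation at a static point.
    Rewrites |m - c|^2 = r^2 as the linear system
    2*m.c + (r^2 - |c|^2) = |m|^2 and solves for the centre c."""
    A = np.column_stack([2 * m, np.ones(len(m))])
    b = (m ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # hard-iron offset; subtract it from each new sample
```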

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 66
2258 Runoff Estimation Using NRCS-CN Method

Authors: E. K. Naseela, B. M. Dodamani, Chaithra Chandran

Abstract:

The GIS and remote sensing techniques facilitate accurate estimation of surface runoff from a watershed. In the present study, an attempt has been made to evaluate the applicability of the Natural Resources Conservation Service Curve Number (NRCS-CN) method using GIS and remote sensing techniques in the upper Krishna basin (69,425 sq. km). Landsat 7 satellite data (30 m resolution) for the year 2012 were used for the preparation of the land use/land cover (LU/LC) map, and the hydrologic soil groups were mapped on a GIS platform. The weighted curve numbers (CN) for all 5 subcatchments were calculated on the basis of LU/LC type and hydrologic soil class in the area, considering the antecedent moisture condition; the overlay technique was adopted for generating the weighted curve numbers. Monthly rainfall data were available for 58 rain gauge stations. Results of the study show that land use changes determined from satellite images are useful in studying the runoff response of the basin. The results showed that there is no significant difference between observed and estimated runoff depths: for each subcatchment, statistically positive correlations (0.6 and above) were detected between observed and estimated runoff depth.
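
A minimal sketch of the NRCS-CN rainfall-runoff relation the study applies, in the common SI form with the usual initial-abstraction ratio Ia = 0.2S (an assumption; other ratios are sometimes used):

```python
def nrcs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth Q (mm) from rainfall depth p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

The weighted curve number mentioned above is the area-weighted average of the per-class CN values within a subcatchment.

Keywords: curve number, GIS, remote sensing, runoff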

Procedia PDF Downloads 522
2257 An Approach to Apply Kernel Density Estimation Tool for Crash Prone Location Identification

Authors: Kazi Md. Shifun Newaz, S. Miaji, Shahnewaz Hazanat-E-Rabbi

Abstract:

In this study, the kernel density estimation tool has been used to identify the most crash-prone locations on a national highway of Bangladesh. Like other developing countries, Bangladesh has seen road traffic crashes (RTC) become a great social alarm, and the situation is deteriorating day by day. The current black spot identification process is not based on modern technical tools and in most cases provides wrong output. In this situation, characteristic analysis and black spot identification by spatial analysis would be an effective and low-cost approach to ensuring road safety. The methodology of this study incorporates a framework, on the basis of a spatial-temporal study, to identify the locations where RTCs occur most. A very important economic corridor, the Dhaka-Sylhet highway, has been chosen to apply the method. This research proposes that the KDE method for identification of hazardous road locations (HRL) could be used for all other national highways in Bangladesh and for other developing countries. Some recommendations are suggested for policy makers to reduce RTCs on the Dhaka-Sylhet highway, especially at black spots.
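
As an illustration of the kernel density step, a sketch that scores points along a corridor by crash density, leaving peak-picking to the analyst; the Gaussian kernel and the 500 m bandwidth are assumptions, not the study's settings:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def crash_density(crash_xy, grid_xy, bandwidth_m=500.0):
    """crash_xy: (n, 2) projected crash locations in metres;
    grid_xy: (m, 2) points along the highway to score.
    Returns a density score per grid point; peaks are candidate HRLs."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth_m).fit(crash_xy)
    return np.exp(kde.score_samples(grid_xy))
```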

Keywords: hazardous road location (HRL), crash, GIS, kernel density

Procedia PDF Downloads 298
2256 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization

Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung

Abstract:

In Digital Marketing, robust quantification of View-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an Ad but without an explicit click (e.g. a TV ad). A lack of a tracking mechanism makes VTA estimation challenging. Most prevalent VTA estimation techniques rely on post-purchase in-product user surveys. User surveys enable the calculation of channel multipliers, which are the ratio of the view-attributed to the click-attributed purchases of each marketing channel. Channel multipliers thus provide a way to estimate the unknown VTA for a channel from its known click attribution. In this work, we use Convex Optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys. Casting channel attribution as a Convex Optimization problem allows an introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate Ad effectiveness in a privacy-centric world that increasingly limits user tracking.
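
A minimal sketch of the idea, assuming a least-squares fit to survey-implied attributions with a smoothness penalty standing in for the constraints on expected channel behavior; the study's exact constraints and funnel structure may differ:

```python
import cvxpy as cp

def fit_multipliers(click_attr, survey_vta, lam=1.0):
    """click_attr, survey_vta: per-channel arrays, ordered along the funnel.
    Returns multipliers m such that VTA is estimated as m * click_attr."""
    m = cp.Variable(len(click_attr), nonneg=True)
    fit = cp.sum_squares(cp.multiply(m, click_attr) - survey_vta)
    smooth = cp.sum_squares(cp.diff(m))  # damp jumps between adjacent channels
    cp.Problem(cp.Minimize(fit + lam * smooth)).solve()
    return m.value
```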

Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution

Procedia PDF Downloads 165
2255 The Response of the Central Bank to the Exchange Rate Movement: A Dynamic Stochastic General Equilibrium-Vector Autoregressive Approach for Tunisian Economy

Authors: Abdelli Soulaima, Belhadj Besma

Abstract:

The paper examines the central bank's response to movements of the nominal exchange rate and evaluates its effects on the volatility of output growth and inflation. A novel hybrid method of dynamic stochastic general equilibrium, called DSGE-VAR, is proposed for analyzing this policy experiment in a small-scale open economy, in particular Tunisia. The paper contributes to the empirical literature by applying this model, which is rarely used in this context, to Tunisian data. Note additionally that the issue of treating the degree of response of the central bank to the exchange rate in Tunisia is special. To improve the estimation, the Bayesian technique is carried out for the sample 1980:Q1 to 2011:Q4. Our results reveal that the central bank should not react, or should react only softly, to the exchange rate. The variance decomposition shows that overall inflation volatility is more pronounced under the fixed exchange rate regime for most of the shocks, except for the productivity and interest rate shocks. Output volatility is also higher under this regime for the majority of the shocks, excepting the foreign interest rate and interest rate shocks.

Keywords: DSGE-VAR modeling, exchange rate, monetary policy, Bayesian estimation

Procedia PDF Downloads 283
2254 Predictive Analytics in Traffic Flow Management: Integrating Temporal Dynamics and Traffic Characteristics to Estimate Travel Time

Authors: Maria Ezziani, Rabie Zine, Amine Amar, Ilhame Kissani

Abstract:

This paper introduces a predictive model for urban transportation engineering, which is vital for efficient traffic management. Utilizing comprehensive datasets and advanced statistical techniques, the model accurately forecasts travel times by considering temporal variations and traffic dynamics. Machine learning algorithms, including regression trees and neural networks, are employed to capture sequential dependencies. Results indicate significant improvements in predictive accuracy, particularly during peak hours and holidays, with the incorporation of traffic flow and speed variables. Future enhancements may integrate weather conditions and traffic incidents. The model's applications range from adaptive traffic management systems to route optimization algorithms, facilitating congestion reduction and enhancing journey reliability. Overall, this research extends beyond travel time estimation, offering insights into broader transportation planning and policy-making realms, empowering stakeholders to optimize infrastructure utilization and improve network efficiency.
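
A rough sketch of such a setup, with a gradient-boosted tree ensemble standing in for the regression-tree models mentioned; the feature names are illustrative, not the paper's feature set:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["hour", "day_of_week", "is_holiday", "flow_veh_h", "speed_km_h"]

def fit_travel_time(df: pd.DataFrame) -> GradientBoostingRegressor:
    """df: one row per trip with the temporal/traffic features above and a
    travel_time_s target column."""
    model = GradientBoostingRegressor(random_state=0)
    model.fit(df[FEATURES], df["travel_time_s"])
    return model
```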

Keywords: predictive analytics, traffic flow, travel time estimation, urban transportation, machine learning, traffic management

Procedia PDF Downloads 60
2253 Dual-Channel Multi-Band Spectral Subtraction Algorithm Dedicated to a Bilateral Cochlear Implant

Authors: Fathi Kallel, Ahmed Ben Hamida, Christian Berger-Vachon

Abstract:

In this paper, a Speech Enhancement Algorithm based on the Multi-Band Spectral Subtraction (MBSS) principle is evaluated for Bilateral Cochlear Implant (BCI) users. Specifically, a dual-channel noise power spectral estimation algorithm using the Power Spectral Densities (PSD) and Cross Power Spectral Densities (CPSD) of the observed signals is studied. The enhanced speech signal is obtained using the Dual-Channel Multi-Band Spectral Subtraction (DC-MBSS) algorithm. For performance evaluation, an objective speech assessment test relying on the Perceptual Evaluation of Speech Quality (PESQ) score is performed to fix the optimal number of frequency bands needed in the DC-MBSS algorithm. In order to evaluate speech intelligibility, subjective listening tests are conducted with 3 deafened BCI patients. Experimental results obtained using the French Lafon database corrupted by additive babble noise at different Signal-to-Noise Ratios (SNR) showed that the DC-MBSS algorithm improves speech understanding for single and multiple interfering noise sources.
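
A single-frame sketch of the multi-band subtraction rule, assuming per-band over-subtraction factors and a spectral floor; the band edges and factors are placeholders, and the dual-channel noise PSD estimate is taken as given:

```python
import numpy as np

def mbss_frame(noisy_psd, noise_psd, band_edges, alphas, beta=0.002):
    """noisy_psd, noise_psd: power spectra of one frame (same length);
    band_edges: bin indices spanning the spectrum (e.g. [0, ..., n_bins]);
    alphas: one over-subtraction factor per band."""
    clean = np.empty_like(noisy_psd)
    for (lo, hi), a in zip(zip(band_edges[:-1], band_edges[1:]), alphas):
        sub = noisy_psd[lo:hi] - a * noise_psd[lo:hi]
        clean[lo:hi] = np.maximum(sub, beta * noisy_psd[lo:hi])  # floor
    return clean
```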

Keywords: speech enhancement, spectral subtraction, noise estimation, cochlear implant

Procedia PDF Downloads 535
2252 Effects of Family Order and Informal Social Control on Protecting against Child Maltreatment: A Comparative Study of Seoul and Kathmandu

Authors: Thapa Sirjana, Clifton R. Emery

Abstract:

This paper examines family order and informal social control (ISC) by extended families as protective factors against child maltreatment. The findings are discussed using the main effects and the interaction effects of family order and informal social control by extended families. The findings suggest that mothers who experience intimate partner violence (IPV) are associated with child abuse and child neglect: children are neglected in the home more, and physical abuse occurs, in cases where mothers are abused by their husbands. The mothers' difficulties from being abused may lead them to neglect their children. The findings suggest that 'family order' is a significant protective factor against child maltreatment; the results indicate that a family order that is neither too high nor too low can play a protective role. A soft type of ISC is significantly associated with child maltreatment, and this study suggests that soft ISC by extended families is a helpful approach to developing child protection in both countries. The study analyzes data collected from the Seoul and Kathmandu Families and Neighborhoods Study (SKFNS). A random probability cluster sample of married or partnered women in 20 Kathmandu wards and 34 Seoul dongs was selected using probability proportional to size (PPS) sampling. Overall, the study compares Korea and Nepal and examines how cultural differences and similarities are associated with child maltreatment.

Keywords: child maltreatment, intimate partner violence, informal social control, family order, Seoul, Kathmandu

Procedia PDF Downloads 233
2251 Determination of Medians of Biochemical Maternal Serum Markers in Healthy Women Giving Birth to Normal Babies

Authors: Noreen Noreen, Aamir Ijaz, Hamza Akhtar

Abstract:

Background: Screening plays a major role in detecting chromosomal abnormalities, Down syndrome, neural tube defects and other inborn diseases of the newborn. Serum biomarkers in the second trimester are useful in determining the risk of the most common chromosomal anomalies; these tests include alpha-fetoprotein (AFP), human chorionic gonadotropin (hCG), unconjugated estriol (uE3) and inhibin-A. The quadruple biomarker test is a worthwhile test for diagnosing congenital pathology during pregnancy, but these procedures do not form a part of the routine health care of pregnant women in Pakistan, so median values are lacking for the Pakistani population. Objective: To determine median values of biochemical maternal serum markers in the local population during second trimester maternal screening. Study settings: Department of Chemical Pathology and Endocrinology, Armed Forces Institute of Pathology (AFIP) Rawalpindi. Methods: Cross-sectional study for the estimation of reference values. By non-probability consecutive sampling, 155 healthy pregnant women of 30-40 years of age were included; as non-parametric statistics were used, the minimum sample size is 120. Results: A total of 155 women were enrolled in this study. The age of the women enrolled ranged from 30 to 39 years; among them, 39 per cent were less than 34 years. Mean maternal age was 33.46 ± 2.35 years and mean maternal body weight was 54.98 ± 2.88 kg. Median values of the quadruple markers were calculated for the 15th to 18th weeks of gestation and will be used for the calculation of MoM for trisomy 21 screening at these gestational ages. The median values observed were: at 15 weeks of gestation, hCG 36650 mIU/ml, AFP 23.3 IU/ml, uE3 3.5 nmol/L, inhibin-A 198 ng/L; at 16 weeks, hCG 29050 mIU/ml, AFP 35.4 IU/ml, uE3 4.1 nmol/L, inhibin-A 179 ng/L; at 17 weeks, hCG 28450 mIU/ml, AFP 36.0 IU/ml, uE3 6.7 nmol/L, inhibin-A 176 ng/L; and at 18 weeks, hCG 25200 mIU/ml, AFP 38.2 IU/ml, uE3 8.2 nmol/L, inhibin-A 190 ng/L. All comparisons were significant (p-value < 0.005) at a 95% confidence interval (CI), with the level of significance set at 5% following the literature. Conclusion: The median values for these four biomarkers in Pakistani pregnant women can be used to calculate MoM.
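
As a short illustration of how the reported medians feed into screening: a marker's multiple of the median (MoM) is the measured value divided by the gestational-age-specific median. The sketch below uses the hCG medians quoted in the abstract; the function name is illustrative.

```python
# hCG medians (mIU/ml) by completed week of gestation, from the abstract.
HCG_MEDIANS_MIU_ML = {15: 36650, 16: 29050, 17: 28450, 18: 25200}

def hcg_mom(measured_miu_ml: float, gestation_week: int) -> float:
    """Multiple of the median for an hCG measurement at 15-18 weeks."""
    return measured_miu_ml / HCG_MEDIANS_MIU_ML[gestation_week]

# e.g. hcg_mom(73300, 15) -> 2.0, i.e. twice the population median
```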

Keywords: screening, down syndrome, quadruple test, second trimester, serum biomarkers

Procedia PDF Downloads 162
2250 Household Food Security and Poverty Reduction in Cameroon

Authors: Bougema Theodore Ntenkeh, Chi-bikom Barbara Kyien

Abstract:

The reduction of poverty and hunger sits at the heart of the United Nations 2030 Agenda for Sustainable Development; they are the subjects of the first two Sustainable Development Goals. World Food Day, celebrated on the 16th of October every year, highlights the need for people to have physical and economic access at all times to enough nutritious and safe food to live a healthy and active life, while the International Day for the Eradication of Poverty, celebrated on the 17th of October, is an opportunity to acknowledge the struggle of people living in poverty, a chance for them to make their concerns heard, and for the community to recognize and support poor people in their fight against poverty. Evidence on the association between household food security and poverty reduction is not only sparse in Cameroon but mostly qualitative. The paper therefore investigates the effect of household food security on poverty reduction in Cameroon quantitatively, using data from the Cameroon Household Consumption Survey collected by the Government Statistics Office. The methodology employs five indicators of household food security combined through Multiple Correspondence Analysis, while poverty is captured as a dummy variable. Using a control function technique, with pre- and post-estimation tests for robustness, the study finds that household food security has a positive and significant effect on poverty reduction in Cameroon: a unit increase in the food security score reduces the probability of the household being poor by 31.8%, and this effect is statistically significant at 1%. The results further illustrate that the age of the household head and household size increase household poverty, while households residing in urban areas are significantly less poor. The paper therefore recommends that households diversify their food intake to enhance an effective supply of labour to the job market as a strategy to reduce household poverty. Furthermore, family planning methods should be encouraged as a strategy to reduce the birth rate, for an equitable distribution of household resources including food, while the government of Cameroon should also develop the rural areas, given that trends in urbanization are associated with the concentration of productive economic activities, leading to increased household income, increased household food security and poverty reduction.

Keywords: food security, poverty reduction, SDGs, Cameroon

Procedia PDF Downloads 58
2249 Sequence Polymorphism and Haplogroup Distribution of Mitochondrial DNA Control Regions HVS1 and HVS2 in a Southwestern Nigerian Population

Authors: Ogbonnaya O. Iroanya, Samson T. Fakorede, Osamudiamen J. Edosa, Hadiat A. Azeez

Abstract:

The human mitochondrial DNA (mtDNA) is a circular DNA molecule of about 17 kbp found within the mitochondria, which includes a smaller fragment of about 1200 bp known as the control region. Knowledge of variation within populations has been employed in forensic and molecular anthropology studies. The study was aimed at investigating the polymorphic nature of the two hypervariable segments (HVS) of the mtDNA control region, HVS1 and HVS2, and at determining the haplogroup distribution among individuals resident in Lagos, Southwestern Nigeria. Peripheral blood samples were obtained from sixty maternally unrelated individuals, followed by DNA extraction and amplification of the extracted DNA using primers specific for the regions under investigation. The amplicons were sequenced, and the sequence data were aligned and compared to the revised Cambridge Reference Sequence (rCRS; GenBank Accession number NC_012920.1) using BioEdit software. Results obtained showed 61 and 52 polymorphic nucleotide positions for HVS1 and HVS2, respectively. While a total of three indel mutations were recorded for HVS1, there were seven for HVS2, and transition mutations predominated among the nucleotide changes observed in the study. Genetic diversity (GD) values for HVS1 and HVS2 were estimated to be 84.21% and 90.4%, respectively, while the random match probability was 0.17% for HVS1 and 0.89% for HVS2. The study also revealed mixed haplogroups specific to the African (L1-L3) and Eurasian (U and H) lineages. The new polymorphic sites obtained from the study are promising for human identification purposes.

Keywords: hypervariable region, indels, mitochondrial DNA, polymorphism, random match probability

Procedia PDF Downloads 102
2248 Structural Stress of Hegemon’s Power Loss: A Pestle Analysis for Pacification and Security Policy Plan

Authors: Sehrish Qayyum

Abstract:

Active military power contention is shifting to economic and cyberwar as a means to retain hegemony. An attuned PESTLE analysis confirms that the structural stress of a hegemon's power loss drives a containment approach toward caging actions. Ongoing diplomatic, asymmetric, proxy and direct wars are increasing the stress on the hegemon's power retention due to tangled military and economic alliances. This creates a condition of catalepsy with defective reflexive control, which affects core warfare operations. When one's own power is doubted, it gives power to one's own doubt to ruin all planning, even planning done with superlative cost-benefit analysis. A strategically calculated estimation of the hegemon's power game across three chronological periods (from early WWI to WWII, from WWII to the Cold War, and from the Cold War to the current era) shows that the Thucydides trap became the reason wars broke out. Thirst for power is the demise of imagination and of the cooperation needed for better sense to prevail; instead, it drives ashes to dust. PESTLE analysis is a wide array of evaluation, from the political and economic to the legal dimensions of state matters. It helps to develop a Pacification and Security Policy Plan (PSPP) to avoid the hegemon's structural stress of power loss and, in turn, to create alliances with maximally amicable outputs. A PSPP may serve to regulate and pause the hurricane of power clashes. The PSPP, along with a strategic work plan, is based on PESTLE analysis to deal with any conceivable war condition and is an approach for saving international peace. Getting tangled in self-imposed epistemic dilemmas results in regret becoming the only option of performance. The plan is a generic application of probability tests to find the best possible options and conditions to develop a PSPP for any adversity possible so far. Innovation in expertise begets innovation in planning and in an action plan that serves as a rheostat approach to deal with any plausible power clash.

Keywords: alliance, hegemon, pestle analysis, pacification and security policy plan, security

Procedia PDF Downloads 93
2247 ‘Groupitizing’ – A Key Factor in Math Learning Disabilities

Authors: Michal Wolk, Bat-Sheva Hadad, Orly Rubinsten

Abstract:

Objective: The visuospatial perception system process that allows us to decompose and recompose small quantities into a whole is often called 'groupitizing.' Previous studies have found that adults use groupitizing processes in quantity estimation tasks and have linked this ability to recognize subgroups to arithmetic proficiency. This pilot study examined whether adults with math difficulties benefit from visuospatial grouping cues when asked to estimate the quantity of a given set. It also compared the tipping point at which a significant improvement occurs in adults with typical development compared to adults with math difficulties. Method: In this pilot research, we recruited adults with low arithmetic abilities and matched controls. Participants were asked to estimate the quantity of a given set. Different grouping cues were displayed (space, color, or none) with different visual configurations (different quantities-different shapes, same quantities-different shapes, same quantities-same shapes). Results: Both groups showed significant performance improvement when grouping cues appeared. However, adults with low arithmetic abilities benefited from the grouping cues already at quantities as small as four. Conclusion: Impaired perceptual groupitizing abilities may be a characteristic of low arithmetic abilities.

Keywords: groupitizing, math learning disability, quantity estimation, visual perception system

Procedia PDF Downloads 187
2246 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches

Authors: H. Bonakdari, I. Ebtehaj

Abstract:

The prediction of scour depth around bridge piers is frequently considered in river engineering, and scour depth estimation around bridge piers is one of the key aspects of efficient and optimal bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters in scour depth prediction are determined for the ANN and ANFIS methods via dimensional analysis, and subsequently, the scour depth is predicted. In the current study, the methods' performances are compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of the median diameter of particles to flow depth, the ratio of pier width to flow depth, the Froude number and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.
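
A minimal sketch of the ANN variant with the five dimensionless inputs the study found optimal; the network size and the scaling step are assumptions, since the architecture is not given here:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def scour_depth_model():
    """Inputs per observation: [L/h, d50/h, b/h, Fr, sigma_g], i.e. pier
    length, median particle diameter and pier width each scaled by flow
    depth, the Froude number, and the bed grain size standard deviation.
    Target: scour depth. Fit with model.fit(X, y), then model.predict(X)."""
    return make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0),
    )
```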

Keywords: adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), bridge pier, scour depth, nonlinear regression (NLR)

Procedia PDF Downloads 204
2245 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients

Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan

Abstract:

Near-infrared spectroscopy allows monitoring of oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually affected by natural physiological hemodynamics (systemic interference) that occur in all body tissues, including brain tissue; this makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to determine the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), mean square error (MSE), and Pearson's correlation coefficient (R2) between the estimated and the simulated hemodynamic responses. The technique is also capable of real-time estimation of single-trial functional activations, and it was applied to classify finger tapping versus resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy based brain-computer interface (fNIRS-BCI).
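
A conceptual sketch of adaptive GLM estimation with a Kalman filter, treating the GLM coefficients as a random-walk state and each fNIRS sample as a scalar observation; the noise variances q and r are placeholders, not the study's tuning:

```python
import numpy as np

def kalman_glm(y, X, q=1e-6, r=1e-2):
    """y: (T,) measured signal; X: (T, k) regressors (HRF model plus drift
    and systemic-interference terms). Returns the (T, k) coefficient path."""
    k = X.shape[1]
    w = np.zeros(k)                 # coefficient estimate
    P = np.eye(k)                   # estimate covariance
    path = np.empty((len(y), k))
    for t in range(len(y)):
        P = P + q * np.eye(k)       # random-walk prediction step
        x = X[t]
        s = x @ P @ x + r           # innovation variance
        K = P @ x / s               # Kalman gain
        w = w + K * (y[t] - x @ w)  # measurement update
        P = P - np.outer(K, x) @ P
        path[t] = w
    return path
```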

Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter

Procedia PDF Downloads 140
2244 Estimations of Spectral Dependence of Tropospheric Aerosol Single Scattering Albedo in Sukhothai, Thailand

Authors: Siriluk Ruangrungrote

Abstract:

Analyses of available data from MFR-7 measurements were performed and discussed in this study of tropospheric aerosol and its consequences in Thailand, since aerosol single scattering albedo (w) is one of the most important parameters for determining the aerosol effect on radiative forcing. Here the estimation of w was determined directly as the ratio of aerosol scattering optical depth to aerosol extinction optical depth (ωscat/ωext), without using aerosol computer code models. This has the benefit of eliminating the uncertainty caused by modeling assumptions and by the estimation of actual aerosol input data. Diurnal w for 5 cloudless days in winter and early summer at 5 distinct wavelengths (415, 500, 615, 673 and 870 nm), with Rayleigh scattering and atmospheric column NO2 and ozone contents taken into account, was investigated. In addition, the tendency of the spectral dependence of w representing the two seasons was observed. The character of the spectral results reveals that during wintertime the atmosphere of this inland rural vicinity was, for the period of measurement, possibly dominated by a smaller amount of soil dust aerosol loading than in early summer. Hence, the major aerosol loading, particularly in summer, was a mixture of both soil dust and biomass burning aerosols.
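
The estimate described above reduces to a ratio at each wavelength; as a one-line sketch (array names are illustrative):

```python
import numpy as np

def single_scattering_albedo(tau_scat, tau_ext):
    """Single scattering albedo per wavelength: w = tau_scat / tau_ext."""
    return np.asarray(tau_scat) / np.asarray(tau_ext)

# e.g. w at 415, 500, 615, 673 and 870 nm from the two optical-depth series:
# w = single_scattering_albedo(tau_scat_series, tau_ext_series)
```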

Keywords: aerosol scattering optical depth, aerosol extinction optical depth, biomass burning aerosol, soil dust aerosol

Procedia PDF Downloads 389
2243 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

The sound absorption coefficient is considered important at the design stage because noise affects the emotional quality of a car. It is tuned with many experiments in the field because it is unreliable to predict for multi-layer materials. In this paper, we present the design of sound absorption for automotive interior materials with multiple layers, using software that estimates the sound absorption coefficient for a reverberation chamber. Additionally, we introduce a method for estimating the physical properties required to predict the sound absorption coefficient of car interior materials with multiple layers; these are calculated by an inverse algorithm. It is very economical to obtain information about physical properties without expensive equipment. A correlation test is carried out to ensure reliability and accuracy; the data used for the correlation are sound absorption coefficients measured in the reverberation chamber. In this way, the design of automotive interior materials becomes economical and efficient, and design optimization for the sound absorption coefficient is also easy to implement.

Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes

Procedia PDF Downloads 291
2242 Analysis of Translational Ship Oscillations in a Realistic Environment

Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting

Abstract:

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized on board to measure the ship's oscillating motions. As observations, the three-axis accelerations and three-axis rotational rates provided by the sensor are used. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. The results are not only roll and pitch, but also linear motions, in particular heave and surge, at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated; these agree well with the measurements after processing with the suggested method.
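
A minimal sketch of the rigid-body ('lever-arm') relation behind the distance-vector step, mapping accelerations measured at the sensor to the centre of gravity; this is the standard kinematic identity, not the paper's full extended Kalman filter:

```python
import numpy as np

def accel_at_cg(a_sensor, omega, omega_dot, r):
    """a_sensor: acceleration at the sensor location (body frame);
    omega, omega_dot: angular rate and angular acceleration (body frame);
    r: lever arm from the centre of gravity to the sensor (body frame)."""
    return (a_sensor
            - np.cross(omega_dot, r)
            - np.cross(omega, np.cross(omega, r)))
```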

Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation

Procedia PDF Downloads 510
2241 Corrosion Risk Assessment/Risk Based Inspection (RBI)

Authors: Lutfi Abosrra, Alseddeq Alabaoub, Nuri Elhaloudi

Abstract:

Corrosion processes in the oil and gas industry can lead to failures that are usually costly to repair, costly in terms of loss of contaminated product and environmental damage, and possibly costly in terms of human safety. This article describes the results of the corrosion review and criticality assessment done at Mellitah Gas (SRU unit) for pressure equipment and piping systems. The information gathered through the review was intended for developing a qualitative RBI study. The corrosion criticality assessment was carried out by applying company procedures and industry recommended practices such as API 571, API 580/581 and ASME PCC-3, which provide guidelines for establishing corrosion integrity assessment. The corrosion review is intimately related to the probability of failure (POF). During the corrosion study, the process units were reviewed by following the applicable process flow diagrams (PFDs) in the presence of Mellitah's personnel from process engineering, inspection, and corrosion/materials and reliability engineering. The expected corrosion damage mechanisms (internal and external) were identified, and the corrosion rate was estimated for every piece of equipment and corrosion loop in the process units. A combination of both consequence and likelihood of failure was used to determine the corrosion risk. A qualitative consequence of failure (COF) was assigned to each individual item based on the characteristics of the fluid, per its flammability, toxicity, and pollution potential, in three levels (High, Medium, and Low). A qualitative probability of failure (POF) was applied to evaluate the internal and external degradation mechanisms, on a high-level point basis (0 to 10), for the purpose of risk prioritizing into the range of Low, Medium, and High.

Keywords: corrosion, criticality assessment, RBI, POF, COF

Procedia PDF Downloads 56
2240 The Moment of the Optimal Average Length of the Multivariate Exponentially Weighted Moving Average Control Chart for Equally Correlated Variables

Authors: Edokpa Idemudia Waziri, Salisu S. Umar

Abstract:

The Hotelling's T^2 is a well-known statistic for detecting a shift in the mean vector of a multivariate normal distribution, and control charts based on T^2 have been widely used in statistical process control for monitoring a multivariate process. Although it is a powerful tool, the T^2 statistic is deficient when the shift to be detected in the mean vector of a multivariate process is small and consistent. The Multivariate Exponentially Weighted Moving Average (MEWMA) control chart is one of the control statistics used to overcome this drawback of the Hotelling's T^2 statistic. In this paper, the probability distribution of the Average Run Length (ARL) of the MEWMA control chart, when the quality characteristics exhibit substantial cross correlation and when the process is in control and out of control, was derived using the Markov chain algorithm. The probability functions and the moments of the run length distribution were also derived, and they were consistent with some existing results for the in-control and out-of-control situations. By a simulation process, the procedure identified a class of ARLs for the MEWMA chart when the process is in control and out of control. From our study, it was observed that the MEWMA scheme is quite adequate for detecting a small shift and a good way to improve the quality of goods and services in a multivariate situation. It was also observed that as the in-control average run length ARL0 or the number of variables (p) increases, the optimal value ARLopt increases asymptotically, and as the magnitude of the shift σ increases, the optimal ARLopt decreases. Finally, we use an example from the literature to illustrate our method and demonstrate its efficiency.
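
A short sketch of the MEWMA statistic whose run length the paper studies, using the asymptotic covariance of the smoothed vector; the smoothing parameter and the control limit h are design choices, chosen for example to hit a target in-control ARL:

```python
import numpy as np

def mewma_t2(x, sigma, lam=0.1):
    """x: (T, p) observations with in-control mean zero;
    sigma: (p, p) process covariance. Returns the T^2 statistic per time;
    the chart signals when T^2 exceeds the control limit h."""
    inv_cov = np.linalg.inv(lam / (2 - lam) * sigma)  # asymptotic covariance
    z = np.zeros(x.shape[1])
    t2 = np.empty(len(x))
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z                  # smoothed vector
        t2[i] = z @ inv_cov @ z
    return t2
```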

Keywords: average run length, markov chain, multivariate exponentially weighted moving average, optimal smoothing parameter

Procedia PDF Downloads 406
2239 Modulation of Isoprenaline-Induced Myocardial Damage by Atorvastatin

Authors: Dalia Atallah, Lamiaa Ahmed, Hala Zaki, Mahmoud Khattab

Abstract:

Background: Isoprenaline (ISO) administration induces myocardial damage via oxidative stress and endothelial dysfunction. Atorvastatin (ATV) treatment improves both oxidative stress and endothelial dysfunction, yet recent studies have reported a pro-oxidant effect of ATV administration in both clinical and experimental studies. The present study was directed to investigate the effect of ATV pre-treatment and treatment on ISO-induced myocardial damage. Methods: Male rats were divided into five groups (n = 10). Rats were given ISO (5 mg/kg/day, i.p.) for one week with or without ATV (10 mg/kg/day, p.o.). ATV was given either as pre-treatment for one week before its co-administration with ISO for another week, or as treatment for two weeks at the end of the ISO administration. At the end of the experiment, an electrocardiographic examination was done and blood was isolated for the estimation of plasma creatine kinase-MB (CK-MB) activity. Rats were then sacrificed and the whole ventricles were isolated for histological examination and for the estimation of lipid peroxides as malondialdehyde (MDA) level, reduced glutathione (GSH) level, catalase activity, and total nitrate-nitrite (NOx), as well as the estimation of both endothelial nitric oxide synthase (eNOS) and inducible nitric oxide synthase (iNOS) protein expression. Results: ISO-induced myocardial damage showed a significant elevation in the ST segment, an increase in CK-MB activity, and increased oxidative stress biomarkers. ISO-treated rats also showed a significant decrease in myocardial NOx level and eNOS, as well as degeneration of the myocardium. ATV pre-treatment did not show any protection in ISO-treated rats. On the other hand, ATV treatment showed a significant decrease in both the elevated ST wave and CK-MB activity. Moreover, ATV treatment succeeded in improving oxidative stress biomarkers, tissue NOx, and eNOS protein expression, as well as ameliorating the histological alterations. Conclusion: Pre-treatment with ATV failed to protect against ISO-induced damage. This might suggest a synergistic pro-oxidant effect upon administration of the pro-oxidant ISO along with ATV, as demonstrated by the increased oxidative stress and endothelial dysfunction. On the other side, ATV treatment succeeded in significantly improving oxidative stress biomarkers, endothelial dysfunction and myocardial degeneration.

Keywords: atorvastatin, endothelial dysfunction, isoprenaline, oxidative stress

Procedia PDF Downloads 432
2238 Analysis of Two Methods to Estimate Stochastic Demand in the Vehicle Routing Problem

Authors: Fatemeh Torfi

Abstract:

Estimation of stochastic demand in physical distribution in general, and efficient transport route management in particular, is emerging as a crucial factor in the urban planning domain. It is particularly important in municipalities such as Tehran, where sound demand management calls for a realistic analysis of the routing system. The methodology involved critically investigating a fuzzy least-squares linear regression approach (FLLR) to estimate the stochastic demands in the vehicle routing problem (VRP), bearing in mind the customers' preference order. An FLLR method is proposed for solving the VRP with stochastic demands, and an approximate-distance fuzzy least-squares (ADFL) estimator is applied to original data taken from a case study. The SSR values of the ADFL estimator and real demand are obtained and then compared to the SSR values of the nominal demand and real demand. Empirical results showed that the proposed methods can be viable in solving problems under circumstances of vague and imprecise performance ratings. The results further proved that the ADFL was a realistic and efficient estimator for facing stochastic demand challenges in vehicle routing system management and solving the relevant problems.

Keywords: fuzzy least-squares, stochastic, location, routing problems

Procedia PDF Downloads 409
2237 From Responses of Macroinvertebrate Metrics to the Definition of Reference Thresholds

Authors: Hounyèmè Romuald, Mama Daouda, Argillier Christine

Abstract:

The present study focused on the use of benthic macrofauna to define the reference state of an anthropized lagoon (Nokoué, Benin) from the responses of relevant metrics to pressure proxies. The approach used is a combination of a joint species distribution model and Bayesian networks. The joint species distribution model was used to select the relevant metrics and to generate posterior probabilities that were then converted into posterior response probabilities for each of the quality classes (pressure levels); these constitute the conditional probability tables that allow the establishment of the probabilistic graph representing the causal relationships between metrics and pressure proxies. For the definition of the reference thresholds, the predicted responses for low pressure levels were read via probability density diagrams. Observations collected during high- and low-water periods spanning 3 consecutive years (2004-2006), sampling 33 macroinvertebrate taxa present at all seasons and sampling points, and measurements of 14 environmental parameters were used as application data. The study demonstrated reliable inferences, the selection of 7 relevant metrics, and the definition of quality thresholds for each environmental parameter. The relevance of the metrics, as well as of the reference thresholds for ecological assessment despite the small sample size, suggests the potential for wider applicability of the approach in aquatic ecosystem monitoring and assessment programs in developing countries, which are generally characterized by a lack of monitoring data.

Keywords: pressure proxies, bayesian inference, bioindicators, acadjas, functional traits

Procedia PDF Downloads 70
2236 Estimation of Adult Patient Doses for Chest X-Ray Diagnostic Examinations in a Tertiary Institution Health Centre

Authors: G. E. Okungbowa, H. O. Adams, S. E. Eze

Abstract:

This study is on the estimation of adult patient doses for chest X-ray diagnostic examinations of newly admitted undergraduate students attending a tertiary institution health centre as part of their routine clearance and check-up on admission into the institution. A total of 531 newly admitted undergraduate students were recruited for this survey in the first quarter of 2016 (January to March 2016). The CALDOSE_X 5.0 software was used to compute the Entrance Surface Dose (ESD) and Effective Dose (ED), while the Statistical Package for Social Sciences (SPSS) version 21.0 was used to carry out the statistical analyses. The basic patient data and exposure parameters required by the software are age, sex, examination type, projection posture, tube potential and current-time product. The mean Entrance Surface Doses and Effective Doses of the undergraduate students were calculated using the software, and the values were compared with the existing literature and internationally established diagnostic reference levels. The mean ESD calculated is 0.29 mGy, and the mean effective dose is 0.04 mSv. The values of ESD and ED obtained are below the internationally established diagnostic reference levels, which could be attributed to the good radiographic techniques employed during the chest X-ray procedures for these students.

Keywords: x-ray, dose, examination, chest

Procedia PDF Downloads 167
2235 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods that yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
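
As a minimal sketch of the Monte-Carlo route to PF mentioned above: sample the uncertain inputs, evaluate the factor of safety, and count the fraction of samples below 1. The planar-failure FS expression and all distributions below are illustrative assumptions, not a recommended model:

```python
import numpy as np

def probability_of_failure(n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    cohesion = rng.normal(50.0, 10.0, n)         # kPa, assumed distribution
    phi = np.radians(rng.normal(35.0, 3.0, n))   # friction angle, degrees
    # Illustrative deterministic driving/resisting terms (kPa) for a
    # planar slide of unit contact area:
    normal_stress, shear_stress = 120.0, 90.0
    fs = (cohesion + normal_stress * np.tan(phi)) / shear_stress
    return np.mean(fs < 1.0)                     # PF = fraction with FS < 1
```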

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 200
2234 Numerical Response of Planar HPGe Detector for 241Am Contamination of Various Shapes

Authors: M. Manohari, Himanshu Gupta, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Injection is one of the potential routes of intake in a radioactive facility. The internal dose due to this intake is monitored at the Radiation Emergency Medical Centre, IGCAR, using a portable planar HPGe detector. A contaminated wound may have different shapes, and in a reprocessing facility the potential for wound contamination with actinides is high. Efficiency is one of the input parameters for the estimation of internal dose, and estimating these efficiencies experimentally would be tedious and cumbersome; numerical estimation can be a supplement to experiment. As an initial step in this study, 241Am contamination of different shapes is studied. The portable planar HPGe detector was modeled using the Monte Carlo code FLUKA, and the effects of different parameters, such as the distance of the contamination from the detector and the radius of a circular contamination, were studied. Efficiency values for point and surface contamination located at different distances were estimated. The effect of the radius of the surface source on efficiency was more predominant when the source was at 1 cm distance than when the source-to-detector distance was 10 cm. At 1 cm, the efficiency decreased quadratically as the radius increased, while at 10 cm it decreased linearly. The point source efficiency varied exponentially with source-to-detector distance.

Keywords: Planar HPGe, efficiency value, injection, surface source

Procedia PDF Downloads 25
2233 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born's probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion: the actual experimental results were not of a single electron; rather, they were groups of reflected x-ray photons. The vertical waveform used by Schrödinger in his Particle in the Box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels, and the proposed bar chart would be populated by reflected photons. Expansion of basic ideas: part of Schrödinger's 'Particle in the Box' theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate, rather than instantaneously. However, there may be one notable exception: supposedly, following from the theory, the Uncertainty Principle was derived; may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that they did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations made were x-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question (1926-7), he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. Born's interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moves in the observer's direction after the electron has moved away. Astronomers may say that they 'look out into the universe' but are actually using logic opposed to the views of Newton, Hooke and many observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications about the nature of complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 188