Search results for: spectrum estimation
2418 Single Centre Retrospective Analysis of MR Imaging in Placenta Accreta Spectrum Disorder with Histopathological Correlation
Authors: Frank Dorrian, Aniket Adhikari
Abstract:
The placenta accreta spectrum (PAS), which includes placenta accreta, increta, and percreta, is characterized by the abnormal implantation of placental chorionic villi beyond the decidua basalis. Key risk factors include placenta previa, prior cesarean sections, advanced maternal age, uterine surgeries, multiparity, pelvic radiation, and in vitro fertilization (IVF). The incidence of PAS has increased tenfold over the past 50 years, largely due to rising cesarean rates. PAS is associated with significant peripartum and postpartum hemorrhage. Magnetic resonance imaging (MRI) and ultrasound assist in the evaluation of PAS, enabling a multidisciplinary approach to mitigate morbidity and mortality. This study retrospectively analyzed PAS cases at Royal Prince Alfred Hospital, Sydney, Australia. Using the SAR-ESUR joint consensus statement, seven imaging signs were reassessed for their sensitivity and specificity in predicting PAS, with histopathological correlation. The standardized MRI protocols for PAS at the institution were also reviewed. Data were collected from the picture archiving and communication system (PACS) records from 2010 to July 2024, focusing on cases where MR imaging and confirmed histopathology or operative notes were available. This single-center, observational study provides insights into the reliability of MRI for PAS detection and the optimization of imaging protocols for accurate diagnosis. The findings demonstrate that intraplacental dark bands serve as highly sensitive markers for diagnosing PAS, achieving sensitivities of 88.9%, 85.7%, and 100% for placenta accreta, increta, and percreta, respectively, with a combined specificity of 42.9%. Sensitivity for abnormal vascularization was lower (33.3%, 28.6%, and 50%), with a specificity of 57.1%. The placenta bulge exhibited sensitivities of 55.5%, 57.1%, and 100%, with a specificity of 57.1%. Loss of the T2 hypointense interface had sensitivities of 66.6%, 85.7%, and 100%, with 42.9% specificity. Myometrial thinning showed high sensitivity across PAS conditions (88.9%, 71.4%, and 100%) and a specificity of 57.1%. Bladder wall thinning was sensitive only for placenta percreta (50%) but had a specificity of 100%. Focal exophytic mass displayed variable sensitivity (22.9%, 42.9%, and 100%) with a specificity of 85.7%. These results highlight the diagnostic variability among markers, with intraplacental dark bands and myometrial thinning being useful in detecting abnormal placentation, though they lack high specificity. The literature and the results of our study highlight that while no single feature can definitively diagnose PAS, the presence of multiple features -especially when combined with elevated clinical risk- significantly increases the likelihood of an underlying PAS. A thorough understanding of the range of MRI findings associated with PAS, along with awareness of the clinical significance of each sign, helps the radiologist more accurately diagnose the condition and assist in surgical planning, ultimately improving patient care.Keywords: placenta, accreta, spectrum, MRI
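The per-sign sensitivity and specificity figures reported above reduce to simple ratios over a two-by-two comparison of each imaging sign against histopathology. A minimal sketch of how such statistics can be computed (the counts below are illustrative placeholders, not the Royal Prince Alfred Hospital data):

```python
# Sensitivity/specificity of an MRI sign against histopathological ground truth.
# The counts below are hypothetical and only illustrate the calculation.

def diagnostic_performance(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for one imaging sign."""
    sensitivity = tp / (tp + fn)  # sign present among histology-confirmed PAS cases
    specificity = tn / (tn + fp)  # sign absent among non-PAS cases
    return sensitivity, specificity

# Example: intraplacental dark bands vs. confirmed placenta accreta (made-up counts)
sens, spec = diagnostic_performance(tp=8, fn=1, tn=3, fp=4)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```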
Procedia PDF Downloads 8
2417 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission at X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. At X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less-steep drop-off (its subsequent behaviour is not considered here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta. In many ways, the term "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described by this picture even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations, such as the spectral energy distributions of the afterglows of very bright GRBs, support this broad picture. The bluer light (optical and X-ray) appears to follow a typical synchrotron forward shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed to unfold the mysteries of the afterglow. Keywords: GRB, synchrotron, X-ray, isotropic energy
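Because F ∝ E^β appears as a straight line in log-log space, the spectral index β can be recovered from flux-versus-energy data with an ordinary least-squares fit. A minimal sketch on synthetic data (the value of β and the noise level are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic X-ray afterglow spectrum: F ∝ E^beta with beta = -0.9 (assumed) plus noise
energies = np.logspace(-1, 1, 40)          # keV, illustrative band
true_beta = -0.9
flux = energies**true_beta * rng.lognormal(sigma=0.05, size=energies.size)

# A power law is linear in log-log space: log F = beta * log E + const
beta_fit, log_norm = np.polyfit(np.log10(energies), np.log10(flux), deg=1)
print(f"recovered spectral index beta ≈ {beta_fit:.2f}")
```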
Procedia PDF Downloads 88
2416 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing
Authors: S. Bouhouche, R. Drai, J. Bast
Abstract:
This paper is concerned with a method for uncertainty evaluation of steel sample content using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) contents, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied for constructing an accurate computing procedure for uncertainty measurement. Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement
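The MCMC step can be illustrated with a random-walk Metropolis sampler that draws the posterior PDF of a single calibration slope from simulated XRF-style measurements. The model, prior, and data below are placeholders for illustration, not the paper's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated calibration data: measured intensity = slope * concentration + noise
true_slope, sigma = 2.0, 0.1
conc = np.linspace(0.1, 1.0, 20)                      # e.g. Mn content (wt%), assumed range
intensity = true_slope * conc + rng.normal(0, sigma, conc.size)

def log_posterior(slope):
    # Flat prior; Gaussian likelihood with known measurement noise sigma
    residuals = intensity - slope * conc
    return -0.5 * np.sum((residuals / sigma) ** 2)

# Random-walk Metropolis sampling of the slope's posterior PDF
samples, current = [], 1.0
for _ in range(20000):
    proposal = current + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

samples = np.array(samples[5000:])                    # discard burn-in
print(f"slope = {samples.mean():.3f} ± {samples.std():.3f} (posterior mean ± std)")
```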
Procedia PDF Downloads 283
2415 Synthesis, Characterization, Optical and Photophysical Properties of Pyrene-Labeled Ruthenium(II) Trisbipyridine Complex Cored Dendrimers
Authors: Mireille Vonlanthen, Pasquale Porcu, Ernesto Rivera
Abstract:
Dendritic macromolecules present unique physical and chemical properties. One of them is the ability to transfer energy from donor moieties introduced at the periphery to an acceptor moiety at the core, mimicking the antenna effect of photosynthesis. The mechanism of energy transfer is based on Förster resonance energy transfer and requires some overlap between the emission spectrum of the donor and the absorption spectrum of the acceptor. Since it requires a coupling of transition dipoles but no overlap of the physical wavefunctions, energy transfer by the Förster mechanism can occur over quite long distances, from 1 to a maximum of 10 nm. However, the efficiency of the transfer depends strongly on distance. The Förster radius is the distance at which 50% of the donor's emission is deactivated by FRET. In this work, we synthesized and characterized a novel series of dendrimers bearing pyrene moieties at the periphery and a Ru(II) complex at the core. The optical and photophysical properties of these compounds were studied by absorption and fluorescence spectroscopy. Pyrene is a well-studied chromophore that presents both monomer and excimer fluorescence emission. The coordination compounds of Ru(II) are red emitters with low quantum yield and long excited-state lifetimes. We observed an efficient singlet-to-singlet energy transfer in such constructs. Moreover, it is known that the energy of the MLCT emitting state of Ru(II) can be tuned to become almost isoenergetic with respect to the triplet state of pyrene, leading to an extended phosphorescence lifetime. Using dendrimers bearing pyrene moieties as ligands for Ru(II), we could combine the antenna effect of dendrimers, as well as their protection against quenching by dioxygen, with a lifetime increase due to triplet-triplet equilibrium. Keywords: dendritic molecules, energy transfer, pyrene, Ru-trisbipyridine complex
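The distance dependence mentioned above is the standard Förster relation E = 1 / (1 + (r/R0)^6). A small sketch evaluating transfer efficiency as a function of donor-acceptor distance (the Förster radius used here is an assumed illustrative value, not a measured one for the pyrene/Ru(II) pair):

```python
def fret_efficiency(r_nm, r0_nm):
    """Förster transfer efficiency for donor-acceptor separation r and Förster radius R0."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

r0 = 3.0  # nm, assumed Förster radius for illustration
for r in (1.0, 2.0, 3.0, 5.0, 8.0):
    print(f"r = {r:.1f} nm -> E = {fret_efficiency(r, r0):.2f}")
# At r = R0 the efficiency is 0.50, i.e. 50% of the donor emission is deactivated by FRET.
```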
Procedia PDF Downloads 277
2414 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack
Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo
Abstract:
The purpose of this article is to optimize the Equivalent Electric Circuit Model (EECM) of different orders to obtain greater precision in the modeling of Li-ion battery packs. Optimization includes considering circuits based on 1RC, 2RC and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the high non-linearity of the behavior of the battery pack, Genetic Algorithm (GA) was used to solve and optimize the parameters of each EECM considered (1RC, 2RC and 3RC). The objective of the estimation is to minimize the mean square error between the measured impedance in the real battery pack and those generated by the simulation of different proposed circuit models. The results have been verified by comparing the Nyquist graphs of the estimation of the complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered as viable to represent the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that using GA optimization allows obtaining EECs with 2RC or 3RC networks, with high precision to represent the dynamic behavior of a battery pack in vehicular applications.Keywords: Li-ion battery packs modeling optimized, EECM, GA, electric vehicle applications
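As an illustration of the parameter-identification step, the sketch below writes the complex impedance of a 2RC network with a series resistance and fits its parameters by minimizing the mean squared error against synthetic impedance data. SciPy's differential evolution is used here only as a stand-in for the genetic algorithm, and all parameter values are made up for the example:

```python
import numpy as np
from scipy.optimize import differential_evolution

freqs = np.logspace(-2, 3, 60)                 # Hz
omega = 2 * np.pi * freqs

def z_2rc(params, w):
    """Series resistance plus two parallel RC branches: R0 + R1/(1+jwR1C1) + R2/(1+jwR2C2)."""
    r0, r1, c1, r2, c2 = params
    return r0 + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

# Synthetic "measured" impedance from assumed true parameters, with a little noise
true = [0.010, 0.005, 200.0, 0.008, 2000.0]    # ohms and farads, illustrative only
rng = np.random.default_rng(2)
z_meas = z_2rc(true, omega) * (1 + 0.01 * rng.normal(size=omega.size))

def mse(params):
    """Mean squared error between model and measured complex impedance."""
    return np.mean(np.abs(z_2rc(params, omega) - z_meas) ** 2)

bounds = [(1e-4, 0.1), (1e-4, 0.1), (1.0, 1e4), (1e-4, 0.1), (1.0, 1e5)]
result = differential_evolution(mse, bounds, seed=0, tol=1e-10)
print("fitted parameters:", np.round(result.x, 5))
```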
Procedia PDF Downloads 123
2413 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil
Authors: P. S. Jain, K. D. Bobade, S. J. Surana
Abstract:
A simple, selective, precise, and stability-indicating high-performance thin-layer chromatographic method for the analysis of Naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane: ethyl acetate: glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for Naftopidil (Rf value of 0.43±0.02). Densitometric analysis of Naftopidil was carried out in the absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r²=0.999±0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation. The drug undergoes degradation under acidic, basic, oxidative and thermal conditions, indicating that it is susceptible to acid, base, oxidation and heat. The degraded product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of Naftopidil in bulk drug and pharmaceutical formulation. Keywords: naftopidil, HPTLC, validation, stability, degradation
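The limits quoted above follow the usual calibration-based definitions, LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the regression residuals and S the calibration slope. A minimal sketch with made-up calibration points (not the published calibration data):

```python
import numpy as np

# Hypothetical HPTLC calibration: peak area vs. amount (ng per spot), illustrative only
amount = np.array([200, 400, 600, 800, 1000, 1200], dtype=float)
area = np.array([1510, 2990, 4520, 6010, 7490, 9030], dtype=float)

slope, intercept = np.polyfit(amount, area, 1)
residuals = area - (slope * amount + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation of the regression

lod = 3.3 * sigma / slope              # limit of detection
loq = 10.0 * sigma / slope             # limit of quantification
print(f"LOD ≈ {lod:.1f} ng/spot, LOQ ≈ {loq:.1f} ng/spot")
```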
Procedia PDF Downloads 400
2412 Capture-recapture to Estimate Completeness of Pulmonary Tuberculosis with Two Sources
Authors: Ratchadaporn Ungcharoen, Lily Ingsrisawang
Abstract:
Capture-recapture methods are popular techniques for indirectly estimating the size of wildlife populations and the completeness of cases in epidemiology and the social sciences. The aim of this study was to estimate the completeness of pulmonary tuberculosis cases confirmed by two sources, hospital registrations and surveillance systems, in 2013 in Nakhon Pathom province, Thailand. Several estimators of population size were considered: the Lincoln-Petersen estimator, the Chapman estimator, the Chao lower bound estimator, the Zelterman estimator, etc. We focus on the Chapman and Chao lower bound estimators for estimating the completeness of pulmonary tuberculosis from two sources. The retrieved pulmonary tuberculosis data from the two sources were analyzed and bootstrapped for 30 samples, with 241 observations from source 1 and 305 observations from source 2 per sample, for additional exploration of the completeness of pulmonary tuberculosis. The results from the original data show that the Chapman estimator gave an estimated total of 360 (95% CI: 349-371) pulmonary tuberculosis cases, corresponding to an estimated completeness of 57%. The Chao lower bound estimator estimated a total of 365 (95% CI: 354-376) pulmonary tuberculosis cases, with an estimated completeness of 55.9%. For the bootstrap samples, the Chapman and Chao lower bound estimators gave estimates of 347 (95% CI: 309-385) and 353 (95% CI: 315-390) pulmonary tuberculosis cases, respectively. If recording systems from two sources are available, record linkage and capture-recapture analysis can be useful for estimating the completeness of different registration systems. Both the Chapman and Chao lower bound approaches produce very close estimates. Keywords: capture-recapture, Chao, Chapman, pulmonary tuberculosis
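For two sources, the Lincoln-Petersen and Chapman estimators reduce to closed-form expressions in the two list sizes and the number of matched cases. A small sketch follows; the number of matched cases and the completeness definition used here are illustrative assumptions, not the study's linkage results:

```python
def lincoln_petersen(n1, n2, m):
    """Classical two-source estimator: N = n1*n2/m."""
    return n1 * n2 / m

def chapman(n1, n2, m):
    """Bias-corrected Chapman estimator: N = (n1+1)(n2+1)/(m+1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n1, n2 = 241, 305        # cases in hospital registration and surveillance system
m = 204                  # cases matched in both sources (assumed value for illustration)

n_hat = chapman(n1, n2, m)
observed = n1 + n2 - m   # distinct cases actually ascertained by either source
print(f"Chapman estimate ≈ {n_hat:.0f} cases")
# One common definition of completeness is the observed-to-estimated ratio,
# though definitions vary between studies.
print(f"completeness of ascertainment ≈ {observed / n_hat:.1%}")
```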
Procedia PDF Downloads 516
2411 Engine Thrust Estimation by Strain Gauging of Engine Mount Assembly
Authors: Rohit Vashistha, Amit Kumar Gupta, G. P. Ravishankar, Mahesh P. Padwale
Abstract:
Accurate thrust measurement is required for aircraft during takeoff and after a ski-jump. In a developmental aircraft, takeoff from a ship is extremely critical, and the thrust produced by the engine should be known to the pilot before takeoff so that, if the thrust produced is not sufficient, the takeoff can be aborted and an accident avoided. After the ski-jump, the thrust produced by the engine is required because the horizontal speed of the aircraft is less than the normal takeoff speed. The engine should be able to produce enough thrust to provide the nominal horizontal takeoff speed to the airframe within the prescribed time limit. Contemporary low-bypass gas turbine engines generally have three mounts, where the two side mounts transfer the engine thrust to the airframe. The third mount only takes the weight component; it does not take any thrust component. In the present method of thrust estimation, strain gauging of the two side mounts is carried out. The strain produced at various power settings is used to estimate the thrust produced by the engine. A quarter Wheatstone bridge is used to acquire the strain data. The engine mount assembly is tested on a universal testing machine to determine the equivalent elasticity of the assembly. This elasticity value is used in the analytical approach for estimation of engine thrust. The estimated thrust is compared with the test bed load cell thrust data. The experimental strain data are also compared with strain data obtained from FEM analysis. Experimental setup: The strain gauge is mounted on the tapered portion of the engine mount sleeve. Two strain gauges are mounted at diametrically opposite locations. Both of the strain gauges on the sleeve were in the horizontal plane. In this way, these strain gauges were not taking any strain due to the weight of the engine (except negligible strain due to the material's Poisson's ratio) or the hoop stress. Only the third mount strain gauge will show strain when the engine is not running, i.e., strain due to the weight of the engine. When the engine starts running, all the load will be taken by the side mounts. The strain gauge on the forward side of the sleeve showed a compressive strain, and the strain gauge on the rear side of the sleeve showed a tensile strain. Results and conclusion: The analytical calculation shows that the hoop stresses dominate the bending stress. The thrust estimated by strain gauging shows better accuracy at higher power settings than at lower power settings. The accuracy of the estimated thrust at the maximum power setting is 99.7%, whereas at a lower power setting it is 78%. Keywords: engine mounts, finite elements analysis, strain gauge, stress
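The quarter-bridge relation connecting the measured output voltage to strain, plus a simple elastic relation converting strain to axial load, can be sketched as follows. The gauge factor, excitation voltage, modulus, and cross-sectional area are illustrative assumptions; the paper's actual calibration uses the measured equivalent elasticity of the mount assembly rather than this idealized F = E·A·ε relation:

```python
# Quarter Wheatstone bridge: Vout/Vex ≈ GF * strain / 4 (small-strain approximation)

def strain_from_bridge(v_out, v_ex, gauge_factor=2.0):
    """Recover strain from the bridge output-to-excitation voltage ratio."""
    return 4.0 * v_out / (v_ex * gauge_factor)

def axial_load_from_strain(strain, youngs_modulus_pa, area_m2):
    """Simplified elastic estimate of axial load: F = E * A * strain."""
    return youngs_modulus_pa * area_m2 * strain

v_ex = 5.0                 # bridge excitation voltage, V (assumed)
v_out = 1.25e-3            # measured bridge output, V (assumed)
E_steel = 200e9            # Pa, typical steel modulus (assumed)
area = 3.0e-4              # m^2, effective sleeve cross-section (assumed)

eps = strain_from_bridge(v_out, v_ex)
force = axial_load_from_strain(eps, E_steel, area)
print(f"strain ≈ {eps:.2e}, estimated load per mount ≈ {force/1e3:.1f} kN")
```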
Procedia PDF Downloads 483
2410 Optimizing Microwave Assisted Extraction of Anti-Diabetic Plant Tinospora cordifolia Used in Ayush System for Estimation of Berberine Using Taguchi L-9 Orthogonal Design
Authors: Saurabh Satija, Munish Garg
Abstract:
The present work reports an efficient extraction method using a microwave-based solvent-sample duo-heating mechanism for the extraction of the important anti-diabetic plant Tinospora cordifolia, used in the AYUSH system, for estimation of its berberine content. The process is based on simultaneous heating of the sample matrix and the extracting solvent under microwave energy. Methanol was used as the extracting solvent, which has excellent berberine-solubilizing power and warms up readily under microwaves owing to its high dissipation factor. Extraction conditions such as time of irradiation, microwave power, solute-solvent ratio and temperature were optimized using a Taguchi design, and berberine was quantified using high-performance thin-layer chromatography. The ranked optimized parameters were microwave power (rank 1), irradiation time (rank 2) and temperature (rank 3). This kind of extraction mechanism under dual heating provided a choice of extraction parameters for better precision and higher yield, with a significant reduction in extraction time under optimum conditions. The developed extraction protocol will allow higher amounts of berberine, a major anti-diabetic constituent of Tinospora cordifolia, to be extracted, which can lead to the development of cheaper formulations of the plant and help in the prevention of diabetes worldwide. Keywords: berberine, microwave, optimization, Taguchi
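In a Taguchi L-9 analysis, each factor level is ranked by its mean signal-to-noise ratio; for yield, the larger-the-better form S/N = -10·log10(mean(1/y²)) is commonly used. A small sketch with made-up berberine yields (not the experimental data reported here):

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi larger-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical berberine yields (mg/g) for three levels of one factor (e.g. microwave power),
# with the L-9 runs at each level grouped together
yields_by_level = {
    "low power": [4.1, 4.4, 3.9],
    "mid power": [5.6, 5.9, 5.7],
    "high power": [5.0, 4.8, 5.2],
}
for level, y in yields_by_level.items():
    print(f"{level}: S/N = {sn_larger_the_better(y):.2f} dB")
# The level with the highest mean S/N is preferred; ranking factors by the spread of
# their S/N values indicates their relative influence on the yield.
```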
Procedia PDF Downloads 347
2409 Parallel Self Organizing Neural Network Based Estimation of Archie's Parameters and Water Saturation in Sandstone Reservoir
Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi
Abstract:
Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination using electrical measurements is mainly based on Archie's formula. Consequently, the accuracy of Archie's formula parameters strongly affects water saturation values. Determination of Archie's parameters a, m, and n proceeds by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie's parameters and, consequently, reliable water saturation values. This work focuses on Archie's parameter determination techniques (the conventional technique, the CAPE technique, and the 3-D technique) and then on the calculation of water saturation using each. Using the same data, a hybrid parallel self-organizing neural network (PSONN) algorithm is used to estimate Archie's parameters and predict water saturation. Results have shown that the estimated Archie's parameters m, a, and n are highly acceptable according to statistical analysis, indicating that the PSONN model has a lower statistical error and a higher correlation coefficient. This study was conducted using a high number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie's parameters and thereby calculating water saturation profiles. Keywords: water saturation, Archie's parameters, artificial intelligence, PSONN, sandstone reservoir
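The water saturation that both the conventional workflow and the PSONN ultimately target comes from Archie's relation Sw = ((a·Rw)/(φ^m·Rt))^(1/n). A minimal sketch (the parameter and log values are illustrative, not the 144-plug dataset):

```python
def archie_water_saturation(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Illustrative inputs (ohm-m and fractional porosity), not measured core data
rw = 0.05      # formation water resistivity
rt = 20.0      # true formation resistivity from logs
phi = 0.22     # porosity

sw = archie_water_saturation(rw, rt, phi, a=1.0, m=1.9, n=2.1)
print(f"water saturation Sw ≈ {sw:.2f}")
# Small errors in a, m, n propagate directly into Sw, which is why accurate
# estimation of Archie's parameters matters for reserve calculations.
```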
Procedia PDF Downloads 128
2408 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder
Authors: Kseniya Gladun
Abstract:
Tactile stimulation of a dorsal side of the wrist can have a strong impact on our attitude toward physical objects such as pleasant and unpleasant impact. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). This study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) during 20 min using EEG amplifier “Encephalan” (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. The electrodes placed on the left, and right mastoids served as joint references under unipolar montage. The registration of EEG v19 assignments was carried out: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4), occipital (O1-O2). Subjects were passively touched by 4 types of tactile stimuli on the left wrist. Our stimuli were presented with a velocity of about 3–5 cm per sec. The stimuli materials and procedure were chosen for being the most "pleasant," "rough," "prickly" and "recognizable". Type of tactile stimulation: Soft cosmetic brush - "pleasant" , Rough shoe brush - "rough", Wartenberg pin wheel roller - "prickly", and the cognitive tactile stimulation included letters by finger (most of the patient’s name ) "recognizable". To designate the moments of the stimuli onset-offset, we marked the moment when the moment of the touch began and ended; the stimulation was manual, and synchronization was not precise enough for event-related measures. EEG epochs were cleaned from eye movements by ICA-based algorithm in EEGLAB plugin for MatLab 7.11.0 (Mathwork Inc.). Muscle artifacts were cut out by manual data inspection. The response to tactile stimuli was significantly different in the group of children with ASD and healthy children, which was also depended on type of tactile stimuli and the severity of ASD. Amplitude of Alpha rhythm increased in parietal region to response for only pleasant stimulus, for another type of stimulus ("rough," "thorny", "recognizable") distinction of amplitude was not observed. Correlation dimension D2 was higher in healthy children compared to children with ASD (main effect ANOVA). In ASD group D2 was lower for pleasant and unpleasant compared to the background in the right parietal area. Hilbert transform changes in the frequency of the theta rhythm found only for a rough tactile stimulation compared with healthy participants only in the right parietal area. Children with autism spectrum disorders and healthy children were responded to tactile stimulation differently with specific frequency distribution alpha and theta band in the right parietal area. Thus, our data supports the hypothesis that rsEEG may serve as a sensitive index of altered neural activity caused by ASD. Children with autism have difficulty in distinguishing the emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography
Procedia PDF Downloads 251
2407 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method
Authors: F. C. Amadi, G. C. Enyi, G. Nasr
Abstract:
Relative permeabilities are practical factors that are used to correct the single-phase Darcy's law for application to multiphase flow. For effective characterisation of large-scale multiphase flow in hydrocarbon recovery, relative permeability and capillary pressures are used. These parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers for the evaluation of these parameters. However, core flooding experiments on shale core samples are expensive and time consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes the application of core flooding simulations imperative, in which various analyses of relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently and effectively at a reasonable pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. In the first step, the basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Secondly, core flooding was simulated for a particular injection scenario using different correlations. Thirdly, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This research approach saves cost and time and is very reliable in the computation of relative permeability and capillary pressures for steady- or unsteady-state, drainage or imbibition processes in the oil and gas industry when compared to other methods. Keywords: relative permeability, porosity, 1-D black oil simulator, capillary pressures
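One family of correlations commonly fitted in such simulations is the Corey power-law model. The sketch below is a generic Corey implementation with assumed end points and exponents, shown only to illustrate what a "best-fit correlation" parameterizes; it is not the correlation selected in this study:

```python
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.25, krw_end=0.3, kro_end=0.8, nw=2.5, no=2.0):
    """Corey-type water/oil relative permeability curves as functions of water saturation."""
    sw = np.asarray(sw, dtype=float)
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)   # normalized saturation
    krw = krw_end * swn ** nw          # water relative permeability
    kro = kro_end * (1.0 - swn) ** no  # oil relative permeability
    return krw, kro

sw = np.linspace(0.2, 0.75, 6)
krw, kro = corey_relperm(sw)
for s, w, o in zip(sw, krw, kro):
    print(f"Sw={s:.2f}  krw={w:.3f}  kro={o:.3f}")
```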
Procedia PDF Downloads 442
2406 Module Valuations and Quasi-Valuations
Authors: Shai Sarussi
Abstract:
Suppose F is a field with valuation v and valuation domain Oᵥ, and R is an Oᵥ-algebra. It is known that there exists a filter quasi-valuation on R; the existence of a quasi-valuation yields several important connections between Oᵥ and R, in particular with respect to their prime spectra. In this paper, the notion of a module valuation is introduced. It is shown that any torsion-free module over Oᵥ has an induced module valuation. Moreover, several results connecting the filter quasi-valuation and module valuations are presented.Keywords: valuations, quasi-valuations, prime spectrum, algebras over valuation domains
Procedia PDF Downloads 224
2405 Risk Analysis of Leaks from a Subsea Oil Facility Based on Fuzzy Logic Techniques
Authors: Belén Vinaixa Kinnear, Arturo Hidalgo López, Bernardo Elembo Wilasi, Pablo Fernández Pérez, Cecilia Hernández Fuentealba
Abstract:
The expanded use of risk assessment in legislative and corporate decision-making has increased the role of expert judgement in providing data for security-related decision-making. Expert judgements are required in most steps of risk assessment: hazard identification, risk estimation, risk evaluation, and analysis of options. This paper presents a fault tree analysis (FTA), a probabilistic failure analysis, applied to oil leakage in a subsea production system. In standard FTA, the failure probabilities of the items of a system are treated as exact values when evaluating the failure probability of the top event. There is a persistent shortage of data for estimating the failure probabilities of components within the drilling industry. Therefore, fuzzy theory can be used as a solution to this issue. The aim of this paper is to examine the leaks from the Zafiro West subsea oil facility by using fuzzy fault tree analysis (FFTA). As a result, the research has made theoretical and practical contributions to maritime safety and environmental protection. FFTA has also been an effective strategy traditionally used to identify hazards in nuclear installations and the power industries. Keywords: expert judgment, probability assessment, fault tree analysis, risk analysis, oil pipelines, subsea production system, drilling, quantitative risk analysis, leakage failure, top event, off-shore industry
Procedia PDF Downloads 190
2404 Cognitive Deficits and Association with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder in 22q11.2 Deletion Syndrome
Authors: Sinead Morrison, Ann Swillen, Therese Van Amelsvoort, Samuel Chawner, Elfi Vergaelen, Michael Owen, Marianne Van Den Bree
Abstract:
22q11.2 Deletion Syndrome (22q11.2DS) is caused by the deletion of approximately 60 genes on chromosome 22 and is associated with high rates of neurodevelopmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorders (ASD). The presentation of these disorders in 22q11.2DS is reported to be comparable to idiopathic forms and therefore presents a valuable model for understanding mechanisms of neurodevelopmental disorders. Cognitive deficits are thought to be a core feature of neurodevelopmental disorders, and possibly manifest in behavioural and emotional problems. There have been mixed findings in 22q11.2DS on whether the presence of ADHD or ASD is associated with greater cognitive deficits. Furthermore, the influence of developmental stage has never been taken into account. The aim was therefore to examine whether the presence of ADHD or ASD was associated with cognitive deficits in childhood and/or adolescence in 22q11.2DS. We conducted the largest study to date of this kind in 22q11.2DS. The same battery of tasks measuring processing speed, attention and spatial working memory were completed by 135 participants with 22q11.2DS. Wechsler IQ tests were completed, yielding Full Scale (FSIQ), Verbal (VIQ) and Performance IQ (PIQ). Age-standardised difference scores were produced for each participant. Developmental stages were defined as children (6-10 years) and adolescents (10-18 years). ADHD diagnosis was ascertained from a semi-structured interview with a parent. ASD status was ascertained from a questionnaire completed by a parent. Interaction and main effects of cognitive performance of those with or without a diagnosis of ADHD or ASD in childhood or adolescence were conducted with 2x2 ANOVA. Significant interactions were followed up with t-tests of simple effects. Adolescents with ASD displayed greater deficits in all measures (processing speed, p = 0.022; sustained attention, p = 0.016; working memory, p = 0.006) than adolescents without ASD; there was no difference between children with and without ASD. There were no significant differences on IQ measures. Both children and adolescents with ADHD displayed greater deficits on sustained attention (p = 0.002) than those without ADHD. There were no significant differences on any other measures for ADHD. Magnitude of cognitive deficit in individuals with 22q11.2DS varied by cognitive domain, developmental stage and presence of neurodevelopmental disorder. Adolescents with 22q11.2DS and ASD showed greater deficits on all measures, which suggests there may be a sensitive period in childhood to acquire these domains, or reflect increasing social and academic demands in adolescence. The finding of poorer sustained attention in children and adolescents with ADHD supports previous research and suggests a specific deficit which can be separated from processing speed and working memory. This research provides unique insights into the association of ASD and ADHD with cognitive deficits in a group at high genomic risk of neurodevelopmental disorders.Keywords: 22q11.2 deletion syndrome, attention deficit hyperactivity disorder, autism spectrum disorder, cognitive development
Procedia PDF Downloads 151
2403 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach
Authors: Nina Ponikvar, Katja Zajc Kejžar
Abstract:
While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At macro level, economic effects are usually studied based on one of the two mainstream approach, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approach in the literature. Furthermore, it is also not clear on which criteria the used approach was selected. Our aim is to bring the attention to the difference in the scope of effects that are encompassed by the two most frequent methodological approaches to valuation of economic effects of cultural heritage on macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage since there is a risk that the direct effects are overestimated and double counted, but not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary. Consequently, a direct comparison of the estimated impacts is not possible and should not be done due to the different scope. To illustrate the difference of the impact assessment of the cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, slovenia
Procedia PDF Downloads 75
2402 Satellite LiDAR-Based Digital Terrain Model Correction using Gaussian Process Regression
Authors: Keisuke Takahata, Hiroshi Suetsugu
Abstract:
Forest height is an important parameter for forest biomass estimation, and precise elevation data is essential for accurate forest height estimation. There are several globally or nationally available digital elevation models (DEMs) like SRTM and ASTER. However, its accuracy is reported to be low particularly in mountainous areas where there are closed canopy or steep slope. Recently, space-borne LiDAR, such as the Global Ecosystem Dynamics Investigation (GEDI), have started to provide sparse but accurate ground elevation and canopy height estimates. Several studies have reported the high degree of accuracy in their elevation products on their exact footprints, while it is not clear how this sparse information can be used for wider area. In this study, we developed a digital terrain model correction algorithm by spatially interpolating the difference between existing DEMs and GEDI elevation products by using Gaussian Process (GP) regression model. The result shows that our GP-based methodology can reduce the mean bias of the elevation data from 3.7m to 0.3m when we use airborne LiDAR-derived elevation information as ground truth. Our algorithm is also capable of quantifying the elevation data uncertainty, which is critical requirement for biomass inventory. Upcoming satellite-LiDAR missions, like MOLI (Multi-footprint Observation Lidar and Imager), are expected to contribute to the more accurate digital terrain model generation.Keywords: digital terrain model, satellite LiDAR, gaussian processes, uncertainty quantification
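A compact way to reproduce the core of this workflow is to regress the DEM-minus-GEDI elevation difference on horizontal coordinates with a Gaussian process and subtract the predicted bias field, which also yields a per-pixel uncertainty. The sketch below uses scikit-learn on synthetic data; the kernel choice, length scales, and noise levels are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Synthetic example: sparse GEDI footprints where the DEM error is known
xy_gedi = rng.uniform(0, 10_000, size=(150, 2))              # footprint coordinates (m)
bias = 5.0 * np.sin(xy_gedi[:, 0] / 3000.0) + 2.0            # smooth DEM bias field (m), assumed
dem_error = bias + rng.normal(0, 0.5, size=150)              # DEM minus GEDI elevation

kernel = 1.0 * RBF(length_scale=2000.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(xy_gedi, dem_error)

# Predict the bias (and its uncertainty) on a regular grid, then subtract it from the DEM
grid = np.stack(np.meshgrid(np.linspace(0, 10_000, 50),
                            np.linspace(0, 10_000, 50)), axis=-1).reshape(-1, 2)
predicted_bias, bias_std = gp.predict(grid, return_std=True)
print(f"mean predicted bias {predicted_bias.mean():.2f} m, "
      f"mean 1-sigma uncertainty {bias_std.mean():.2f} m")
```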
Procedia PDF Downloads 183
2401 A Density Functional Theory Based Comparative Study of Trans and Cis-Resveratrol
Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang
Abstract:
Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as anti-oxidant, anti-aging and antimicrobial properties. Of the two isomeric forms of resveratrol, i.e., trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers will not only provide an insight into understanding the RvL isomers, but will also help in designing parameters for differentiation in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted using the B3LYP/6-311++G** model to explore the through-bond and through-space intramolecular interactions. Properties such as vibrational spectra (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA) and Fukui functions are calculated. It is found that, although the structure of trans-RvL is C1 non-planar, its backbone non-H atoms lie nearly in the same plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not in the same plane. The absence of planarity gives rise to an H-bond of 2.67 Å in cis-RvL. Rotation of the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it may break the entire (planar) conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectrum shows very different spectral features for trans- and cis-RvL in the region 900-1500 cm⁻¹, where the spectral peaks at 1138-1158 cm⁻¹ are split in cis-RvL compared to a single peak at 1165 cm⁻¹ in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm⁻¹. Further, the carbon chemical environment (¹³C NMR) of the RvL molecule exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at the carbon atom C(11), indicating that the chemical environment of this carbon in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans-RvL and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large orbital energy differences, i.e., Δε(cis-trans) > ±0.30 eV, come from the outer valence shell: MO60 (HOMO), MO52-55 and MO46. The active sites captured by the Fukui function (f⁺ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL, as the cis orientation breaks the large conjugation of trans-RvL so that the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEInt) of the phenolic compound, where trans-RvL is preferred over the cis-RvL isomer (ΔΔEᵢ = -4.35 kcal·mol⁻¹). Thus, these quantum mechanical results could help in unwinding the diversified beneficial activities associated with resveratrol. Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function
Procedia PDF Downloads 194
2400 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they remain most often calibrated for a given genotype and therefore do not take into account genotype by environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: The first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters, the second level describes how these individual parameters are distributed within a plant population, the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters permits to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs--Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax, metaprogramming capabilities and exhibits high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale Greenlab model for the latter is thus presented, where the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used. Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need that the data for a single individual be available at all times, nor that the times at which data is available be the same for all the different individuals. This allows to discard data from image analysis when it is not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana’s growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models
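A stripped-down sketch of the hybrid Gibbs-Metropolis idea: individual growth parameters follow a population normal distribution, the population mean is updated by a conjugate Gibbs step, and each individual's nonlinear parameter is updated by a Metropolis step. The logistic growth curve, priors, fixed variances, and dimensions are illustrative simplifications, not the organ-scale Greenlab model used in the study:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: logistic "leaf area" curves for several plants with multiplicative noise
t = np.linspace(0, 21, 15)                           # days
true_rates = rng.normal(0.45, 0.05, size=8)          # individual growth rates (assumed)
obs = [10.0 / (1 + np.exp(-k * (t - 10))) * np.exp(rng.normal(0, 0.05, t.size))
       for k in true_rates]

def log_lik(k, y):
    """Log-likelihood of one individual's data given its growth-rate parameter k."""
    mean = 10.0 / (1 + np.exp(-k * (t - 10)))
    return -0.5 * np.sum((np.log(y) - np.log(mean)) ** 2 / 0.05**2)

# Hybrid Gibbs-Metropolis: Gibbs for the population mean mu, Metropolis for each k_i
mu, tau2 = 0.3, 0.05**2                              # population mean and (fixed) variance
k = np.full(len(obs), 0.3)
mu_trace = []
for _ in range(3000):
    # Gibbs step: conjugate normal update of mu given the k_i (flat prior on mu)
    mu = rng.normal(k.mean(), np.sqrt(tau2 / len(k)))
    # Metropolis step for each individual growth-rate parameter
    for i, y in enumerate(obs):
        prop = k[i] + rng.normal(0, 0.02)
        log_ratio = (log_lik(prop, y) - log_lik(k[i], y)
                     - 0.5 * ((prop - mu) ** 2 - (k[i] - mu) ** 2) / tau2)
        if np.log(rng.uniform()) < log_ratio:
            k[i] = prop
    mu_trace.append(mu)

print(f"posterior mean of population growth rate ≈ {np.mean(mu_trace[1000:]):.3f}")
```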
Procedia PDF Downloads 303
2399 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder
Authors: Tso Sheng-Yang, Wang Tien-Ni
Abstract:
Introduction and Purpose: Previous researchers have documented that children with ASD (autism spectrum disorder) tend to escape from internal and external private events when they face difficult conditions they cannot control or do not like. In particular, when children with ASD need to learn challenging tasks, such as Chinese language, their inappropriate behaviors become apparent. Recently, researchers have applied positive behavior support strategies to children with ASD to enhance their self-efficacy and thereby reduce their adverse behaviors. Thus, the purpose of this research was to design a series of lessons based on art therapy and to evaluate their effectiveness on the child's self-efficacy. Method: This research was a single-case design study that recruited a high-school boy with ASD. The whole study can be separated into three conditions. First, in the baseline condition, the researcher collected the participant's self-efficacy scores every session, before the class started and after it ended. In the intervention condition, the researcher used dramatic conventions to teach the child Chinese language twice a week. When the data were stable across three data points, the study entered the maintenance condition. In the maintenance condition, the researcher only collected the self-efficacy score, without other interventions, five times a month to represent the maintenance effect. The time and frequency of data collection among the three conditions were identical. Concerning art therapy, the common approach, e.g., music, drama, or painting, is to use an art medium as the independent variable. Due to the visual cues of the art medium, children with ASD can easily gain joint attention with teachers. Besides, children with ASD have difficulties understanding abstract concepts. Thus, using dramatic conventions is helpful for children with ASD to construct the environment and understand the context of Classical Chinese. Through actual enactment, it can help children with ASD understand the context and construct prior knowledge. Result: Based on the 10-point Likert scale, we obtained the following results. (a) In the baseline condition, the average self-efficacy score is 1.12 points, ranging from 1 to 2 points, and the level change is 0 points. (b) In the intervention condition, the average self-efficacy score is 7.66 points, ranging from 7 to 9 points, and the level change is 1 point. (c) In the maintenance condition, the average self-efficacy score is 6.66 points, ranging from 6 to 7 points, and the level change is 1 point. Concerning immediacy of change between the baseline and intervention conditions, the difference is 5 points. No overlaps were found between these two conditions. Conclusion: According to the results, using dramatic conventions as a teaching strategy is effective for teaching children with ASD. The results show that the self-efficacy score immediately increases when the dramatic conventions commence. Thus, we suggest that teachers can use this approach, adjusted to the student's traits, to teach children with ASD difficult tasks. Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy
Procedia PDF Downloads 83
2398 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism
Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn
Abstract:
Introduction: Rehabilitation of children with autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with autism spectrum disorders (ASD). Existing rehabilitation approaches in the treatment of children with autism improve their medico-social and socio-psychological adjustment. Experience in treating different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the necessity of a complex intensive approach to this condition and wider implementation of the Kozyavkin method for the treatment of children with ASD. Methods: 19 children aged 3 to 14 years were examined. They were diagnosed with autism (F84.0) with comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent rehabilitation at the ICR for two weeks, where the INRS approach was used. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization, and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games and other methods for improving the motivation and social integration of the child. Efficacy was assessed using a parental questionnaire administered twice: at the onset of the INRS rehabilitation course and two weeks afterward. For efficacy assessment of the rehabilitation of autistic children at the ICR, a standardized tool was used, namely the Autism Treatment Evaluation Checklist (ATEC). This scale was selected because any rehabilitation approach for a child with autism can be assessed using it. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, revealing severe communication, speech, socialization and behavioral impairments in the examined children. After the end of the rehabilitation course, the mean score was 56.5±6.7, which indicates positive dynamics in comparison with the onset of rehabilitation. Generally, improvement of the psychoemotional state occurred in 90% of cases. The most significant changes occurred in speech (16.5 before and 14.5 after treatment), socialization (15.1 before and 12.5 after) and behavior (20.1 before and 17.4 after). Conclusion: As a result of the INRS rehabilitation course, a reduction of autistic symptoms was noted. In particular, improvements in speech were observed (children began to pronounce new syllables and words), there was some decrease in signs of destructiveness, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect of the study is a further, deeper examination of INRS according to evidence-based medicine standards and assessment of its usefulness in the treatment of autism and ASD. Keywords: intensive neurophysiological rehabilitation system (INRS), International Clinic of Rehabilitation, ASD, rehabilitation
Procedia PDF Downloads 169
2397 Investigation and Estimation of State of Health of Battery Pack in Battery Electric Vehicles-Online Battery Characterization
Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas Weyh
Abstract:
The tendency to use the Battery-Electric vehicle (BEV) for the low and medium driving range or even high driving range has been growing more and more. As a result, higher safety, reliability, and durability of the battery pack as a component of electric vehicles, which has a great share of cost and weight of the final product, are the topics to be considered and investigated. Battery aging can be considered as the predominant factor regarding the reliability and durability of BEV. To better understand the aging process, offline battery characterization has been widely used, which is time-consuming and needs very expensive infrastructures. This paper presents the substitute method for the conventional battery characterization methods, which is based on battery Modular Multilevel Management (BM3). According to this Topology, the battery cells can be drained and charged concerning their capacity, which allows varying battery pack structures. Due to the integration of the power electronics, the output voltage of the battery pack is no longer fixed but can be dynamically adjusted in small steps. In other words, each cell can have three different states, namely series, parallel, and bypass in connection with the neighbor cells. With the help of MATLAB/Simulink and by using the BM3 modules, the battery string model is created. This model allows us to switch two cells with the different SoC as parallel, which results in the internal balancing of the cells. But if the parallel switching lasts just for a couple of ms, we can have a perturbation pulse which can stimulate the cells out of the relaxation phase. With the help of modeling the voltage response pulse of the battery, it would be possible to characterize the cell. The Online EIS method, which is discussed in this paper, can be a robust substitute for the conventional battery characterization methods.Keywords: battery characterization, SoH estimation, RLS, BEV
Procedia PDF Downloads 149
2396 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis
Authors: Kawsar Ahmed, Hong Wang
Abstract:
The study's goal was to estimate the causal mediation effect of tobacco taxation before and after price hikes among low-income male smokers, with a particular emphasis on the pathway framework for effect estimation with continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from Bangladeshi low-wage smokers. A quasi-Bayesian technique, a binomial probit model, and a simulation-based sensitivity analysis implemented with the R mediation package were used to estimate the effects. After a price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco product prices have a direct effect on low-income people's decisions to quit or reduce their daily smoking habits: average causal mediation effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], average direct effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and overall significant effects (p<0.001). The smoking decision is partly described by the proportion of the effect mediated through income, estimated at 26.1% following the price rise. In the sensitivity analysis, the ACME and ADE curves, based on the observed coefficients of determination, support the hypothesized model of a substantial effect after price rises. To reduce smoking behavior, price increases through taxation have a positive causal mediation effect, with income affecting the decision to limit tobacco use, and can support healthcare policy for low-income men. Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation
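The ACME/ADE decomposition reported above can be illustrated with the classic product-of-coefficients construction: regress the mediator on treatment, regress the outcome on treatment and mediator, and combine draws of the coefficients in a quasi-Bayesian way. The sketch below uses ordinary least squares on simulated continuous data, a deliberate simplification of the paper's probit/quasi-Bayesian analysis in the R mediation package; all coefficients and variable definitions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 739

# Simulated data: T = exposure to the price hike, M = income-related mediator, Y = sticks/day
T = rng.binomial(1, 0.5, n).astype(float)
M = 1.0 + 0.8 * T + rng.normal(0, 1, n)               # mediator model (assumed coefficients)
Y = 7.0 - 1.5 * T - 0.6 * M + rng.normal(0, 1, n)     # outcome model (assumed coefficients)

def ols(y, X):
    """Return coefficient estimates and their covariance for y = X b + e."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    return beta, sigma2 * np.linalg.inv(X.T @ X)

Xm = np.column_stack([np.ones(n), T])
Xy = np.column_stack([np.ones(n), T, M])
bm, cov_m = ols(M, Xm)            # mediator model: a = effect of T on M
by, cov_y = ols(Y, Xy)            # outcome model: c' = direct effect, b = effect of M on Y

# Quasi-Bayesian step: draw coefficients from their approximate sampling distributions
draws_a = rng.multivariate_normal(bm, cov_m, 2000)[:, 1]
draws_y = rng.multivariate_normal(by, cov_y, 2000)
acme = draws_a * draws_y[:, 2]    # indirect (mediated) effect a*b
ade = draws_y[:, 1]               # direct effect c'

for name, eff in [("ACME", acme), ("ADE", ade)]:
    lo, hi = np.percentile(eff, [2.5, 97.5])
    print(f"{name}: {eff.mean():.2f} (95% CI {lo:.2f} to {hi:.2f})")
```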
Procedia PDF Downloads 112
2395 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector
Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini
Abstract:
Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remains a major challenge as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product in the case of uranium in a geomaterial is interesting for relating radionuclides concentration to the mineralogy. The present study aims to provide spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The track of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from a 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph to perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its descendants in geo-materials by coupling scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures, and the quantitative mapping of 226Ra radioactivity are now being actively studied.Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products
Procedia PDF Downloads 1512394 Kemmer Oscillator in Cosmic String Background
Authors: N. Messai, A. Boumali
Abstract:
In this work, we aim to solve the two-dimensional Kemmer equation including a Dirac oscillator interaction term, in the background space-time generated by a cosmic string subjected to a uniform magnetic field. The eigenfunctions and eigenvalues of the problem have been found, and the influence of the cosmic string space-time on the energy spectrum has been analyzed.Keywords: Kemmer oscillator, cosmic string, Dirac oscillator, eigenfunctions
Procedia PDF Downloads 5842393 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables retained from stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases in which all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator and the Lasso, were considered and compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. Compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
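As a rough illustration of the two-stage idea only (not the paper's Bayesian spatial Lasso), the sketch below screens covariates with a cross-validated frequentist Lasso in stage I and refits a Poisson GLM on the retained covariates in stage II; the data file and column names are hypothetical.

```python
# Illustrative two-stage selection sketch. Stage I: frequentist Lasso as a stand-in
# for the shrinkage step; stage II: refit a count model on the retained covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("county_counts.csv")          # hypothetical geo-referenced counts
X = StandardScaler().fit_transform(df.drop(columns="cases"))
y = df["cases"].to_numpy()

# Stage I: shrinkage-based screening of candidate risk factors
lasso = LassoCV(cv=5).fit(X, np.log1p(y))      # log1p as a rough link for counts
selected = np.flatnonzero(lasso.coef_ != 0)
print("Stage I retained columns:", selected)

# Stage II: refit only the selected variables with a count model
X_sel = sm.add_constant(X[:, selected])
stage2 = sm.GLM(y, X_sel, family=sm.families.Poisson()).fit()
print(stage2.summary())
```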
Procedia PDF Downloads 1432392 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of the process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor product quality and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed data. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, estimators robust to the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with a logistic psi-function for the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions for the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, the robust estimators are found to yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.Keywords: average run length, M-estimators, quality control, robust estimators
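The sketch below illustrates the general robust-chart idea with two of the named estimators, the Hodges-Lehmann location estimator and a normal-consistent MAD scale, on simulated subgroups; the Qn and M-estimator variants and the run-length study from the abstract are not reproduced.

```python
# Hedged sketch of a robust Xbar-type chart: Hodges-Lehmann replaces the subgroup
# mean and the normal-consistent MAD replaces the standard deviation. Simulated data.
import numpy as np
from itertools import combinations_with_replacement
from scipy.stats import median_abs_deviation

def hodges_lehmann(x):
    """Median of pairwise Walsh averages (x_i + x_j) / 2, i <= j."""
    return np.median([(a + b) / 2 for a, b in combinations_with_replacement(x, 2)])

rng = np.random.default_rng(0)
phase1 = rng.normal(10.0, 1.0, size=(25, 5))           # 25 rational subgroups of size 5
phase1[3] += 6.0                                       # one contaminated subgroup

center = np.mean([hodges_lehmann(g) for g in phase1])  # robust process location
sigma = np.mean([median_abs_deviation(g, scale="normal") for g in phase1])
n = phase1.shape[1]
ucl, lcl = center + 3 * sigma / np.sqrt(n), center - 3 * sigma / np.sqrt(n)
print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")

# Phase II: flag subgroups whose robust location falls outside the limits
phase2 = rng.normal(10.8, 1.0, size=(10, 5))
flags = [(i, hodges_lehmann(g)) for i, g in enumerate(phase2)
         if not lcl <= hodges_lehmann(g) <= ucl]
print("out-of-control subgroups:", flags)
```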
Procedia PDF Downloads 1902391 Wind Resource Estimation and Economic Analysis for Rakiraki, Fiji
Authors: Kaushal Kishore
Abstract:
Immense amounts of imported fuel are used in Fiji for electricity generation, transportation, and miscellaneous household work. To alleviate this dependency on fossil fuels, paramount importance has been given to utilizing renewable energy sources for power generation and reducing environmental degradation. Among the many renewable energy sources, wind has been identified as one of the most promising and widely available in Fiji. In this study, a wind resource assessment for three locations in Rakiraki, Fiji, has been carried out. The wind resource at Rokavukavu, Navolau, and Tuvavatu has been analyzed. The average wind speeds at 55 m above ground level (a.g.l.) at the Rokavukavu, Navolau, and Tuvavatu sites are 5.91 m/s, 8.94 m/s, and 8.13 m/s, with turbulence intensities of 14.9%, 17.1%, and 11.7%, respectively. The moment fitting method has been used to estimate the Weibull parameters and the power density at each site. A high-resolution wind resource map for the three locations has been developed using the Wind Atlas Analysis and Application Program (WAsP). The results obtained from WAsP exhibited good wind potential at the Navolau and Tuvavatu sites. A wind farm has been proposed at each of the Navolau and Tuvavatu sites, comprising six Vergnet 275 kW wind turbines per site. The annual energy production (AEP) for each wind farm is estimated, and an economic analysis is performed. The economic analysis for the proposed wind farms at the Navolau and Tuvavatu sites showed payback periods of 5 and 6 years, respectively.Keywords: annual energy production, Rakiraki Fiji, turbulence intensity, Weibull parameter, wind speed, Wind Atlas Analysis and Application Program
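The sketch below illustrates a moment-fitting Weibull estimate and the resulting mean wind power density on synthetic wind-speed data; it is not the WAsP workflow and does not use the site measurements quoted above.

```python
# Sketch: fit Weibull shape k and scale c by matching the sample mean and standard
# deviation (method of moments), then compute the mean wind power density.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

rng = np.random.default_rng(1)
v = rng.weibull(2.1, 10_000) * 9.0          # synthetic hourly wind speeds [m/s]

mean, std = v.mean(), v.std(ddof=1)

# Weibull coefficient of variation: CV(k) = sqrt(G(1+2/k) - G(1+1/k)^2) / G(1+1/k)
def cv(k):
    g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
    return np.sqrt(g2 - g1**2) / g1

k = brentq(lambda k: cv(k) - std / mean, 0.5, 10.0)   # shape parameter
c = mean / gamma(1 + 1 / k)                           # scale parameter [m/s]

rho = 1.225                                           # air density [kg/m^3]
power_density = 0.5 * rho * c**3 * gamma(1 + 3 / k)   # 0.5*rho*E[v^3]  [W/m^2]
print(f"k={k:.2f}, c={c:.2f} m/s, power density={power_density:.0f} W/m^2")
```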
Procedia PDF Downloads 1882390 Crack Propagation in Concrete Gravity Dam
Authors: Faramarz Khoshnoudian
Abstract:
A seismic stability assessment of a concrete gravity dam was performed. Initially (Phase 1), a linear response spectrum analysis was carried out to verify the potential for crack formation. The results show the possibility of cracks developing in the upstream face of the dam close to the lowest gallery; these cracks would be sufficiently long that the dam would not be stable following the earthquake. The results indicate that the dam has potentially inadequate seismic and post-earthquake resistance, and an update of the stability analysis is recommended.Keywords: crack propagation, concrete gravity dam, seismic, assessment
Procedia PDF Downloads 712389 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigating the deleterious effects of sediment as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or un-mixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources with different statistical techniques, and modifying the mixing and un-mixing models, have been introduced and refined by many researchers worldwide and applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups: Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given this background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to estimate sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, based on the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), the suggested modeling approach is an accurate technique to quantify the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan targeted conservation programs for soil and water resources.Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
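A minimal GLUE-style un-mixing sketch follows, assuming mean tracer concentrations for three sources and one target sediment sample (placeholder values, not the Mehran River data): candidate source proportions are sampled at random, scored by goodness of fit against the target signature, and only the behavioral candidates are retained to give contribution ranges.

```python
# GLUE-style sketch of sediment source un-mixing with placeholder tracer values.
import numpy as np

rng = np.random.default_rng(42)

# Mean tracer concentrations: 3 sub-basin sources x 4 geochemical tracers (assumed)
sources = np.array([[12.0, 3.1, 45.0, 0.8],
                    [ 9.5, 4.2, 38.0, 1.1],
                    [15.2, 2.4, 52.0, 0.6]])
target = np.array([13.8, 2.8, 48.5, 0.7])     # target sediment sample (assumed)

# Monte Carlo sampling of source proportions on the simplex (they sum to 1)
props = rng.dirichlet(np.ones(3), size=100_000)
pred = props @ sources                         # predicted mixture signatures

# Goodness of fit: 1 - mean relative absolute error; keep the best 1% as behavioral
gof = 1.0 - np.mean(np.abs(pred - target) / target, axis=1)
behavioral = props[gof > np.quantile(gof, 0.99)]

lo, med, hi = np.percentile(behavioral, [5, 50, 95], axis=0)
for name, l, m, h in zip(["west", "central", "east"], lo, med, hi):
    print(f"{name:>7}: median {m:.2f}, 90% range {l:.2f}-{h:.2f}")
```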
Procedia PDF Downloads 74