Search results for: error masking probability

2136 Probabilistic Robustness Assessment of Structures under Sudden Column-Loss Scenario

Authors: Ali Y Al-Attraqchi, P. Rajeev, M. Javad Hashemi, Riadh Al-Mahaidi

Abstract:

This paper presents a probabilistic incremental dynamic analysis (IDA) of a full reinforced concrete building subjected to a sudden column-loss scenario for the assessment of progressive collapse. The IDA is chosen to explicitly account for uncertainties in loads and system capacity. Fragility curves are developed to predict the probability of progressive collapse given the loss of one or more columns. At a broader scale, the study will also provide critical information needed to support the development of a new generation of design codes that attempt to explicitly quantify structural robustness.
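As an illustration of the fragility-curve step, the sketch below fits a lognormal fragility function to hypothetical IDA collapse counts by maximum likelihood; the intensity levels and counts are invented, not the paper's results.

```python
# Illustrative sketch (not the authors' code): fitting a lognormal fragility
# curve, P(collapse | IM) = Phi(ln(IM/median)/beta), to made-up IDA outcomes.
import numpy as np
from scipy import optimize, stats

im = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])   # intensity measure (assumed units)
n_runs = 20                                      # IDA runs per intensity level
n_collapse = np.array([0, 2, 7, 13, 18, 20])     # hypothetical collapse counts

def neg_log_lik(params):
    median, beta = np.exp(params)                # optimize in log space: both > 0
    p = stats.norm.cdf(np.log(im / median) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_collapse * np.log(p) + (n_runs - n_collapse) * np.log(1 - p))

res = optimize.minimize(neg_log_lik, x0=np.log([0.6, 0.4]), method="Nelder-Mead")
median, beta = np.exp(res.x)
print(f"fragility median = {median:.3f}, dispersion beta = {beta:.3f}")
print("P(collapse | IM=0.7) =", stats.norm.cdf(np.log(0.7 / median) / beta))
```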

Keywords: fire, nonlinear incremental dynamic analysis, progressive collapse, structural engineering

Procedia PDF Downloads 254
2135 Performance of the Strong Stability Method in the Univariate Classical Risk Model

Authors: Safia Hocine, Zina Benouaret, Djamil Aïssani

Abstract:

In this paper, we study the performance of the strong stability method in the univariate classical risk model. We are interested in the stability bounds established using two approaches: the first is based on the strong stability method developed for general Markov chains; the second is based on the theory of regenerative processes. By adopting an algorithmic procedure, we study the performance of the stability method in the case of exponentially distributed claim amounts. After presenting the stability bounds numerically and graphically, the results are interpreted and compared.
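For readers unfamiliar with the underlying model, the minimal sketch below simulates the classical (Cramér-Lundberg) risk process with exponential claims and estimates a finite-horizon ruin probability by Monte Carlo; all parameter values are assumptions for illustration.

```python
# Classical risk model: surplus U(t) = u + c*t - sum of claims up to t, with
# Poisson(lam) arrivals and exponential(mean mu) claim amounts. Ruin can only
# occur at claim instants, so it suffices to check the surplus after each claim.
import numpy as np

rng = np.random.default_rng(0)
u, c, lam, mu, horizon = 10.0, 1.2, 1.0, 1.0, 100.0  # capital, premium rate,
                                                     # claim rate, mean claim, horizon
def ruined():
    t, total_claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # next claim arrival time
        if t > horizon:
            return False                          # survived the horizon
        total_claims += rng.exponential(mu)       # exponential claim amount
        if u + c * t - total_claims < 0:
            return True                           # ruin at this claim instant

n = 20_000
est = sum(ruined() for _ in range(n)) / n
# Infinite-horizon benchmark for exponential claims:
# psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c)*u) ~ 0.157 for these parameters.
print(f"estimated finite-horizon ruin probability: {est:.4f}")
```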

Keywords: Markov chain, regenerative process, risk model, ruin probability, strong stability

Procedia PDF Downloads 305
2134 Modelling Causal Effects from Complex Longitudinal Data via Point Effects of Treatments

Authors: Xiaoqin Wang, Li Yin

Abstract:

Background and purpose: In many practices, one estimates causal effects arising from a complex stochastic process, where a sequence of treatments is assigned to influence a certain outcome of interest and time-dependent covariates exist between treatments. When covariates are plentiful and/or continuous, statistical modeling is needed to reduce the huge dimensionality of the problem and allow for the estimation of causal effects. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to conduct the modeling via point effects. The purpose of this work is to study the modeling of these causal effects via point effects. Challenges and solutions: The time-dependent covariates are often influenced by earlier treatments and in turn influence subsequent treatments. Consequently, the standard parameters, i.e., the means of the outcome given all treatments and covariates, are essentially all different (null paradox). Furthermore, the dimension of the parameters is huge (curse of dimensionality). Therefore, it can be difficult to conduct the modeling in terms of standard parameters. Instead of standard parameters, we use point effects of treatments to develop a likelihood-based parametric approach to the modeling of these causal effects, and we are able to model the causal effects of a sequence of treatments by modeling a small number of point effects of individual treatments. Achievements: We are able to conduct the modeling of the causal effects of a sequence of treatments in the familiar framework of single-point causal inference. The simulation shows that our method achieves not only an unbiased estimate of the causal effect but also the nominal level of type I error and a low level of type II error in hypothesis testing. We have applied this method to a longitudinal study of COVID-19 mortality among Scandinavian countries and found that the Swedish approach performed far worse than the other countries' approaches for COVID-19 mortality, largely due to its early measures during the initial period of the pandemic.

Keywords: causal effect, point effect, statistical modelling, sequential causal inference

Procedia PDF Downloads 192
2133 Security System for Safe Transmission of Medical Image

Authors: Mohammed Jamal Al-Mansor, Kok Beng Gan

Abstract:

This paper develops an optimized embedding of a payload in medical images by using genetic optimization. The goal is to preserve the region of interest from being distorted by the watermark. With the developed system, there is no need for experts to define the region of interest manually, as the system applies genetic optimization to select the parts of the image that can carry the watermark while guaranteeing less distortion. The experimental results confirm that genetic optimization is useful for performing steganography with a lower mean-squared-error percentage.
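The authors' exact GA and DWT pipeline are not given in the abstract, so the toy sketch below only illustrates the selection idea: a genetic algorithm picks a fixed quota of image blocks to carry the payload, with a fitness that favors textured (high-variance) blocks, where embedding distortion is assumed to be least visible.

```python
# Toy GA for watermark block selection (illustrative fitness, invented image).
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (64, 64)).astype(float)     # stand-in medical image
blocks = image.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(64, 8, 8)  # 64 8x8 blocks
payload_bits = 16                                        # blocks needed for payload

def fitness(mask):
    if mask.sum() != payload_bits:                       # must carry full payload
        return -np.inf
    return blocks[mask.astype(bool)].var(axis=(1, 2)).sum()

def random_mask():
    m = np.zeros(64, dtype=int)
    m[rng.choice(64, payload_bits, replace=False)] = 1
    return m

pop = [random_mask() for _ in range(40)]
for _ in range(100):                                     # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]
    children = []
    for _ in range(20):                                  # uniform crossover + repair
        a, b = rng.choice(20, 2, replace=False)
        mixed = np.where(rng.random(64) < 0.5, parents[a], parents[b])
        ones = np.flatnonzero(mixed)
        child = np.zeros(64, dtype=int)
        child[rng.choice(ones, min(payload_bits, len(ones)), replace=False)] = 1
        while child.sum() < payload_bits:                # mutate back up to quota
            child[rng.integers(64)] = 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("selected block indices:", np.flatnonzero(best))
```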

Keywords: AES, DWT, genetic algorithm, watermarking

Procedia PDF Downloads 397
2132 Discontinuous Galerkin Method for Higher-Order Ordinary Differential Equations

Authors: Helmi Temimi

Abstract:

In this paper, we study the superconvergence properties of the discontinuous Galerkin (DG) method applied to one-dimensional mth-order ordinary differential equations without introducing auxiliary variables. We found that the nth derivative of the DG solution exhibits an optimal O(h^(p+1-n)) convergence rate in the L2-norm when piecewise polynomials of degree p >= 1 are used. We further found that the odd and even derivatives are superconvergent at the upwind and downwind endpoints, respectively.
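Rates like O(h^(p+1-n)) are typically verified numerically by computing the observed order from errors on successively refined meshes; the short sketch below shows that standard computation, with assumed error values rather than the paper's data.

```python
# Observed convergence rate between meshes: rate = log(e1/e2) / log(h1/h2).
import numpy as np

h_values = np.array([0.1, 0.05, 0.025, 0.0125])
errors   = np.array([2.1e-3, 2.7e-4, 3.4e-5, 4.2e-6])   # assumed L2 errors, ~O(h^3)

rates = np.log(errors[:-1] / errors[1:]) / np.log(h_values[:-1] / h_values[1:])
print("observed convergence rates:", rates)   # expect ~3, e.g. p=2, n=0
```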

Keywords: discontinuous Galerkin, superconvergence, higher-order, error estimates

Procedia PDF Downloads 464
2131 Applications of Probabilistic Interpolation via Orthogonal Matrices

Authors: Dariusz Jacek Jakóbczak

Abstract:

Mathematics and computer science are interested in methods of 2D curve interpolation and extrapolation using a set of key points (knots). The method of Hurwitz-Radon Matrices (MHR) is one such method. This novel method is based on the family of Hurwitz-Radon (HR) matrices, which possess columns composed of orthogonal vectors. A two-dimensional curve is interpolated via different functions used as probability distribution functions: polynomial, sine, cosine, tangent, cotangent, logarithm, exponential, arcsin, arccos, arctan, arccot, or power function, as well as their inverses. It is shown how to build the orthogonal matrix operator and how to use it in the process of curve reconstruction.

Keywords: 2D data interpolation, Hurwitz-Radon matrices, MHR method, probabilistic modeling, curve extrapolation

Procedia PDF Downloads 513
2130 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second

Authors: P. V. Pramila, V. Mahesh

Abstract:

Pulmonary function tests are important non-invasive diagnostic tests to assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of patients, resulting in incomplete data sets. This paper presents a nonlinear model built using multivariate adaptive regression splines (MARS) and a random forest regression model to predict the missing spirometric features. Random-forest-based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data are recorded for N = 198 subjects. The ranked order of the feature importance index calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50, and the demographic parameter height are the important descriptors. A comparison of the performance of both models proves that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding model fits of R² = 0.96 and R² = 0.99 for normal and abnormal subjects. The root mean square error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects). It is concluded that combining feature selection with a prediction model provides a minimal subset of predominant features to train the model, yielding better prediction performance. This analysis can assist clinicians with an intelligent support system in medical diagnosis and the improvement of clinical care.
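The feature-selection-then-predict workflow can be sketched as below on synthetic data (the clinical data are not public); a random forest stands in for both the ranking step and the final regressor, since MARS would require an extra package such as py-earth.

```python
# Random-forest feature ranking followed by regression on the top-ranked features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 198
feature_names = ["FVC", "FEF25", "PEF", "FEF25-75", "FEF50", "height"]
X = rng.normal(size=(n, 6))                      # stand-ins for the six descriptors
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.normal(size=n)   # synthetic FEV1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

ranking = np.argsort(rf.feature_importances_)[::-1]
print("feature ranking:", [feature_names[i] for i in ranking])

top2 = ranking[:2]                               # retrain on the top two features
rf2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, top2], y_tr)
pred = rf2.predict(X_te[:, top2])
print(f"R2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.4f}")
```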

Keywords: FEV, multivariate adaptive regression splines, pulmonary function test, random forest

Procedia PDF Downloads 296
2129 Evaluation of External Costs of Traffic Accident in Slovak Republic

Authors: Anna Dolinayova, Jozef Danis, Juraj Camaj

Abstract:

This paper compares traffic accidents in road and rail transport in the Slovak Republic from 2009 to 2014, evaluates their external costs, and discusses the possibilities of internalizing them. Road traffic accidents are analyzed according to the after-effects they have caused, the main cause, the place of occurrence (within or outside of towns), and the age of the people killed or severely or lightly injured in them. The evaluation of the individual after-effects is carried out in terms of the probability of traffic accident occurrence.

Keywords: external costs, traffic accident, rail transport, road transport

Procedia PDF Downloads 570
2128 Heat-Induced Uncertainty of Industrial Computed Tomography Measuring a Stainless Steel Cylinder

Authors: Verena M. Moock, Darien E. Arce Chávez, Mariana M. Espejel González, Leopoldo Ruíz-Huerta, Crescencio García-Segundo

Abstract:

Uncertainty analysis in industrial computed tomography is commonly related to metrologically traceable tools, which offer precision measurements of external part features. Unfortunately, there is no such reference tool for internal measurements to profit from the unique imaging potential of X-rays. Uncertainty approximations for computed tomography are still based on general aspects of the industrial machine and do not adapt to acquisition parameters or part characteristics. The present study investigates the impact of the acquisition time on the dimensional uncertainty when measuring a stainless steel cylinder with a circular tomography scan. The authors develop the figure difference method for X-ray radiography to evaluate the volumetric differences introduced within the projected absorption maps of the metal workpiece. The dimensional uncertainty is dominantly influenced by photon energy dissipated as heat, causing the thermal expansion of the metal, as monitored by an infrared camera within the industrial tomograph. With the proposed methodology, we are able to show evolving temperature differences throughout the tomography acquisition. This is an early study showing that the number of projections in computed tomography induces dimensional error due to energy absorption. The error magnitude depends on the thermal properties of the sample and the acquisition parameters, introducing apparent, non-uniform, unwanted volumetric expansion. We introduce infrared imaging for the experimental display of metrological uncertainty in a particular metal part of symmetric geometry. We assess that the current results are of fundamental value for reaching the balance between the number of projections and the uncertainty tolerance when performing analysis with X-ray dimensional exploration in precision measurements with industrial tomography.
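To give a sense of the scale involved, the back-of-the-envelope sketch below applies the linear thermal expansion relation dL = alpha * L * dT to an assumed stainless steel cylinder; the coefficient and diameter are illustrative values, not the authors' measurements.

```python
# Dimensional error of a heated cylinder via linear thermal expansion.
alpha = 16e-6       # 1/K, typical coefficient for austenitic stainless steel
diameter_mm = 25.0  # assumed cylinder diameter
for delta_T in (0.5, 1.0, 2.0, 5.0):   # temperature rise during acquisition, K
    expansion_um = alpha * diameter_mm * delta_T * 1000.0
    print(f"dT = {delta_T:4.1f} K -> diameter change ~ {expansion_um:.2f} um")
```

Even a rise of a few kelvin thus produces sub-micrometer to micrometer-level apparent growth, which is within the sensitivity range of dimensional CT metrology.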

Keywords: computed tomography, digital metrology, infrared imaging, thermal expansion

Procedia PDF Downloads 111
2127 Clusterization Probability in 14N Nuclei

Authors: N. Burtebayev, Sh. Hamada, Zh. Kerimkulov, D. K. Alimov, A. V. Yushkov, N. Amangeldi, A. N. Bakhtibaev

Abstract:

The main aim of the current work is to examine whether 14N is a candidate for a clusterized nucleus. To check this, we have measured the angular distributions of a 14N ion beam elastically scattered on 12C target nuclei at different low energies, 17.5, 21, and 24.5 MeV, which are close to the Coulomb barrier energy of the 14N+12C nuclear system. The study of various transfer reactions could provide us with useful information about the tendency of nuclei to exist in a composite form (core + valence). The experimental data were analyzed using two approaches: phenomenological (optical potential) and semi-microscopic (double folding potential). The agreement between the experimental data and the theoretical predictions is fairly good over the whole angular range.
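As a quick consistency check on "close to the Coulomb barrier", the standard textbook estimate below (with an assumed radius parameter r0, not taken from the paper) places the 14N+12C barrier near the center-of-mass energies probed.

```python
# Coulomb barrier estimate: V_C = Z1*Z2*e^2 / (r0*(A1^(1/3) + A2^(1/3))).
Z1, A1 = 7, 14          # 14N projectile
Z2, A2 = 6, 12          # 12C target
e2 = 1.44               # MeV*fm
r0 = 1.4                # fm, assumed radius parameter

R = r0 * (A1 ** (1 / 3) + A2 ** (1 / 3))
V_C = Z1 * Z2 * e2 / R
print(f"barrier radius ~ {R:.2f} fm, Coulomb barrier ~ {V_C:.1f} MeV")
# Lab energies of 17.5-24.5 MeV correspond to ~8.1-11.3 MeV in the center of
# mass (E_cm = E_lab * A2/(A1+A2)), i.e. energies around this ~9 MeV barrier.
```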

Keywords: deuteron transfer, elastic scattering, optical model, double folding, density distribution

Procedia PDF Downloads 320
2126 Underrepresentation of Right Middle Cerebral Infarct: A Statistical Parametric Mapping

Authors: Wi-Sun Ryu, Eun-Kee Bae

Abstract:

Prior studies have shown that patients with right hemispheric stroke are less likely to seek medical service compared with those with left hemispheric stroke. However, the underlying mechanism of this phenomenon is unknown. In the present study, we generated lesion probability maps for patients with right and left middle cerebral artery infarcts and statistically compared them. We found that involvement of the precentral gyrus (Brodmann area 44), a language area in the left hemisphere, was significantly higher in patients with left hemispheric stroke. This finding suggests that language dysfunction is more noticeable, thereby taking more patients to hospitals.

Keywords: cerebral infarct, brain MRI, statistical parametric mapping, middle cerebral infarct

Procedia PDF Downloads 323
2125 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network

Authors: Ashima Anurag Sharma

Abstract:

Optical fiber based networks can deliver performance that can support the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the passive optical network. This research paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparison between various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be supported decreases due to an increase in the bit error rate.

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 513
2124 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day for the two configurations, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
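A minimal sketch of the modeling setup, assuming synthetic weather data (the INMET and on-farm records are not distributed with the abstract): an MLP regressor trained with each of the two optimizers explored in the study.

```python
# MLP regression of ET0 from weather variables, comparing SGD and Adam solvers.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(7)
n = 2000
# Stand-ins: temperature (C), relative humidity (%), radiation (MJ/m2/day), wind (m/s)
X = np.column_stack([
    rng.uniform(10, 35, n), rng.uniform(30, 95, n),
    rng.uniform(5, 30, n), rng.uniform(0, 6, n),
])
# Toy target loosely shaped like ET0 (mm/day); the real label would be FAO-56 ET0.
y = 0.12 * X[:, 2] + 0.05 * X[:, 0] - 0.02 * (X[:, 1] / 10) + 0.3 * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for solver in ("sgd", "adam"):            # the two optimizers compared in the study
    model = MLPRegressor(hidden_layer_sizes=(32, 16), solver=solver,
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{solver}: MAE = {mean_absolute_error(y_te, pred):.3f} mm/day, "
          f"R2 = {r2_score(y_te, pred):.3f}")
```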

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 29
2123 Valuing Cultural Ecosystem Services of Natural Treatment Systems Using Crowdsourced Data

Authors: Andrea Ghermandi

Abstract:

Natural treatment systems such as constructed wetlands and waste stabilization ponds are increasingly used to treat water and wastewater from a variety of sources, including stormwater and polluted surface water. The provision of ancillary benefits in the form of cultural ecosystem services makes these systems unique among water and wastewater treatment technologies and greatly contributes to determining their potential role in promoting sustainable water management practices. A quantitative analysis of these benefits, however, has been lacking in the literature. Here, a critical assessment of the recreational and educational benefits of natural treatment systems is provided, which combines observed public use from a survey of managers and operators with estimated public use as obtained using geotagged photos from social media as a proxy for visitation rates. Geographic Information Systems (GIS) are used to characterize the spatial boundaries of 273 natural treatment systems worldwide. Such boundaries are used as input for the Application Program Interfaces (APIs) of two popular photo-sharing websites (Flickr and Panoramio) in order to derive the number of photo-user-days, i.e., the number of yearly visits by individual photo users at each site. The adequacy and predictive power of four univariate calibration models using the crowdsourced data as a proxy for visitation are evaluated. A high correlation is found between photo-user-days and observed annual visitors (Pearson's r = 0.811; p-value < 0.001; N = 62). Standardized Major Axis (SMA) regression is found to outperform Ordinary Least Squares regression and count data models in terms of predictive power with respect to standard verification statistics such as the root mean square error of prediction (RMSEP), the mean absolute error of prediction (MAEP), the reduction of error (RE), and the coefficient of efficiency (CE). The SMA regression model is used to estimate the intensity of public use in all 273 natural treatment systems. System type, influent water quality, and area are found to statistically affect public use, consistent with a priori expectations. Publicly available information regarding the home location of the sampled visitors is derived from their social media profiles and used to infer the distance they are willing to travel to visit the natural treatment systems in the database. Such information is analyzed using the travel cost method to derive monetary estimates of the recreational benefits of the investigated natural treatment systems. Overall, the findings confirm the opportunities arising from an integrated design and management of natural treatment systems, which combines the objectives of water quality enhancement and the provision of cultural ecosystem services through public use in a multi-functional approach, compatible with the need to protect public health.
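SMA regression itself is compact enough to show in full; the sketch below implements the standard SMA fit (slope = sign(r) * sd(y)/sd(x), line through the means) on invented calibration data pairing photo-user-days with observed visitors.

```python
# Standardized major axis (SMA, a.k.a. reduced major axis) regression.
import numpy as np

def sma_fit(x, y):
    """SMA slope is sign(r) * sd(y)/sd(x); the line passes through the means."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# Hypothetical calibration set: photo-user-days vs. observed annual visitors.
pud = np.array([12, 40, 75, 150, 320, 610])
visitors = np.array([900, 2400, 5100, 9800, 22000, 41000])

slope, intercept = sma_fit(np.log10(pud), np.log10(visitors))
print(f"log10(visitors) = {slope:.2f} * log10(PUD) + {intercept:.2f}")
print("predicted visitors at PUD=100:", round(10 ** (slope * 2 + intercept)))
```

Unlike OLS, SMA treats both variables as measured with error, which is a natural assumption when crowdsourced photo counts are themselves a noisy proxy.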

Keywords: constructed wetlands, cultural ecosystem services, ecological engineering, waste stabilization ponds

Procedia PDF Downloads 168
2122 Investigation of Delivery of Triple Play Services

Authors: Paramjit Mahey, Monica Sharma, Jasbinder Singh

Abstract:

Fiber based access networks can deliver performance that can support the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the passive optical network. This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to an increase in the bit error rate.

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 528
2121 Study of Syntactic Errors for Deep Parsing at Machine Translation

Authors: Yukiko Sasaki Alam, Shahid Alam

Abstract:

Syntactic parsing is vital for semantic treatment by many applications related to natural language processing (NLP), because form and content coincide in many cases. However, it has not yet reached reliable levels of performance. By manually examining and analyzing individual machine translation output errors that involve syntax as well as semantics, this study attempts to discover what is required for improving syntactic and semantic parsing.

Keywords: syntactic parsing, error analysis, machine translation, deep parsing

Procedia PDF Downloads 536
2120 The Critical Relevance of Credit and Debt Data in Household Food Security Analysis: The Risks of Ineffective Response Actions

Authors: Siddharth Krishnaswamy

Abstract:

Problem Statement: Currently, when analyzing household food security, the most commonly studied food access indicators are household income and expenditure. Larger studies do take into account other indices such as credit and employment, but these are baseline studies and by definition are conducted infrequently. Food security analysis for access is usually dedicated to analyzing income and expenditure indicators, and both of these indicators are notoriously inconsistent. Yet this data can very often end up being the basis on which household food access is calculated and, by extension, be used for decision making. Objectives: This paper argues that along with income and expenditure, credit and debt information should be collected so that an accurate analysis of household food security, and in particular food access, can be determined. Because this information is not routinely collected and analyzed, the actual situation is often "masked": a household's food access and food availability patterns may appear adequate mainly as a result of borrowing and may even reflect a long-term dependency (a debt cycle). In other words, such a household is, in reality, worse off than it appears, a fact masked by its performance on basic access indicators. Procedures/methodologies/approaches: Existing food security data sets collected in 2005 in Azerbaijan, in 2010 across Myanmar, and in 2014-15 across Uganda were used to support the theory that analyzing the income and expenditure of households together with data on credit and borrowing patterns results in an entirely different picture of household food access. Furthermore, the data analyzed depict food consumption patterns across groups of households and relate these to the extent of dependency on credit, i.e., households borrowing money in order to meet food needs. Finally, response options that were based on analyzing only income and expenditure, and response options based on income, expenditure, credit, and borrowing, from the same geographical area of operation, are studied and discussed. Results: The purpose of this work was to see if existing methods of household food security analysis could be improved. It is hoped that food security analysts will collect household-level information on credit and debt and analyze it against income, expenditure, and consumption patterns. This will help determine if a household's food access and availability depend on unsustainable strategies such as borrowing money for food or carrying sustained debts. Conclusions: The results clearly show the amount of relevant information that is missing in food access analysis if the debt and borrowing of the household are not analyzed along with the typical food access indicators, and the serious repercussions this has on programmatic response and interventions.

Keywords: analysis, food security indicators, response, resilience analysis

Procedia PDF Downloads 320
2119 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli that are individually non-painful excite the thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (heat-pinch-cold) nerve fibers, due to the inhibition of the cold-sensitive nerve fibers that are responsible for masking the HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of TGI can help in optimizing thermal displays and understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing the existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between the temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed through the experimental study with the neuronal activity predicted through the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
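The disinhibition mechanism can be caricatured in a few lines. The toy model below is not the authors' receptor model; all rate functions, thresholds, and gains are assumptions chosen only to show how reducing cold-fiber activity unmasks HPC output as the grill temperature difference grows.

```python
# Toy disinhibition model: warm input suppresses cold fibers, which normally
# inhibit the pain-sensitive HPC fibers; less cold-fiber activity -> more HPC output.
def hpc_activity(t_warm, t_cold, k=0.5, w=1.0):
    warm = max(t_warm - 34.0, 0.0)                 # warm-fiber drive (assumed)
    cold = max(30.0 - t_cold, 0.0)                 # cold-fiber drive (assumed)
    cold_inhibited = max(cold - k * warm, 0.0)     # warm bars suppress cold fibers
    hpc_drive = cold                               # HPC fibers also respond to cold
    return max(hpc_drive - w * cold_inhibited, 0.0)  # unmasking raises HPC output

for dT in (4, 8, 12, 16, 20):                      # grill temperature differences (C)
    out = hpc_activity(32 + dT / 2, 32 - dT / 2)   # symmetric about 32 C
    print(f"dT = {dT:2d} C -> predicted HPC activity = {out:.2f}")
```

Running this yields a monotonically increasing HPC activity with dT, qualitatively matching the relationship the abstract reports for both the simulation and the psychophysical ratings.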

Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 156
2118 Pion/Muon Identification in a Nuclear Emulsion Cloud Chamber Using Neural Networks

Authors: Kais Manai

Abstract:

The main part of this work focuses on the study of pion/muon separation at low energy using a nuclear emulsion cloud chamber (ECC) made of lead and nuclear emulsion films. The work consists of two parts: a particle reconstruction algorithm and a neural network that assigns to each reconstructed particle the probability of being a muon or a pion. The pion/muon separation algorithm has been optimized using a detailed Monte Carlo simulation of the ECC and tested on real data. The algorithm achieves a 60% muon identification efficiency with a pion misidentification rate smaller than 3%.

Keywords: nuclear emulsion, particle identification, tracking, neural network

Procedia PDF Downloads 486
2117 Effect of Lullabies on Babies Stress and Relaxation Symptoms in the Neonatal Intensive Care Units

Authors: Meltem Kürtüncü, Işın Alkan

Abstract:

Objective: This study used an experimental design to determine whether lullabies, sung in the mother's voice or a stranger's voice, had an effect on the stress and relaxation symptoms of term infants hospitalized in the neonatal intensive care unit. Method: Data were obtained from 90 newborn babies who were hospitalized in the Neonatal Intensive Care Unit of Zonguldak Maternity and Children Hospital between September 2015 and January 2016 and who met the eligibility criteria. The lullaby concert was performed during a suitable care hour. Stress and relaxation symptoms were recorded by the researcher on a "newborn response follow-up form" before and after care. Results: Before care, no statistically significant differences were detected between the experimental and control groups in the rates of crying, contraction, facial grimacing, flushing, cyanosis, and temperature increase. After care, crying, contraction, facial grimacing, flushing, and restlessness differed significantly between the groups, whereas the stress responses of cyanosis and temperature increase did not. In the control group, the rates of crying, contraction, facial grimacing, flushing, and restlessness were significantly higher than in the experimental groups. Before care, the rate of eye contact among infants who listened to a lullaby in their mother's voice was significantly higher than among infants who listened to a lullaby in a stranger's voice and infants in the control group. After care, the relaxation symptoms of eye contact, smiling, sucking/searching, yawning, not crying, and sleep behaviors showed statistically significant differences, with the control group showing these behaviors to a significantly lower degree than the experimental groups. Conclusion: Because lullaby concerts mask ambient noise, reduce stress symptoms, increase relaxation symptoms, and have soothing and stimulating effects that ease the transition to sleep, they should be preferred in neonatal intensive care units.

Keywords: lullaby, mother's voice, relaxation, stress

Procedia PDF Downloads 218
2116 Decision Tree Modeling in Emergency Logistics Planning

Authors: Yousef Abu Nahleh, Arun Kumar, Fugen Daver, Reham Al-Hindawi

Abstract:

Despite the availability of natural-disaster-related time series data for the last 110 years, there is no forecasting tool available to humanitarian relief organizations to determine forecasts for emergency logistics planning. This study develops a forecasting tool based on identifying the probability of disaster for each country in the world by using decision tree modeling. Further, the determination of aggregate forecasts leads to efficient pre-disaster planning. Based on the research findings, relief agencies can optimize the allocation of various resources in emergency logistics planning.
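In the spirit of the tool described, the small sketch below trains a decision tree on made-up country features (the study's disaster database is not reproduced here) and reads off a per-country disaster probability from the leaf frequencies.

```python
# Decision tree yielding a disaster probability for a hypothetical country.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 500
# Hypothetical country-year features: latitude, coastal exposure, seismicity index
X = np.column_stack([rng.uniform(0, 60, n), rng.random(n), rng.random(n)])
y = (0.5 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n) > 0.6).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
new_country = [[15.0, 0.8, 0.7]]                 # hypothetical feature vector
print("P(disaster) =", tree.predict_proba(new_country)[0, 1])
```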

Keywords: decision tree modeling, forecasting, humanitarian relief, emergency supply chain

Procedia PDF Downloads 467
2115 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is highly essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, as it is expensive and time-consuming. Adaptive sampling has proven to be an accurate and affordable sampling technique for planning within a field for site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, and the calibration group was sub-grouped into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. The high-uncertainty areas for each set were grouped using image segmentation in MATLAB, and areas of high and low values were then separated. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. Adding adaptive re-surveying significantly reduced the time required compared to resampling the whole field and resulted in ECa estimates with minimal error. For the most widely spaced transects, the root mean square error (RMSE) yielded by the initial crude sampling survey was minimized after the adaptive re-survey to a value close to the RMSE of the ECa yielded by an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of an all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
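The study's conditional simulation and MATLAB segmentation are not reproduced here, but the core idea, flagging high-uncertainty locations for re-survey, can be sketched with a Gaussian process on a synthetic one-dimensional transect, using the predictive standard deviation as the uncertainty map.

```python
# Uncertainty-guided adaptive resampling on a synthetic ECa transect.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
x_all = np.linspace(0, 100, 400)[:, None]          # positions along a transect (m)
field = np.sin(x_all.ravel() / 8) + 0.3 * np.sin(x_all.ravel() / 2)  # true ECa pattern

idx = rng.choice(400, 25, replace=False)           # sparse initial survey
gp = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(x_all[idx], field[idx])

mean, std = gp.predict(x_all, return_std=True)
threshold = np.quantile(std, 0.8)                  # top 20% most uncertain locations
resurvey = x_all[std > threshold].ravel()
print(f"re-survey {len(resurvey)} of 400 locations, e.g. {resurvey[:5].round(1)}")
```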

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 117
2114 The Lexicographic Serial Rule

Authors: Thi Thao Nguyen, Andrew McLennan, Shino Takayama

Abstract:

We study the probabilistic allocation of finitely many indivisible objects to finitely many agents. Well-known allocation rules for this problem include random priority, the market mechanism proposed by Hylland and Zeckhauser [1979], and the probabilistic serial rule of Bogomolnaia and Moulin [2001]. We propose a new allocation rule, which we call the lexicographic (serial) rule, that is tailored for situations in which each agent's primary concern is to maximize the probability of receiving her favourite object. Three axioms, lex efficiency, lex envy-freeness, and fairness, are proposed and shown to fully characterize the lexicographic serial rule. We also discuss how our axioms and the lexicographic rule are related to other allocation rules, particularly the probabilistic serial rule.
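For context, the probabilistic serial benchmark mentioned above is computed by the simultaneous eating algorithm of Bogomolnaia and Moulin; the sketch below implements that standard benchmark (not the authors' new lexicographic rule, whose definition is not given in the abstract).

```python
# Simultaneous eating algorithm for the probabilistic serial rule: all agents
# "eat" their favourite remaining object at unit speed; when an object is
# exhausted, its eaters move on to their next choice.
def probabilistic_serial(prefs):
    n = len(prefs)                       # prefs[i] = agent i's ranking of objects
    supply = {o: 1.0 for o in range(n)}  # remaining fraction of each object
    probs = [[0.0] * n for _ in range(n)]
    eaten = 0.0                          # time elapsed; each agent eats 1 unit total
    while eaten < 1.0 - 1e-12:
        # Each agent targets their best-ranked object that still has supply.
        target = [next(o for o in prefs[i] if supply[o] > 1e-12) for i in range(n)]
        eaters = {o: [i for i in range(n) if target[i] == o] for o in set(target)}
        # Advance until the first targeted object runs out (or appetites are full).
        dt = min(min(supply[o] / len(eaters[o]) for o in eaters), 1.0 - eaten)
        for o, group in eaters.items():
            for i in group:
                probs[i][o] += dt
            supply[o] -= dt * len(group)
        eaten += dt
    return probs

# Example: three agents with preference orders over objects 0, 1, 2.
print(probabilistic_serial([[0, 1, 2], [0, 2, 1], [1, 2, 0]]))
```

Each row of the output is an agent's probability vector over objects; rows and columns both sum to one, i.e., the result is a bistochastic assignment.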

Keywords: efficiency, envy-freeness, lexicographic, probabilistic serial rule

Procedia PDF Downloads 132
2113 Risk Identification of Investment Feasibility in Indonesia’s Toll Road Infrastructure Investment

Authors: Christo Februanto Putra

Abstract:

This paper presents the identification of risks that affect investment feasibility in Indonesia's toll road infrastructure, using a qualitative survey of expert practitioners among investors, contractors, and state officials. The problem with infrastructure investment in Indonesia, especially under the KPBU contract model, is that many risk factors in the investment plan are not calculated in thorough detail. A risk factor is a value used to provide an overview of the risk level assessment of an event, as a function of the probability of occurrence and the consequences of the risks that arise. The results of the survey show which risk factors directly impact investment feasibility and rank them by their impact on the investment.

Keywords: risk identification, Indonesia toll road, investment feasibility

Procedia PDF Downloads 263
2112 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses in a rigid body that act on the microstructure of the material without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal, and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the mechanical behavior of the material. The X-ray diffraction technique is one of the most sensitive techniques to small variations of the crystal lattice, since the X-ray beam interacts with the interplanar distance. Being a very sensitive technique, it is also susceptible to variations in measurements, requiring a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples, and the geometry of the analyses are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of errors inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by the surface finish: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were positioned and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample-positioning errors, seven measurements were performed, repositioning the sample for each measurement. To check geometry errors, the measurements were repeated for the Bragg-Brentano and parallel-beam geometries. To verify the reproducibility of the method, the measurements were performed in two different laboratories with different equipment. The results were statistically analyzed and the errors quantified.
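As background on how ψ tilts turn into a stress value, the standard sin²ψ evaluation reduces to a linear fit of lattice spacing against sin²ψ; the sketch below uses invented diffraction data and roughly steel-like elastic constants, not measurements from the paper.

```python
# Classical sin^2(psi) method: fit d(psi) linearly vs sin^2(psi); the slope,
# scaled by the elastic constants, gives the in-plane residual stress.
import numpy as np

E, nu = 210e3, 0.30                      # MPa, assumed Young's modulus and Poisson ratio
psi_deg = np.array([0, 18, 27, 33, 39, 45])
d_psi = np.array([1.17010, 1.17022, 1.17031, 1.17039, 1.17047, 1.17057])  # angstrom
d0 = d_psi[0]                            # stress-free spacing approximation

x = np.sin(np.radians(psi_deg)) ** 2
slope, _ = np.polyfit(x, d_psi, 1)       # d(d_psi) / d(sin^2 psi)
sigma = (E / (1 + nu)) * slope / d0      # residual stress, MPa (tensile if positive)
print(f"slope = {slope:.3e} A per unit sin^2(psi), sigma ~ {sigma:+.0f} MPa")
```

Errors in ψ positioning, sample alignment, or beam geometry perturb exactly this slope, which is why the abstract's instrumental and geometric factors feed directly into the stress uncertainty.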

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 167
2111 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
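A sketch of the best-performing configuration (three hidden layers of 100 ReLU units, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping) is below, written with the modern tf.keras API rather than the paper's TensorFlow 1.13.1; the data arrays are placeholders, since the SpineWeb preprocessing is not reproduced here.

```python
# Dense regression network for Cobb angle from a flattened 500x187 X-ray.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(500 * 187,)),            # flattened, normalized image
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                      # Cobb angle (degrees)
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

# Placeholder arrays standing in for the 481 training images and labels.
x_train = np.random.rand(481, 500 * 187).astype("float32")
y_train = (np.random.rand(481) * 40).astype("float32")
early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=10, epochs=5,
          validation_split=0.1, callbacks=[early_stop], verbose=0)
```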

Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging

Procedia PDF Downloads 115
2110 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data is continuous and available across geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either data source alone. The popular satellite database, the National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-measured solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% for October, November, and December 2019, respectively, were observed when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on the combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model were added to the NSRDB predicted values obtained from the regression model to obtain the final predicted values. The NRMSE values obtained after merging are, respectively, 1.84%, 1.28%, and 1.81% for October, November, and December 2019. One more explanatory variable, the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here to generate calibrated maps by using the regression and kriging model, and further to use the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after the comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
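ArcGIS's Empirical Bayesian Kriging Regression Prediction is proprietary, but the regression-plus-residual-kriging workflow it embodies can be sketched simply: regress ground GHI on satellite GHI, model the residuals spatially (a Gaussian process stands in for kriging here), and add the interpolated residual back. All numbers below are invented.

```python
# Simplified regression-kriging merge of satellite and ground GHI.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(11)
n_stations = 13
coords = rng.uniform(0, 100, (n_stations, 2))            # station locations (km)
sat_ghi = rng.uniform(5.0, 7.0, n_stations)              # satellite GHI (kWh/m2/day)
ground_ghi = 0.95 * sat_ghi + 0.2 + 0.05 * rng.normal(size=n_stations)

ols = LinearRegression().fit(sat_ghi[:, None], ground_ghi)
residuals = ground_ghi - ols.predict(sat_ghi[:, None])

gp = GaussianProcessRegressor(RBF(30.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(coords, residuals)                                 # spatial residual model

# Merge at an arbitrary grid cell with satellite value 6.1 at location (50, 50).
cell_xy, cell_sat = np.array([[50.0, 50.0]]), np.array([[6.1]])
merged = ols.predict(cell_sat) + gp.predict(cell_xy)
print(f"merged GHI estimate: {merged[0]:.3f} kWh/m2/day")
```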

Keywords: global horizontal irradiation, GIS, empirical Bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 77
2109 0.13-µm Complementary Metal-Oxide Semiconductor Vector Modulator for Beamforming System

Authors: J. S. Kim

Abstract:

This paper presents a 0.13-µm Complementary Metal-Oxide Semiconductor (CMOS) vector modulator for a beamforming system. The vector modulator features a 360° phase range and a gain range of -10 dB to 10 dB, with a root mean square phase and amplitude error of only 2.2° and 0.45 dB, respectively. These features make it suitable for wireless backhaul systems in the 5 GHz industrial, scientific, and medical (ISM) band. It draws a current of 20.4 mA from a 1.2 V supply. The total chip size is 1.87×1.34 mm².

Keywords: CMOS, vector modulator, beamforming, 802.11ac

Procedia PDF Downloads 193
2108 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, determining his/her choice. Once the choices of all respondents are known, an estimate of the market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past EV price data, a regression model is established and the probability distribution function of the EV price in 2030 is obtained. (3) Since the future price is a random variable given the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions. For instance, using one thousand generated future EV prices together with the other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained from a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
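Phase 3 can be sketched schematically as below; the part-worth coefficients and the lognormal price distribution are invented stand-ins for the survey estimates and learning-curve fit, which the abstract does not report.

```python
# Monte Carlo over uncertain 2030 EV prices, tallying conjoint-based EV shares.
import numpy as np

rng = np.random.default_rng(2030)
n_resp, n_draws = 300, 1000

# Assumed part-worths per respondent: price sensitivity and intrinsic EV liking.
beta_price = rng.uniform(-2.0, -0.5, n_resp)       # utility per price unit
beta_ev = rng.normal(0.3, 0.8, n_resp)             # EV preference term
icev_price = 1.0                                   # normalized ICEV price

# Learning-curve-style price distribution for EVs in 2030 (lognormal, assumed).
ev_prices = rng.lognormal(mean=np.log(0.9), sigma=0.15, size=n_draws)

shares = np.empty(n_draws)
for k, p in enumerate(ev_prices):
    u_ev = beta_ev + beta_price * p                # total utility of the EV
    u_icev = beta_price * icev_price               # total utility of the ICEV
    shares[k] = np.mean(u_ev > u_icev)             # fraction choosing the EV

print(f"EV market share: median {np.median(shares):.1%}, "
      f"90% interval [{np.quantile(shares, 0.05):.1%}, "
      f"{np.quantile(shares, 0.95):.1%}]")
```

The output is a distribution of shares rather than a point forecast, which is exactly the added value of the Monte Carlo step described above.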

Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 53
2107 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms

Authors: Naina Mahajan, Bikram Pal Kaur

Abstract:

The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but with suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results and high accuracy, with less computation time and a lower error rate.

Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool

Procedia PDF Downloads 320