Search results for: error masking probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3066

2466 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in the LPM should be personalized so that the calculated results are convincing and reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system that is applicable to most persons was established based on anatomical structures and physiological parameters. Patient-specific physiological data of 5 volunteers were collected non-invasively as the personalization objectives of the individual LPMs. The blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted against these objectives to determine the sensitive parameters that have an obvious influence on them. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between collected and simulated data: the average relative root mean square errors over all optimization objectives of the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of the LPM for the blood circulatory system. After optimization, the LPM with individual parameters can output the individual physiological indicators, making it applicable to the numerical simulation of patient-specific hemodynamics.
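A minimal sketch of the optimization loop described above, assuming a hypothetical simulate_lpm callable that runs the circuit model for a given parameter set and returns the simulated objective values; the ±5% perturbation step, the cooling schedule, and the toy demo "simulator" are illustrative choices, not the authors' settings:

```python
import math
import random

import numpy as np

def rmse(simulated, objective):
    """Root mean square error between simulated and objective data."""
    return float(np.sqrt(np.mean((np.asarray(simulated) - np.asarray(objective)) ** 2)))

def simulated_annealing(params, simulate_lpm, objective, n_iter=500, t0=1.0, cooling=0.99):
    """Iteratively perturb the sensitive LPM parameters, accepting worse
    candidates with a temperature-dependent probability (Metropolis rule)."""
    best = dict(params)
    best_err = rmse(simulate_lpm(best), objective)
    current, current_err, t = dict(best), best_err, t0
    for _ in range(n_iter):
        candidate = {k: v * (1 + random.uniform(-0.05, 0.05)) for k, v in current.items()}
        err = rmse(simulate_lpm(candidate), objective)
        if err < current_err or random.random() < math.exp((current_err - err) / t):
            current, current_err = candidate, err
            if err < best_err:
                best, best_err = dict(candidate), err
        t *= cooling  # cooling schedule
    return best, best_err

# Tiny demo with a stand-in "simulator": recover a target parameter set.
target = {"R_sys": 1.0, "C_art": 2.0}
sim = lambda p: [p["R_sys"], p["C_art"]]
print(simulated_annealing({"R_sys": 1.4, "C_art": 1.5}, sim, sim(target)))
```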

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 126
2465 Energy Consumption Forecast Procedure for an Industrial Facility

Authors: Tatyana Aleksandrovna Barbasova, Lev Sergeevich Kazarinov, Olga Valerevna Kolesnikova, Aleksandra Aleksandrovna Filimonova

Abstract:

We consider forecasting of energy consumption both by the individual production areas of a large industrial facility and by the facility itself. For the production areas, the forecast is based on empirical dependencies between specific energy consumption and production output. For the facility as a whole, the task of minimizing the energy consumption forecasting error is addressed by adjusting the facility's actual energy consumption values, evaluated with the metering device, against the total design energy consumption of the separate production areas of the facility. The suggested procedure was tested on actual data on core product output and energy consumption from a group of workshops and power plants of a large iron and steel facility. Test results show that implementation of this procedure gives a mean energy consumption forecasting accuracy for winter 2014 of 0.11% for the group of workshops and 0.137% for the power plants.
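As a rough illustration of the area-level step, the sketch below fits an empirical dependency of specific energy consumption on production output and uses it to forecast; the figures, the linear form of the dependency, and the forecast_energy helper are hypothetical stand-ins, not the paper's data or model:

```python
import numpy as np

# Hypothetical monthly history for one production area:
output = np.array([120.0, 135.0, 150.0, 160.0, 175.0])  # production output, t
energy = np.array([95.0, 104.0, 113.0, 119.0, 128.0])   # energy consumption, MWh

# Empirical dependency: specific energy consumption vs production output,
# approximated here by a first-order polynomial (linear) fit.
specific = energy / output                               # MWh per tonne
coeffs = np.polyfit(output, specific, deg=1)

def forecast_energy(planned_output):
    """Forecast area energy consumption from planned production output."""
    return np.polyval(coeffs, planned_output) * planned_output

planned = 180.0
print(f"Forecast: {forecast_energy(planned):.1f} MWh for {planned:.0f} t planned output")
# The facility-level forecast would then sum the area forecasts and adjust
# them against the facility's metered total, as described in the abstract.
```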

Keywords: energy consumption, energy consumption forecasting error, energy efficiency, forecasting accuracy, forecasting

Procedia PDF Downloads 428
2464 Performance of VSAT MC-CDMA System Using LDPC and Turbo Codes over Multipath Channel

Authors: Hassan El Ghazi, Mohammed El Jourmi, Tayeb Sadiki, Esmail Ahouzi

Abstract:

The purpose of this paper is to model and analyze a geostationary satellite communication system based on a VSAT network and a multicarrier CDMA (MC-CDMA) scheme, which combines multicarrier modulation with CDMA concepts. In this study, channel coding strategies (turbo codes and LDPC codes) are adopted to achieve good performance through iterative decoding. The envisaged system is examined for transmission over a multipath channel using the Ku band in the uplink case. Simulation results are obtained for each case. The performance of the system is given in terms of bit error rate (BER) versus the energy-per-bit to noise power spectral density ratio (Eb/N0). The performance results of the designed system show that the communication system coded with LDPC codes achieves better error rate performance than the VSAT MC-CDMA system coded with turbo codes.
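For context on how BER-versus-Eb/N0 curves like these are typically estimated, here is a minimal Monte Carlo sketch for uncoded BPSK over AWGN; it is a simplified stand-in, not the paper's coded MC-CDMA multipath chain:

```python
import numpy as np

def ber_bpsk_awgn(ebn0_db, n_bits=200_000, rng=np.random.default_rng(0)):
    """Monte Carlo BER estimate for uncoded BPSK over AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                  # 0 -> +1, 1 -> -1
    noise_std = np.sqrt(1 / (2 * ebn0))     # unit symbol energy
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decisions = (received < 0).astype(int)
    return np.mean(decisions != bits)

for ebn0_db in range(0, 9, 2):
    print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {ber_bpsk_awgn(ebn0_db):.2e}")
```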

Keywords: satellite communication, VSAT Network, MC-CDMA, LDPC codes, turbo codes, uplink

Procedia PDF Downloads 484
2463 Behavioral and EEG Reactions in Children during Recognition of Emotionally Colored Sentences That Describe the Choice Situation

Authors: Tuiana A. Aiusheeva, Sergey S. Tamozhnikov, Alexander E. Saprygin, Arina A. Antonenko, Valentina V. Stepanova, Natalia N. Tolstykh, Alexander N. Savostyanov

Abstract:

The situation of choice is an important condition for the formation of essential character qualities of a child, such as initiative, responsibility, and diligence. We have studied the behavioral and EEG reactions of Russian schoolchildren during recognition of syntactic errors in emotionally colored sentences that describe a choice situation. Twenty healthy children (mean age 9.0±0.3 years; 12 boys, 8 girls) were examined. Forty sentences were selected for the experiment, half of them containing a syntactic error. The experiment additionally had a hidden condition: 50% of the sentences described the children's own choice and were emotionally colored (positively or negatively), while the other 50% described a forced-choice situation, also with positive or negative coloring. EEG was recorded during execution of the error-recognition task. Reaction time and quality of syntactic error detection were chosen as behavioral measures. Event-related spectral perturbation (ERSP) was applied to characterize the oscillatory brain activity of the children. Two time-frequency intervals appeared in the EEG reactions: (1) 500-800 ms in the 3-7 Hz frequency range (theta synchronization) and (2) 500-1000 ms in the 8-12 Hz range (alpha desynchronization). We found that the behavioral and brain reactions during recognition of positive and negative sentences describing the forced-choice situation did not differ significantly. Theta synchronization and alpha desynchronization were stronger during recognition of sentences describing the children's own choice, especially those with negative coloring. The quality and execution time of the task were also higher for these types of sentences. The results of our study will be useful for the improvement of teaching methods and the diagnostics of affective disorders in children.
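A minimal sketch of the kind of band-power computation that underlies ERSP measures such as theta synchronization; the 250 Hz sampling rate, the Welch estimator, and the synthetic single-trial data are assumptions for illustration, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(segment, fs, lo, hi):
    """Mean spectral power of an EEG segment in the [lo, hi] Hz band."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def ersp_change(epoch, baseline, fs=FS, band=(3, 7)):
    """Relative power change (dB) of a post-stimulus window vs baseline;
    positive values indicate synchronization in that band."""
    return 10 * np.log10(band_power(epoch, fs, *band) / band_power(baseline, fs, *band))

# Hypothetical single-trial demo with synthetic data:
rng = np.random.default_rng(1)
baseline = rng.standard_normal(FS)                  # 1 s pre-stimulus
t = np.arange(0, 0.3, 1 / FS)                       # 500-800 ms window
epoch = rng.standard_normal(t.size) + 2 * np.sin(2 * np.pi * 5 * t)  # added theta
print(f"theta ERSP: {ersp_change(epoch, baseline):+.1f} dB")
```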

Keywords: choice situation, electroencephalogram (EEG), emotionally colored sentences, schoolchildren

Procedia PDF Downloads 259
2462 Understanding the Interactive Nature in Auditory Recognition of Phonological/Grammatical/Semantic Errors at the Sentence Level: An Investigation Based upon Japanese EFL Learners’ Self-Evaluation and Actual Language Performance

Authors: Hirokatsu Kawashima

Abstract:

One important element of teaching and learning listening is intensive listening, such as listening for precise sounds, words, and grammatical and semantic units. Several classroom-based investigations have been conducted to explore the usefulness of auditory recognition of phonological, grammatical, and semantic errors in such a context. The current study reports the results of one such investigation, which targeted auditory recognition of phonological, grammatical, and semantic errors at the sentence level. Fifty-six Japanese EFL learners participated in this investigation, in which their recognition performance for phonological, grammatical, and semantic errors was measured on a 9-point scale by learners' self-evaluation from the perspective of 1) two types of similar English sounds (vowel and consonant minimal-pair words), 2) two types of sentence word order (verb-phrase-based and noun-phrase-based word orders), and 3) two types of semantic consistency (verb-purpose and verb-place agreement), respectively, while their general listening proficiency was examined using standardized tests. A number of findings were made about the interactive relationships between the three types of auditory error recognition and general listening proficiency. Analyses based on the OPLS (Orthogonal Projections to Latent Structures) regression model disclosed, for example, that the three types of auditory error recognition are linked in a non-linear way: the highest explanatory power for general listening proficiency may be attained when quadratic interactions between auditory recognition of errors related to vowel minimal-pair words and that of errors related to noun-phrase-based word order are included (R2=.33, p=.01).

Keywords: auditory error recognition, intensive listening, interaction, investigation

Procedia PDF Downloads 499
2461 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using different standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction of the computational costs.
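A minimal sketch of the GMM stage described above, using scikit-learn's EM implementation; the init_means argument stands in for the cluster means that the Firefly histogram search would supply, and the synthetic two-region image is only for demonstration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_grayscale(image, n_clusters, init_means):
    """Fit a 1-D Gaussian mixture to the gray levels and assign each pixel
    to the component with maximum posterior probability (responsibility).
    init_means would come from the Firefly histogram search in the paper."""
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_clusters,
                          means_init=np.asarray(init_means).reshape(-1, 1),
                          random_state=0)
    labels = gmm.fit(pixels).predict(pixels)  # EM fit + posterior maxima
    return labels.reshape(image.shape)

# Demo on a synthetic two-region image:
rng = np.random.default_rng(0)
img = np.vstack([rng.normal(60, 8, (32, 64)), rng.normal(180, 8, (32, 64))])
print(np.unique(segment_grayscale(img, 2, init_means=[60, 180])))
```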

Keywords: clustering images, firefly algorithm, Gaussian mixture model, meta heuristic algorithm, image segmentation

Procedia PDF Downloads 205
2460 Statistical Analysis of Extreme Flow (Regions of Chlef)

Authors: Bouthiba Amina

Abstract:

The estimation of statistics related to precipitation is a vast domain that poses numerous challenges to meteorologists and hydrologists. It is sometimes necessary to estimate extreme events, and their return periods, for sites where there is little or no data. The search for a frequency model of daily rainfall depths is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amount of precipitation in past years. The best-known and most common approach is the statistical one: it consists of looking for the probability law that best fits the observed values of the random variable "daily maximum rainfall", after a comparison of various probability laws and estimation methods by means of goodness-of-fit tests. Therefore, a frequency analysis of the annual series of daily maximum rainfall was carried out on the data of 54 rainfall stations of the high and middle Chlef basin. Five laws usually applied to the study and analysis of maximum daily rainfall were considered, over the period from 1970 to 2013, and the analysis was used to forecast quantiles. The laws used are the three-component generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and Log-Pearson III laws. In Algeria, Gumbel's law has long been used to estimate the quantiles of maximum flows; here, we check this practice and choose the most reliable law.
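As an illustration of the fitting step for one of the candidate laws, the sketch below fits a Gumbel distribution to hypothetical annual maxima and derives return-period quantiles; the station data are invented, not the Chlef series:

```python
import numpy as np
from scipy import stats

# Hypothetical annual maxima of daily rainfall (mm) for one station:
annual_max = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0,
                       66.0, 58.0, 49.0, 74.0, 40.0, 63.0, 57.0])

loc, scale = stats.gumbel_r.fit(annual_max)  # maximum likelihood fit

for T in (10, 50, 100):                      # return periods, years
    p = 1 - 1 / T                            # non-exceedance probability
    q = stats.gumbel_r.ppf(p, loc=loc, scale=scale)
    print(f"T = {T:3d} yr -> quantile ~ {q:.1f} mm")

# Goodness of fit can then be checked, e.g. with a Kolmogorov-Smirnov test,
# before comparing against GEV, log-normal or Pearson III fits:
print(stats.kstest(annual_max, "gumbel_r", args=(loc, scale)))
```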

Keywords: return period, extreme flow, statistics laws, Gumbel, estimation

Procedia PDF Downloads 63
2459 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System that Includes Servers with Various Capacities

Authors: Yoshiaki Shikata, Nobutane Hanayama

Abstract:

We present a prioritized, limited multi-server processor sharing (PS) system in which each server has a different capacity and N (≥2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests to be processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (shortening) of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Utilising a simulation program that executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is identified.

Keywords: processor sharing, multi-server, various capacity, N-priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation

Procedia PDF Downloads 319
2458 Influence of Scalable Energy-Related Sensor Parameters on Acoustic Localization Accuracy in Wireless Sensor Swarms

Authors: Joyraj Chakraborty, Geoffrey Ottoy, Jean-Pierre Goemaere, Lieven De Strycker

Abstract:

Sensor swarms can be a cost-effective and more user-friendly alternative for location-based service systems in different applications, such as health care. To increase the lifetime of such swarm networks, the energy consumption should be scaled to the required localization accuracy. In this paper, we investigate parameters for an energy model that couples localization accuracy to energy-related sensor parameters such as signal length, bandwidth, and sampling frequency. The goal is to use the model for the localization of undetermined environmental sounds by means of wireless acoustic sensors. We first give an overview of TDOA-based localization together with the primary sources of TDOA error (including reverberation effects and noise). Then we show that in localization the signal sampling rate can be below the Nyquist rate, provided that enough frequency components remain present in the undersampled signal. The resulting localization error is comparable with that of similar localization systems.
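A minimal sketch of the TDOA-based localization referred to above, solving the range-difference equations by nonlinear least squares; the sensor layout and noiseless TDOAs are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound, m/s

def locate_tdoa(sensors, tdoas, x0=(0.0, 0.0)):
    """Estimate a 2-D source position from TDOAs relative to sensor 0
    by nonlinear least squares on the range-difference residuals."""
    sensors = np.asarray(sensors, float)

    def residuals(p):
        d = np.linalg.norm(sensors - p, axis=1)
        return (d[1:] - d[0]) - C * np.asarray(tdoas)

    return least_squares(residuals, x0).x

# Demo: four sensors, known source, exact TDOAs:
sensors = [(0, 0), (4, 0), (0, 4), (4, 4)]
src = np.array([1.2, 2.7])
d = np.linalg.norm(np.asarray(sensors, float) - src, axis=1)
tdoas = (d[1:] - d[0]) / C
print(locate_tdoa(sensors, tdoas, x0=(2, 2)))  # ~ [1.2, 2.7]
```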

Keywords: sensor swarms, localization, wireless sensor swarms, scalable energy

Procedia PDF Downloads 406
2457 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (point estimates), and Monte Carlo. This paper presents a comparison between the results of deterministic and probabilistic analyses (FOSM, Monte Carlo, and Rosenblueth methods) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and carry out the consequent risk analysis, which is used to calculate the risk and to examine mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noticed that the calculation of the risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation, and closure by means of risk management.
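A minimal Monte Carlo sketch of the probability-of-failure computation, using an infinite-slope safety factor with invented parameter distributions; the paper's slope, parameters, and the FOSM/Rosenblueth variants are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical random variables for an infinite-slope safety factor:
cohesion = rng.normal(20.0, 4.0, N)                    # kPa
tan_phi = np.tan(np.radians(rng.normal(30.0, 3.0, N)))
gamma, depth, beta = 18.0, 5.0, np.radians(25.0)       # deterministic values

# Infinite-slope safety factor (simplified, no pore pressure):
fs = (cohesion + gamma * depth * np.cos(beta) ** 2 * tan_phi) / (
     gamma * depth * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)  # probability of failure
print(f"mean FS = {fs.mean():.2f}, P(failure) = {pf:.4f}")
# risk = pf * consequence, which is then used to rank mitigation measures
```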

Keywords: probabilistic methods, risk assessment, risk management, slope stability

Procedia PDF Downloads 372
2456 Timing and Probability of Presurgical Teledermatology: Survival Analysis

Authors: Felipa de Mello-Sampayo

Abstract:

The aim of this study is to analyze, from the patient's perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The dynamic stochastic model's main added value consists of its concrete application to patients waiting for dermatology surgical intervention. Patients with low health-level uncertainty must use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The results of the model were then tested empirically with the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Health-level volatility can be understood as the hazard of developing skin cancer, and the trend of health level as the bias of developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology: it depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur.

Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis

Procedia PDF Downloads 112
2455 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature

Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon

Abstract:

Sea Surface Height Anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea Surface Temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields and explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5-44.42°N, 64.25-41.83°W), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the SLA forecast at five days by using the SST fields as additional information. We obtained prediction errors of 12 cm (20 cm) for the SLA evolution at scales smaller than mesoscales and at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.
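A schematic of an image-to-image convolutional setup of the kind described, written here in Keras; the layer sizes and the channel layout (stacked past SLA and SST frames) are illustrative assumptions, not the authors' architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_sla_model(height=128, width=128):
    """Schematic image-to-image CNN: past SLA + SST fields in, future SLA out.
    The channel layout (4 SLA + 4 SST frames) is an illustrative assumption."""
    inputs = tf.keras.Input(shape=(height, width, 8))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same")(x)  # SLA field at day t+5
    return tf.keras.Model(inputs, outputs)

model = build_sla_model()
model.compile(optimizer="adam", loss="mse")  # RMSE-style objective
model.summary()
```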

Keywords: deep-learning, altimetry, sea surface temperature, forecast

Procedia PDF Downloads 75
2454 Models Comparison for Solar Radiation

Authors: Djelloul Benatiallah

Abstract:

Due to current high consumption and recent industrial growth, supplies of fossil and natural energy resources such as oil, gas, and uranium are being depleted. Because of pollution and climate change, a swift switch to renewable energy sources is needed, and research on renewable energy is being done to meet energy needs. Solar energy is one of the renewable resources that can currently meet all of the world's energy needs. In most parts of the world, solar energy is a free and unlimited resource that can be used in a variety of ways, including photovoltaic systems for the generation of electricity and thermal systems for the generation of heat, for example for the residential sector's production of hot water. In this article, we conduct a comparison of two empirical models. The first step entails identifying the two empirical models that enable us to estimate the daily irradiation on a horizontal plane. We then compare them using data obtained from measurements made at the Adrar site over the four distinct seasons. According to a comparison of the results obtained by simulating the two models, model 2 provides a better estimate of the global solar component, with a mean absolute error of less than 7% and a correlation coefficient of more than 0.95, as well as a relative bias error of less than 6% in absolute value and a relative RMSE of less than 10%.
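A small sketch of the statistical indicators used in such comparisons (relative mean bias error, relative RMSE, correlation coefficient); the measured and estimated values are invented, not the Adrar data:

```python
import numpy as np

def compare_model(measured, estimated):
    """Statistical indicators commonly used to rank solar radiation models."""
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    mbe = np.mean(estimated - measured)                 # mean bias error
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))
    r = np.corrcoef(measured, estimated)[0, 1]          # correlation coefficient
    rel = 100 / np.mean(measured)                       # normalize to percent
    return {"rMBE %": mbe * rel, "rRMSE %": rmse * rel, "r": r}

# Hypothetical daily global irradiation (Wh/m2) vs a model estimate:
measured = [6100, 6400, 5900, 7000, 6800, 6500]
estimated = [6000, 6550, 6050, 6900, 6950, 6400]
print(compare_model(measured, estimated))
```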

Keywords: solar radiation, renewable energy, fossil, photovoltaic systems

Procedia PDF Downloads 66
2453 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach

Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani

Abstract:

This study focuses on the development of prediction models for an ozone concentration time series. The prediction model is built based on a chaotic approach. First, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting purposes. A traditional autoregressive linear prediction model is also built. Moreover, an improvement to the local linear approximation method is performed. The prediction models are applied to the hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through the calculation of the mean absolute error, root mean squared error, and correlation coefficient shows that the one with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for an ozone concentration time series.
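A minimal sketch of the chaotic prediction pipeline: time-delay embedding of the series followed by a one-step local linear approximation over nearest neighbours; the embedding dimension, delay, neighbour count, and the synthetic series are illustrative assumptions:

```python
import numpy as np

def embed(series, dim, tau):
    """Time-delay embedding of a scalar series into a dim-dimensional phase space."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def local_linear_predict(series, dim=3, tau=1, k=10):
    """One-step-ahead forecast: fit an affine map on the k nearest neighbours
    of the last reconstructed state (local linear approximation)."""
    vectors = embed(np.asarray(series, float), dim, tau)
    states, targets = vectors[:-1], vectors[1:, -1]  # next scalar value
    query = vectors[-1]
    idx = np.argsort(np.linalg.norm(states - query, axis=1))[:k]
    X = np.hstack([states[idx], np.ones((k, 1))])    # affine local model
    coef, *_ = np.linalg.lstsq(X, targets[idx], rcond=None)
    return np.append(query, 1.0) @ coef

# Demo on a noisy sine as a stand-in for hourly ozone concentrations:
t = np.arange(500)
series = np.sin(0.3 * t) + 0.05 * np.random.default_rng(0).standard_normal(500)
print(f"forecast: {local_linear_predict(series):+.3f}, truth: {np.sin(0.3 * 500):+.3f}")
```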

Keywords: chaotic approach, phase space, Cao method, local linear approximation method

Procedia PDF Downloads 314
2452 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used for the purpose of understanding the drying process in order to determine important parameters for the design and operation of the dryer. Jackfruit is a highly perishable fruit with high consumption in the Northeast. It is necessary to apply techniques that preserve it for longer in order to distribute it to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: the accuracy factor (Af), the bias factor (Bf), the root mean square error (RMSE), and the standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
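A small sketch of fitting one of the candidate models (Page) and computing the performance indicators named above; the moisture-ratio data are invented, and the Af/Bf formulas follow their common definitions rather than the paper's exact computation:

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Hypothetical moisture-ratio data from 9 h of convective drying at 50 °C:
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9])  # h
mr = np.array([0.90, 0.80, 0.62, 0.48, 0.37, 0.28, 0.22, 0.17, 0.13, 0.10])

(k, n), _ = curve_fit(page, t, mr, p0=(0.1, 1.0))
pred = page(t, k, n)

ratio = np.log10(pred / mr)
af = 10 ** np.mean(np.abs(ratio))   # accuracy factor
bf = 10 ** np.mean(ratio)           # bias factor
rmse = np.sqrt(np.mean((pred - mr) ** 2))
sep = 100 * rmse / np.mean(mr)      # standard error of prediction, %
print(f"k={k:.3f}, n={n:.3f}, Af={af:.3f}, Bf={bf:.3f}, RMSE={rmse:.4f}, SEP={sep:.2f}%")
```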

Keywords: drying, models, jackfruit, biotechnology

Procedia PDF Downloads 367
2451 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay, and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, the total number of items delivered with delay, and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. The above stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the more the value of the objective function in the stochastic single-objective programming problem deteriorates.

Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm

Procedia PDF Downloads 299
2450 The Effect of Training and Development Practice on Employees’ Performance

Authors: Sifen Abreham

Abstract:

Employees are resources in organizations; as such, they need to be trained and developed properly to achieve an organization's goals and expectations. The initial development of the human resource management concept is based on the effective utilization of people, treating them as resources, leading to the realization of business strategies and organizational objectives. The study aimed to assess the effect of training and development practices on employee performance. The researcher used an explanatory research design, which helps to explain, understand, and predict the relationship between variables. To collect the data from the respondents, the study used probability sampling, specifically stratified random sampling, which divides the entire population into homogeneous groups. The results were analyzed and presented using the Statistical Package for the Social Sciences (SPSS), version 26. The major finding of the study was that training has an impact on employees' job performance in achieving organizational objectives. The district has a policy and procedure for training and development, but it is not actively applied and is not suitable; the district is advised to reform this policy and procedure and apply it actively. The district gives training to the majority of its employees, but most of the time the training is theoretical; the district is advised to use practical training methods to see positive change. The district evaluates employees after they take training and development, but the results are not encouraging; the district is advised to assess employees' skill gaps and fill those gaps. The district has a budget, but it is not adequate; the district is advised to strengthen its financial ground.

Keywords: training, development, employees, performance, policy

Procedia PDF Downloads 33
2449 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis

Authors: Serdal Pamuk, Irem Cay

Abstract:

Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension over the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions that move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question, we estimate the thresholds that activate the transport equations in the capillary. We do this by using a steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte, and macrophage cells begin to move into the ECM for the initiation of angiogenesis. We believe that our results play an important role in understanding the mechanisms of cell migration, which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time tendency of these three cell types and find that they tend to the transition probability functions as time evolves. We provide numerical solutions, which are in good agreement with our theoretical results.

Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function

Procedia PDF Downloads 146
2448 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co2+ -Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network

Authors: Prateeksha Mahamallik, Anjali Pal

Abstract:

In the present study, Co(II)-adsolubilized surfactant-modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on the Co-SMA surface by a visible-light photo-Fenton process. The entire reaction proceeded on the solid surface, as MB was embedded on the Co-SMA surface. The reaction followed zero-order kinetics. Response surface methodology (RSM) and an artificial neural network (ANN) were used for modeling the decolorization of MB by the photo-Fenton process as a function of the dose of Co-SMA (10, 20, and 30 g/L), the initial concentration of MB (10, 20, and 30 mg/L), the concentration of H2O2 (174.4, 348.8, and 523.2 mM), and the reaction time (30, 45, and 60 min). The prediction capabilities of the two methodologies (RSM and ANN) were compared on the basis of the correlation coefficient (R2), root mean square error (RMSE), standard error of prediction (SEP), and relative percent deviation (RPD). Due to the lower values of RMSE (1.27), SEP (2.06), and RPD (1.17) and the higher value of R2 (0.9966), ANN proved to be more accurate than RSM in predicting the decolorization efficiency.

Keywords: adsolubilization, artificial neural network, methylene blue, photo-fenton process, response surface methodology

Procedia PDF Downloads 241
2447 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of mean squared error (MSE), signal-to-noise ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better noise cancellation technique for speech signals.
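A minimal sketch of an RLS adaptive noise canceller of the kind proposed; the filter order, the forgetting factor, and the synthetic tone-plus-noise demo are illustrative assumptions:

```python
import numpy as np

def rls_filter(desired, reference, order=8, lam=0.99, delta=0.01):
    """RLS adaptive noise canceller: 'desired' is noisy speech, 'reference'
    is a correlated noise reference; the output error e is the enhanced speech."""
    w = np.zeros(order)
    P = np.eye(order) / delta                    # inverse correlation matrix
    x_buf = np.zeros(order)
    enhanced = np.zeros(len(desired))
    for i in range(len(desired)):
        x_buf = np.r_[reference[i], x_buf[:-1]]  # tapped delay line
        k = P @ x_buf / (lam + x_buf @ P @ x_buf)  # gain vector
        y = w @ x_buf                            # noise estimate
        e = desired[i] - y                       # enhanced sample
        w = w + k * e
        P = (P - np.outer(k, x_buf @ P)) / lam
        enhanced[i] = e
    return enhanced

# Demo: clean tone + filtered noise, with the raw noise as reference:
rng = np.random.default_rng(0)
n = rng.standard_normal(8000)
clean = np.sin(2 * np.pi * 0.01 * np.arange(8000))
noisy = clean + np.convolve(n, [0.6, 0.3, 0.1], mode="same")
out = rls_filter(noisy, n)
print(f"input MSE {np.mean((noisy - clean) ** 2):.3f} -> "
      f"output MSE {np.mean((out - clean) ** 2):.3f}")
```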

Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss

Procedia PDF Downloads 465
2446 The Integrated Strategy of Maintenance with a Scientific Analysis

Authors: Mahmoud Meckawey

Abstract:

This research deals with one of the most important aspects of the maintenance field, namely maintenance strategy: the branch concerned with the concepts and schematic thinking of how to manage maintenance and how to deal with defects in engineering products (buildings, machines, etc.) in general. The paper addresses the following: i) The engineering product and the technical systems: when we approach the maintenance process from a strategic view, we deal with an engineering product that consists of multiple integrated systems; in fact, there is no engineering product with only one system. We discuss and explain this topic and, through it, derive a developed definition of the maintenance process. ii) The factors, or bases, of functional efficiency: the main factors that affect the functional efficiency of systems and engineering products; from these we can give a technical definition of defects and of how they occur. iii) The legality of the occurrence of defects (legal defects and illegal defects): here we assume that all the factors of functional efficiency have been applied, and then we discuss the results. iv) The guarantee, functional span age, and technical surplus concepts: in complement to the above topic, and in association with reliability theorems, we deal with the probability-of-failure state, which mainly concerns the design stages, that is, checking and adapting the design of the elements. In maintainability, however, we act in a different way, as we deal with the actual state of the systems; that is, we deal with the complementary part of the probability of failure, which refers to the actual surplus of functionality of the systems.

Keywords: engineering product and technical systems, functional span age, legal and illegal defects, technical and functional surplus

Procedia PDF Downloads 463
2445 Analysis of Human Mental and Behavioral Models for Development of an Electroencephalography-Based Human Performance Management System

Authors: John Gaber, Youssef Ahmed, Hossam A. Gabbar, Jing Ren

Abstract:

Accidents at nuclear power plants (NPPs) occur due to various factors, notable among them being poor safety management and poor safety culture. During abnormal situations, the likelihood of human error is many-fold higher due to the higher cognitive workload. The most common cause of human error and high cognitive workload is mental fatigue. Electroencephalography (EEG) is a method of gathering the electromagnetic waves emitted by the human brain. We propose a safety system that monitors brainwaves for signs of mental fatigue using an EEG system. This requires an analysis of the mental model of the NPP operator, of the changes in brain wave power in response to certain stimuli, and of the risk factors for mental fatigue and attention that NPP operators face when performing their tasks. We analyzed these factors and developed an EEG-based monitoring system that aims to alert NPP operators when levels of mental fatigue and attention hinder their ability to maintain safety.

Keywords: brain imaging, EEG, power plant operator, psychology

Procedia PDF Downloads 82
2444 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones

Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar

Abstract:

Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones. The model provides better agreement with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet particle count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces). In addition, the proposed model evaluates the exact length that the particle travels inside the cyclone in order to evaluate the number of turns inside the cyclone. The separation efficiency model derivation using Stokes' law considers the effect of the inlet tangential velocity on the separation performance. In cyclones, the inlet velocity is a very important factor in determining the performance of the cyclone separation; therefore, the proposed model provides an accurate estimation of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 micron to 10 microns. The proposed model is compared with results in the literature; it is shown that the proposed mathematical model indicates an error of 7% between its efficiency and the efficiency obtained from the experimental results for 1-micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additive Information: The proposed model determines the separation efficiency accurately and could also be used to optimize the separation efficiency of cyclones at low cost through trial-and-error testing, through dimensional changes to enhance separation, and through increasing the particle centrifugal forces. Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
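For orientation, the classical Lapple-style efficiency model below shows how Stokes'-law reasoning links inlet velocity, number of turns, and particle size to a fractional efficiency; it is a textbook stand-in, not the paper's derivation, and the geometry values are invented:

```python
import numpy as np

def cyclone_efficiency(dp_um, v_in=15.0, n_turns=5, width=0.2,
                       rho_p=2500.0, mu=1.8e-5):
    """Classical Lapple-style fractional efficiency from Stokes' law:
    d50 is the particle diameter collected with 50% efficiency."""
    d50 = np.sqrt(9 * mu * width / (2 * np.pi * n_turns * v_in * rho_p))
    dp = np.asarray(dp_um, float) * 1e-6
    return 1.0 / (1.0 + (d50 / dp) ** 2)  # Lapple efficiency curve

for d in (1, 2, 5, 10):                    # particle size, microns
    print(f"{d:2d} um -> efficiency {cyclone_efficiency(d):.2%}")
```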

Keywords: cyclone efficiency, solid-gas separation, mathematical model, models error comparison

Procedia PDF Downloads 377
2443 Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images

Authors: John Paul Q. Tomas, Mark Wilson L. de los Reyes, Kirsten Joyce P. Vasquez

Abstract:

The most frequent type of fracture is a wrist fracture, which is often difficult for medical professionals to find and localize. In this study, fractures in wrist x-ray images were located and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for the image preprocessing, and trained RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with the anisotropic diffusion smoothing filter, trained for 50 epochs, obtained the greatest accuracy of 99.14%, a precision of 100%, a sensitivity/recall of 98.41%, a specificity of 100%, and an IoU score of 56.44% for the posteroanterior projection utilizing augmented data. For the lateral projection using augmented data, the RetinaNet model with the anisotropic diffusion filter, trained for 50 epochs, produced the highest accuracy of 98.40%, a precision of 98.36%, a sensitivity/recall of 98.36%, a specificity of 98.43%, and an IoU score of 58.69%. When comparing the test results of the different individual projections, models, and image filtering techniques, the anisotropic diffusion filter trained for 50 epochs produced the best classification and regression scores for both projections.
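A small sketch of the IoU localization score reported alongside the classification metrics; the boxes are hypothetical:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2),
    the localization score reported alongside the classification metrics."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth vs predicted fracture box:
print(f"IoU = {iou((50, 40, 120, 110), (60, 50, 130, 115)):.2%}")
```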

Keywords: artificial intelligence, computer vision, wrist fracture, deep learning

Procedia PDF Downloads 62
2442 Imperfect Production Inventory Model with Inspection Errors and Fuzzy Demand and Deterioration Rates

Authors: Chayanika Rout, Debjani Chakraborty, Adrijit Goswami

Abstract:

Our work presents an inventory model that captures imperfect production and imperfect inspection processes for deteriorating items. A cost-minimizing model is studied considering two types of inspection errors: a Type I error of falsely screening out a proportion of non-defects, thereby passing them on for rework, and a Type II error of falsely not screening out a proportion of defects, thus selling those to customers, which incurs a penalty cost. The screened items are reworked; however, no returns are entertained due to the deteriorating nature of the items. In more practical situations, certain parameters such as the demand rate and the deterioration rate of inventory cannot be accurately determined, and therefore they are assumed to be triangular fuzzy numbers in our model. We calculate the optimal lot size that must be produced in order to minimize the total inventory cost for both the crisp and the fuzzy models. A numerical example is also considered to exemplify the procedure, followed by an analysis of the sensitivity of the decision variable and the objective function to various parameters.
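A small sketch of one common way to defuzzify triangular fuzzy parameters before plugging them into the cost function, the graded mean integration representation; the fuzzy demand and deterioration values are invented, and the paper may well use a different fuzzy method:

```python
def graded_mean(tfn):
    """Graded mean integration representation of a triangular fuzzy
    number (a, b, c): (a + 4b + c) / 6, a common defuzzification."""
    a, b, c = tfn
    return (a + 4 * b + c) / 6

# Hypothetical fuzzy demand (units/yr) and deterioration rate:
demand = (900, 1000, 1150)
theta = (0.04, 0.05, 0.07)
print(f"crisp demand ~ {graded_mean(demand):.1f}, crisp theta ~ {graded_mean(theta):.4f}")
# These crisp equivalents can then be plugged into the EPQ cost function
# to compare the crisp and fuzzy optimal lot sizes.
```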

Keywords: deteriorating items, EPQ, imperfect quality, rework, type I and type II inspection errors

Procedia PDF Downloads 171
2441 Improved Acoustic Source Sensing and Localization Based On Robot Locomotion

Authors: V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

This paper presents a methodology for acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic-based sensing and localization system, a converging target localization based on recursive direction-of-arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method is able to augment existing proven localization techniques and improve results incrementally by utilizing robot locomotion, and it is capable of converging to a position estimate with greater accuracy using fewer measurements. The results also show the DOA error minimization at each iteration, the improvement in the time needed to reach the destination, and the efficiency of this target localization method in gradually converging to the real target position. Initially, the system is tested using a Kinect mounted on a turntable with DOA markings, which serve as ground truth; our approach is then validated using a FireBird VI (FBVI) mobile robot on which a Kinect is used to obtain bearing information.

Keywords: acoustic source localization, acoustic sensing, recursive direction of arrival, robot locomotion

Procedia PDF Downloads 480
2440 The Probability of Smallholder Broiler Chicken Farmers' Participation in the Mainstream Market within Maseru District in Lesotho

Authors: L. E. Mphahama, A. Mushunje, A. Taruvinga

Abstract:

Although broiler production does not generate large incomes among the smallholder community, it represents the main source of livelihood and part of the nutritional requirement. As a result, the market for broiler meat is growing faster than that of any other meat product and is projected to continue growing in the coming decades. The implication, however, is that a multitude of factors shapes whether smallholder broiler farmers participate in mainstream markets. Socio-economic and institutional factors in broiler farming, collected from 217 smallholder broiler farmers, were incorporated into a binary model to estimate the probability of broiler farmers' participation in the mainstream markets within the Maseru district in Lesotho. Of the thirteen (13) predictor variables fitted into the model, six (6) variables (household size, number of years in the broiler business, stock size, access to transport, access to extension services, and access to market information) had significant coefficients, while seven (7) variables (level of education, marital status, price of broilers, poultry association, access to contracts, access to credit, and access to storage) did not have a significant impact. It is recommended that smallholder broiler farmers organize themselves into cooperatives, which will act as a vehicle through which they can access contracts and formal markets. These cooperatives will also facilitate training and workshops on broiler rearing and marketing through extension visits.
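A minimal sketch of the kind of binary (logit) participation model described, here with statsmodels on simulated data; the two predictors and their effect sizes are hypothetical stand-ins for the study's thirteen variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 217                               # sample size from the abstract

# Hypothetical stand-ins for two of the significant predictors:
stock_size = rng.poisson(200, n)
ext_services = rng.integers(0, 2, n)  # access to extension services (0/1)
X = sm.add_constant(np.column_stack([stock_size, ext_services]))

# Simulate participation with assumed positive effects of both predictors:
logit_p = -3.0 + 0.01 * stock_size + 1.2 * ext_services
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, X).fit(disp=False)
print(model.summary(xname=["const", "stock_size", "ext_services"]))
```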

Keywords: broiler chicken, mainstream market, Maseru district, participation, smallholder farmers

Procedia PDF Downloads 130
2439 Virtual Chemistry Laboratory as Pre-Lab Experiences: Stimulating Student's Prediction Skill

Authors: Yenni Kurniawati

Abstract:

Prediction skill in chemistry experiments is an important skill for pre-service chemistry students, as it stimulates students' reflective thinking at each stage of many chemistry experiments, both qualitatively and quantitatively. A Virtual Chemistry Laboratory was designed to give students opportunities and time to practice many kinds of chemistry experiments repeatedly, everywhere and at any time, before they do a real experiment. The Virtual Chemistry Laboratory content was constructed using the Model of Educational Reconstruction and developed to enhance students' ability to predict the experiment results and analyze the causes of error, calculating accuracy and precision while using chemicals carefully. This research showed a change in students' decision making and a strong awareness of accuracy, but still a low concern for precision. It enhanced students' level of reflective thinking related to their prediction skill by one to two stages on average. Most of them could predict the characteristics of the product of an experiment, and even whether the result was going to be erroneous. In addition, they took experiments more seriously and were more curious about the experiment results. This study recommends providing a different subject matter to give students more opportunities to learn about other kinds of chemistry experiment design.

Keywords: virtual chemistry laboratory, chemistry experiments, prediction skill, pre-lab experiences

Procedia PDF Downloads 324
2438 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age has been considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups, those at major risk of EBF abandonment, in order to deliver the best strategies; children of adolescent mothers are within these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; no twin babies; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of the infant's life. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water, or any other product that is not breast milk, except drugs. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the log-rank test, admitting a probability of type I error of 5% (p-value = 0.05). Results: The cumulative EBF abandonment rates at 2, 4, and 6 months were 33.3%, 52.2%, and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of the milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), the mother's return to school (p = 0.040), and the presence of nipple trauma (p = 0.045). Conclusion: The EBF abandonment rate was higher in the first 4 months of life and is higher than that of the general population of breastfeeding women. Among the EBF abandonment factors, one is related to the adolescent condition and two are related to maternal subjective perception.
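A minimal sketch of the Kaplan-Meier and log-rank analysis described in the methods, using the lifelines library on simulated follow-up times; the pacifier-use split and the data are invented for illustration:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical follow-up: months until EBF abandonment (censored at 6),
# split by pacifier use, one of the factors reported in the abstract:
t_pac = np.minimum(rng.exponential(4, 50), 6)
t_nopac = np.minimum(rng.exponential(9, 55), 6)
e_pac, e_nopac = t_pac < 6, t_nopac < 6        # 1 = abandoned EBF

kmf = KaplanMeierFitter()
kmf.fit(t_pac, event_observed=e_pac, label="pacifier")
print(kmf.survival_function_.tail(1))          # EBF maintenance at 6 months

result = logrank_test(t_pac, t_nopac, event_observed_A=e_pac,
                      event_observed_B=e_nopac)
print(f"log-rank p-value: {result.p_value:.4f}")
```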

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 307
2437 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring

Authors: Younghoon Kim, Seoung Bum Kim

Abstract:

One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of a limitation of one-class classification techniques based on convex optimization, we cannot make the proportion of abnormal observations exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, and convex optimization cannot satisfy this requirement. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the number of normal data points being equal to a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogously to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed. This chart uses a monitoring statistic to characterize the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the proposed chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated and real process data from a thin film transistor-liquid crystal display process.
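A much-simplified sketch of the core idea, fixing the sphere center at the data mean so the problem stays linear (the paper optimizes the boundary itself, including kernel versions); the constraint forces exactly k points inside the radius, which is what allows the Type I proportion to be set exactly. The PuLP/CBC solver and the big-M construction are illustrative assumptions:

```python
import numpy as np
import pulp

rng = np.random.default_rng(0)
points = rng.standard_normal((60, 2))
center = points.mean(axis=0)                 # simplification: fixed center
d = np.linalg.norm(points - center, axis=1)
k = 57                                       # normal points to cover (95%)

prob = pulp.LpProblem("min_radius", pulp.LpMinimize)
r = pulp.LpVariable("r", lowBound=0)
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(len(points))]
M = float(d.max())
prob += r                                    # objective: smallest boundary
for i, di in enumerate(d):
    prob += r >= float(di) - M * (1 - z[i])  # covered points must fit inside r
prob += pulp.lpSum(z) == k                   # exactly k points described
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(f"radius = {pulp.value(r):.3f}, covered = {int(sum(v.value() for v in z))}")
```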

Keywords: control chart, mixed integer programming, one-class classification, support vector data description

Procedia PDF Downloads 164