Search results for: collocational errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 925

355 Root Mean Square-Based Method for Fault Diagnosis and Fault Detection and Isolation of Current Sensor Fault in an Induction Machine

Authors: Ahmad Akrad, Rabia Sehab, Fadi Alyoussef

Abstract:

Nowadays, induction machines are widely used in industry thanks to their advantages over other technologies; demand for them is high because of their reliability, robustness, and cost. The objective of this paper is to address the diagnosis, detection, and isolation of faults in a three-phase induction machine. Three faults are considered: the inter-turn short-circuit fault (ITSC), the current sensor fault, and the single-phase open-circuit fault. A fault detection method is proposed that uses residual errors generated from the root mean square (RMS) of the phase currents. The method is applied to an asymmetric nonlinear model of the induction machine that accounts for the winding fault in the three-axis frame state space. In addition, current sensor redundancy and sensor fault detection and isolation (FDI) are adopted to ensure safe operation of the induction machine drive. Finally, a validation is carried out by simulation in healthy and faulty operation modes to show that the proposed method detects and locates the three types of faults with high reliability.
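
As an illustration of the residual idea described above, the following minimal Python sketch (not the authors' implementation; the window length, threshold, and signal are assumed for demonstration) computes the sliding-window RMS of a phase current and flags a fault when the residual against a healthy reference exceeds a threshold.

```python
import numpy as np

def sliding_rms(x, window):
    """Sliding-window RMS of a 1-D current signal."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x) - window + 1)
    for i in range(len(out)):
        out[i] = np.sqrt(np.mean(x[i:i + window] ** 2))
    return out

def rms_residual_flags(phase_current, healthy_rms, window=200, threshold=0.1):
    """Flag windows whose RMS deviates from the healthy reference by more
    than `threshold` (the residual-error test described in the abstract)."""
    residual = np.abs(sliding_rms(phase_current, window) - healthy_rms)
    return residual > threshold

# Synthetic 50 Hz phase current whose amplitude drops mid-record, mimicking
# the kind of signature a sensor or winding fault leaves on the RMS.
t = np.linspace(0, 1, 5000)
i_a = np.sin(2 * np.pi * 50 * t)
i_a[2500:] *= 0.6                                   # injected "fault"
flags = rms_residual_flags(i_a, healthy_rms=1 / np.sqrt(2))
print("first flagged window index:", int(np.argmax(flags)))
```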

Keywords: induction machine, asymmetric nonlinear model, fault diagnosis, inter-turn short-circuit fault, root mean square, current sensor fault, fault detection and isolation

Procedia PDF Downloads 162
354 Development of a Work-Related Stress Management Program Guaranteeing Fitness-For-Duty for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

Human error is one of the most dreaded factors that may result in unexpected accidents, especially in nuclear power plants. For accident prevention, it is indispensable to analyze and manage the influence of any factor which may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance. Therefore, this research aimed to develop a work-related stress management program which can guarantee Fitness-for-Duty (FFD) of the workers in nuclear power plants, especially those working in main control rooms. Major stress factors were elicited through literature surveys and classified into major categories such as demands, supports, and relationships. To manage those factors, a test and intervention program based on a 4-level approach was developed over the whole employment cycle, including selection and screening of workers, job allocation, and job rotation. In addition, a managerial care program was introduced based on the concept of an Employee Assistance Program (EAP). Reviews of the program conducted by ex-operators of nuclear power plants showed responses in the affirmative and suggested additional treatment to guarantee high performance of human workers, not only in normal operations but also in emergency situations.

Keywords: human error, work performance, work stress, Fitness-For-Duty (FFD), Employee Assistance Program (EAP)

Procedia PDF Downloads 382
353 Two-Phase Sampling for Estimating a Finite Population Total in Presence of Missing Values

Authors: Daniel Fundi Murithi

Abstract:

Missing data is a real bane in many surveys. To overcome the problems caused by missing data, partial deletion and single imputation methods, among others, have been proposed. However, they are associated with problems such as discarding usable data and inaccuracy in reproducing known population parameters and standard errors. For regression and stochastic imputation, it is assumed that there is a variable with complete cases to be used as a predictor in estimating missing values in the other variable, and that the relationship between the two variables is linear, which might not be realistic in practice. In this project, we estimate the population total in the presence of missing values in two-phase sampling. Instead of regression or stochastic models, a nonparametric model-based regression method is used to impute the missing values. An empirical study showed that nonparametric model-based regression imputation is better at reproducing the variance of the population total estimate obtained when there were no missing values, compared to mean, median, regression, and stochastic imputation methods. Although regression and stochastic imputation were better than nonparametric model-based imputation at reproducing the population total estimates obtained when there were no missing values for one of the sample sizes considered, nonparametric model-based imputation may be used when the relationship between the outcome and predictor variables is not linear.
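
As a rough illustration of nonparametric model-based imputation (a minimal sketch under assumed data and bandwidth, not the study's estimator), missing values of the survey variable can be filled in with a Gaussian-kernel regression estimate given an auxiliary variable, after which the population total is estimated from the completed sample.

```python
import numpy as np

def kernel_impute(x, y, bandwidth=0.5):
    """Impute NaN entries of y with a local-constant (kernel) regression
    estimate of E[y | x]; observed entries are left untouched."""
    y = np.array(y, dtype=float)
    observed = ~np.isnan(y)
    for i in np.where(~observed)[0]:
        w = np.exp(-0.5 * ((x[observed] - x[i]) / bandwidth) ** 2)
        y[i] = np.sum(w * y[observed]) / np.sum(w)
    return y

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                          # auxiliary variable, fully observed
y = np.sin(x) + rng.normal(0, 0.2, 200)              # survey variable, nonlinear in x
y[rng.choice(200, 40, replace=False)] = np.nan       # random non-response
y_completed = kernel_impute(x, y)
print("estimated population total:", y_completed.sum())
```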

Keywords: finite population total, missing data, model-based imputation, two-phase sampling

Procedia PDF Downloads 105
352 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks

Authors: Waleed Basuliman

Abstract:

Anthropometric dimensions are considered one of the important factors when designing human-machine systems. In this study, the estimation of anthropometric dimensions has been improved by using an Artificial Neural Network (ANN) model that is able to predict the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated in measuring 20 anthropometric dimensions. These anthropometric measurements are considered important for designing work and life applications in Saudi Arabia. The data were collected over eight months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were set to be the measured variables (the model's outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. These errors are smaller, and therefore better, than the errors reported in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a regression model. The ANN model showed a higher coefficient of determination (R2) between the predicted and actual dimensions than the regression model.
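
To make the reported error metrics concrete, the sketch below (illustrative only; the synthetic data, the scikit-learn network, and the single 25-neuron hidden layer are assumptions, not the authors' model) fits a multi-output network and computes MAPE and RMSE on a held-out set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1427, 6))                             # 6 input dimensions (assumed)
W = rng.normal(size=(6, 15))
Y = 80 + X @ W + rng.normal(scale=0.5, size=(1427, 15))    # 15 target dimensions

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
net.fit(X_tr, Y_tr)
pred = net.predict(X_te)

rmse = np.sqrt(np.mean((pred - Y_te) ** 2))
mape = np.mean(np.abs((pred - Y_te) / Y_te))               # mean absolute percentage error
print(f"RMSE = {rmse:.3f}, MAPE = {mape:.4f}")
```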

Keywords: artificial neural network, anthropometric measurements, back-propagation

Procedia PDF Downloads 465
351 Permanent Reduction of Arc Flash Energy to Safe Limit on Line Side of 480 Volt Switchgear Incomer Breaker

Authors: Abid Khan

Abstract:

A recognized engineering challenge is the protection of personnel from fatal arc flash incident energy on the line side of 480-volt switchgear incomer breakers during maintenance activities. The incident energy is typically high due to slow fault clearance, and it can exceed the ratings of the available personal protective equipment (PPE). A fault in this section of the switchgear is cleared by breakers or fuses in the upstream higher-voltage system (4160 volts or higher). The fault current reflected in the higher-voltage upstream system for a fault in the 480-volt switchgear is low, so the clearance time is longer and the incident energy, which grows as clearance slows, is hence higher. Installing overcurrent protection on the 480-volt system upstream of the incomer breaker provides protection that operates fast enough and trips the upstream higher-voltage breaker when a fault develops at the incomer breaker. The problem of the reduced fault current reflected in the upstream higher-voltage system is thereby eliminated. Since the fast overcurrent protection is permanently installed, it is always functional, does not require human intervention, and eliminates exposure to human error. It is installed at the location of the maintenance activities, and its operation can be locally monitored by craftsmen during maintenance.

Keywords: arc flash, mitigation, maintenance switch, energy level

Procedia PDF Downloads 173
350 Establishment and Application of Numerical Simulation Model for Shot Peen Forming Stress Field Method

Authors: Shuo Tian, Xuepiao Bai, Jianqin Shang, Pengtao Gai, Yuansong Zeng

Abstract:

Shot peen forming is an essential forming process for aircraft metal wing panels. With the development of computer simulation technology, scholars have proposed a numerical simulation method of shot peen forming based on the stress field. Three shot peen forming indexes (crater diameter, shot speed, and surface coverage) are required as simulation parameters in the stress field method. It is necessary to establish the relationship between simulation and experimental process parameters in order to simulate the deformation under different shot peen forming parameters. Shot peen forming tests on 2024-T351 aluminum alloy workpieces were carried out using the uniform test design method, with three factors: air pressure, feed rate, and shot flow. Based on the results, a second-order response surface model relating the simulation parameters to the uniform test factors was established by the stepwise regression method using MATLAB software. The response surface model was combined with the stress field method to simulate the shot peen forming deformation of the workpiece. Compared with the experimental results, the simulated values were smaller than the corresponding test values; the maximum and average errors were 14.8% and 9%, respectively.
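
For readers unfamiliar with second-order response surface models, the sketch below (an illustration, not the paper's MATLAB code; the factor values and response are synthetic) fits a full quadratic surface to three process factors by ordinary least squares. A stepwise procedure, as used in the paper, would additionally drop terms that are not statistically significant.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Full second-order model terms for three factors:
    intercept, linear terms, two-factor interactions, and squares."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Assumed (coded) process factors: air pressure, feed rate, shot flow.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(18, 3))                  # uniform-design-style points
y = 2 + 3 * X[:, 0] + 1.5 * X[:, 1] ** 2 - X[:, 0] * X[:, 2] \
    + rng.normal(0, 0.05, 18)                        # synthetic response (e.g., deformation)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print("max |relative fit error|:", np.max(np.abs((pred - y) / y)))
```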

Keywords: shot peen forming, process parameter, response surface model, numerical simulation

Procedia PDF Downloads 59
349 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for the binary offset carrier and multiplexed binary offset carrier signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion of the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is perfectly suitable for implementation in low-cost receivers.

Keywords: correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems

Procedia PDF Downloads 108
348 Applying Dictogloss Technique to Improve Auditory Learners’ Writing Skills in Second Language Learning

Authors: Aji Budi Rinekso

Abstract:

There are some common problems that are often faced by students in writing. These problems are related to the macro and micro skills of writing, such as incorrect spelling, inappropriate diction, grammatical errors, random ideas, and irrelevant supporting sentences. Therefore, a teaching technique that can solve those problems is needed. The dictogloss technique is a teaching technique that involves listening practice, which makes it suitable for students with an auditory learning style. The dictogloss technique comprises four basic steps: (1) warm-up, (2) dictation, (3) reconstruction, and (4) analysis and correction. In the warm-up, students find out about the topic and do some preparatory vocabulary work. In the dictation, the students listen to a text read at normal speed by the teacher. The text is read twice: at the first reading the students only listen, and at the second reading they listen again and take notes. In the reconstruction, the students discuss the information from the text read by the teacher and start to write a text. Lastly, in the analysis and correction, the students check their writing and revise it. Dictogloss offers some advantages for the effort of improving writing skills. Through the use of the dictogloss technique, students can solve their problems in both macro and micro skills. Easier generation of ideas and better writing mechanics are the benefits of dictogloss.

Keywords: auditory learners, writing skills, dictogloss technique, second language learning

Procedia PDF Downloads 122
347 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian

Abstract:

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms have been used to minimize artificial neural network errors, such as colony algorithms, genetic algorithms, imperialist competitive algorithms, and so on. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, load eccentricity, and undrained shear resistance of the soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks over back-propagation artificial neural networks as well as the Broms and Hansen methods.

Keywords: artificial neural network, clayey soil, imperialist competition algorithm, lateral bearing capacity, short pile

Procedia PDF Downloads 121
346 Quantification of Lustre in Textile Fibers by Image Analysis

Authors: Neelesh Bharti Shukla, Suvankar Dutta, Esha Sharma, Shrikant Ralebhat, Gurudatt Krishnamurthy

Abstract:

A key physical attribute of textile fibers is lustre. It is a complex phenomenon arising from the interaction of light with fibers, yarn, and fabrics. It is perceived as the contrast difference between bright areas (specular reflection) and duller backgrounds (diffused reflection). The lustre of fibers is affected by their surface structure, morphology, and cross-section profile, as well as by the presence of any additives/registrants. Due to complexities in measurement, objective instruments such as gloss meters do not give reproducible quantification of lustre. Other instruments, such as SAMBA hair systems, are expensive. In light of this, lustre quantification has largely remained subjective, judged visually by experts, but prone to errors. In this development, a physics-based approach was conceptualized and demonstrated. We have developed an image-analysis-based technique to quantify visually observed differences in the lustre of fibers. Cellulosic fibers produced with different approaches, and with visually different levels of lustre, were photographed under controlled optics. These images were subsequently analyzed using a configured software system. The ratio of the intensity of light from the bright (specular reflection) and dull (diffused reflection) areas was used to numerically represent lustre. In the next step, the set of samples that were not easily distinguishable visually was also evaluated by the technique, and it was established that the quantification of lustre is feasible.
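
A minimal sketch of the intensity-ratio idea (illustrative only; the percentile-based segmentation and the synthetic image are assumptions, not the authors' configured software): the lustre value is taken as the ratio of the mean intensity of the brightest, specular pixels to the mean intensity of the dull, diffuse background.

```python
import numpy as np

def lustre_ratio(gray, specular_percentile=95, diffuse_percentile=50):
    """Ratio of mean intensity in the brightest (specular) pixels to mean
    intensity in typical (diffuse) background pixels of a grayscale image."""
    gray = np.asarray(gray, dtype=float)
    specular = gray >= np.percentile(gray, specular_percentile)
    diffuse = gray <= np.percentile(gray, diffuse_percentile)
    return gray[specular].mean() / gray[diffuse].mean()

# Synthetic "fiber image": dull background with a bright specular streak.
rng = np.random.default_rng(7)
img = rng.normal(80, 5, size=(256, 256))
img[100:110, :] = 220            # bright streak standing in for specular reflection
print(f"lustre ratio: {lustre_ratio(img):.2f}")
```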

Keywords: lustre, fibre, image analysis, measurement

Procedia PDF Downloads 151
345 The Effect of Damper Attachment on Tennis Racket Vibration: A Simulation Study

Authors: Kuangyou B. Cheng

Abstract:

Tennis is among the most popular sports worldwide. During ball-racket impact, substantial vibration transmitted to the hand/arm may be the cause of “tennis elbow”. Although it is common for athletes to attach a “vibration damper” to the string bed, the effect remains unclear. To avoid subjective factors and errors in data recording, the effect of damper attachment on racket handle-end vibration was investigated with computer simulation. The tennis racket was modeled as a beam with free-free ends (similar to loosely holding the racket). The finite difference method with 40 segments was used to simulate the ball-racket impact response. The effect of attaching a damper was modeled by increasing the mass of one segment. It was found that the damper has the largest effect when installed at the center of the string bed. However, this is not a practical location because it interferes with ball-racket impact. The vibration amplitude changed very slightly when the damper was near the top or bottom of the string bed, and the damper works only slightly better at the bottom than at the top. In addition, heavier dampers work better than lighter ones. These simulation results were comparable with experimental recordings in which the selection of damper locations was restricted by the ball impact locations. It was concluded that mathematical model simulations are able to objectively investigate the effect of damper attachment on racket vibration. In addition, given the very slight difference in grip-end vibration amplitude when the damper was attached at the top or bottom of the string bed, it is questionable whether the effect can really be felt by athletes.

Keywords: finite difference, impact, modeling, vibration amplitude

Procedia PDF Downloads 233
344 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution. These two distributions were presented by Suksaengrakcharoen and Bodhisuwan in 2014. The findings showed that the probability density function (pdf) is fairly complex, which creates problems in estimating the parameters. The problem in parameter estimation is that the estimators cannot be calculated in closed form, so numerical estimation must be used to find them. In this study, we present a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used for estimating α, β, λ, and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. A Monte Carlo technique was used to assess the estimators' performance. Sample sizes of 10, 30, and 100 were considered, and the simulations were repeated 20 times in each case. The effectiveness of the estimators was evaluated by considering the values of the mean squared error and the bias. The findings revealed that the EM algorithm produced estimates close to the actual values. Also, the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than the maximum likelihood estimators obtained via the EM algorithm.
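
To illustrate the acceptance-rejection step used to generate the simulated data (a generic sketch under assumed densities, not the paper's exact mixture generalized gamma pdf), samples from a target density f are drawn by proposing from an envelope density g and accepting each proposal with probability f/(M*g):

```python
import numpy as np
from scipy.stats import gamma, expon

def acceptance_rejection(f, g_sample, g_pdf, M, n, rng):
    """Draw n samples from target density f using proposal g, with f <= M*g."""
    out = []
    while len(out) < n:
        x = g_sample(rng)
        if rng.uniform() <= f(x) / (M * g_pdf(x)):
            out.append(x)
    return np.array(out)

# Assumed toy target: a two-component gamma mixture (weights 0.4 / 0.6),
# standing in for the mixture generalized gamma density of the paper.
f = lambda x: 0.4 * gamma.pdf(x, a=2.0, scale=1.0) + 0.6 * gamma.pdf(x, a=5.0, scale=0.8)
g_pdf = lambda x: expon.pdf(x, scale=4.0)            # heavier-tailed envelope
g_sample = lambda rng: rng.exponential(4.0)
M = 6.0                                              # chosen so that f <= M*g on the support

rng = np.random.default_rng(0)
sample = acceptance_rejection(f, g_sample, g_pdf, M, n=1000, rng=rng)
print("sample mean:", sample.mean())
```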

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 200
343 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults

Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu

Abstract:

A strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be utilized to estimate the age of an individual by measuring the pulp cavity size on dental radiographs, as a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were then subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 - (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on the results of this research, it was concluded that Cameriere's method is suitable for dental age estimation and can be used for forensic procedures in Turkish adults.
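
Applied directly, the derived equation maps a measured pulp/tooth area ratio to an age estimate; a minimal sketch (the example AR value is hypothetical):

```python
def estimate_age(area_ratio):
    """Age estimate from the pulp/tooth area ratio (AR) of the maxillary
    canine, using the regression equation derived for this Turkish sample."""
    return 77.365 - 351.193 * area_ratio

print(estimate_age(0.10))   # hypothetical AR of 0.10 -> about 42.2 years
```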

Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method

Procedia PDF Downloads 427
342 Militating Factors Against Building Information Modeling Adoption in Quantity Surveying Practice in South Africa

Authors: Kenneth O. Otasowie, Matthew Ikuabe, Clinton Aigbavboa, Ayodeji Oke

Abstract:

The quantity surveying (QS) profession is one of the professions in the construction industry, and it is saddled with the responsibility of measuring the quantities of materials as well as the workmanship required to get work done in the industry. This responsibility is vital to the success of a construction project, as it determines whether a project will be completed on time, within budget, and up to the required standard. However, the practice has been repeatedly criticised for failing to execute this responsibility accurately. The need to reduce errors, inaccuracies, and omissions has made the adoption of modern technologies such as building information modeling (BIM) inevitable in its practice. Nevertheless, there are barriers to the adoption of BIM in QS practice in South Africa (SA). Thus, this study aims to investigate these barriers. A survey design was adopted. A total of one hundred and fifteen (115) questionnaires were administered to quantity surveyors in Gauteng Province, SA, and ninety (90) were returned and found suitable for analysis. The collected data were analysed using percentages, mean item scores, standard deviations, the one-sample t-test, and the Kruskal-Wallis test. The findings show that lack of BIM expertise, lack of government enforcement, resistance to change, and no client demand for BIM are the most significant barriers to the adoption of BIM in QS practice. As a result, this study recommends that training on BIM technology be prioritised and that government take the lead in BIM adoption in the country, particularly in public projects.

Keywords: barriers, BIM, quantity surveying practice, South Africa

Procedia PDF Downloads 78
341 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI

Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz

Abstract:

The Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have an important application in modeling the waves used for data inversion techniques such as full waveform inversion (FWI). In this work, a linearized version of the NSE and its variables, neglecting the deviatoric terms of the stress tensor, is presented. In order to obtain a theoretical model of the pressure p(x,t) and the wave velocity profile c(x,t), a wave equation for a visco-acoustic medium (VAE) is written. A change of variables p(x,t) = q(x,t)h(ρ) is made in the VAE equation, leading to the well-known Klein-Gordon equation (KGE), which describes waves propagating in a variable-density medium (ρ) with dispersive term α^2(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α^2(x) that accounts for the energy dissipation and dispersion. Finally, an integral form solution is derived for p(x,t), c(x,t), and kinematic variables such as the particle velocity v(x,t), the displacement u(x,t), and the bulk modulus function k_b(x,t). Further, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the modern parallel computing point of view. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed.

Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI

Procedia PDF Downloads 494
340 Evaluating Language Loss Effect on Autobiographical Memory by Examining Memory Phenomenology in Bilingual Speakers

Authors: Anastasia Sorokina

Abstract:

Gradual language loss, or attrition, has been well documented in individuals who migrate and become immersed in a different language environment. This phenomenon of first language (L1) attrition is an example of non-pathological language loss (not due to trauma) and can manifest itself in frequent pauses, word searches, or grammatical errors. While the widely experienced loss of one's first language might seem harmless, there is convincing evidence from the disciplines of Developmental Psychology, Bilingual Studies, and even Psychotherapy that language plays a crucial role in the memory of self. In fact, we remember, store, and share personal memories with the help of language. Dual-Coding Theory suggests that deterioration of the language memory code could lead to forgetting. Yet, no one has investigated a possible connection between language loss and memory. The present study aims to address this research gap by examining a corpus of 1,495 memories of Russian-English bilinguals who are on a continuum of L1 attrition. Since phenomenological properties capture how well a memory is remembered, the following descriptors were selected: vividness, ease of recall, emotional valence, personal significance, and confidence in the event. A series of linear regression analyses were run to examine the possible negative effects of L1 attrition on autobiographical memory. The results revealed that L1 attrition might compromise perceived vividness and confidence in the event, which is indicative of memory deterioration. These findings suggest the importance of heritage language maintenance in immigrant communities, whose members might be forced to assimilate, as language loss might negatively affect the memory of self.

Keywords: L1 attrition, autobiographical memory, language loss, memory phenomenology, dual coding

Procedia PDF Downloads 85
339 Tuning of Kalman Filter Using Genetic Algorithm

Authors: Hesham Abdin, Mohamed Zakaria, Talaat Abd-Elmonaem, Alaa El-Din Sayed Hafez

Abstract:

The Kalman filter algorithm is an estimator known as the workhorse of estimation. It has an important application in missile guidance, especially when accurate target data are lacking due to noise or uncertainty. In this paper, a Kalman filter is used as a tracking filter in a simulated target-interceptor scenario with noise. It estimates the position, velocity, and acceleration of the target in the presence of noise. These estimates are needed for both the proportional navigation and the differential geometry guidance laws. A Kalman filter performs well at low noise, but large noise causes considerable errors that lead to performance degradation. Therefore, a new technique is required to overcome this defect, using tuning factors to adapt the Kalman filter to increasing noise. The values of the tuning factors are between 0.8 and 1.2; they take one specific value for the first half of the range and a different value for the second half, and they are multiplied by the estimated values. These factors have optimum values and are altered as the target heading changes. A genetic algorithm updates these selections to increase the maximum effective range, which was previously reduced by noise. The results show that the selected factors have other benefits, such as decreasing the minimum effective range that was increased earlier due to noise. In addition, the selected factors decrease the miss distance for all ranges in this target direction and expand the effective range, which increases the probability of kill.
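
A toy reconstruction of the idea (illustrative only; the 1-D constant-velocity tracker, the fitness definition, and the GA settings are assumptions, not the authors' guidance simulation): a small genetic algorithm searches for the two tuning factors in [0.8, 1.2] that minimize the error of the scaled Kalman estimate over the two halves of the run.

```python
import numpy as np

def run_tracker(factors, n=200, dt=0.1, meas_std=5.0):
    """1-D constant-velocity Kalman filter whose position estimate is scaled
    by factors[0] for the first half of the run and factors[1] for the second
    half. Returns the RMS position error (the fitness the GA minimizes)."""
    rng = np.random.default_rng(1)                   # fixed noise for a repeatable fitness
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.diag([0.01, 0.01])
    R = np.array([[meas_std ** 2]])
    x_true = np.array([0.0, 30.0])                   # position, velocity
    x_est, P = np.zeros(2), np.eye(2) * 100.0
    sq_err = []
    for k in range(n):
        x_true = F @ x_true
        z = H @ x_true + rng.normal(0.0, meas_std, 1)
        x_est = F @ x_est                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                          # update
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P
        f = factors[0] if k < n // 2 else factors[1] # tuning factor on the estimate
        sq_err.append((f * x_est[0] - x_true[0]) ** 2)
    return np.sqrt(np.mean(sq_err))

# Small genetic algorithm over the two tuning factors in [0.8, 1.2].
rng = np.random.default_rng(0)
pop = rng.uniform(0.8, 1.2, size=(20, 2))
for gen in range(15):
    fitness = np.array([run_tracker(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]                       # selection
    idx = rng.integers(0, 10, 10)
    children = parents[idx] + rng.normal(0.0, 0.02, (10, 2))      # mutation
    pop = np.vstack([parents, np.clip(children, 0.8, 1.2)])
best = pop[np.argmin([run_tracker(ind) for ind in pop])]
print("best tuning factors:", best)
```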

Keywords: proportional navigation, differential geometry, Kalman filter, genetic algorithm

Procedia PDF Downloads 486
338 An Historical Revision of Change and Configuration Management Process

Authors: Expedito Pinto De Paula Junior

Abstract:

Current systems such as artificial satellites, airplanes, automobiles, turbines, power systems, and air traffic control are becoming increasingly complex and/or highly integrated, as defined in SAE ARP-4754A (the Society of Automotive Engineers standard on certification considerations for highly-integrated or complex aircraft systems). Among other processes, the development of such systems requires careful Change and Configuration Management (CCM) to establish and maintain product integrity. Understanding the maturity of the CCM process from a historical perspective is crucial for its better implementation in the hardware and software lifecycle. The sense of work organization in all fields of development is directly related to the order and interrelation of the parties, changes over time, and the record of these changes. Generally, it is observed that engineers, administrators, and managers invest more time in technical activities than in the organization of work; moreover, these professionals are focused on solving complex problems with a purely technical bias. The CCM process is fundamental to the development, production, and operation of new products, especially in safety-critical systems. The objective of this paper is to open a discussion of the historical revision of CCM, based on standards from around the world, in order to understand and reflect on its importance across the years, its contribution to technology evolution, the maturity of organizations in the system lifecycle, and the benefits of CCM in avoiding errors and mistakes during the product lifecycle.

Keywords: changes, configuration management, historical, revision

Procedia PDF Downloads 180
337 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM

Authors: Jaechoul Shin, Juhwan Hwang

Abstract:

When the BIM method is applied to civil engineering in the area of free-formed structures, a comparatively high rate of construction productivity can be expected, as in the building engineering area. In this research, we evaluated quantity calculation errors by applying the method to earthwork and bridge construction (e.g., a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders and inverted-Tee bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction, and implemented a BIM-based 3D modeling quantity survey. We confirmed the high reliability of the BIM-based method in structural work, for which the errors fell in the range of -6% to +5%. In particular, the rock-type quantity calculation errors, in the range of -14% to +13% of the earthwork quantity, highlight the problems of the existing 2D-CAD-based quantity calculation and the scope for its improvement, and thus the benefit and applicability of the BIM method in civil engineering. The routine method for earthwork quantities has an error tolerance as negligible as that of structural work; however, the significant errors in the calculated rock-type quantities show that the reliability of 2D-based volume calculation can be problematic. By estimating earthwork quantities based on 3D-BIM, the proposed method achieves better reliability than the routine method. Considering the benefits of integrating design, construction, and maintenance levels of information in BIM, the effectiveness of introducing BIM design in civil engineering and the possibility of applying it were confirmed.

Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork

Procedia PDF Downloads 419
336 Study of Side Effects of Myopia Contact Correction by Soft Lenses and Orthokeratology Lenses among Medical Students

Authors: K. Iu. Hrizhymalska, O. Ol. Andrushkova, I. Iu. Pshenychna

Abstract:

Aim: To study and compare the side effects of myopia contact correction with soft lenses and orthokeratology lenses among medical students. Patients and methods: 34 students (68 eyes) with moderate and severe myopia, who had used contact correction of myopia for 2-4 years, were examined. Some of them used soft lenses, while others used orthokeratology lenses. The methods used were biomicroscopy of the ocular surface, Schirmer's test, Norn's test, and a survey regarding satisfaction with use. Results: Corneal vascularization along the limbus was noted in 4 (5%) eyes of the examined students. In 8 (11%) eyes, symptoms of mild dry eye disease were detected. Two (3%) eyes showed signs of meibomitis. Allergic conjunctivitis was observed in 4 (5%) eyes, and a purulent corneal ulcer was present in 1 eye. The surveys showed that orthokeratology lenses, unlike soft lenses, do not limit everyday activity (sports, tourism, swimming, etc.); they also do not cause discomfort during temperature changes, and they reduce existing symptoms of dry eye disease. Conclusion: Myopia contact correction is one of the optimal options among students, as it allows physical and mental activity to be expanded. However, taking into account the frequency of side effects in users of soft contact lenses, it is necessary to carry out prevention and treatment of myopia in medical students, to follow the recommendations for use, and to instill preservative-free tear substitutes with trehalose when symptoms of dry eye appear. Also, when side reactions occur, contact correction with soft lenses should be changed to orthokeratology lenses.

Keywords: correction, myopia, soft lenses, orthokeratology, spectacles, cornea, dry eye, side effects, refractive errors

Procedia PDF Downloads 33
335 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the frequently used tools in clinical practice to detect atrial fibrillation (AF), which might degenerate into life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73~0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Given the progress in artificial intelligence (AI), we developed an ECG deep learning model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The ECG reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. The cardiologists confirmed that 68 of the 88 reports were AF and the others were not, so the precision of AF detection by the ECG device was about 0.77. (3) A parallel 4-layer 1-dimensional convolutional neural network (CNN) was developed to identify AF based on the limb-lead and chest-lead ECGs. Results: The results indicated that this model has better performance in AF detection than the traditional computer interpretation of the ECG device on the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
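
A rough PyTorch sketch of the parallel two-branch idea (the layer widths, kernel sizes, and input lengths are illustrative assumptions, not the authors' architecture): one four-layer 1-D convolutional branch processes the limb leads, another processes the chest leads, and their features are concatenated before classification.

```python
import torch
import torch.nn as nn

class ParallelECGCNN(nn.Module):
    def __init__(self, limb_leads=6, chest_leads=6):
        super().__init__()
        def branch(in_ch):
            # four stacked 1-D conv blocks (conv -> ReLU -> max pooling)
            layers, ch = [], in_ch
            for out_ch in (16, 32, 64, 64):
                layers += [nn.Conv1d(ch, out_ch, kernel_size=7, padding=3),
                           nn.ReLU(),
                           nn.MaxPool1d(4)]
                ch = out_ch
            layers += [nn.AdaptiveAvgPool1d(1), nn.Flatten()]
            return nn.Sequential(*layers)
        self.limb_branch = branch(limb_leads)
        self.chest_branch = branch(chest_leads)
        self.head = nn.Linear(64 + 64, 2)            # AF vs. non-AF logits

    def forward(self, limb_x, chest_x):
        feats = torch.cat([self.limb_branch(limb_x),
                           self.chest_branch(chest_x)], dim=1)
        return self.head(feats)

model = ParallelECGCNN()
limb = torch.randn(8, 6, 5000)      # batch of 8, 6 limb leads, 5000 samples each
chest = torch.randn(8, 6, 5000)
print(model(limb, chest).shape)     # torch.Size([8, 2])
```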

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 92
334 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance into the estimation of finite population parameters. Regression models have been recognized as one technique for reducing the bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information assumed to be available throughout. The auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at the 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
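
For reference, the classical Nadaraya-Watson estimator on which the improved technique builds weights each observed response by a kernel of its distance in the auxiliary variable. A minimal sketch (the Gaussian kernel, bandwidth, and data are assumptions, not the paper's improved weights or two-stage design):

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, h):
    """m(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h), Gaussian K."""
    u = (x_query[:, None] - x_obs[None, :]) / h
    K = np.exp(-0.5 * u ** 2)
    return (K * y_obs).sum(axis=1) / K.sum(axis=1)

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 500)                           # auxiliary variable
y = 10 + 5 * np.sin(2 * np.pi * x) + rng.normal(0, 1, 500)
respond = rng.uniform(size=500) > 0.3                # about 30% random non-response

y_hat = y.copy()
y_hat[~respond] = nadaraya_watson(x[~respond], x[respond], y[respond], h=0.05)
print("estimated population mean:", y_hat.mean())
```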

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 112
333 Learner's Difficulties Acquiring English: The Case of Native Speakers of Rio de La Plata Spanish Towards Justifying the Need for Corpora

Authors: Maria Zinnia Bardas Hoffmann

Abstract:

Contrastive Analysis (CA) is the systematic comparison of two languages. It stems from the notion that errors are caused by interference of the L1 system in the acquisition process of an L2. CA represents a useful tool for understanding the nature of learning and acquisition. This particular method also promises a path to understanding the nature of the underlying cognitive processes, even when other factors such as intrinsic motivation and teaching strategies were found to best explain students' problems in acquisition. The study of CA is justified not only by the need for a deeper understanding of the nature of SLA, but also as an invaluable source of clues, at a cognitive level, about the general processes involved in rule formation and abstract thought. It is relevant for cross-disciplinary studies and the fields of Computational Thought, Natural Language Processing, Applied Linguistics, Cognitive Linguistics, and Math Theory. That being said, this paper also addresses the method's own set of constraints and limitations. Finally, this paper (a) aims to identify some of the difficulties students may find in their learning process due to the nature of their specific variety of L1, Rio de la Plata Spanish (RPS), and (b) attempts to discuss the necessity for specific models to approach CA.

Keywords: second language acquisition, applied linguistics, contrastive analysis, applied contrastive analysis English language department, meta-linguistic rules, cross-linguistics studies, computational thought, natural language processing

Procedia PDF Downloads 117
332 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy

Authors: Hao Wang, Shengchun Wang, Weidong Wang

Abstract:

To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist, achieving high-precision registration of the rail profile. First, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on the curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize the automatic segmentation of the small circular arcs. Finally, two circle centers are fitted as matching reference points from the small circular arcs on both sides, realizing the alignment of the measured profile to the standard design profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment; the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.

Keywords: curvature entropy, profile registration, rail wear, structured light, train-running

Procedia PDF Downloads 235
331 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators in a periodic manner based on feedback from the sensors. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of the excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for this kind of simulation, but they pose limitations regarding the availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run uninterrupted at the highest priority, for reducing network latency in a distributed architecture, and for real-time data acquisition, data storage and retrieval, user interaction, etc.
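
As one concrete example of the highest-priority strategy on a bare Linux kernel (a minimal sketch; the priority value and the 1 ms period are illustrative assumptions, and a real deployment would also isolate CPUs and lock memory), a process can request the SCHED_FIFO real-time policy before entering its periodic control loop:

```python
import os
import time

# Request the SCHED_FIFO real-time policy for this process (Linux only;
# requires root or CAP_SYS_NICE). Priority 80 is an application-specific choice.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

PERIOD = 0.001                       # 1 ms control period (assumed)
next_deadline = time.monotonic()
for _ in range(1000):
    # ... acquire sensor feedback, compute excitation, drive actuators ...
    next_deadline += PERIOD
    delay = next_deadline - time.monotonic()
    if delay > 0:
        time.sleep(delay)            # a production loop would use clock_nanosleep
    else:
        print(f"deadline miss by {-delay:.6f} s")
```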

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 114
330 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index

Authors: A. Sathiya Susuman, Hamisi F. Hamisi

Abstract:

Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and fewer with respect to sex data because of the evident biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims to assess the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz. Whipple's index, Myers' blended index, and the age-sex accuracy index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lower index of about 152.65, while females had the higher index of about 156.07. For Myers' blended index, the preferences were for digits ‘0’ and ‘5’, while the avoidances were for digits ‘1’ and ‘3’ for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
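
For reference, Whipple's index summarizes the preference for ages ending in 0 or 5 among reported ages 23-62; a value of 100 indicates no heaping and 500 indicates that every reported age ends in 0 or 5. A minimal sketch with assumed single-year age counts:

```python
import numpy as np

def whipples_index(counts_by_age):
    """Whipple's index from a dict {age: population count}, ages 23-62."""
    ages = range(23, 63)
    total = sum(counts_by_age.get(a, 0) for a in ages)
    ending_0_or_5 = sum(counts_by_age.get(a, 0) for a in ages if a % 5 == 0)
    return ending_0_or_5 / (total / 5) * 100

# Synthetic example: a flat age distribution with extra mass heaped on 0s and 5s.
rng = np.random.default_rng(0)
counts = {age: 1000 + int(rng.normal(0, 20)) for age in range(23, 63)}
for age in range(25, 63, 5):
    counts[age] += 600               # heaping on ages ending in 0 or 5
print(f"Whipple's index: {whipples_index(counts):.1f}")
```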

Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index

Procedia PDF Downloads 447
329 Implications of Climate Change and World Uncertainty for Gender Inequality: Global Evidence

Authors: Kashif Nesar Rather, Mantu Kumar Mahalik

Abstract:

The discourse surrounding climate change has gained considerable traction, with a discernible emphasis on its nuanced and consequential impact on gender inequality. Concurrently, escalating global tensions are contributing to heightened uncertainty, potentially exerting influence on gender disparities. Within this framework, this study attempts to empirically investigate the implications of climate change and world uncertainty for gender inequality for a balanced panel of 100 economies between 1995 and 2021. The estimated models also control for the effects of globalisation, economic growth, and education expenditure. The panel cointegration tests establish a significant long-run relationship between the variables of the study. Furthermore, the PMG-ARDL (pooled mean group autoregressive distributed lag) estimation technique confirms that both climate change and world uncertainty perpetuate global gender inequalities. Additionally, the results establish that globalisation, economic growth, and education expenditure exert a mitigating influence on gender inequality, signifying their role in diminishing gender disparities. These findings are further confirmed by the FGLS (Feasible Generalized Least Squares) and DKSE (Driscoll-Kraay Standard Errors) regression methods. Potential policy implications for mitigating the detrimental gender ramifications stemming from climate change and rising world uncertainties are also discussed.

Keywords: gender inequality, world uncertainty, climate change, globalisation, ecological footprint

Procedia PDF Downloads 11
328 Analysis of Cascade Control Structure in Train Dynamic Braking System

Authors: B. Moaveni, S. Morovati

Abstract:

In recent years, the increasing use of railway transportation, especially in developing countries, has drawn more attention to the control systems of railway vehicles. Consequently, designing and implementing modern control systems to improve the operating performance of trains and locomotives has become one of the main concerns of researchers. The dynamic braking system is an important safety system which controls the amount of braking torque generated by the traction motors, in order to keep the adhesion coefficient between the wheel-sets and the rail within the optimum bound. The adhesion force plays an important role in controlling the braking distance and preventing the wheels from slipping during the braking process. The cascade control structure is one of the best control methods for a wide range of industrial plants in the presence of disturbances and errors. This paper presents a cascade control structure based on two simple forward controllers with two feedback loops to control the slip ratio and the braking torque. In this structure, the inner loop controls the angular velocity, and the outer loop controls the longitudinal velocity of the locomotive, whose dynamics are slower than those of the angular velocity. By controlling the torque of the DC traction motors, this control structure tries to track the desired velocity profile in order to achieve the predefined braking distance and to control the slip ratio. Simulation results are presented to show the effectiveness of the introduced methodology for the dynamic braking system.
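
A minimal cascade-control sketch (illustrative only; the first-order plant models, gains, and setpoint are assumptions, not the paper's locomotive and adhesion model): a slow outer PI loop on the longitudinal velocity sets the reference for a fast inner PI loop on the angular velocity, whose output is the torque command.

```python
class PI:
    """Simple PI controller with an internal integral state."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0
    def __call__(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

dt, t_end = 0.001, 10.0
tau_fast, tau_slow = 0.05, 1.0            # inner (angular) vs. outer (longitudinal) dynamics
outer = PI(kp=1.5, ki=0.4, dt=dt)         # outer loop: longitudinal velocity
inner = PI(kp=8.0, ki=20.0, dt=dt)        # inner loop: angular velocity

v_ref = 0.0                               # brake toward standstill (assumed setpoint)
w = v = 20.0                              # inner (fast) and outer (slow) states
for _ in range(int(t_end / dt)):
    w_ref = outer(v_ref - v)              # outer PI sets the inner-loop reference
    u = inner(w_ref - w)                  # inner PI sets the torque command
    w += (u - w) / tau_fast * dt          # fast inner plant (first-order stand-in)
    v += (w - v) / tau_slow * dt          # slow outer plant driven by the inner state
print(f"final longitudinal velocity: {v:.3f}")
```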

Keywords: cascade control, dynamic braking system, DC traction motors, slip control

Procedia PDF Downloads 339
327 Use of a New Multiplex Quantitative Polymerase Chain Reaction Based Assay for Simultaneous Detection of Neisseria meningitidis, Escherichia coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae

Authors: Nastaran Hemmati, Farhad Nikkhahi, Amir Javadi, Sahar Eskandarion, Seyed Mahmuod Amin Marashi

Abstract:

Neisseria meningitidis, Escherichia coli K1, Streptococcus agalactiae, and Streptococcus pneumoniae cause 90% of bacterial meningitis cases. Almost all infected people die or suffer irreversible neurological complications. Therefore, it is essential to have a diagnostic kit with the ability to detect these fatal infections quickly. The project involved 212 patients from whom cerebrospinal fluid samples were obtained. After total genome extraction and multiplex quantitative polymerase chain reaction (qPCR), the presence or absence of each infectious agent was determined by comparison with standard strains. The specificity, sensitivity, positive predictive value, and negative predictive value calculated were 100%, 92.9%, 50%, and 100%, respectively. Therefore, due to the high specificity and sensitivity of the designed primers, they can be used instead of bacterial culture, which takes at least 24 to 48 hours. The remarkable benefit of this method is the speed (up to 3 hours) at which the procedure can be completed. It is also worth noting that this method can reduce the unintentional personnel errors which may occur in the laboratory. Furthermore, as this method simultaneously identifies four common agents that cause bacterial meningitis, it could be used as an auxiliary diagnostic technique in laboratories, particularly in emergency medicine.
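
For readers less familiar with these diagnostic accuracy measures, the sketch below (a generic illustration with made-up confusion-matrix counts, not the study's data) shows how sensitivity, specificity, PPV, and NPV are computed:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for one target organism against the reference standard.
print(diagnostic_metrics(tp=26, fp=2, fn=1, tn=183))
```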

Keywords: cerebrospinal fluid, meningitis, quantitative polymerase chain reaction, simultaneous detection, diagnosis testing

Procedia PDF Downloads 86
326 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of the seepage characteristics. Due to the complexity and nonlinearity of the relationships governing the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves the difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on numerically simulated data generated with the SEEPW software, to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.

Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis

Procedia PDF Downloads 204