Search results for: stochastic errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1359


429 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region with a neural network, and finding the exact contours with a greedy snake algorithm. The method uses both region and contour information to build the target candidate model, and this model is updated dynamically during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also as the initial contour for the greedy snake algorithm, which finds the exact target edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusion, and camera movement. The experimental results show that the use of the neural network increases the accuracy of both tracking and segmentation.
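The particle-filter stage of the pipeline above relies on reweighting and resampling candidate target states each frame. As a minimal sketch, here is the standard systematic-resampling step in Python (a generic textbook routine, not the authors' implementation):

```python
import random

def systematic_resample(particles, weights, seed=0):
    """One systematic-resampling pass: particles with larger weights
    are duplicated and low-weight particles are dropped, keeping the
    population size constant."""
    rng = random.Random(seed)
    n = len(particles)
    total = sum(weights)
    # build the cumulative distribution of normalized weights
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # n evenly spaced pointers with a single random offset
    start = rng.random() / n
    resampled, i = [], 0
    for j in range(n):
        u = start + j / n
        while cdf[i] < u:
            i += 1
        resampled.append(particles[i])
    return resampled
```

In a full tracker, each particle would be a candidate target state (position, scale), and the weights would come from comparing the candidate's region/contour model to the frame.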

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 324
428 Survey of Neonatologists’ Burnout on a Neonatal Surgical Unit: Audit Study from Cairo University Specialized Pediatric Hospital

Authors: Mahmoud Tarek, Alaa Obeida, Mai Magdy, Khalid Hussein, Aly Shalaby

Abstract:

Background: More doctors are complaining of burnout than before. Burnout is a state of physical and mental exhaustion caused by the doctor's lifestyle; unfortunately, medical errors are more likely in those suffering from burnout, and these may result in malpractice suits. Methodology: This is a retrospective audit of burnout responses from all neonatologists over a 9-month period. Data were gathered with a burnout questionnaire completed by 23 physicians, who were divided into 5 categories according to the final score on its 28 questions: category 1 (score 28-38), almost no work stress; category 2 (score 38-50), a low amount of job-related stress; category 3 (score 51-70), a moderate amount of stress; category 4 (score 71-90), a high amount of job stress and the beginning of burnout; and category 5 (score 91 and above), a dangerous amount of stress and an advanced stage of burnout. Results: 33 neonatologists received the questionnaire and 23 responses were returned, a response rate of 69.6%. The results showed that 61% of physicians fell into category 4 and 31% into category 5, while 8% were distributed equally between categories 2 and 3 (4% each). No physician fell into category 1. Conclusion: Burnout is prevalent in SNICUs, so interventions to minimize its prevalence may be of great importance, as burnout may be reflected indirectly in the medical condition of patients and physicians. Efforts should be made to decrease this high rate of burnout.
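The five-category scoring described above is a simple threshold mapping. A sketch of it in Python (note: the abstract's ranges for categories 1 and 2 overlap at a score of 38; assigning that boundary to category 1 is an assumption made here):

```python
def burnout_category(score):
    """Map a total score on the 28-question burnout questionnaire
    to the study's five categories."""
    if score <= 38:
        return 1  # almost no work stress
    if score <= 50:
        return 2  # low job-related stress
    if score <= 70:
        return 3  # moderate stress
    if score <= 90:
        return 4  # high stress, beginning to burn out
    return 5      # dangerous stress, advanced burnout
```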

Keywords: Cairo, work overload, exhaustion, surgery, neonatal ICU

Procedia PDF Downloads 187
427 Modelling Patient Condition-Based Demand for Managing Hospital Inventory

Authors: Esha Saha, Pradip Kumar Ray

Abstract:

A hospital inventory comprises a large number and great variety of items for the proper treatment and care of patients, such as pharmaceuticals, medical equipment, and surgical items. Improper management of these items, i.e., stockouts, may lead to delayed treatment or other fatal consequences, even the death of the patient. Hospitals therefore tend to overstock items to avoid the risk of stockout, which leads to unnecessary investment of money, storage difficulties, more expiration and wastage, etc. In such a challenging environment, hospitals need an inventory policy that accounts for the stochasticity of demand. Statistical analysis captures the correlation of patient condition, represented by bed occupancy, with the stochastically changing patient demand. Based on this dependency on bed occupancy, a Markov model is developed that maps the changes in demand for hospital inventory onto the changes in patient condition, represented by movements among bed occupancy states (acute care, rehabilitative, and long-care states) during the patient's length of stay in the hospital. An inventory policy is developed for a hospital based on the fulfillment of patient demand, with the objective of minimizing the frequency and quantity of orders of inventoried items. The analytical structure of the model, based on probability calculations, is provided to show the optimal inventory-related decisions. A case study illustrates the development of the hospital inventory model based on patient demand for multiple inpatient pharmaceutical items. A sensitivity analysis is conducted to investigate the impact of inventory-related parameters on the optimal inventory policy. The developed model and solution approach may therefore help hospital managers and pharmacists manage hospital inventory under stochastic demand for inpatient pharmaceutical items.
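The core idea above, demand driven by a Markov chain over bed-occupancy states, can be sketched as follows. The transition matrix and per-state demand rates here are hypothetical, chosen only to illustrate the mechanism, not taken from the paper:

```python
# Patient condition evolves over three bed-occupancy states; item
# demand depends on the current state.  All numbers are illustrative.
states = ["acute care", "rehabilitative", "long-care"]
P = [
    [0.6, 0.3, 0.1],  # from acute care
    [0.2, 0.6, 0.2],  # from rehabilitative
    [0.1, 0.2, 0.7],  # from long-care
]
rates = [5.0, 2.0, 1.0]  # assumed units demanded per day in each state

def expected_daily_demand(P, rates, start=0, days=30):
    """Average per-day demand over a `days`-long stay, obtained by
    propagating the state distribution through the chain."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    total = 0.0
    for _ in range(days):
        total += sum(d * r for d, r in zip(dist, rates))
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return total / days
```

An inventory policy would then size orders against this state-dependent demand rather than a single average.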

Keywords: bed occupancy, hospital inventory, markov model, patient condition, pharmaceutical items

Procedia PDF Downloads 305
426 Biosynthesis of Silver Nanoparticles from Leaf Extract of Tithonia diversifolia and Its Antimicrobial Properties

Authors: Babatunde Oluwole Ogunsile, Omosola Monisola Fasoranti

Abstract:

High costs and the toxicological hazards associated with physicochemical methods of producing nanoparticles have limited their widespread use in clinical and biomedical applications. An ethically sound alternative is the utilization of plant bioresources as a low-cost and eco-friendly biological approach. Silver nanoparticles (AgNPs) were synthesized from an aqueous leaf extract of the Tithonia diversifolia plant. A UV-Vis spectrophotometer was used to monitor the formation of the AgNPs at different time intervals and at different ratios of plant extract to AgNO₃ solution. The biosynthesized AgNPs were characterized by FTIR, X-ray diffraction (XRD), and scanning electron microscopy (SEM). Antimicrobial activities of the AgNPs were investigated against ten human pathogens using the agar well diffusion method. The AgNP yields were modeled using a second-order factorial design. The results showed that the rate of formation of the AgNPs increased over time, while the optimum ratio of plant extract to AgNO₃ solution was 1:1. The hydroxyl group was strongly involved in the bioreduction of the silver salt, as indicated by the FTIR spectra. The synthesized AgNPs were crystalline in nature, with a uniformly distributed web-like network structure. The factorial model predicted the nanoparticle yields with minimal errors. The nanoparticles were active against all the tested pathogens and thus have great potential as antimicrobial agents.

Keywords: antimicrobial activities, green synthesis, silver nanoparticles, Tithonia diversifolia

Procedia PDF Downloads 127
425 Camel Mortalities Due to Accidental Intoxication with an Ionophore

Authors: M. A. Abdelfattah, F. K. Waleed

Abstract:

Anticoccidials are widely used in veterinary practice for the prevention of coccidiosis in poultry and play a significant role as growth promotants in ruminants. Ionophore poisoning frequently occurs because of accidental access to medicated feed, errors in feed mixing, incorrect dosage calculation, or use in non-recommended species. Camels on several farms in the eastern area of Saudi Arabia were accidentally fed a feed pellet containing 13 ppm salinomycin. One hundred and sixty-three camels died, a mortality rate of 100%. The poisoning was clinically characterized by restlessness with the tail lifted, jerking of the leg and thigh muscles, excessive sweating, frequent sitting and standing with loss of balance, lateral and sternal recumbency with the legs stretched back, tearing eyes with dilated pupils, vomiting of stomach contents, loss of consciousness, and death. Feed analysis indicated the presence of salinomycin in the pelleted feed in a range of 13-47 mg/kg. Necropsy findings and histopathological examinations are presented. Regulations and legal implications concerning the sale of contaminated feed on the Saudi market are discussed in the light of the feed law and by-laws. Effective enforcement of regulations requiring quality assurance systems based on the principles of Good Manufacturing Practice (GMP) and the application of Hazard Analysis and Critical Control Points (HACCP) during feed production is necessary to avoid such feed accidents.

Keywords: medicated feed, salinomycin, anticoccidial, camel, toxicity

Procedia PDF Downloads 95
424 Psychodidactic Strategies to Facilitate Flow of Logical Thinking in Preparation of Academic Documents

Authors: Deni Stincer Gomez, Zuraya Monroy Nasr, Luis Pérez Alvarez

Abstract:

The preparation of academic documents such as theses, articles, and research projects is one of the requirements of higher education. These documents demand logical argumentative thinking, which students experience and execute with difficulty. To mitigate these difficulties, this study designed a thesis seminar with which the authors have seven years of experience. It is taught in a graduate program in Psychology at the National Autonomous University of Mexico. The authors use the Toulmin model as a mental heuristic and apply a set of psychodidactic strategies that facilitate the development of the argument and the completion of the thesis. The rate of obtaining the degree in the groups exposed to the seminar has risen to 94%, compared with the 10% in the cohorts that were not exposed to it. This article emphasizes the psychodidactic strategies used. The Toulmin model alone does not guarantee the success achieved; a set of psychological (almost psychotherapeutic) and didactic actions by the teacher also seems to contribute. These actions derive from an understanding of the psychological, epistemological, and ontogenetic obstacles, and of the most frequent errors into which thought tends to fall when a logical course is demanded of it. The authors group the strategies into three sets: 1) strategies to facilitate logical thinking, 2) strategies to strengthen the scientific self, and 3) strategies to facilitate the act of writing the text. This work delves into each of them.

Keywords: psychodidactic strategies, logical thinking, academic documents, Toulmin model

Procedia PDF Downloads 165
423 Behavior of Cold Formed Steel in Trusses

Authors: Reinhard Hermawan Lasut, Henki Wibowo Ashadi

Abstract:

The use of materials in Indonesia's construction sector requires engineers and practitioners to develop efficient construction technology; one of the materials used is cold-formed steel. Cold-formed steel is generally used in the roof trusses of houses or factories. Failures of roof truss structures are often caused by errors in the design analysis, such as incorrect cross-sectional dimensions or frame configuration. In a roof truss, the vertical rise relative to the span length affects the edge members of the frame, which carry the compressive load. If the span is too long, local buckling will occur, which compromises the strength of the frame. Truss models with various shapes, span lengths, and angles were analyzed with the structural stiffness matrix method, including models with the span shortened by one-fifth and by one-sixth and models with increasing angles. It can be concluded that shortening the span in the compression area reduces deflection, whereas increasing the angle does not give good results: the higher the roof, the heavier the load carried by the roof, so the forces are not channeled properly. The shape of the truss must be calculated correctly so that it can withstand the working load and no structural failure occurs.

Keywords: cold-formed, trusses, deflection, stiffness matrix method

Procedia PDF Downloads 147
422 Quantification of Glucosinolates in Turnip Greens and Turnip Tops by Near-Infrared Spectroscopy

Authors: S. Obregon-Cano, R. Moreno-Rojas, E. Cartea-Gonzalez, A. De Haro-Bailon

Abstract:

The potential of near-infrared spectroscopy (NIRS) for screening the total glucosinolate (tGSL) content, as well as the aliphatic glucosinolates gluconapin (GNA), progoitrin (PRO), and glucobrassicanapin (GBN), in turnip greens and turnip tops was assessed. This crop is grown for its edible leaves and stems for human consumption. The reference values for glucosinolates, obtained by high-performance liquid chromatography on the vegetable samples, were regressed against different spectral transformations by modified partial least-squares (MPLS) regression (calibration set, n = 350). The resulting models were satisfactory, with calibration coefficients from 0.72 (GBN) to 0.98 (tGSL). The predictive ability of the equations was tested on a set of samples (n = 70) independent of the calibration set. The determination coefficients and standard errors of prediction (SEP) obtained in the external validation were: GNA, 0.94 (SEP = 3.49); PRO, 0.41 (SEP = 1.08); GBN, 0.55 (SEP = 0.60); tGSL, 0.96 (SEP = 3.28). These results show that the equations developed for tGSL and GNA are accurate enough for a fast, non-destructive, and reliable analysis of these compounds in the leaves and stems of this species directly from NIR spectra, while the PRO and GBN equations can be used to classify samples as having high, medium, or low contents.
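The SEP values reported in the external validation are conventionally computed as the root mean square of the bias-corrected residuals. A minimal sketch of that computation (the bias correction is the usual NIRS convention; the abstract does not spell out the exact formula, so this is an assumption):

```python
import math

def sep(y_true, y_pred):
    """Standard error of prediction on an external validation set:
    root mean square of the residuals after removing their mean
    (the bias), with an n-1 denominator."""
    n = len(y_true)
    residuals = [t - p for t, p in zip(y_true, y_pred)]
    bias = sum(residuals) / n
    return math.sqrt(sum((r - bias) ** 2 for r in residuals) / (n - 1))
```

Note that a constant offset between predicted and reference values does not inflate SEP, since it is absorbed into the bias term.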

Keywords: Brassica rapa, glucosinolates, gluconapin, NIRS, turnip greens

Procedia PDF Downloads 124
421 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival

Authors: Li Yin, Xiaoqin Wang

Abstract:

Background and purpose: Cancer diagnosis is part of a complex stochastic process in which patients' personal and social characteristics influence the choice of diagnostic method; the diagnostic method, in turn, influences the initial assessment of cancer stage; the initial assessment influences the choice of treatment; and the treatment, in turn, influences cancer outcomes such as survival. To evaluate diagnostic methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula that expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage is influenced by the earlier diagnosis and in turn influences the subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via the standard parameters, that is, the conditional survival given all stationary covariates, diagnostic methods, cancer stage, prognostic factors, and treatment methods. Instead of the standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes, which allows us to accomplish the task with familiar methods from the framework of single-point causal inference. Achievements: We have applied this method to stomach cancer survival data from a clinical study in Sweden. We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments, and the moderation of the causal effect by age and gender.

Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect

Procedia PDF Downloads 177
420 Root Mean Square-Based Method for Fault Diagnosis and for Detection and Isolation of Current Sensor Faults in an Induction Machine

Authors: Ahmad Akrad, Rabia Sehab, Fadi Alyoussef

Abstract:

Nowadays, induction machines are widely used in industry thanks to their advantages over other technologies; indeed, they are in high demand because of their reliability, robustness, and low cost. The objective of this paper is to deal with the diagnosis, detection, and isolation of faults in a three-phase induction machine. Among possible faults, the inter-turn short-circuit fault (ITSC), current sensor faults, and the single-phase open-circuit fault are considered. A fault detection method is proposed that uses residual errors generated from the root mean square (RMS) of the phase currents. The method is based on an asymmetric nonlinear model of the induction machine that represents the winding fault in a three-axis frame state space. In addition, current sensor redundancy and sensor fault detection and isolation (FDI) are adopted to ensure the safe operation of the induction machine drive. Finally, a validation is carried out by simulation in healthy and faulty operating modes to show the ability of the proposed method to detect and locate the three types of faults with high reliability.
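The core of the detection idea, residuals built from the RMS of the phase currents, can be illustrated with a simplified balanced/unbalanced test. This is a sketch only: the 10% threshold is an assumed value, and the paper's method works on residuals from a full nonlinear machine model rather than this direct comparison:

```python
import math

def rms(samples):
    """Root mean square of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def detect_phase_fault(ia, ib, ic, tol=0.1):
    """Flag a fault when any phase's RMS deviates from the mean phase
    RMS by more than `tol` (relative): a balanced healthy machine has
    equal phase RMS values."""
    r = [rms(ia), rms(ib), rms(ic)]
    mean = sum(r) / 3
    residuals = [abs(x - mean) / mean for x in r]
    return max(residuals) > tol, residuals

# one electrical period of a balanced three-phase current set
N = 100
ia = [math.sin(2 * math.pi * k / N) for k in range(N)]
ib = [math.sin(2 * math.pi * k / N - 2 * math.pi / 3) for k in range(N)]
ic = [math.sin(2 * math.pi * k / N + 2 * math.pi / 3) for k in range(N)]
healthy_flag, _ = detect_phase_fault(ia, ib, ic)
# a 50% amplitude drop on one phase mimics a winding/sensor fault
faulty_flag, _ = detect_phase_fault(ia, ib, [0.5 * s for s in ic])
```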

Keywords: induction machine, asymmetric nonlinear model, fault diagnosis, inter-turn short-circuit fault, root mean square, current sensor fault, fault detection and isolation

Procedia PDF Downloads 173
419 Development of a Work-Related Stress Management Program Guaranteeing Fitness-For-Duty for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

Human error is one of the most dreaded factors that may result in unexpected accidents, especially in nuclear power plants. For accident prevention, it is indispensable to analyze and manage the influence of any factor that may raise the possibility of human error. Among many factors, stress has been reported to have a significant influence on human performance. Therefore, this research aimed to develop a work-related stress management program that can guarantee the Fitness-for-Duty (FFD) of workers in nuclear power plants, especially those working in main control rooms. Major stress factors were elicited through literature surveys and classified into major categories such as demands, support, and relationships. To manage those factors, a test and intervention program based on a 4-level approach was developed covering the whole employment cycle, including the selection and screening of workers, job allocation, and job rotation. In addition, a managerial care program was introduced based on the concept of an Employee Assistance Program (EAP). Reviews of the program by former operators of nuclear power plants were affirmative and suggested additional measures to guarantee high human performance, not only in normal operations but also in emergency situations.

Keywords: human error, work performance, work stress, Fitness-For-Duty (FFD), Employee Assistance Program (EAP)

Procedia PDF Downloads 390
418 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that have occurred or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods for selecting and processing texts from the global Internet are developed; information in Romanian is of special interest to us. Obtaining these tools requires several steps, divided into a preparatory stage and a processing stage. In the first stage, we manually collected over 724 news articles, more than 150 thousand words in total, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets is used. We deal with the problem of evacuating inhabitants within the available time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine the dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) analysis, simulation, state space analysis, and invariant analysis, were used. These modules helped us obtain the average number of persons in the rooms and other quantitative properties and characteristics of the system's dynamics.

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 187
417 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks

Authors: Waleed Basuliman

Abstract:

Anthropometric dimensions are considered one of the important factors when designing human-machine systems. In this study, the estimation of anthropometric dimensions was improved by an Artificial Neural Network (ANN) model able to predict the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated, and 20 anthropometric dimensions were measured; these measurements are important for designing work and life applications in Saudi Arabia. The data were collected over eight months at different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were the predicted variables (the model's outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively; these errors are lower, and thus better, than those reported in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing its predictions with those of a regression model. The ANN model showed a higher coefficient of determination (R²) between the predicted and actual dimensions than the regression model.
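For reference, the two error measures quoted (MAPE 0.0348 and RMSE 3.225) are computed as below. This is a minimal sketch; expressing MAPE as a fraction rather than a percentage is an assumption based on the magnitude the abstract reports:

```python
import math

def mape(y_true, y_pred):
    """Mean absolute percentage error, as a fraction of the true value."""
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))
```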

Keywords: artificial neural network, anthropometric measurements, back-propagation

Procedia PDF Downloads 471
416 Permanent Reduction of Arc Flash Energy to Safe Limit on Line Side of 480 Volt Switchgear Incomer Breaker

Authors: Abid Khan

Abstract:

A recognized engineering challenge is the protection of personnel from fatal arc flash incident energy on the line side of 480-volt switchgear incomer breakers during maintenance activities. The incident energy is typically high due to slow fault clearance and can exceed the ratings of the available personal protective equipment (PPE). A fault in this section of the switchgear is cleared by breakers or fuses in the upstream higher-voltage system (4160 volts or higher). The current reflected in the higher-voltage upstream system for a fault in the 480-volt switchgear is low, so the clearance time is long and the incident energy, which increases with clearing time, is hence high. Installing overcurrent protection on the 480-volt system upstream of the incomer breaker provides operation fast enough to trip the upstream higher-voltage breaker when a fault develops at the incomer breaker, eliminating the slow clearance caused by the low reflected fault current in the upstream higher-voltage system. Since the fast overcurrent protection is permanently installed, it is always functional, does not require human intervention, and eliminates exposure to human error. It is installed at the location of the maintenance activities, and its operation can be monitored locally by craftsmen during maintenance.

Keywords: arc flash, mitigation, maintenance switch, energy level

Procedia PDF Downloads 182
415 Establishment and Application of Numerical Simulation Model for Shot Peen Forming Stress Field Method

Authors: Shuo Tian, Xuepiao Bai, Jianqin Shang, Pengtao Gai, Yuansong Zeng

Abstract:

Shot peen forming is an essential forming process for aircraft metal wing panels. With the development of computer simulation technology, scholars have proposed a numerical simulation method for shot peen forming based on the stress field. Three shot peen forming indexes, crater diameter, shot speed, and surface coverage, are required as simulation parameters in the stress field method. It is therefore necessary to establish the relationship between simulation parameters and experimental process parameters in order to simulate the deformation under different shot peen forming parameters. Shot peen forming tests on 2024-T351 aluminum alloy workpieces were carried out using a uniform experimental design with three factors: air pressure, feed rate, and shot flow. From the results, a second-order response surface model between the simulation parameters and the uniform test factors was established by stepwise regression in MATLAB. The response surface model was then combined with the stress field method to simulate the shot peen forming deformation of the workpiece. Compared with the experimental results, the simulated values were smaller than the corresponding test values; the maximum and average errors were 14.8% and 9%, respectively.
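A full second-order response surface of the kind described (intercept, linear, quadratic, and interaction terms) can be fitted by least squares. The sketch below uses plain ordinary least squares in Python as a simplified stand-in for the stepwise regression the authors ran in MATLAB, and recovers known coefficients from noiseless synthetic data:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """OLS fit of y = b0 + sum_i b_i x_i + sum_i b_ii x_i^2
                       + sum_{i<j} b_ij x_i x_j."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]           # quadratic terms
    cols += [X[:, i] * X[:, j]                         # interaction terms
             for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# synthetic two-factor example with known coefficients
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = 1 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 + X[:, 0] * X[:, 1]
beta = fit_quadratic_rsm(X, y)
```

Stepwise regression would additionally drop the statistically insignificant columns; OLS keeps them all.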

Keywords: shot peen forming, process parameter, response surface model, numerical simulation

Procedia PDF Downloads 62
414 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window widths for the binary offset carrier (BOC) and multiplexed binary offset carrier (MBOC) signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion of the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained from the incoming signal) and the local discriminator (generated from a local copy of the signal). The variable window width has shown better performance than the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is well suited for implementation in low-cost receivers.

Keywords: correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems

Procedia PDF Downloads 119
413 Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand

Authors: Saeed Poormoaied

Abstract:

Perishability and the development of an intelligent control policy for perishable items are major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy that takes both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model assumes a positive, fixed lead time for the buyer and zero lead time for the vendor. Demand follows a Poisson process, and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime, after which they become unusable and are disposed of from the buyer's system. The age of an item starts when it is unpacked and ready for consumption at the buyer; while items are held by the vendor, there is no aging process and thus no perishing at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under a service level constraint at the retailer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
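The lost-sales, Poisson-demand setting above can be illustrated with a small Monte-Carlo estimate of the fill rate during a replenishment lead time. This is a simplified stand-in for the paper's exact renewal-reward analysis, with no aging modeled and all parameter values illustrative:

```python
import math
import random

def simulate_fill_rate(demand_rate, stock, lead_time,
                       n_runs=10000, seed=1):
    """Estimate the fraction of lead-time demand met from `stock`
    units, with Poisson demand and lost sales."""
    rng = random.Random(seed)
    lam = demand_rate * lead_time
    threshold = math.exp(-lam)
    met = total = 0
    for _ in range(n_runs):
        # draw a Poisson(lam) variate by Knuth's product method
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        met += min(k, stock)
        total += k
    return met / total if total else 1.0
```

A service level constraint of the kind the paper imposes would then require choosing the policy parameters so that this fraction stays above a target.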

Keywords: two-echelon supply chain, perishable items, age-based policy, renewal reward theorem

Procedia PDF Downloads 129
412 Applying Dictogloss Technique to Improve Auditory Learners’ Writing Skills in Second Language Learning

Authors: Aji Budi Rinekso

Abstract:

There are some common problems that students often face in writing, related to both the macro and micro skills of writing, such as incorrect spelling, inappropriate diction, grammatical errors, disorganized ideas, and irrelevant supporting sentences. A teaching technique is therefore needed that can address these problems. The dictogloss technique is a teaching technique that involves listening practice, making it suitable for students with an auditory learning style. It comprises four basic steps: (1) warm-up, (2) dictation, (3) reconstruction, and (4) analysis and correction. In the warm-up, students find out about the topic and do some preparatory vocabulary work. In the dictation, students listen to a text read at normal speed by the teacher; the text is read twice, with students only listening the first time and taking notes the second. In the reconstruction, students discuss the information from the text and begin to write their own version. Finally, in the analysis and correction step, students check their writing and revise it. Dictogloss offers clear advantages for improving writing skills: through it, students can address problems in both macro and micro skills, generating ideas more easily and improving their writing mechanics.

Keywords: auditory learners, writing skills, dictogloss technique, second language learning

Procedia PDF Downloads 129
411 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks

Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian

Abstract:

Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including the recently developed artificial intelligence methods. In recent years, optimization algorithms have been used to minimize artificial neural network errors, such as colony algorithms, genetic algorithms, imperialist competitive algorithms, and so on. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests on short piles in clayey soils, with parameters such as pile diameter, pile buried length, load eccentricity, and undrained shear resistance of the soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for training data and 0.975 for test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks over back-propagation artificial neural networks as well as the Broms and Hansen methods.
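As a rough illustration of the optimization idea only (not the authors' actual network, data, or algorithm settings), the sketch below uses a heavily simplified imperialist competitive algorithm to fit a small linear capacity model to synthetic "pile" data; every variable name, parameter value, and the synthetic dataset are assumptions.

```python
import numpy as np

def ica_optimize(cost, dim, n_countries=40, n_imp=4, iters=200,
                 beta=2.0, rev_rate=0.1, seed=0):
    """Minimal imperialist-competitive-algorithm sketch.

    Countries are candidate parameter vectors; after sorting by cost,
    the best n_imp act as imperialists and the remaining colonies are
    assimilated toward them, with occasional random 'revolutions'.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, (n_countries, dim))
    for _ in range(iters):
        costs = np.array([cost(c) for c in pop])
        pop = pop[np.argsort(costs)]           # imperialists come first
        for i in range(n_imp, n_countries):
            imp = pop[i % n_imp]               # assign colony to an empire
            pop[i] = pop[i] + beta * rng.random(dim) * (imp - pop[i])
            if rng.random() < rev_rate:        # revolution: random jump
                pop[i] += rng.normal(0.0, 0.1, dim)
    costs = np.array([cost(c) for c in pop])
    return pop[np.argmin(costs)]

# Hypothetical synthetic data: capacity as a linear function of scaled
# pile diameter, embedded length, and undrained shear strength.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (80, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w
mse = lambda w: np.mean((X @ w - y) ** 2)
w_best = ica_optimize(mse, dim=3)
```

In the study itself the ICA tunes neural-network weights rather than a linear model; the point here is only how the imperialist/colony mechanics replace gradient-based error minimization.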

Keywords: artificial neural network, clayey soil, imperialist competition algorithm, lateral bearing capacity, short pile

Procedia PDF Downloads 130
410 Quantification of Lustre in Textile Fibers by Image Analysis

Authors: Neelesh Bharti Shukla, Suvankar Dutta, Esha Sharma, Shrikant Ralebhat, Gurudatt Krishnamurthy

Abstract:

A key component of the physical attributes of textile fibers is lustre. It is a complex phenomenon arising from the interaction of light with fibers, yarn, and fabrics, and is perceived as the contrast between bright areas (specular reflection) and duller backgrounds (diffused reflection). The lustre of fibers is affected by their surface structure, morphology, and cross-section profile, as well as by the presence of any additives/registrants. Due to complexities in measurement, objective instruments such as gloss meters do not give reproducible quantification of lustre, and other instruments such as SAMBA hair systems are expensive. In light of this, lustre quantification has largely remained subjective: judged visually by experts, but prone to errors. In this development, a physics-based approach was conceptualized and demonstrated. We have developed an image-analysis-based technique to quantify visually observed differences in the lustre of fibers. Cellulosic fibers produced with different approaches, with visually different levels of lustre, were photographed under controlled optics. These images were subsequently analyzed using a configured software system. The ratio of the intensity of light from bright (specular reflection) and dull (diffused reflection) areas was used to numerically represent lustre. In the next step, sample sets that were not easily distinguishable visually were also evaluated by the technique, and it was established that quantification of lustre is feasible.
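The bright/dull intensity ratio described above could be computed along these lines. This is only a sketch: the percentile thresholds and the synthetic test images are illustrative assumptions, not the configured software system used in the study.

```python
import numpy as np

def lustre_index(image, specular_percentile=95, diffuse_percentile=50):
    """Toy lustre metric: mean intensity of the brightest (specular)
    pixels divided by mean intensity of the duller (diffuse)
    background, following the bright/dull contrast idea above."""
    img = np.asarray(image, dtype=float)
    hi = np.percentile(img, specular_percentile)
    lo = np.percentile(img, diffuse_percentile)
    specular = img[img >= hi].mean()   # bright, specular regions
    diffuse = img[img <= lo].mean()    # dull, diffuse background
    return specular / diffuse
```

A fiber image with a bright specular streak would score higher on this index than a uniformly dull one, which is the ordering the study verifies against expert visual judgment.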

Keywords: lustre, fibre, image analysis, measurement

Procedia PDF Downloads 157
409 The Effect of Damper Attachment on Tennis Racket Vibration: A Simulation Study

Authors: Kuangyou B. Cheng

Abstract:

Tennis is among the most popular sports worldwide. During ball-racket impact, substantial vibration transmitted to the hand/arm may be a cause of "tennis elbow". Although it is common for athletes to attach a "vibration damper" to the string bed, its effect remains unclear. To avoid subjective factors and errors in data recording, the effect of damper attachment on vibration at the racket handle end was investigated with computer simulation. The tennis racket was modeled as a beam with free-free ends (similar to loosely holding the racket), and the finite difference method with 40 segments was used to simulate the ball-racket impact response. The effect of attaching a damper was modeled as a segment with increased mass. It was found that the damper has the largest effect when installed at the center of the string bed; however, this is not a practical location because it interferes with ball-racket impact. Vibration amplitude changed very slightly when the damper was near the top or bottom of the string bed, with the damper working only slightly better at the bottom than at the top. In addition, heavier dampers work better than lighter ones. These simulation results were comparable with experimental recordings in which the selection of damper locations was restricted by ball impact locations. It was concluded that mathematical model simulations are able to objectively investigate the effect of damper attachment on racket vibration. Moreover, given the very slight difference in grip-end vibration amplitude when the damper is attached at the top or bottom of the string bed, whether the effect can really be felt by athletes is questionable.
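A much-simplified lumped-parameter analogue of this kind of simulation (a mass-spring chain rather than the paper's 40-segment free-free beam; all physical constants are assumed) shows how a damper can be modeled as added mass at one segment:

```python
import numpy as np

def handle_rms(damper_node=None, damper_mass=0.003, n=40, steps=4000):
    """Mass-spring-chain sketch of racket vibration: an impulse at the
    'impact' node excites the free-free chain, and we report the RMS
    displacement of the handle end (node 0) relative to the chain's
    mean motion. A damper is modeled as extra mass at one segment."""
    m = np.full(n, 0.007)                  # segment masses, kg (assumed)
    if damper_node is not None:
        m[damper_node] += damper_mass      # damper = added mass
    k, dt = 2000.0, 1e-4                   # stiffness, time step (assumed)
    u, v = np.zeros(n), np.zeros(n)
    impact = n // 2
    v[impact] = 0.01 / m[impact]           # fixed impulse (momentum) at impact
    rec = []
    for _ in range(steps):                 # symplectic Euler integration
        f = np.zeros(n)
        f[1:] += k * (u[:-1] - u[1:])      # neighbour spring forces,
        f[:-1] += k * (u[1:] - u[:-1])     # free ends get one neighbour only
        v += dt * f / m
        u += dt * v
        rec.append(u[0] - u.mean())        # handle end, drift removed
    return float(np.sqrt(np.mean(np.square(rec))))
```

Comparing `handle_rms()` with `handle_rms(damper_node=30)` shows how the simulation framework isolates the damper's effect, though this toy chain is far cruder than a beam model and its numbers are not meaningful quantitatively.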

Keywords: finite difference, impact, modeling, vibration amplitude

Procedia PDF Downloads 243
408 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions: the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Their findings showed that the probability density function (pdf) is fairly complex, which creates problems in parameter estimation: the estimators cannot be calculated in closed form, so numerical estimation must be used instead. In this study, we presented a new method of parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. The data were generated by the acceptance-rejection method and used for estimating α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. The Monte Carlo technique was used to assess the estimators' performance, with sample sizes of 10, 30, and 100; the simulations were repeated 20 times in each case. We evaluated the effectiveness of the proposed estimators by considering the mean squared errors and the bias. The findings revealed that the EM algorithm produced estimates close to the actual values, and that the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than those obtained via the EM algorithm.
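The acceptance-rejection step used to generate the data can be sketched as follows, here with a simple Beta(2, 2) stand-in target on [0, 1] rather than the mixture generalized gamma density:

```python
import random

def accept_reject(pdf, pdf_max, n, seed=0):
    """Acceptance-rejection sampler on [0, 1]: draw x ~ Uniform(0, 1)
    and accept it with probability pdf(x) / pdf_max. The paper applies
    the same mechanism to generate mixture generalized gamma data."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()                      # proposal draw
        if rng.random() * pdf_max <= pdf(x):  # accept with prob f(x)/M
            out.append(x)
    return out

# Stand-in target: Beta(2, 2) density f(x) = 6 x (1 - x), maximum 1.5 at x = 0.5
samples = accept_reject(lambda x: 6 * x * (1 - x), 1.5, 5000)
```

The accepted draws then feed the EM, conjugate gradient, or quasi-Newton estimators whose precision the study compares.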

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 205
407 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm was applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behavior of fireflies; in this context, it was used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using different standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction in computational cost.
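The EM and Bayes-rule pipeline described above can be sketched for 1-D intensities. This is a bare-bones stand-in: the Firefly-Algorithm search is replaced here by user-supplied initial means, and all test data are synthetic.

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=50):
    """Bare-bones EM for a 1-D Gaussian mixture over grayscale
    intensities; `means` plays the role of the Firefly-Algorithm
    initialization described in the paper."""
    k = len(means)
    mu = np.asarray(means, dtype=float)
    var = np.full(k, x.var() / k)
    w = np.full(k, 1.0 / k)                  # mixing coefficients (priors)
    for _ in range(n_iter):
        # E-step: posterior responsibilities via Bayes' rule
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    labels = np.argmax(r, axis=1)            # pixel assignment by max posterior
    return mu, labels
```

The final `argmax` over responsibilities is exactly the cheap pixel-assignment step the abstract highlights as a computational advantage.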

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 202
406 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network

Authors: P. Karthick, K. Mahesh

Abstract:

Video has become an increasingly significant component of our everyday digital lives. With richer content and ever-higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video. In this paper, we propose an end-to-end deep video compression model that jointly optimizes all video compression components. The method involves splitting the video into frames, comparing images using convolutional neural networks (CNN) to remove duplicates, and replacing duplicate images with a single image plus the minute changes between frames, which are recognized and detected using a generative adversarial network (GAN) and recorded with long short-term memory (LSTM). Substituting only the small changes generated by the GAN, instead of the complete image, helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over each frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame of the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are then packed with parameters with the aid of a codec and converted back to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate: on average, the result had approximately a 10% deviation in quality and more than a 50% reduction in size compared with the original video.
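The per-channel SVD step can be sketched as follows. This is a minimal illustration of low-rank frame compression only, not the full CNN/GAN/LSTM pipeline, and the frame data used below are synthetic.

```python
import numpy as np

def svd_compress(frame, rank):
    """Per-channel low-rank approximation: keep the top `rank` singular
    values of each R, G, B channel, mirroring the SVD step the paper
    applies to every frame to shrink the [R, G, B] utility matrices."""
    out = np.empty(frame.shape, dtype=float)
    for c in range(3):
        u, s, vt = np.linalg.svd(frame[..., c], full_matrices=False)
        # Reconstruct from the leading `rank` latent factors only
        out[..., c] = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return out
```

Keeping more singular values lowers the reconstruction error at the cost of less compression, which is the trade-off behind the reported size/quality deviation.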

Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system

Procedia PDF Downloads 168
405 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults

Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu

Abstract:

A strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be utilized to estimate the age of an individual by measuring the pulp cavity size on dental radiographs, a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 − (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on the results of this research, it was concluded that Cameriere's method is suitable for dental age estimation and can be used in forensic procedures for Turkish adults.
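The derived regression equation translates directly into code (the sample AR value below is hypothetical, chosen only to show the direction of the relationship):

```python
def estimated_age(area_ratio):
    """Age estimate from the pulp/tooth area ratio (AR) of the maxillary
    canine, using the regression equation derived in the study:
    Estimated Age = 77.365 - 351.193 * AR."""
    return 77.365 - 351.193 * area_ratio
```

Because the correlation is negative, a larger pulp/tooth ratio (a younger, larger pulp cavity) yields a lower estimated age; for instance, an AR of 0.08 gives roughly 49.3 years.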

Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method

Procedia PDF Downloads 432
404 Militating Factors Against Building Information Modeling Adoption in Quantity Surveying Practice in South Africa

Authors: Kenneth O. Otasowie, Matthew Ikuabe, Clinton Aigbavboa, Ayodeji Oke

Abstract:

Quantity surveying (QS) is one of the professions in the construction industry, and it is saddled with the responsibility of measuring the quantities of materials, as well as the workmanship, required to get work done in the industry. This responsibility is vital to the success of a construction project, as it determines whether a project will be completed on time, within budget, and to the required standard. However, the practice has been repeatedly criticised for failing to execute this responsibility accurately. The need to reduce errors, inaccuracies, and omissions has made the adoption of modern technologies such as building information modeling (BIM) inevitable in QS practice. Nevertheless, there are barriers to the adoption of BIM in QS practice in South Africa (SA), and this study aims to investigate them. A survey design was adopted: a total of one hundred and fifteen (115) questionnaires were administered to quantity surveyors in Gauteng Province, SA, and ninety (90) were returned and found suitable for analysis. Collected data were analysed using percentages, mean item scores, standard deviations, the one-sample t-test, and the Kruskal-Wallis test. The findings show that lack of BIM expertise, lack of government enforcement, resistance to change, and the absence of client demand for BIM are the most significant barriers to the adoption of BIM in QS practice. As a result, this study recommends that training on BIM technology be prioritised and that government take the lead in BIM adoption in the country, particularly in public projects.

Keywords: barriers, BIM, quantity surveying practice, South Africa

Procedia PDF Downloads 83
403 Integral Form Solutions of the Linearized Navier-Stokes Equations without Deviatoric Stress Tensor Term in the Forward Modeling for FWI

Authors: Anyeres N. Atehortua Jimenez, J. David Lambraño, Juan Carlos Muñoz

Abstract:

The Navier-Stokes equations (NSE), which describe the dynamics of a fluid, have an important application in modeling the waves used for data inversion techniques such as full waveform inversion (FWI). In this work, a linearized version of the NSE and its variables, neglecting the deviatoric terms of the stress tensor, is presented. In order to obtain a theoretical model of the pressure p(x,t) and the wave velocity profile c(x,t), a wave equation for a visco-acoustic medium (VAE) is written. A change of variables, p(x,t) = q(x,t)h(ρ), is made in the VAE equation, leading to the well-known Klein-Gordon equation (KGE) describing waves propagating in a variable-density medium (ρ) with dispersive term α²(x). The KGE is reduced to a Poisson equation and solved by proposing a specific function for α²(x) that accounts for energy dissipation and dispersion. Finally, integral form solutions are derived for p(x,t), c(x,t), and kinematic variables such as the particle velocity v(x,t), the displacement u(x,t), and the bulk modulus function k_b(x,t). Further, this visco-acoustic formulation is compared with another form broadly used in geophysics; it is argued that this formalism is more general and, given its integral form, may offer several advantages from the modern parallel computing point of view. Applications to minimizing modeling errors in FWI applied to oil resources in geophysics are discussed.
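Schematically, the chain of reductions described above can be written as follows. This is only a structural sketch: the exact forms of h(ρ) and α²(x) are those proposed in the paper and are not reproduced here.

```latex
% Change of variables p(x,t) = q(x,t) h(\rho) applied to the
% visco-acoustic wave equation yields a Klein--Gordon-type equation
% for q with a dispersive coefficient alpha^2(x):
\[
  \frac{1}{c^{2}}\,\frac{\partial^{2} q}{\partial t^{2}}
  \;=\; \nabla^{2} q \;-\; \alpha^{2}(x)\, q ,
\]
% which, once a specific alpha^2(x) accounting for dissipation and
% dispersion is chosen and the time dependence is handled, reduces to
% a Poisson-type problem admitting integral-form solutions for
% p(x,t), c(x,t), v(x,t), u(x,t) and k_b(x,t).
```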

Keywords: Navier-Stokes equations, modeling, visco-acoustic, inversion FWI

Procedia PDF Downloads 503
402 Evaluating Language Loss Effect on Autobiographical Memory by Examining Memory Phenomenology in Bilingual Speakers

Authors: Anastasia Sorokina

Abstract:

Gradual language loss, or attrition, has been well documented in individuals who migrate and become immersed in a different language environment. This phenomenon of first language (L1) attrition is non-pathological (not due to trauma) and can manifest itself in frequent pauses, searching for words, or grammatical errors. While the widely experienced loss of one's first language might seem harmless, there is convincing evidence from the disciplines of Developmental Psychology, Bilingual Studies, and even Psychotherapy that language plays a crucial role in the memory of self. In fact, we remember, store, and share personal memories with the help of language, and Dual-Coding Theory suggests that deterioration of the language memory code could lead to forgetting. Yet, no one has investigated a possible connection between language loss and memory. The present study aims to address this research gap by examining a corpus of 1,495 memories of Russian-English bilinguals who are on a continuum of L1 attrition. Since phenomenological properties capture how well a memory is remembered, the following descriptors were selected: vividness, ease of recall, emotional valence, personal significance, and confidence in the event. A series of linear regression analyses were run to examine the possible negative effects of L1 attrition on autobiographical memory. The results revealed that L1 attrition might compromise perceived vividness and confidence in the event, which is indicative of memory deterioration. These findings suggest the importance of heritage language maintenance in immigrant communities, whose members might be forced to assimilate, as language loss might negatively affect the memory of self.
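The regression analyses described above amount to fitting models of the form rating = β₀ + β₁ · attrition + ε, one per phenomenological descriptor, and inspecting the sign of β₁. A minimal sketch with synthetic data (the variable names and generated numbers are illustrative only, not the study's corpus):

```python
import numpy as np

def attrition_slope(attrition, rating):
    """Ordinary least-squares slope of a memory-phenomenology rating
    (e.g. vividness) on an L1-attrition score; a negative slope would
    indicate deterioration, as in the analyses described above."""
    X = np.column_stack([np.ones_like(attrition), attrition])
    coef, *_ = np.linalg.lstsq(X, rating, rcond=None)
    return coef[1]   # beta_1: effect of attrition on the rating
```

Running this once per descriptor (vividness, ease of recall, and so on) reproduces the "series of linear regression analyses" design.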

Keywords: L1 attrition, autobiographical memory, language loss, memory phenomenology, dual coding

Procedia PDF Downloads 94
401 Tuning of Kalman Filter Using Genetic Algorithm

Authors: Hesham Abdin, Mohamed Zakaria, Talaat Abd-Elmonaem, Alaa El-Din Sayed Hafez

Abstract:

The Kalman filter algorithm is an estimator known as the workhorse of estimation. It has an important application in missile guidance, especially when accurate data on the target are lacking due to noise or uncertainty. In this paper, a Kalman filter is used as a tracking filter in a simulated target-interceptor scenario with noise. It estimates the position, velocity, and acceleration of the target in the presence of noise; these estimates are needed for both proportional navigation and differential geometry guidance laws. A Kalman filter performs well at low noise, but large noise causes considerable errors that lead to performance degradation. Therefore, a new technique is required to overcome this defect, using tuning factors that adapt the Kalman filter to increasing noise. The tuning factors take values between 0.8 and 1.2, with one value over the first half of the range and a different value over the second half, and they are multiplied by the estimated values. These factors have optimum values and are altered as the target heading changes. A genetic algorithm updates these selections to recover the maximum effective range that was previously reduced by noise. The results show that the selected factors have additional benefits, such as decreasing the minimum effective range that had been increased by noise. In addition, the selected factors decrease the miss distance for all ranges in this target direction and expand the effective range, which increases the probability of kill.
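A minimal 1-D constant-velocity sketch of such a tracking filter, with a single multiplicative tuning factor standing in for the GA-selected 0.8-1.2 factors (all model matrices, noise levels, and the scenario are assumptions, far simpler than the target-interceptor simulation in the paper):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=4.0, tune=1.0):
    """1-D constant-velocity Kalman tracker over position measurements.
    `tune` is a multiplicative tuning factor applied to the position
    estimate, mimicking (in greatly simplified form) the factors the
    genetic algorithm selects in the paper."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
    H = np.array([[1.0, 0.0]])                  # position-only measurement
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2) * 100.0
    est = []
    for z in zs:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)   # update with measurement
        P = (np.eye(2) - K @ H) @ P
        est.append(tune * x[0, 0])              # tuned position estimate
    return np.array(est)
```

With the filter model matched to the motion, the tracked estimate is much less noisy than the raw measurements; a tuning factor only pays off when, as in the paper, it compensates for mismatch and noise rather than being applied blindly.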

Keywords: proportional navigation, differential geometry, Kalman filter, genetic algorithm

Procedia PDF Downloads 491
400 Mathematical Modeling of the Operating Process and a Method to Determine the Design Parameters in an Electromagnetic Hammer Using Solenoid Electromagnets

Authors: Song Hyok Choe

Abstract:

This study presents a method to determine the optimum design parameters of a manual electromagnetic hammer using solenoid electromagnets, based on a mathematical model of its operating process. The operating process of the electromagnetic hammer depends on the circuit scheme of the power controller. Mathematical modeling of the operating process was carried out by considering the energy transfer process in the forward and reverse windings and the electromagnetic force acting on the impact and brake pistons. Using the developed mathematical model, the initial design data of the manual electromagnetic hammer proposed in this paper were encoded and analyzed in Matlab. A measurement experiment was also carried out to check the accuracy of the developed mathematical model. The relative errors of the analytical results for the measured stroke distance of the impact piston, the peak forward-stroke current, and the peak reverse-stroke current were −4.65%, 9.08%, and 9.35%, respectively. It was thus shown that the mathematical model of the operating process of the electromagnetic hammer is reasonably accurate and can be used to determine its design parameters. The design parameters that provide the required impact energy in the manual electromagnetic hammer were then determined using the developed model. The proposed method will be used for the further design and development of various types of percussion rock drills.

Keywords: solenoid electromagnet, electromagnetic hammer, stone processing, mathematical modeling

Procedia PDF Downloads 16