Search results for: likelihood estimation method
19257 Optimization Techniques for Microwave Structures
Authors: Malika Ourabia
Abstract:
A new and efficient method is presented for the analysis of arbitrarily shaped discontinuities. The discontinuities are characterized using a hybrid spectral/numerical technique. The structure presents an arbitrary number of ports, each with a different orientation and dimensions. This article presents a hybrid method based on multimode contour integral and mode matching techniques. The process is based on segmentation, dividing the structure into key building blocks. We use the multimode contour integral method to analyze the blocks containing irregularly shaped discontinuities. Finally, the multimode scattering matrix of the whole structure can be found by cascading the blocks. The new method is therefore suitable for the analysis of a wide range of waveguide problems and can be applied easily to the analysis of any multiport junctions and cascaded blocks. The accuracy of the method is validated by comparison with results for several complex problems found in the literature. CPU times are also included to show the efficiency of the proposed method. Keywords: segmentation, s parameters, simulation, optimization
Procedia PDF Downloads 530
19256 Enhancing Goal Achievement through Improved Communication Skills
Abstract:
An extensive body of research suggests that students, teachers, and supervisors can enhance the likelihood of reaching their goals by improving their communication skills. Learning how and when to provide different kinds of feedback (e.g., anticipatory, corrective, and positive) yields better results and higher morale. The purpose of this mixed methods research is twofold: 1) to find out what factors affect effective communication among different stakeholders and how these factors affect student learning, and 2) to identify good practices for improving communication among different stakeholders and improving student achievement. This presentation begins with an introduction to recent research on Marshall’s Nonviolent Communication (NVC) techniques, including four important components: observations, feelings, needs, and requests. These techniques can be effectively applied at all levels of communication. To develop an in-depth understanding of the relationships among the different techniques, this research collected, compared, and combined qualitative and quantitative data to improve communication and support student learning. Keywords: communication, education, language learning, goal achievement, academic success
Procedia PDF Downloads 73
19255 Optimization of Surface Roughness by Taguchi’s Method for Turning Process
Authors: Ashish Ankus Yerunkar, Ravi Terkar
Abstract:
This study aimed at evaluating the best process environment that could simultaneously satisfy requirements of both quality and productivity, with special emphasis on the reduction of cutting tool flank wear, because reduction in flank wear ensures an increase in tool life. The predicted optimal setting ensured minimization of surface roughness. The purpose of this paper is the analysis of optimum cutting conditions to obtain the lowest surface roughness in turning SCM 440 alloy steel by the Taguchi method. The design of the experiment was done using the Taguchi method; 18 experiments were designed by this process and then conducted. The results were analyzed using the ANOVA method. The Taguchi method showed that the depth of cut plays a significant role in producing lower surface roughness, followed by feed. Cutting speed had a lesser influence on surface roughness in the tests. The vibrations of the machine tool and tool chattering are other factors that may contribute to poor surface roughness; such factors were ignored in this analysis. The inferences from this method will be useful to other researchers for similar types of study and may be vital for further research on tool vibrations, cutting forces, etc. Keywords: surface roughness (ra), machining, dry turning, taguchi method, turning process, anova method, mahr perthometer
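To make the analysis concrete, the Python sketch below illustrates the core of a Taguchi smaller-the-better evaluation: computing signal-to-noise ratios for measured roughness values and ranking factors by the range of their level means. The factor levels and Ra values are hypothetical placeholders, not data from this study, and the full ANOVA step is omitted.

```python
# Illustrative sketch only: the factor names, levels, and Ra values below are
# hypothetical placeholders, not the experimental data from this study.
import numpy as np
import pandas as pd

# A few rows of a Taguchi-style design: cutting speed, feed, depth of cut, measured Ra.
runs = pd.DataFrame({
    "speed": [100, 100, 100, 150, 150, 150, 200, 200, 200],
    "feed":  [0.10, 0.15, 0.20, 0.10, 0.15, 0.20, 0.10, 0.15, 0.20],
    "doc":   [0.5, 1.0, 1.5, 1.0, 1.5, 0.5, 1.5, 0.5, 1.0],
    "Ra":    [1.8, 2.4, 3.1, 1.6, 2.6, 2.2, 1.5, 1.9, 2.5],
})

# Smaller-the-better signal-to-noise ratio: S/N = -10 * log10(mean(y^2)).
runs["sn"] = -10.0 * np.log10(runs["Ra"] ** 2)

# Mean S/N per factor level; the factor with the largest level-to-level range
# (delta) is ranked as the most influential, as in a standard Taguchi analysis.
for factor in ["speed", "feed", "doc"]:
    level_means = runs.groupby(factor)["sn"].mean()
    print(factor, "delta =", round(level_means.max() - level_means.min(), 3))
    print(level_means, "\n")
```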
Procedia PDF Downloads 367
19254 Permanent Magnet Machine Can Be a Vibration Sensor for Itself
Authors: M. Barański
Abstract:
The article presents a new vibration diagnostic method designed for machines with permanent magnets (PM). Those devices are commonly used in small wind and water systems or vehicle drives. The author’s method is innovative and unique: it exploits a specific structural property of PM machines, namely the electromotive force (EMF) generated due to vibrations. A number of publications describing vibration diagnostic methods and tests of electrical PM machines were analysed, and no method was found that determines the technical condition of such a machine based on its own signals. In this article, the genesis of the method, the similarity of a permanent magnet machine to a vibration sensor, and simulation and laboratory test results are discussed. The method of determining the technical condition of an electrical machine with permanent magnets based on its own signals is the subject of patent application No. P.405669 and is the main thesis of the author’s doctoral dissertation. Keywords: vibrations, generator, permanent magnet, traction drive, electrical vehicle
Procedia PDF Downloads 367
19253 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles
Authors: Jafar Mortadha, Imran Qureshi
Abstract:
This research explains and compares the modern techniques used for measuring the flow angles of a flowing fluid with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Due to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need for calibrating every manufactured probe. This study aims to improve the range of calibration maps for a four-hole probe to allow high flow angles to be measured accurately. The research methodology comprises a literature review of the successful calibration definitions that have been implemented on five-hole probes. These definitions are then adapted and applied on a four-hole probe using a set of raw pressures data. A comparison of the different definitions will be carried out in Matlab and the results will be analyzed to determine the best calibration definition. Taking simplicity of implementation into account as well as the reliability of flow angles estimation, an adapted technique from a research paper written in 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes and it can substitute for the existing calibration definitions that offer less accuracy.Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes
Procedia PDF Downloads 297
19252 Well-Being Inequality Using Superimposing Satisfaction Waves: Heisenberg Uncertainty in Behavioral Economics and Econometrics
Authors: Okay Gunes
Abstract:
In this article, for the first time in the literature for this subject we propose a new method for the measuring of well-being inequality through a model composed of superimposing satisfaction waves. The displacement of households’ satisfactory state (i.e. satisfaction) is defined in a satisfaction string. The duration of the satisfactory state for a given period of time is measured in order to determine the relationship between utility and total satisfactory time, itself dependent on the density and tension of each satisfaction string. Thus, individual cardinal total satisfaction values are computed by way of a one-dimensional form for scalar sinusoidal (harmonic) moving wave function, using satisfaction waves with varying amplitudes and frequencies which allow us to measure well-being inequality. One advantage to using satisfaction waves is the ability to show that individual utility and consumption amounts would probably not commute; hence it is impossible to measure or to know simultaneously the values of these observables from the dataset. Thus, we crystallize the problem by using a Heisenberg-type uncertainty resolution for self-adjoint economic operators. We propose to eliminate any estimation bias by correlating the standard deviations of selected economic operators; this is achieved by replacing the aforementioned observed uncertainties with households’ perceived uncertainties (i.e. corrected standard deviations) obtained through the logarithmic psychophysical law proposed by Weber and Fechner.Keywords: Heisenberg uncertainty principle, superimposing satisfaction waves, Weber–Fechner law, well-being inequality
Procedia PDF Downloads 441
19251 Application of a Modified Crank-Nicolson Method in Metallurgy
Authors: Kobamelo Mashaba
Abstract:
Molten slag has a substantial temperature range, between 1723 and 1923 K, and carries a huge amount of useful energy that can reduce energy consumption and CO₂ emissions through the heat recovery process. Therefore, in this study, we investigated the performance of the modified Crank-Nicolson method for a delayed partial differential equation describing the heat recovery of molten slag in the metallurgical mining environment. It was proved that the proposed method converges quickly compared to the classic method and that a unique solution exists. It was inferred from the numerical results that the proposed methodology is more viable and profitable for the mining industry. Keywords: delayed partial differential equation, modified Crank-Nicolson Method, molten slag, heat recovery, parabolic equation
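For readers unfamiliar with the baseline scheme, the following Python sketch shows the classic Crank-Nicolson discretization of a 1D heat equation with fixed boundary temperatures; it is not the authors' modified, delay-aware variant, and all numerical values (diffusivity, grid, temperatures) are assumed for illustration.

```python
# Minimal sketch of the classic Crank-Nicolson scheme for the 1D heat equation
# u_t = alpha * u_xx; the modified, delay-aware scheme of the paper is not shown.
import numpy as np

alpha, L, T = 1.0e-5, 1.0, 600.0      # diffusivity, domain length, final time (assumed values)
nx, nt = 51, 200
dx, dt = L / (nx - 1), T / nt
r = alpha * dt / (2.0 * dx ** 2)

# Initial temperature profile and fixed boundary temperatures (illustrative only).
u = np.full(nx, 1800.0)
u[0] = u[-1] = 300.0

# Tridiagonal matrices A u^{n+1} = B u^n for the interior nodes.
n = nx - 2
A = np.diag((1 + 2 * r) * np.ones(n)) + np.diag(-r * np.ones(n - 1), 1) + np.diag(-r * np.ones(n - 1), -1)
B = np.diag((1 - 2 * r) * np.ones(n)) + np.diag(r * np.ones(n - 1), 1) + np.diag(r * np.ones(n - 1), -1)

for _ in range(nt):
    rhs = B @ u[1:-1]
    rhs[0] += 2 * r * u[0]          # constant Dirichlet boundary contributions
    rhs[-1] += 2 * r * u[-1]
    u[1:-1] = np.linalg.solve(A, rhs)

print("centre temperature after T seconds:", round(u[nx // 2], 2))
```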
Procedia PDF Downloads 102
19250 The Gasoil Hydrofining Kinetics Constants Identification
Authors: C. Patrascioiu, V. Matei, N. Nicolae
Abstract:
The paper describes the experiments and the calculation of the kinetic parameters of gasoil hydrofining. Experimental results of gasoil hydrofining using a Mo catalyst promoted with Ni on an aluminum support are presented. The authors have adapted a kinetic model of gasoil hydrofining. Using this proposed kinetic model and the experimental data, they have calculated the parameters of the model. The numerical calculation is based on minimizing the difference between the experimental sulfur concentration and the kinetic model estimate. Keywords: hydrofining, kinetic, modeling, optimization
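As a rough illustration of this identification step, the Python sketch below fits a generic nth-order hydrodesulfurization expression to hypothetical sulfur measurements with SciPy's least_squares; the model form, feed sulfur content, data points, and starting guesses are all assumptions and are not taken from the paper.

```python
# Sketch of the parameter-identification step: fit a generic power-law
# hydrodesulfurization model to measured sulfur concentrations by least squares.
# The model form, data, and starting guesses are assumptions, not the paper's values.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: space velocity LHSV (1/h) and outlet sulfur content (ppm).
lhsv = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
s_out = np.array([120.0, 310.0, 520.0, 700.0, 980.0])
s_in = 2000.0  # assumed feed sulfur, ppm

def model(params, lhsv):
    k, n = params
    # nth-order plug-flow expression: S_out = [S_in^(1-n) + (n-1) k / LHSV]^(1/(1-n))
    return (s_in ** (1 - n) + (n - 1) * k / lhsv) ** (1.0 / (1 - n))

def residuals(params):
    return model(params, lhsv) - s_out

fit = least_squares(residuals, x0=[1e-3, 1.5], bounds=([1e-8, 1.01], [np.inf, 3.0]))
k_hat, n_hat = fit.x
print(f"estimated rate constant k = {k_hat:.4g}, apparent order n = {n_hat:.3f}")
```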
Procedia PDF Downloads 438
19249 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena. Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
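To make the weighted-loss idea concrete, here is a minimal PyTorch sketch of a PINN trained first with ADAM and then with L-BFGS; it solves a toy equation u'(x) = cos(x) rather than the Fokas-Lenells equation, and the network size, loss weights, and training settings are assumptions for illustration only.

```python
# Minimal PINN sketch with a weighted loss, using a toy equation u'(x) = cos(x),
# u(0) = 0, rather than the Fokas-Lenells equation treated in the paper.
# Architecture, weights, and training settings are illustrative assumptions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

x_col = torch.linspace(0.0, 2.0 * torch.pi, 100).reshape(-1, 1).requires_grad_(True)
x_ic = torch.zeros(1, 1)          # initial condition point
w_res, w_ic = 1.0, 10.0           # adjustable weight coefficients of the loss terms

def loss_fn():
    u = net(x_col)
    du = torch.autograd.grad(u, x_col, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    residual = du - torch.cos(x_col)             # equation residual at collocation points
    ic = net(x_ic)                               # u(0) should be 0
    return w_res * residual.pow(2).mean() + w_ic * ic.pow(2).mean()

# Stage 1: ADAM
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    adam.zero_grad()
    loss = loss_fn()
    loss.backward()
    adam.step()

# Stage 2: L-BFGS fine-tuning
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=200)
def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss
lbfgs.step(closure)

print("final weighted loss:", float(loss_fn()))
```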
Procedia PDF Downloads 70
19247 Impact of mHealth Tools on Psycho-Social Predictors of Behaviour Regarding Contraceptive Use
Authors: Preeti Tiwari, Jay Wood, Duncan Babbage
Abstract:
Family planning plays a role in saving lives across the globe by preventing unwanted pregnancies. The purpose of this multidisciplinary research was to determine the impact that mHealth tools have on psychosocial determinants of behaviour for family planning. The present study examines a topic that is very relevant at a time when human-technology interaction is at its peak. It is probably one of the first studies to investigate the impact of mobile phone technology on the underlying mechanisms of behaviour change for family planning using primary data. To examine the association between exposure to mHealth tools and predictors of behaviour, data were collected from mHealth intervention areas in India. A post-intervention quasi-experimental study with a 2x2 factorial design was conducted among 831 men and women from the state of Bihar. The quantitative data analysis evaluated the extent of influence that predictors of behaviour (beliefs, social norms, perceived behaviour control, and outcome behaviour) have on a woman’s decisions about family planning. The results indicated an association between exposure to mHealth tools and improved communication about family planning among various family members after receiving health information from a health worker (H1). A relationship between exposure to mHealth tools and increased support women received from their husbands, extended family (mothers-in-law specifically) and peers (H2) was also found. A further result showed that knowledge about family planning was greater among users of family planning (H4). mHealth tools empower women to communicate with family members. This has important implications for developing mobile phone-based tools, as they can be used as a crucial communication channel and an effective method of increasing communication among family members about contraceptives. Thus, it can be implied that where women feel nervous talking about contraception, the successful application of mHealth tools can strengthen the interactivity of health communication and could increase the likelihood of using contraception. However, while it may improve health communication that can inform health decisions, it may be insufficient on its own to cause behaviour change. Keywords: contraceptive, e-health, psycho-social, women
Procedia PDF Downloads 122
19246 Implicit Off-Grid Block Method for Solving Fourth and Fifth Order Ordinary Differential Equations Directly
Authors: Olusola Ezekiel Abolarin, Gift E. Noah
Abstract:
This research work considered an innovative procedure to numerically approximate higher-order initial value problems (IVPs) of ordinary differential equations (ODEs) using the Legendre polynomial as the basis function. The proposed method is a half-step, self-starting block integrator employed to approximate fourth and fifth order IVPs without reduction to lower order. The method was developed through a collocation and interpolation approach. The basic properties of the method, such as convergence, consistency and stability, were well investigated. Several test problems were considered, and the results compared favorably with both exact solutions and other existing methods. Keywords: initial value problem, ordinary differential equation, implicit off-grid block method, collocation, interpolation
Procedia PDF Downloads 85
19245 First Order Reversal Curve Method for Characterization of Magnetic Nanostructures
Authors: Bashara Want
Abstract:
One of the key factors limiting the performance of magnetic memory is that the coercivity has a distribution with finite width, and reversal starts at the weakest link in that distribution. One must therefore first know the distribution of coercivities in order to learn how to reduce its width and increase the coercive field, so as to obtain a system with a narrow distribution. The First Order Reversal Curve (FORC) method characterizes a system with hysteresis via the distribution of local coercivities and, in addition, the local interaction field. The method is more versatile than conventional major hysteresis loops, which give only the statistical behaviour of the magnetic system. The FORC method will be presented and discussed at the conference. Keywords: magnetic materials, hysteresis, first-order reversal curve method, nanostructures
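As a brief illustration of how FORC data are reduced in practice, the Python sketch below computes the FORC distribution as the mixed second derivative of a magnetization surface M(Hr, H); the synthetic magnetization used here is only a stand-in for measured reversal curves, and the usual local polynomial smoothing is omitted.

```python
# Sketch of the FORC post-processing step: the FORC distribution is the mixed
# second derivative rho(Hr, H) = -(1/2) d2M / (dHr dH) of the magnetization
# measured on a grid of reversal fields Hr and applied fields H.
# The synthetic magnetization surface below is a stand-in for measured data.
import numpy as np

H = np.linspace(-1.0, 1.0, 201)        # applied field sweep (arbitrary units)
Hr = np.linspace(-1.0, 0.8, 181)       # reversal fields
HH, RR = np.meshgrid(H, Hr)            # rows: Hr, columns: H

# Toy hysteron-like surface: a smooth switch whose state depends on H and Hr.
M = np.tanh((HH - 0.2) / 0.1) * (HH >= RR) + np.tanh((RR - 0.2) / 0.1) * (HH < RR)

# Mixed partial derivative via two successive gradients, then the -1/2 factor.
dM_dH = np.gradient(M, H, axis=1)
d2M = np.gradient(dM_dH, Hr, axis=0)
rho = -0.5 * d2M

# The ridge of rho, usually replotted in (Hc, Hu) = ((H - Hr)/2, (H + Hr)/2),
# maps the distribution of local coercivities and interaction fields.
i, j = np.unravel_index(np.argmax(np.abs(rho)), rho.shape)
print("peak of |rho| at Hr =", round(Hr[i], 3), ", H =", round(H[j], 3))
```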
Procedia PDF Downloads 83
19244 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method
Authors: A.R. Eskandari, M.R. Eskandari
Abstract:
A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)
Procedia PDF Downloads 387
19243 Security Risks Assessment: A Conceptualization and Extension of NFC Touch-And-Go Application
Authors: Ku Aina Afiqah Ku Adzman, Manmeet Mahinderjit Singh, Zarul Fitri Zaaba
Abstract:
NFC operates at the short-range 13.56 MHz frequency within a distance of 4 cm to 10 cm, and its applications can be categorized as touch and go, touch and confirm, touch and connect, and touch and explore. NFC applications are vulnerable to various security and privacy attacks due to their physical nature, unprotected data stored in NFC tags, and insecure communication between applications. This paper aims to determine the likelihood of security risks occurring in NFC technology and applications. We present an NFC technology taxonomy covering NFC standards, types of application, and various security and privacy attacks. The observations and the survey presented to evaluate the risk assessment of the touch and go application demonstrate two high-risk security attacks, namely data corruption and DoS attacks. After the risks are determined, risk countermeasures using AHP are adopted. The guidelines and solutions for these two high-risk attacks are later applied to a secure NFC-enabled Smartphone Attendance System. Keywords: Near Field Communication (NFC), risk assessment, multi-criteria decision making, Analytical Hierarchy Process (AHP)
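For context on the AHP step, the sketch below shows the standard calculation: priority weights are obtained from the principal eigenvector of a pairwise comparison matrix, and Saaty's consistency ratio is checked. The comparison matrix and the three countermeasures it ranks are hypothetical, not the values used in this study.

```python
# Sketch of the AHP step used to prioritize countermeasures: derive weights from a
# pairwise comparison matrix via its principal eigenvector and check consistency.
# The 3x3 comparison matrix below (three hypothetical countermeasures) is invented.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index / ratio (Saaty); RI = 0.58 is the random index for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58

print("priority weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3), "(acceptable below ~0.10)")
```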
Procedia PDF Downloads 302
19242 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recent developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand of acquiring higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation for areal rainfall and for flood modelling and prediction. In a certain study, even using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistics method (variance-reduction method) that is combined with simulated annealing was used as an algorithm of optimization in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on the station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor is optimized and redesigned by using rainfall, humidity, solar radiation, temperature and wind speed data during monsoon season (November – February) for the period of 1975 – 2008. Three different semivariogram models which are Spherical, Gaussian and Exponential were used and their performances were also compared in this study. Cross validation technique was applied to compute the errors and the result showed that exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance and 20 of the existing ones were removed and relocated. An existing network may consist of redundant stations that may make little or no contribution to the network performance for providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations that were optimally relocated into new locations to investigate their influence in the calculated estimated variance and the second case explored the possibility to relocate all 84 existing stations into new locations to determine the optimal position. The relocations of the stations in both cases have shown that the new optimal locations have managed to reduce the estimated variance and it has proven that locations played an important role in determining the optimal network.Keywords: geostatistics, simulated annealing, semivariogram, optimization
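To illustrate the semivariogram comparison step, here is a small Python sketch that fits the three candidate models (spherical, Gaussian, exponential) to an empirical semivariogram and compares their fitting errors; the lag/semivariance values, the zero-nugget model forms, and the initial guesses are assumptions for demonstration, and the full kriging-variance and simulated-annealing machinery of the study is not shown.

```python
# Sketch of the semivariogram-model comparison described above: fit spherical,
# Gaussian, and exponential models to an empirical semivariogram and compare
# their errors. The empirical (lag, gamma) points below are invented, not the
# Johor rainfall data.
import numpy as np
from scipy.optimize import curve_fit

lags = np.array([5, 10, 20, 30, 50, 70, 100], dtype=float)      # km
gamma = np.array([12, 21, 34, 41, 48, 50, 51], dtype=float)     # empirical semivariance

def spherical(h, c, a):
    g = c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, g, c)

def gaussian(h, c, a):
    return c * (1.0 - np.exp(-(h / a) ** 2))

def exponential(h, c, a):
    return c * (1.0 - np.exp(-h / a))

for name, model in [("spherical", spherical), ("gaussian", gaussian), ("exponential", exponential)]:
    params, _ = curve_fit(model, lags, gamma, p0=[50.0, 40.0], maxfev=10000)
    rmse = np.sqrt(np.mean((model(lags, *params) - gamma) ** 2))
    print(f"{name:12s} sill={params[0]:6.1f} range={params[1]:6.1f} RMSE={rmse:5.2f}")
```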
Procedia PDF Downloads 304
19241 A New Reliability Allocation Method Based on Fuzzy Numbers
Authors: Peng Li, Chuanri Li, Tao Li
Abstract:
Reliability allocation is quite important during the early design and development stages of a system, in order to apportion its specified reliability goal to subsystems. This paper improves the reliability fuzzy allocation method and gives concrete procedures for determining the factor set, the factor weight set, the judgment set, and the multi-grade fuzzy comprehensive evaluation. To determine the weights of the factor set, modified trapezoidal fuzzy numbers are proposed to reduce errors caused by subjective factors. To decrease the fuzziness in the fuzzy division, an approximation method based on linear programming is employed. To compute the explicit values of the fuzzy numbers, the centroid method of defuzzification is considered. An example is provided to illustrate the application of the proposed reliability allocation method based on fuzzy arithmetic. Keywords: reliability allocation, fuzzy arithmetic, allocation weight, linear programming
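As a small illustration of the defuzzification step, the Python sketch below computes the centroid of trapezoidal fuzzy numbers and normalizes the resulting crisp values into allocation weights; the factor names and trapezoid parameters are hypothetical, not those of the proposed method.

```python
# Sketch of the final defuzzification step mentioned above: the centroid of a
# trapezoidal fuzzy number (a, b, c, d) gives a crisp weight value.
# The trapezoidal parameters below are illustrative, not the paper's factor weights.

def trapezoid_centroid(a: float, b: float, c: float, d: float) -> float:
    """Centroid (x-coordinate) of a trapezoidal membership function."""
    # Standard closed form for a trapezoid with support [a, d] and core [b, c].
    num = (d ** 2 + c ** 2 + c * d) - (a ** 2 + b ** 2 + a * b)
    den = 3.0 * ((d + c) - (a + b))
    return num / den

# Example: fuzzy importance weights of three allocation factors, then normalization.
fuzzy_weights = {"complexity": (0.2, 0.4, 0.5, 0.7),
                 "criticality": (0.5, 0.6, 0.8, 0.9),
                 "operating_time": (0.1, 0.2, 0.3, 0.5)}

crisp = {k: trapezoid_centroid(*v) for k, v in fuzzy_weights.items()}
total = sum(crisp.values())
normalized = {k: round(v / total, 3) for k, v in crisp.items()}
print(normalized)
```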
Procedia PDF Downloads 344
19240 Comparative Study between Classical P-Q Method and Modern Fuzzy Controller Method to Improve the Power Quality of an Electrical Network
Authors: A. Morsli, A. Tlemçani, N. Ould Cherchali, M. S. Boucherit
Abstract:
This article presents two methods for the compensation of harmonics generated by a nonlinear load. The first is the classic p-q method. The second is a controller based on a modern artificial intelligence method, specifically fuzzy logic. Both methods are applied to a shunt Active Power Filter (sAPF) based on a three-phase, five-level NPC voltage converter. To calculate the harmonic reference currents, we use the p-q algorithm; for pulse generation, we use intersective PWM. For flexibility and dynamics, we use fuzzy logic. The results clearly show that the total harmonic distortion obtained with fuzzy logic is better than with the p-q method. Keywords: fuzzy logic controller, P-Q method, pulse width modulation (PWM), shunt active power filter (sAPF), total harmonic distortion (THD)
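To make the p-q reference-current calculation explicit, the Python sketch below applies the Clarke transform, computes instantaneous real and imaginary power, and extracts the compensating current references for a synthetic distorted load; the waveform amplitudes are invented, and the mean is used in place of the low-pass filter normally used to separate the DC component of p.

```python
# Sketch of the classic p-q (instantaneous reactive power) computation of the
# harmonic reference currents for a shunt active power filter. The synthetic
# distorted load current (fundamental + 5th harmonic) is for illustration only.
import numpy as np

f, fs = 50.0, 10_000.0
t = np.arange(0, 0.1, 1 / fs)
w = 2 * np.pi * f

# Balanced three-phase supply voltages and a distorted, lagging load current.
V, I1, I5 = 230 * np.sqrt(2), 10 * np.sqrt(2), 2 * np.sqrt(2)
va, vb, vc = (V * np.sin(w * t + ph) for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (I1 * np.sin(w * t + ph - 0.3) + I5 * np.sin(5 * (w * t + ph))
              for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))

# Power-invariant Clarke transform.
def clarke(xa, xb, xc):
    alpha = np.sqrt(2 / 3) * (xa - 0.5 * xb - 0.5 * xc)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (xb - xc)
    return alpha, beta

v_al, v_be = clarke(va, vb, vc)
i_al, i_be = clarke(ia, ib, ic)

# Instantaneous real and imaginary powers.
p = v_al * i_al + v_be * i_be
q = v_be * i_al - v_al * i_be

# Keep only the mean (DC) part of p for the source; the filter supplies the rest.
p_osc = p - np.mean(p)
den = v_al ** 2 + v_be ** 2
ic_al = (v_al * p_osc + v_be * q) / den
ic_be = (v_be * p_osc - v_al * q) / den

print("RMS of alpha-axis compensating reference current:",
      round(float(np.sqrt(np.mean(ic_al ** 2))), 2), "A")
```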
Procedia PDF Downloads 549
19239 Estimation and Forecasting Debris Flow Phenomena on the Highway of the 'TRACECA' Corridor
Authors: Levan Tsulukidze
Abstract:
The paper considers debris flow phenomena and their forecasting in the ‘TRACECA’ corridor, using the example of the river Naokhrevistkali as well as the debris flow-type channel passing between the villages of Vale-2 and Naokhrevi. As a result of expeditionary and reconnaissance investigations, as well as the use of empirical dependencies, the debris flow discharge has been estimated for different debris flow provisions. Keywords: debris flow, Traceca corridor, forecasting, river Naokhrevistkali
Procedia PDF Downloads 356
19238 A Review of Methods for Handling Missing Data in the Form of Dropouts in Longitudinal Clinical Trials
Abstract:
Much clinical trial data-based research is characterized by the unavoidable problem of dropout and the resulting missing or erroneous values. This paper aims to review some of the various techniques that address dropout problems in longitudinal clinical trials. The fundamental concepts of the patterns and mechanisms of dropout are discussed. This study presents five general techniques for handling dropout: (1) deletion methods; (2) imputation-based methods; (3) data augmentation methods; (4) likelihood-based methods; and (5) MNAR-based methods. Under each technique, several methods that are commonly used to deal with dropout are presented, including a review of the existing literature in which we examine the effectiveness of these methods in the analysis of incomplete data. Two application examples are presented to study the potential strengths or weaknesses of some of the methods under certain dropout mechanisms, as well as to assess the sensitivity of the modelling assumptions. Keywords: incomplete longitudinal clinical trials, missing at random (MAR), imputation, weighting methods, sensitivity analysis
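As a concrete, deliberately simple illustration of the first two families, here is a pandas sketch that applies complete-case deletion, last-observation-carried-forward, and per-visit mean imputation to an invented longitudinal dataset; the data and visit labels are hypothetical, and the more principled likelihood-based and MNAR-based approaches discussed in the review are not represented.

```python
# Tiny sketch of the two simplest families reviewed above (deletion and
# imputation) on an invented longitudinal dataset with monotone dropout.
# Likelihood-based and MNAR-based approaches are not shown here.
import numpy as np
import pandas as pd

visits = ["week0", "week4", "week8", "week12"]
data = pd.DataFrame(
    [[10.0, 9.0, 8.5, 8.0],
     [12.0, 11.0, np.nan, np.nan],     # subject drops out after week 4
     [11.0, np.nan, np.nan, np.nan],   # subject drops out after baseline
     [ 9.5, 9.0, 8.8, 8.6]],
    columns=visits, index=["s1", "s2", "s3", "s4"],
)

# (1) Deletion: complete-case analysis keeps only subjects with all visits.
complete_case = data.dropna()

# (2) Imputation: last observation carried forward (LOCF) along each row,
#     and per-visit mean imputation as an alternative.
locf = data.ffill(axis=1)
mean_imputed = data.fillna(data.mean())

print("complete-case mean at week12:", complete_case["week12"].mean())
print("LOCF mean at week12:", locf["week12"].mean())
print("mean-imputed mean at week12:", round(mean_imputed["week12"].mean(), 2))
```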
Procedia PDF Downloads 416
19237 Implicit Eulerian Fluid-Structure Interaction Method for the Modeling of Highly Deformable Elastic Membranes
Authors: Aymen Laadhari, Gábor Székely
Abstract:
This paper is concerned with the development of a fully implicit and purely Eulerian fluid-structure interaction method tailored for the modeling of the large deformations of elastic membranes in a surrounding Newtonian fluid. We consider a simplified model for the mechanical properties of the membrane, in which the surface strain energy depends on the membrane stretching. The fully Eulerian description is based on the advection of a modified surface tension tensor, and the deformations of the membrane are tracked using a level set strategy. The resulting nonlinear problem is solved by a Newton-Raphson method, featuring a quadratic convergence behavior. A monolithic solver is implemented, and we report several numerical experiments aimed at model validation and illustrating the accuracy of the presented method. We show that stability is maintained for significantly larger time steps.Keywords: finite element method, implicit, level set, membrane, Newton method
Procedia PDF Downloads 304
19236 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct an algorithm of time step control for the error correction method most recently developed by one of the authors for solving stiff initial value problems. It is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is in the usage of the duplicated node points in the generalized Chebyshev polynomials of two different degrees by adding necessary sample points instead of re-sampling all points. At each integration step, the proposed method is comprised of two equations for the solution and the error, respectively. The constructed algorithm controls both the error and the time step size simultaneously and possesses a good performance in the computational cost compared to the original method. Two stiff problems are numerically solved to assess the effectiveness of the proposed scheme.Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
Procedia PDF Downloads 574
19235 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses
Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi
Abstract:
The insect order Thysanoptera includes small insects commonly called thrips. As insect vectors, only thrips are capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), which affect various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors for tospoviruses. Frankliniella schultzei, which is reported to act as a vector for at least five tospoviruses, has been suspected to be a species complex with more than one species. It is one of the historical unresolved issues: two species, namely F. schultzei Trybom and F. sulphurea Schmutz, were erected from South Africa and Sri Lanka, respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form). However, the two have been considered valid species by some thrips workers. Parallel studies have indicated that the brown form of schultzei is a vector for tospoviruses while the yellow form is a non-vector. However, recent studies have shown that yellow populations have also been documented as vectors. In view of all these facts, it is highly important to have a clear understanding of whether these colour forms represent true species or merely different populations with different vector-carrying capacities, and whether there is some hidden diversity in the 'Frankliniella schultzei species complex'. In this study, we aim to examine the 'Frankliniella schultzei species complex' through a molecular lens, with DNA data from India, Australia and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. For the COI dataset, there were seventy-four sequences, of which fifty-five were generated in the current study and the others retrieved from NCBI. All four tree construction methods (neighbor-joining, maximum parsimony, maximum likelihood and Bayesian analysis) yielded the same tree topology and produced five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, of which thirty-nine were generated in the current study and the others retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support values/posterior probabilities. Here we could not retrieve one cryptic species from South Africa, as we could not generate rDNA data from South Africa and rDNA sequences from the African region were not available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, general mixed Yule-coalescent, and Poisson tree processes) also supported the phylogenetic data and produced 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets, respectively. These results indicate the likelihood that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of these two species, along with comparison with type specimens. Keywords: DNA barcoding, species complex, thrips, species delimitation
Procedia PDF Downloads 129
19234 Backstepping Design and Fractional Differential Equation of Chaotic System
Authors: Ayub Khan, Net Ram Garg, Geeta Jain
Abstract:
In this paper, the backstepping method is proposed to synchronize two fractional-order systems. The simulation results show that this method can effectively synchronize two chaotic systems. Keywords: backstepping method, fractional order, synchronization, chaotic system
Procedia PDF Downloads 459
19233 The Effectiveness of Gamified Learning on Student Learning in Computer Science Education: A Systematic Review (2010-2018)
Authors: Shurui Bai, Biyun Huang, Khe Foon Hew
Abstract:
Gamification is defined as the use of game design elements in non-game contexts. The primary purpose of using gamification in an educational context is to engage students in school activities such that their likelihood of completion is increased. But how actually effective is gamification in improving student learning? In order to answer this question, this paper provides a systematic review of prior research studies on gamification in K-12 and university contexts limited to computer science discipline. Unlike other published gamification review works, we specifically analyzed comparison-based studies in quasi-experiment, historical control, and randomization rather than studies with mere anecdotal or phenomenological results. The main purpose for this is to discuss possible causal effects of gamified practices on student performance, behavior change, and perceptual skills following an integrative model. Implications for practice are discussed, along with several suggestions for future research studies.Keywords: computer science, gamification, learning performance, systematic review
Procedia PDF Downloads 132
19232 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity
Authors: Shivdayal Patel, Suhail Ahmad
Abstract:
Safety assurance and failure prediction of the composite material components of an offshore structure subjected to low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and with the load due to an impact. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario arises from the large uncertainties in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in the ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT) to overcome these problems. The limit state function g(x) is established accordingly, with the safe state corresponding to g(x) > 0 for the stresses in the lamina. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve better strength and lighter weight of composite structures. The chain of failure events due to different modes of failure is considered to estimate the consequences of a failure scenario. The frequencies of occurrence of specific impact hazards yield the expected risk in terms of economic loss. Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling
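To give a sense of how a Gaussian process response surface is typically used to obtain the probability of failure, the sketch below trains a GP surrogate on a handful of limit-state evaluations and then performs Monte Carlo sampling on the surrogate; the analytic limit state, the input distributions, and the kernel settings are invented stand-ins for the paper's impact FE model and random variables.

```python
# Sketch of a Gaussian-process response-surface estimate of the probability of
# failure: a GP surrogate is trained on a few limit-state evaluations and then
# sampled by Monte Carlo. The analytic limit state used here stands in for the
# expensive impact FE model, and all distributions are invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def limit_state(x):
    # g(x) > 0 is safe; x = [material strength, impact velocity] (toy model).
    strength, velocity = x[:, 0], x[:, 1]
    return strength - 0.08 * velocity ** 2

# Small design of experiments in the input space (stand-in for FE runs).
X_train = np.column_stack([rng.normal(50.0, 5.0, 40), rng.normal(20.0, 3.0, 40)])
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[5.0, 3.0]),
                              normalize_y=True).fit(X_train, y_train)

# Monte Carlo on the cheap surrogate instead of the expensive FE model.
X_mc = np.column_stack([rng.normal(50.0, 5.0, 200_000), rng.normal(20.0, 3.0, 200_000)])
g_hat = gp.predict(X_mc)
pf = np.mean(g_hat < 0.0)
print("estimated probability of failure:", pf)
```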
Procedia PDF Downloads 279
19231 Smallholder’s Agricultural Water Management Technology Adoption, Adoption Intensity and Their Determinants: The Case of Meda Welabu Woreda, Oromia, Ethiopia
Authors: Naod Mekonnen Anega
Abstract:
The main objective of this paper was to empirically identify technology-tailored determinants of the adoption and adoption intensity (extent of use) of agricultural water management technologies in Meda Welabu Woreda, Oromia regional state, Ethiopia. Meda Welabu Woreda, one of the administrative Woredas of the Oromia regional state, was selected purposively, as it is one of the Woredas in the region where small scale irrigation practices and the use of agricultural water management technologies can be found among smallholders. Using the existence of water management practices (use of water management technologies) and land use pattern as criteria, Genale Mekchira Kebele was selected for the study. A total of 200 smallholders were selected from the Kebele using the technique developed by Krejcie and Morgan. The study employed the Logit and Tobit models to estimate and identify the economic, social, geographical, household, institutional, psychological and technological factors that determine adoption and adoption intensity of water management technologies. The study revealed that while 55 of the sampled households are adopters of agricultural water management technology, the remaining 140 were non-adopters of the technologies. Among the adopters included in the sample, 97% are using river diversion technology (traditional) with traditional canals, while the remaining 7% are using pond with treadle pump technology. The Logit estimation revealed that while adoption of river diversion is positively and significantly affected by membership in local institutions, active labor force, income, access to credit and land ownership, adoption of treadle pump technology is positively and significantly affected by family size, education level, access to credit, extension contact, income, access to market, and slope. The Logit estimation also revealed that whereas group action requirement, distance to farm, and size of active labor force negatively and significantly influenced adoption of river diversion, age and perception negatively and significantly influenced adoption decisions for treadle pump technology. On the other hand, the Tobit estimation revealed that adoption intensity (extent of use) of agricultural water management is positively and significantly affected by education, access to credit, extension contact, access to market and income. This study revealed that technology-tailored studies on the adoption of agricultural water management technologies (AWMTs) should be considered to identify and scale up best agricultural water management practices. In fact, in countries like Ethiopia, where there are differences in social, economic, cultural, environmental and agro-ecological conditions even within the same Kebele, technology-tailored studies that fit the conditions of each Kebele would help to identify and scale up best practices in agricultural water management. Keywords: water management technology, adoption, adoption intensity, smallholders, technology tailored approach
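For readers who want to see the estimation machinery, the sketch below fits a binary Logit model with statsmodels on a small synthetic dataset whose covariate names echo some of the determinants discussed above; the data, coefficients, and results are invented and say nothing about the study's findings, and the companion Tobit model for adoption intensity is not shown.

```python
# Sketch of the binary Logit step used for the adoption decision, on an invented
# miniature dataset; variable names mirror some determinants mentioned above but
# the values, coefficients, and sample are not from the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "education_years": rng.integers(0, 12, n),
    "credit_access": rng.integers(0, 2, n),
    "extension_contact": rng.integers(0, 2, n),
    "income": rng.normal(10_000, 2_500, n),
})

# Synthetic adoption outcome generated from a known linear predictor.
eta = -3.0 + 0.15 * df["education_years"] + 0.8 * df["credit_access"] \
      + 0.6 * df["extension_contact"] + 0.0002 * df["income"]
df["adopt"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-eta))).astype(int)

X = sm.add_constant(df[["education_years", "credit_access", "extension_contact", "income"]])
logit_res = sm.Logit(df["adopt"], X).fit(disp=False)
print(logit_res.summary().tables[1])   # coefficients, std. errors, p-values
```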
Procedia PDF Downloads 457
19230 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. FTC is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system. The application of Fault-Tolerant Control (FTC) in HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible", in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by nonlinear (usually bilinear) equations. Most of the work carried out so far in FDI (fault diagnosis and isolation) or FTC considers a linearized model of the studied system. However, such a model is only valid over a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of an HVAC system failure. The main contribution of the proposed FD algorithm is that instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed strategy for FTC works as follows: at the first level, FDI algorithms are implemented, making it also possible to estimate the magnitude of the fault; once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performances are recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method. Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building
Procedia PDF Downloads 173
19229 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure; other models predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then using at most one or two AI methods. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity, reflected in its estimation of pore pressure without the need for different trends, in contrast to other models that require two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction, owing to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%). Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)
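To illustrate how an SVM regressor of this kind is set up, here is a scikit-learn sketch that trains an RBF-kernel SVR on synthetic drilling/log features and reports the correlation-type and percentage-error metrics used above; the data, kernel, and hyperparameters are assumptions for demonstration and do not reproduce the study's results.

```python
# Sketch of the SVM-regression step for pore-pressure prediction using scikit-learn.
# The feature names follow the abstract, but the synthetic data, kernel choice, and
# hyperparameters are illustrative assumptions, not the study's field data or model.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(42)
n = 500
# Columns: WOB, RPM, ROP, mud weight, bulk density, porosity, delta-t sonic.
X = np.column_stack([
    rng.uniform(5, 35, n), rng.uniform(60, 180, n), rng.uniform(5, 60, n),
    rng.uniform(9, 16, n), rng.uniform(2.0, 2.8, n), rng.uniform(0.05, 0.35, n),
    rng.uniform(55, 140, n),
])
# Synthetic pore-pressure target with noise (purely for demonstration).
y = 2000 + 120 * X[:, 3] + 800 * X[:, 5] + 4 * X[:, 6] + rng.normal(0, 30, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R^2:", round(r2_score(y_te, pred), 3))
print("AAPE (%):", round(100 * mean_absolute_percentage_error(y_te, pred), 2))
```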
Procedia PDF Downloads 150
19228 Obtain the Stress Intensity Factor (SIF) in a Medium Containing a Penny-Shaped Crack by the Ritz Method
Authors: A. Tavangari, N. Salehzadeh
Abstract:
In crack growth analysis, the Stress Intensity Factor (SIF) is a fundamental prerequisite. In the present study, the mode I stress intensity factor (SIF) of a three-dimensional penny-shaped crack is obtained in an isotropic elastic cylindrical medium with arbitrary dimensions under arbitrary loading at the top of the cylinder, by a semi-analytical method based on the Rayleigh-Ritz method. This method, which is based on minimizing the total potential energy of the system, gives results very close to those of previous studies. Defining the displacements (elastic fields) by hypothetical functions in a defined coordinate system is the basis of this research, so the appropriate terms must be found to create the singularity conditions at the tip of the crack. Keywords: penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method
Procedia PDF Downloads 366
19227 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. Such systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform, intended for joint amplitude monitoring and telerehabilitation processes, considering an efficient compromise between cost and technical capability. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic sectors, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. This platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. The amplitude measurements of the virtual platform system presented a good fit to their respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during reversal of direction, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of direction in motion. A damped response of the virtual platform can also be observed: error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These particular characteristics, achieved by efficiently using state-of-the-art accessible generic technology in sensors and hardware, and adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but basic communication networks are accessible. Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
Procedia PDF Downloads 260