Search results for: compass error
1329 Comparative Study of Impedance Parameters for 42CrMo4 Steel Nitrided and Exposed at Electrochemical Corrosion
Authors: M. H. Belahssen, S. Benramache
Abstract:
This paper presents the corrosion behavior of 42CrMo4 steel alloy nitrided by plasma. Different nitrided samples were tested. The corrosion behavior was evaluated by electrochemical impedance spectroscopy, and the tests were carried out in a 1 M acid chloride solution. The best corrosion protection was observed for the nitrided samples. The aim of this work is to compare the equivalent circuits corresponding to the simulated and experimental Nyquist curves and to select the one that gives the best impedance parameters with the lowest error. Keywords: plasma nitriding, steel, alloy 42CrMo4, electrochemistry, corrosion behavior
Procedia PDF Downloads 371
1328 Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System
Authors: Hassan Al Salman
Abstract:
We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Assuming further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed. Keywords: cross diffusion model, entropy-type inequality, finite element approximation, numerical analysis
Procedia PDF Downloads 383
1327 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value using two compressors, LZMA and BZIP2, and by defining scene alterations using Pixel Difference Percentage metrics. Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
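As a rough illustration of the distance used above, the sketch below computes the NCD between two byte strings with both LZMA and BZIP2 from the Python standard library. The byte strings and the 0.5 decision threshold are stand-ins; they are not the frame data or the threshold value obtained in the paper.

```python
# Minimal NCD sketch with the two compressors named in the abstract.
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(compress(x))
    cy = len(compress(y))
    cxy = len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

frame_a = b"\x10\x20\x30" * 20_000                    # stand-in for one raw frame
frame_b = frame_a[:-6_000] + b"\xff\x00\xee" * 2_000  # a partly altered frame

for name, comp in [("LZMA", lzma.compress), ("BZIP2", bz2.compress)]:
    d = ncd(frame_a, frame_b, comp)
    print(f"{name}: NCD = {d:.3f}", "-> possible scene change" if d > 0.5 else "")
```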
Procedia PDF Downloads 340
1326 Setting Control Limits For Inaccurate Measurements
Authors: Ran Etgar
Abstract:
The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, sometimes it can lead to poor outcomes in statistical process control using the X̄-chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter (Ȳ) is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables. Keywords: quality control, process control, round-off, measurement, rounding error
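For context, the sketch below computes the classical Shewhart X̄-chart limits from subgroup means and ranges on rounded data; this is only the baseline that the proposed method refines, since the paper's corrected limits and tables are not reproduced here. The subgroup data, rounding step, and A2 constant for subgroups of size 5 are illustrative.

```python
# Classical X-bar limits on rounded subgroup data (baseline, not the proposed method).
import numpy as np

rng = np.random.default_rng(0)
subgroups = np.round(rng.normal(10.0, 0.3, size=(25, 5)), 1)  # measurements rounded to 0.1

xbar = subgroups.mean(axis=1)            # subgroup means
rbar = np.ptp(subgroups, axis=1).mean()  # mean subgroup range
A2 = 0.577                               # standard Shewhart constant for n = 5

center = xbar.mean()
ucl = center + A2 * rbar
lcl = center - A2 * rbar
print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```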
Procedia PDF Downloads 99
1325 Automatic Vertical Wicking Tester Based on Optoelectronic Techniques
Authors: Chi-Wai Kan, Kam-Hong Chau, Ho-Shing Law
Abstract:
The wicking property is important for textile finishing and wear comfort. Good wicking properties can ensure uniformity and efficiency of textile treatment. In view of wear comfort, quick-wicking fabrics facilitate the evaporation of sweat. Therefore, the wetness sensation of the skin is minimised to prevent discomfort. The testing method for vertical wicking was standardised by the American Association of Textile Chemists and Colorists (AATCC) in 2011. The traditional vertical wicking test involves human error in observing the fast-changing and/or unclear wicking height. This study introduces optoelectronic devices to achieve an automatic Vertical Wicking Tester (VWT) and reduce human error. The VWT can record the wicking time and wicking height of samples. By reducing the difficulties of manual judgment, the reliability of the vertical wicking experiment is highly increased. Furthermore, labour is greatly decreased by using the VWT. The automatic measurement of the VWT uses optoelectronic devices to trace the liquid wicking with a simple operating procedure. The optoelectronic devices detect the colour difference between dry and wet samples. This allows high sensitivity to a difference in irradiance down to 10 μW/cm². Therefore, the VWT is capable of testing dark fabric. The VWT gives a wicking distance (wicking height) resolution of 1 mm and a wicking time resolution of one second. Acknowledgment: This is a research project of HKRITA funded by the Innovation and Technology Fund (ITF) with the title “Development of an Automatic Measuring System for Vertical Wicking” (ITP/055/20TP). The authors would like to thank the ITF for the financial support. Any opinions, findings, conclusions or recommendations expressed in this material/event (or by members of the project team) do not reflect the views of the Government of the Hong Kong Special Administrative Region, the Innovation and Technology Commission or the Panel of Assessors for the Innovation and Technology Support Programme of the Innovation and Technology Fund and the Hong Kong Research Institute of Textiles and Apparel. Also, we would like to thank Lai Tak Enterprises Limited, Kingis Development Limited and Wing Yue Textile Company Limited for their support and sponsorship. Keywords: AATCC method, comfort, textile measurement, wetness sensation
Procedia PDF Downloads 101
1324 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept
Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua
Abstract:
Rivers are our main source of water and a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method had not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The cross-section of the channel is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From the error analysis of this method, it is observed that the relative error is lower for data-sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes. Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel
Procedia PDF Downloads 126
1323 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model's properties are usually explored by testing its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinion. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change the model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) can be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into another, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis using mathematical modeling and validate it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by 100%. By using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered as a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data. Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
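A minimal sketch of the effect described above: the same set of opinions is run through a standard bounded-confidence (Deffuant-type) model on the raw scale and on a monotonically rescaled version of it, and the number of resulting opinion clusters is compared. The cubic rescaling, the model parameters, and the cluster-counting rule are illustrative assumptions, not the transformations or models analyzed by the authors.

```python
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=200_000, seed=1):
    """Bounded-confidence updates: an interacting pair moves closer if |xi - xj| < eps."""
    x = opinions.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, gap=0.05):
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > gap))

raw = np.random.default_rng(0).uniform(0, 1, 500)
print("raw scale:        ", n_clusters(deffuant(raw)))
print("transformed scale:", n_clusters(deffuant(raw ** 3)))  # same opinions, different scale
```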
Procedia PDF Downloads 155
1322 Establishing Control Chart Limits for Rounded Measurements
Authors: Ran Etgar
Abstract:
The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, sometimes it can lead to poor outcomes in statistical process control using the X̄ chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter ȳ is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables. Keywords: SPC, round-off data, control limit, rounding error
Procedia PDF Downloads 76
1321 RAFU Functions in Robotics and Automation
Authors: Alicia C. Sanchez
Abstract:
This paper investigates the implementation of RAFU functions (radical functions) in robotics and automation. Specifically, the main goal is to show how these functions may be useful in lane-keeping control and the lateral control of autonomous machines, vehicles, robots or the like. From knowledge of several points of a certain route, the RAFU functions are used to achieve the lateral control purpose and maintain the lane-keeping errors within fixed limits. The stability that these functions provide, their ease of approximating any continuous trajectory, and the control of the possible approximation error may be useful in practice. Keywords: automatic navigation control, lateral control, lane-keeping control, RAFU approximation
Procedia PDF Downloads 302
1320 Reliability-Based Life-Cycle Cost Model for Engineering Systems
Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski
Abstract:
The effect of reliability on the life-cycle cost of a system, including its initial and maintenance costs, is studied. The failure probability of a component is used to calculate the average maintenance cost during the operation cycle of the component. The standard deviation of the life-cycle cost is also calculated as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor. Keywords: initial cost, life-cycle cost, maintenance cost, reliability
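A rough Monte Carlo sketch of the cost structure described above: an initial cost plus maintenance costs accrued whenever the component fails during its operating cycles, reporting both the mean life-cycle cost and its standard deviation as the error measure. All figures are illustrative placeholders, not the electric-motor data from the paper.

```python
import numpy as np

initial_cost = 1_000.0      # purchase / installation (illustrative)
repair_cost = 150.0         # cost per failure event (illustrative)
p_fail = 0.02               # failure probability per operating cycle (illustrative)
n_cycles = 500              # operating cycles in the life cycle
n_sim = 100_000             # Monte Carlo replications

rng = np.random.default_rng(0)
failures = rng.binomial(n_cycles, p_fail, size=n_sim)   # failures per simulated life cycle
life_cycle_cost = initial_cost + repair_cost * failures

print(f"mean LCC = {life_cycle_cost.mean():.1f}")
print(f"std  LCC = {life_cycle_cost.std(ddof=1):.1f}  (error measure on the average)")
```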
Procedia PDF Downloads 605
1319 Improvement of the Numerical Integration's Quality in Meshless Methods
Authors: Ahlem Mougaida, Hedi Bel Hadj Salah
Abstract:
Several methods are suggested to improve the numerical integration in the Galerkin weak form for meshless methods. In fact, integrating without taking into account the characteristics of the shape functions reproduced by meshless methods (rational functions with compact support, etc.) causes a large integration error that influences the PDE’s approximate solution. Different methods of numerical integration for rational functions are discussed and compared. The algorithms are implemented in Matlab. Finally, numerical results are presented to prove the efficiency of our algorithms in improving the results. Keywords: adaptive methods, meshless, numerical integration, rational quadrature
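The sketch below illustrates the underlying difficulty with a stand-in integrand: a fixed-order Gauss rule misses most of a sharply peaked rational function, while an adaptive rule recovers it. The integrand and tolerances are illustrative; they are not the meshless shape functions or the specific quadrature schemes compared in the paper.

```python
import numpy as np
from scipy.integrate import fixed_quad, quad

f = lambda x: 1.0 / (1e-3 + x**2)            # rational integrand, sharply peaked at 0
a = np.sqrt(1e-3)
exact = (2.0 / a) * np.arctan(1.0 / a)       # analytic value of the integral on [-1, 1]

g5, _ = fixed_quad(f, -1.0, 1.0, n=5)        # fixed 5-point Gauss-Legendre rule
adaptive, _ = quad(f, -1.0, 1.0)             # adaptive quadrature

print(f"exact    = {exact:.6f}")
print(f"Gauss-5  = {g5:.6f}  (rel. error {abs(g5 - exact) / exact:.2%})")
print(f"adaptive = {adaptive:.6f}  (rel. error {abs(adaptive - exact) / exact:.2%})")
```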
Procedia PDF Downloads 364
1318 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications
Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain
Abstract:
In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can end in instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors are occurring is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we have carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V by probing some of the processor registers. Probes are then recorded in a ring buffer. Associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information. Typical usage is for operating system context switch recording. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results. Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application
Procedia PDF Downloads 229
1317 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, including initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration. Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
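A minimal rejection-ABC sketch of the calibration idea: parameter draws from priors are kept only when the simulated runoff is close to the observations, yielding a posterior sample and credible intervals. The two-parameter rainfall-runoff transform, the priors, and the RMSE tolerance below are illustrative stand-ins for the four-parameter time-area model calibrated in the study, and the example is in Python rather than on the R platform.

```python
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(2.0, 3.0, size=60)                       # synthetic rainfall series

def simulate(rain, initial_loss, reduction_factor):
    effective = np.clip(rain - initial_loss, 0.0, None)
    return reduction_factor * effective

observed = simulate(rain, 1.5, 0.6) + rng.normal(0, 0.2, rain.size)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

n_draws, tol = 50_000, 0.3
loss_prior = rng.uniform(0.0, 5.0, n_draws)               # prior on initial loss
rf_prior = rng.uniform(0.1, 1.0, n_draws)                 # prior on reduction factor

accepted = [(il, rf) for il, rf in zip(loss_prior, rf_prior)
            if rmse(simulate(rain, il, rf), observed) < tol]
post = np.array(accepted)
print(f"accepted {len(post)} draws")
print("posterior mean (initial loss, reduction factor):", post.mean(axis=0))
print("95% credible intervals:", np.percentile(post, [2.5, 97.5], axis=0))
```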
Procedia PDF Downloads 309
1316 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network
Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar
Abstract:
Fiber-based access networks can deliver performance that can support the increasing demand for high-speed connections. One of the new technologies that have emerged in recent years is the Passive Optical Network. This paper is targeted at showing the simultaneous delivery of triple play services (data, voice and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate. Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 734
1315 Use of Fractal Geometry in Machine Learning
Authors: Fuad M. Alkoot
Abstract:
The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, many classifiers are combined using any of the available methods to reduce the classifier error rate. The benefits gained from the combination of multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate using fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy. Keywords: fractal geometry, machine learning, classifier, fractal dimension
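One plausible way to "measure the fractal dimension of data", sketched below, is a box-counting estimate over a 2-D dataset: count occupied boxes at several scales and fit the slope of log(count) against log(1/size). The abstract does not specify which estimator is used, so this choice, and the synthetic data, are assumptions.

```python
import numpy as np

def box_counting_dimension(points, sizes=(1/2, 1/4, 1/8, 1/16, 1/32)):
    """Estimate the box-counting dimension of a 2-D point set."""
    pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)  # rescale to the unit square
    counts = []
    for s in sizes:
        cells = np.floor(pts / s).astype(int)           # grid cell index of each point
        counts.append(len(np.unique(cells, axis=0)))    # number of occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 2))          # a filled 2-D cloud -> dimension close to 2
print(f"estimated box-counting dimension: {box_counting_dimension(data):.2f}")
```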
Procedia PDF Downloads 218
1314 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore
Authors: Qiao-Yu Warren Cai
Abstract:
Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners’ language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of the research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can be used to make summations about the subjects or samples that the research has actually measured in order to describe the numerical data, but the collected data cannot be generalized to the population. Comte, a French positivist, argued in the 19th century that human beings’ knowledge, whether the discipline is humanistic and social science or natural science, should be verified in a scientific way to construct a universal theory that explains the truth and human behavior. Inferential statistics, able to make judgments of the probability that a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and to infer from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte’s argument in the field of TCSOL. Also, inferential statistics is a core of quantitative research, but little research has been conducted by combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners’ language corpus output errors by using the one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples’ Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study aims to utilize the one-way ANOVA to analyze corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in ‘shì (是) sentence’ and word order errors, but compared with Americans and Singaporeans, it is significantly easier for learners from Myanmar to produce ‘sentence blends.’ Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can have (and use) learning strategies to lower errors. Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans
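The inferential test named above can be run as sketched below: a one-way ANOVA comparing per-learner error counts across the three L1 groups. The counts are synthetic placeholders, not the study's corpus data.

```python
from scipy import stats

# Hypothetical 'sentence blend' error counts per learner, one list per L1 group.
errors_usa = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
errors_myanmar = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
errors_singapore = [2, 2, 3, 1, 2, 3, 2, 2, 1, 3]

f_stat, p_value = stats.f_oneway(errors_usa, errors_myanmar, errors_singapore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group differs significantly in error frequency.")
```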
Procedia PDF Downloads 106
1313 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform
Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park
Abstract:
Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors called gimbals, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted on a gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform. The target information, such as its location or velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle. This can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles. However, few researchers have focused on ship-borne EO stabilized platforms for surface ships. In this paper, we deal with a target localization method when an EO stabilized platform is located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we show the effectiveness of the approach through computer simulations. Keywords: target localization, ship-borne electro-optical stabilized platform, unscented Kalman filter
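A flat-earth sketch of the grazing-angle problem described above: with only the depression angle and the mast height, the horizontal range of a sea-level target is h / tan(depression), which is extremely sensitive to a small gimbal error at the low elevation angles of a surface ship; the LRF slant range removes most of this sensitivity. All numbers are illustrative, and the UKF-based fusion in the paper is not reproduced here.

```python
import numpy as np

h = 20.0                        # sensor height above the sea surface, m (assumed)
true_range = 5000.0             # true horizontal range to the target, m (assumed)
dep = np.arctan(h / true_range)             # true depression angle, rad
angle_error = np.radians(0.05)              # 0.05 deg pointing/stabilization error

# Angle-only ranging (no LRF): R = h / tan(depression)
range_angle_only = h / np.tan(dep + angle_error)

# LRF-aided ranging: horizontal range from slant range and depression angle
slant = np.hypot(h, true_range)
range_lrf = slant * np.cos(dep + angle_error)

print(f"true horizontal range : {true_range:.0f} m")
print(f"angle-only estimate   : {range_angle_only:.0f} m "
      f"(error {abs(range_angle_only - true_range):.0f} m)")
print(f"LRF-aided estimate    : {range_lrf:.0f} m "
      f"(error {abs(range_lrf - true_range):.2f} m)")
```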
Procedia PDF Downloads 520
1312 Improving Functionality of Radiotherapy Department Through: Systemic Periodic Clinical Audits
Authors: Kamal Kaushik, Trisha, Dandapni, Sambit Nanda, A. Mukherjee, S. Pradhan
Abstract:
INTRODUCTION: As complexity in radiotherapy practice and processes is increasing, there is a need to assure quality control to a greater extent. At present, no international literature is available regarding the optimal quality control indicators for radiotherapy; moreover, few clinical audits have been conducted in the field of radiotherapy. The primary aim is to improve the processes that directly impact clinical outcomes for patients in terms of patient safety and quality of care. PROCEDURE: A team of an oncologist, a medical physicist and a radiation therapist was formed for weekly clinical audits of patients undergoing radiotherapy. The audit stages include pre-planning, simulation, planning, daily QA, and implementation and execution (with image guidance). Errors in all parts of the chain were evaluated and recorded for the development of further departmental protocols for radiotherapy. EVALUATION: The errors at various stages of the radiotherapy chain were evaluated and recorded for comparison before and after starting the clinical audits in the department of radiotherapy. The stage in which the most errors were recorded was also identified. The clinical audits were used to structure standard protocols (in the form of checklists) in the department of radiotherapy, which may further reduce the occurrence of clinical errors in the chain of radiotherapy. RESULTS: The aim of this study is to compare the number of errors in different parts of the RT chain between two groups (A - before audit and B - after audit). Group A: 94 pts. (48 males, 46 females), total no. of errors in the RT chain: 19 (9 needed re-simulation). Group B: 94 pts. (61 males, 33 females), total no. of errors in the RT chain: 8 (4 needed re-simulation). CONCLUSION: After systematic periodic clinical audits, the percentage of errors in the radiotherapy process was reduced by more than 50% within 2 months. There is a great need to improve quality control in radiotherapy, and the role of clinical audits can only grow. Although clinical audits are time-consuming and complex undertakings, the potential benefits in terms of identifying and rectifying errors in quality control procedures are potentially enormous. Radiotherapy is a chain of various processes, so there is always a probability of an error occurring in any part of the chain, which may propagate further along the chain until the execution of treatment. Structuring departmental protocols and policies helps in reducing, if not completely eradicating, the occurrence of such incidents. Keywords: audit, clinical, radiotherapy, improving functionality
Procedia PDF Downloads 88
1311 Design of a Fuzzy Luenberger Observer for Fault Nonlinear System
Authors: Mounir Bekaik, Messaoud Ramdani
Abstract:
We present in this work a new stabilization technique for nonlinear systems with faults. The approach we adopt focuses on a fuzzy Luenberger observer. The T-S approximation of the nonlinear observer is based on the fuzzy C-means clustering algorithm to find local linear subsystems. The MOESP identification approach was applied to design an empirical model describing the subsystems' state variables. The gain of the observer is given by the minimization of the estimation error through a Lyapunov-Krasovskii functional and an LMI approach. We consider a three-tank hydraulic system as an illustrative example. Keywords: nonlinear system, fuzzy, faults, TS, Lyapunov-Krasovskii, observer
Procedia PDF Downloads 333
1310 BER Estimate of WCDMA Systems with MATLAB Simulation Model
Authors: Suyeb Ahmed Khan, Mahmood Mian
Abstract:
Simulation plays an important role during all phases of the design and engineering of communications systems, from the early stages of conceptual design through the various stages of implementation, testing, and fielding of the system. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. This model describes multiuser effects and the calculation of the BER (Bit Error Rate) in 3G mobile systems using Simulink in MATLAB 7.1. A Gaussian approximation defines the multi-user effect on system performance. The BER has been analyzed by comparing the transmitted and received data. Keywords: WCDMA, simulations, BER, MATLAB
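A minimal sketch of the BER-by-comparison idea: Monte Carlo detection of BPSK over AWGN, with the simulated error rate checked against the Q-function expression that Gaussian-approximation analyses build on. The full WCDMA Simulink chain with spreading and multiuser interference is not reproduced here; interference would appear as additional Gaussian noise under that approximation.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n_bits = 1_000_000

for ebno_db in (0, 2, 4, 6, 8):
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                                   # BPSK mapping 0/1 -> -1/+1
    noise = rng.normal(0, np.sqrt(1 / (2 * ebno)), n_bits)   # AWGN, unit symbol energy
    detected = (symbols + noise) > 0
    ber_sim = np.mean(detected != bits)                      # compare transmitted vs received
    ber_theory = 0.5 * erfc(np.sqrt(ebno))                   # Q(sqrt(2 Eb/N0))
    print(f"Eb/N0 = {ebno_db} dB: simulated {ber_sim:.2e}, theory {ber_theory:.2e}")
```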
Procedia PDF Downloads 592
1309 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel
Authors: Said Elkassimi, Said Safi, B. Manaut
Abstract:
This paper presents a study of three algorithms: the equalization algorithm used to equalize the transmission channel with the ZF and MMSE criteria, applied to the BRAN A channel, and the adaptive filtering algorithms LMS and RLS used to estimate the parameters of the equalizer filter, i.e., to track the channel estimate and thus reflect the temporal variations of the channel and reduce the error in the transmitted signal. The performance of the equalizer with the ZF and MMSE criteria is presented for the noiseless case, together with a comparison of the performance of the LMS and RLS algorithms. Keywords: adaptive filtering, second equalizer, LMS, RLS, Bran A, Proakis (B), MMSE, ZF
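A compact sketch of the LMS part of the study: an adaptive FIR equalizer trained on known BPSK symbols over a short FIR channel. The 3-tap channel is a toy stand-in (the BRAN A taps are not reproduced here), and the step size, filter length, and decision delay are illustrative; an RLS version would follow the same structure with a different weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
symbols = 2 * rng.integers(0, 2, n) - 1                 # BPSK training sequence
channel = np.array([1.0, 0.4, -0.2])                    # toy channel impulse response
received = np.convolve(symbols, channel)[:n] + 0.05 * rng.normal(size=n)

n_taps, mu = 11, 0.01                                   # equalizer length and LMS step size
w = np.zeros(n_taps)
errors = np.zeros(n)
for k in range(n_taps, n):
    x = received[k - n_taps:k][::-1]                    # regressor, most recent sample first
    e = symbols[k - n_taps // 2] - w @ x                # error vs the delayed desired symbol
    w += mu * e * x                                     # LMS weight update
    errors[k] = e ** 2

print(f"mean squared error over the last 500 symbols: {errors[-500:].mean():.4f}")
```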
Procedia PDF Downloads 313
1308 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine
Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy
Abstract:
This paper presents the results of research to verify the combustion model of the Asz62-IR aircraft piston engine. This engine was modernized, and a new type of ignition system was developed. Due to the high cost of experiments on a nine-cylinder 1,000 hp aircraft engine, a simulation technique should be applied. Therefore, computational fluid dynamics to simulate the combustion process is a reasonable solution. Accordingly, tests for varied ignition advance angles were carried out and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in the research had two spark plugs for each cylinder, and the ignition advance angles had to be set up separately for each spark. The results of the simulation were verified by comparing the pressure in the cylinder. The courses of the indicated pressure of the engine mounted on a test stand were compared. The real course of pressure was measured with an optical sensor mounted in a specially drilled hole between the valves. It was the OPTRAND pressure sensor, which was designed especially for engine combustion research. The indicated pressure was measured in cylinder no. 3. The engine was running at take-off power. The engine was loaded by a propeller at a special test bench. The verification of the CFD simulation results was based on the results of the test bench studies. The course of the simulated pressure obtained is within the measurement error of the optical sensor. This error is 1% and reflects the hysteresis and nonlinearity of the sensor. The real indicated pressure measured in the cylinder and the pressure taken from the simulation were compared. It can be claimed that the verification of the CFD simulations based on the pressure was a success. The next step was research on the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process. Shifting the ignition timing between spark plugs 1 and 2 results in longer and uneven firing of the mixture. The optimal point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but separated ignitions are used to ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. However, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows us to set up the ignition point with two spark plugs to achieve as much power as possible. Keywords: CFD model, combustion, engine, simulation
Procedia PDF Downloads 361
1307 Investigation of the Flow in Impeller Sidewall Gap of a Centrifugal Pump Using CFD
Authors: Mohammadreza DaqiqShirazi, Rouhollah Torabi, Alireza Riasi, Ahmad Nourbakhsh
Abstract:
In this paper, the flow in the sidewall gap of an impeller belonging to a centrifugal pump is studied using a numerical method. The flow in the sidewall gap forms internal leakage and is the source of the “disk friction loss”, which is the most important cause of reduced efficiency in low specific speed centrifugal pumps. The simulation is done using the CFX software and a high-quality mesh; therefore, the modeling error has been reduced. The Navier-Stokes equations have been solved for this domain. In order to predict the turbulence effects, the SST model has been employed. Keywords: numerical study, centrifugal pumps, disk friction loss, sidewall gap
Procedia PDF Downloads 530
1306 A Study on Inference from Distance Variables in Hedonic Regression
Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro
Abstract:
In urban areas, several landmarks may affect housing prices and rents, so hedonic analysis should employ distance variables corresponding to each landmark. Unfortunately, the estimated effects of distances to landmarks on housing prices are generally not consistent with the true price. These distance variables may cause magnitude errors in the regression, pointing to a problem of spatial multicollinearity. In this paper, we provide some approaches for obtaining samples with less bias and a method for locating the specific sampling area to avoid the multicollinearity problem in the case of two specific landmarks. Keywords: landmarks, hedonic regression, distance variables, collinearity, multicollinearity
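The collinearity problem above can be diagnosed as sketched below: when two landmarks sit close together, the distances from sampled dwellings to each of them are nearly collinear and the variance inflation factors (VIFs) blow up. The sample locations, landmark coordinates, and VIF computation are illustrative assumptions, not the paper's data or sampling method.

```python
import numpy as np

rng = np.random.default_rng(0)
homes = rng.uniform(0, 10, size=(500, 2))               # sampled housing locations
landmark_a = np.array([2.0, 2.0])
landmark_b = np.array([2.5, 2.3])                       # a landmark close to the first one

d_a = np.linalg.norm(homes - landmark_a, axis=1)        # distance variables
d_b = np.linalg.norm(homes - landmark_b, axis=1)

def vif(target, others):
    """Variance inflation factor: regress one distance variable on the others."""
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

print(f"corr(d_A, d_B) = {np.corrcoef(d_a, d_b)[0, 1]:.3f}")
print(f"VIF(d_A) = {vif(d_a, [d_b]):.1f}, VIF(d_B) = {vif(d_b, [d_a]):.1f}")
```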
Procedia PDF Downloads 452
1305 Grid Connected Photovoltaic Micro Inverter
Authors: S. J. Bindhu, Edwina G. Rodrigues, Jijo Balakrishnan
Abstract:
A grid-connected photovoltaic (PV) micro inverter with good performance properties is proposed in this paper. The proposed inverter uses a quadrupler, giving higher efficiency and lower voltage stress across the diodes. The stress across the diodes used in the inverter section is considerably low in the proposed converter, and the protection scheme that we provide can eliminate the chance of errors due to faults. The proposed converter is implemented using the perturb and observe algorithm so that the fluctuation in the voltage can be reduced and the maximum power point can be attained. Finally, some simulation and experimental results are also presented to demonstrate the effectiveness of the proposed converter. Keywords: DC-DC converter, MPPT, quadrupler, PV panel
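A minimal sketch of the perturb-and-observe loop mentioned above: the operating voltage is stepped in one direction, and the direction is reversed whenever the measured power drops, so the operating point settles around the maximum power point. The toy power-voltage curve, the initial voltage, and the step size are illustrative; a real controller would read voltage and current from the converter instead of evaluating pv_power().

```python
def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Very rough P-V curve: power peaks somewhat below the open-circuit voltage."""
    if v <= 0 or v >= v_oc:
        return 0.0
    current = i_sc * (1 - (v / v_oc) ** 8)
    return v * current

def perturb_and_observe(v=20.0, step=0.2, iterations=200):
    p_prev = pv_power(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point after P&O: V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```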
Procedia PDF Downloads 842
1304 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-fed Sesame (Sesamum indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu, Tigray
Abstract:
Sesame is an important oilseed crop in Ethiopia and the second most exported agricultural commodity next to coffee. However, soil fertility management and the research-led farming system for the crop are poor. The AquaCrop model was applied as a decision-support tool; it performs a semi-quantitative approach to simulate the yield of crops under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. The experiment comprised four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha nitrogen) and three improved varieties (Setit-1, Setit-2, and Humera-1). In the meantime, growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99; the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool in soil fertility management in sesame production. Keywords: AquaCrop model, sesame, normalized water productivity, nitrogen fertilizer
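The goodness-of-fit statistics used above can be computed as sketched below, assuming E is the Nash-Sutcliffe model efficiency and D is Willmott's index of agreement, the usual definitions in AquaCrop evaluation studies. The observed and simulated values are placeholders, not the trial data.

```python
import numpy as np

obs = np.array([0.45, 0.61, 0.72, 0.55, 0.68, 0.80])   # e.g. observed biomass, t/ha (placeholder)
sim = np.array([0.43, 0.65, 0.70, 0.58, 0.66, 0.83])   # simulated by the model (placeholder)

r2 = np.corrcoef(obs, sim)[0, 1] ** 2                  # coefficient of determination
rmse = np.sqrt(np.mean((sim - obs) ** 2))              # root mean square error
nrmse = 100 * rmse / obs.mean()                        # normalized RMSE, % of the observed mean
e_ns = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)   # model efficiency
d = 1 - np.sum((sim - obs) ** 2) / np.sum(
    (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)         # degree of agreement

print(f"R2 = {r2:.2f}, RMSE = {rmse:.3f} t/ha, N-RMSE = {nrmse:.1f}%, "
      f"E = {e_ns:.2f}, D = {d:.2f}")
```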
Procedia PDF Downloads 75
1303 A Cooperative Space-Time Transmission Scheme Based On Symbol Combinations
Authors: Keunhong Chae, Seokho Yoon
Abstract:
This paper proposes a cooperative Alamouti space-time transmission scheme with low relay complexity for cooperative communication systems. In the proposed scheme, the source node combines the data symbols so that the Alamouti-coded form is constructed at the destination node, whereas the conventional scheme performs the corresponding operations at the relay nodes. Simulation results show that the proposed scheme achieves second-order cooperative diversity while maintaining the same bit error rate (BER) performance as the conventional scheme. Keywords: space-time transmission, cooperative communication system, MIMO
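For reference, the sketch below shows the Alamouti space-time block encoding and linear combining that the cooperative scheme reconstructs at the destination, written here for a point-to-point two-branch flat-fading link; the source/relay signalling of the proposed protocol is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = np.exp(1j * np.pi / 4), np.exp(-1j * 3 * np.pi / 4)           # two QPSK symbols
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # flat-fading gains
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Slot 1 carries (s1, s2) on the two branches; slot 2 carries (-s2*, s1*).
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining recovers each symbol scaled by the real gain |h1|^2 + |h2|^2.
s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
gain = abs(h1) ** 2 + abs(h2) ** 2

print("s1 estimate:", s1_hat / gain, "vs transmitted", s1)
print("s2 estimate:", s2_hat / gain, "vs transmitted", s2)
```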
Procedia PDF Downloads 352
1302 Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements
Authors: Shagufta Tabassum
Abstract:
The study of dielectric properties in a binary mixture of liquids is very useful to understand the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperation and molecular dynamics of H-bonded systems. In this paper, we discuss the basic calibration and normalization procedure for time-domain reflectometry measurements. Our approach is to explain the different types of errors that occur during TDR measurements and how these errors can be eliminated or minimized. Keywords: time domain reflectometry measurement technique, cable and connector loss, oscilloscope loss, normalization technique
Procedia PDF Downloads 206
1301 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
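A compact sketch of the PSO-optimized ANN idea described above: a global-best particle swarm searches the weights of a small one-hidden-layer network to minimize the mean squared error on a cost-regression dataset. The synthetic data, the tiny network, and the PSO settings are illustrative stand-ins, not the road-reconstruction dataset, architecture, or hyperparameters of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))           # e.g. estimate, resources, progress (synthetic)
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] ** 2 + 0.02 * rng.normal(size=200)

n_in, n_hid = 3, 5
n_w = n_in * n_hid + n_hid + n_hid + 1         # W1, b1, W2, b2 flattened into one vector

def predict(w, X):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Standard global-best PSO over the flattened weight vector
n_particles, n_iter = 40, 300
pos = rng.normal(0, 0.5, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, n_w))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best training MSE found by PSO: {pbest_val.min():.5f}")
```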
Procedia PDF Downloads 60
1300 Calibration and Validation of the Aquacrop Model for Simulating Growth and Yield of Rain-Fed Sesame (Sesamum Indicum L.) Under Different Soil Fertility Levels in the Semi-arid Areas of Tigray, Ethiopia
Authors: Abadi Berhane, Walelign Worku, Berhanu Abrha, Gebre Hadgu
Abstract:
Sesame is an important oilseed crop in Ethiopia and the second most exported agricultural commodity next to coffee. However, soil fertility management and the research-led farming system for the crop are poor. The AquaCrop model was applied as a decision-support tool, which performs a semi-quantitative approach to simulate the yield of crops under different soil fertility levels. The objective of this experiment was to calibrate and validate the AquaCrop model for simulating the growth and yield of sesame under different nitrogen fertilizer levels and to test the performance of the model as a decision-support tool for improved sesame cultivation in the study area. The experiment was laid out as a randomized complete block design (RCBD) in a factorial arrangement in the 2016, 2017, and 2018 main cropping seasons. The experiment comprised four nitrogen fertilizer rates (0, 23, 46, and 69 kg/ha nitrogen) and three improved varieties (Setit-1, Setit-2, and Humera-1). In the meantime, growth, yield, and yield components of sesame were collected from each treatment. The coefficient of determination (R2), root mean square error (RMSE), normalized root mean square error (N-RMSE), model efficiency (E), and degree of agreement (D) were used to test the performance of the model. The results indicated that the AquaCrop model successfully simulated soil water content, with R2 varying from 0.92 to 0.98, RMSE from 6.5 to 13.9 mm, E from 0.78 to 0.94, and D from 0.95 to 0.99, and the corresponding values for aboveground biomass (AB) varied from 0.92 to 0.98, 0.33 to 0.54 tons/ha, 0.74 to 0.93, and 0.9 to 0.98, respectively. The results on the canopy cover of sesame also showed that the model acceptably simulated canopy cover, with R2 varying from 0.95 to 0.99 and an RMSE of 5.3 to 8.6%. The AquaCrop model was appropriately calibrated to simulate soil water content, canopy cover, aboveground biomass, and sesame yield; the results indicated that the model adequately simulated the growth and yield of sesame under the different nitrogen fertilizer levels. The AquaCrop model might be an important tool for improved soil fertility management and yield enhancement strategies for sesame. Hence, the model might be applied as a decision-support tool in soil fertility management in sesame production. Keywords: aquacrop model, normalized water productivity, nitrogen fertilizer, canopy cover, sesame
Procedia PDF Downloads 79