Search results for: vector error correction model.
5827 An Enhanced Situational Awareness of AUV's Mission by Multirate Neural Control
Authors: Igor Astrov, Mikhail Pikkov
Abstract:
This paper focuses on a critical component of situational awareness (SA): the neural control of the depth flight of an autonomous underwater vehicle (AUV). Constant-depth flight is a challenging but important task for AUVs aiming at a high level of autonomy under adverse conditions. Within the SA strategy, we propose multirate neural control of an AUV trajectory using a neural network model reference controller for a nontrivial mid-small size AUV "r2D4" stochastic model. The control system has been demonstrated and evaluated by simulating diving maneuvers in the Simulink software package. The simulation results show that the chosen AUV model is stable in the presence of high noise, and suggest that fast SA, with economical use of battery energy, can be achieved by similar AUV systems during underwater search-and-rescue missions.
Keywords: Autonomous underwater vehicles, multirate systems, neurocontrollers, situational awareness.
5826 FAT Based Adaptive Impedance Control for Unknown Environment Position
Authors: N. Z. Azlan, H. Yamaura
Abstract:
This paper presents Function Approximation Technique (FAT) based adaptive impedance control for a robotic finger. The force based impedance control is developed so that the robotic finger tracks the desired force while following the reference position trajectory, under unknown environment position and uncertainties in the finger parameters. The control strategy is divided into two phases: the free phase and the contact phase. Force error feedback is utilized in updating the uncertain environment position during the contact phase. Computer simulation results are presented to demonstrate the effectiveness of the proposed technique.
Keywords: Adaptive impedance control, force based impedance control, force control, Function Approximation Technique (FAT), unknown environment position.
5825 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model predicts churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in the bill amounts.
Keywords: Customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks, ANN.
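A minimal sketch of the kind of ANN churn classifier described above, using scikit-learn. The synthetic data, feature layout (monthly bills plus month-to-month changes as the "extended feature set"), and network size are illustrative assumptions, not the authors' configuration, and the accuracy figures from the abstract are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Toy stand-in for customer expenditure data: three monthly bills plus
# month-to-month changes as the extended feature set (assumed layout).
rng = np.random.default_rng(0)
n = 1000
bills = rng.gamma(shape=2.0, scale=30.0, size=(n, 3))
deltas = np.diff(bills, axis=1)                      # changes in bill amounts
X = np.hstack([bills, deltas])
y = (rng.random(n) < 0.2).astype(int)                # synthetic churn labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Small feed-forward ANN; the architecture is an assumption for illustration.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

churn_prob = clf.predict_proba(scaler.transform(X_te))[:, 1]  # churn probabilities
print("accuracy:", accuracy_score(y_te, (churn_prob > 0.5).astype(int)))
```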
5824 Consideration of Magnetic Lines of Force as Magnets Produced by Percussion Waves
Authors: Angel Pérez Sánchez
Abstract:
Considering magnetic lines of force as a vector magnetic current was introduced by convention around 1830. However, this leads to a dead end in traditional physics, and quantum explanations must be invoked to account for the magnetic phenomenon. A study of magnetic lines as percussive waves, by contrast, leads to other paths capable of interpreting magnetism through traditional physics. The concept was explored by examining the behavior of two parallel electric current cables, which attract each other when the currents flow in the same direction, and by applying this behavior at a microscopic level inside magnets. Considering magnetic lines as magnets themselves would mean a paradigm shift in the study of magnetism and open the way to solutions for mysteries of magnetism so far revealed only by quantum mechanics. The study describes how a magnetic field is created, explains how magnetic attraction and repulsion work and how magnets behave when they are split, and argues for the impossibility of a magnetic monopole. All of this is presented as if it were a symphony in which all the notes fit together perfectly to create a beautiful, smart, and simple work.
Keywords: Magnetic lines of force, magnetic field, magnetic attraction and repulsion, magnet split, magnetic monopole, magnetic lines of force as magnets, magnetic lines of force as waves.
5823 Studies on Properties of Knowledge Dependency and Reduction Algorithm in Tolerance Rough Set Model
Authors: Chen Wu, Lijuan Wang
Abstract:
The relations among tolerance classes, indispensable attributes, and knowledge dependency in the rough set model with a tolerance relation are explored. After giving definitions and concepts of knowledge dependency and knowledge dependency degree for incomplete information systems in the tolerance rough set model, distinguishing whether or not the decision attribute contains missing attribute values, it is proved that complete knowledge dependency maintains the reflexivity, transitivity, augmentation, decomposition, and merge laws. Knowledge dependency degrees (as opposed to complete knowledge dependency degrees) satisfy only some of these laws under the transitivity, augmentation, and decomposition operations. An algorithm for attribute reduction in an incomplete decision table is designed, and its correctness is checked with an example.
Keywords: Incomplete information system, rough set, tolerance relation, knowledge dependence, attribute reduction.
5822 A Case Study of Reactive Focus on Form through Negotiation on Spoken Errors: Does It Work for All Learners?
Authors: Vahid Parvaresh, Zohre Kassaian, Saeed Ketabi, Masoud Saeedi
Abstract:
This case study investigates the effects of reactive focus on form through negotiation on the linguistic development of an adult EFL learner in an exclusive private EFL classroom. The findings revealed that in this classroom negotiated feedback occurred significantly more often than non-negotiated feedback. However, it was also found that in the long run the learner was significantly more successful in correcting his own errors when he had received non-negotiated feedback rather than negotiated feedback. This study therefore argues that although negotiated feedback seems to be effective for some learners in the short run, it is non-negotiated feedback that seems to be more effective in the long run. This long-lasting effect might be attributed to the impact of the schooling system, which is itself indicative of the dominant culture, or to the absence of other interlocutors in the course of interaction.
Keywords: Error, feedback, focus on form, interaction, schooling.
5821 XML Schema Automatic Matching Solution
Authors: Huynh Quyet Thang, Vo Sy Nam
Abstract:
Schema matching plays a key role in many different applications, such as schema integration, data integration, data warehousing, data transformation, e-commerce, peer-to-peer data management, ontology matching and integration, the semantic Web, and semantic query processing. Manual matching is expensive and error-prone, so it is important to develop techniques to automate the schema matching process. In this paper, we present a solution to the XML schema automated matching problem which produces semantic mappings between corresponding schema elements of given source and target schemas. This solution contributes to solving the XML schema automated matching problem more comprehensively and efficiently. Our solution is based on combining the linguistic similarity, data type compatibility, and structural similarity of XML schema elements. After describing our solution, we present experimental results that demonstrate the effectiveness of this approach.
Keywords: XML Schema, Schema Matching, Semantic Matching, Automatic XML Schema Matching.
5820 A Statistical Approach for Predicting and Optimizing Depth of Cut in AWJ Machining for 6063-T6 Al Alloy
Authors: Farhad Kolahan, A. Hamid Khajavi
Abstract:
In this paper, a set of experimental data has been used to assess the influence of abrasive water jet (AWJ) process parameters in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. The effects of these input parameters are studied on the depth of cut (h), one of the most important characteristics of AWJ machining. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. In the next stage, the proposed model is embedded into a Simulated Annealing (SA) algorithm to optimize the AWJ process parameters. The objective is to determine a suitable set of process parameters that can produce the desired depth of cut, considering the ranges of the process parameters. Computational results prove the effectiveness of the proposed model and optimization procedure.
Keywords: AWJ machining, Mathematical modeling, Simulated Annealing, Optimization
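A minimal sketch of how a fitted regression model for depth of cut can be embedded in a simulated annealing search, as the abstract describes. The regression coefficients, parameter bounds, and target depth below are made-up placeholders, not the fitted model or ranges from the paper.

```python
import math
import random

random.seed(1)

# Hypothetical regression model h = f(d, v, p, m): nozzle diameter d (mm),
# traverse rate v (mm/min), pressure p (MPa), abrasive flow rate m (g/s).
# Coefficients are placeholders, not the paper's fitted values.
def depth_of_cut(x):
    d, v, p, m = x
    return 0.8 + 2.0 * d - 0.004 * v + 0.015 * p + 0.3 * m

BOUNDS = [(0.8, 1.2), (100, 500), (100, 350), (1.0, 8.0)]
TARGET_H = 6.0                                # desired depth of cut (mm), assumed

def cost(x):
    return abs(depth_of_cut(x) - TARGET_H)

def neighbour(x):
    # Perturb one randomly chosen parameter within its bounds.
    i = random.randrange(len(x))
    lo, hi = BOUNDS[i]
    y = list(x)
    y[i] = min(hi, max(lo, x[i] + random.gauss(0, 0.05 * (hi - lo))))
    return y

x = [random.uniform(lo, hi) for lo, hi in BOUNDS]
best, T = x, 1.0
for _ in range(5000):
    cand = neighbour(x)
    delta = cost(cand) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if cost(x) < cost(best):
            best = x
    T *= 0.999                                # geometric cooling schedule

print("best parameters:", [round(v, 3) for v in best],
      "h =", round(depth_of_cut(best), 3))
```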
5819 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: Computer vision, deep learning, object detection, semiconductor.
5818 Production of As Isotopes in the Interaction of natGe with 14-30 MeV Protons
Authors: Yong H. Chung, Eun J. Han, Seil Lee, Sun Y. Park, Eun H. Yoon, Eun J. Cho, Jang H. Lee, Young J. Chu, Jang H. Ha, Jongseo Chai, Yu S. Kim, Min Y. Lee, Hyeyoung Lee
Abstract:
Cross sections of As radionuclides produced in the interaction of natGe with 14-30 MeV protons have been deduced by off-line γ-ray spectroscopy to find optimal reaction channels leading to radiotracers for positron emission tomography. The experimental results were compared with previously reported results and with those estimated by the compound nucleus reaction model.
Keywords: Compound nucleus reaction model, off-line γ-ray spectroscopy, radionuclide.
5817 Optimization of PEM Fuel Cell Biphasic Model
Authors: Boubekeur Dokkar, Nasreddine Chennouf, Noureddine Settou, Belkhir Negrou, Abdesslam Benmhidi
Abstract:
The optimal operation of a proton exchange membrane fuel cell (PEMFC) requires good water management, where water is present in two forms, vapor and liquid. Moreover, reaching higher output requires the integration of accessories which themselves need electrical power. In order to analyze fuel cell operation and the transport phenomena of the different species, a biphasic mathematical model is presented as a set of governing equations. The numerical solution of these conservation equations is computed with a MATLAB program. A multi-criteria optimization, with weighting between two opposing objectives, is used to determine the compromise solutions between maximum output and minimal stack size. The obtained results are in good agreement with available literature data.
Keywords: Biphasic model, PEM fuel cell, optimization, simulation, species transport.
5816 The Size Effects of Keyboards (Keycaps) on Computer Typing Tasks
Authors: Chih-Chun Lai, Jun-Yu Wang
Abstract:
The keyboard is the most important piece of equipment for computer tasks. However, improper keyboard design can cause symptoms such as ulnar and/or radial deviation. The research goal of this study was to investigate the optimal size(s) of keycaps to increase typing efficiency. As shown in the questionnaire pre-study with 49 participants aged from 20 to 44, the most commonly used keyboards were 101-key standard keyboards. Most of the keycap sizes (W×L) were 1.3×1.5 cm and 1.5×1.5 cm. The fingertip breadths of most participants were 1.2 cm. Therefore, in the main study with 18 participants, a standard keyboard fitted with each of the three keycap sizes (1.2×1.4 cm, 1.3×1.5 cm, and 1.5×1.5 cm) was used to investigate typing efficiency. The results revealed that the difference in operating times between the 1.3×1.5 cm and 1.2×1.4 cm keycaps was insignificant, while operating times for the 1.5×1.5 cm keycaps were significantly longer than for either the 1.2×1.4 cm or the 1.3×1.5 cm keycaps. As for typing error rate, there was no significant difference.
Keywords: Keyboard, Keycap size, Typing efficiency.
5815 Mathematical Modeling of Storm Surge in Three Dimensional Primitive Equations
Authors: Worachat Wannawong, Usa W. Humphries, Prungchan Wongwises, Suphat Vongvisessomjai
Abstract:
The mathematical modeling of storm surge in sea and coastal regions such as the South China Sea (SCS) and the Gulf of Thailand (GoT) is important for studying typhoon characteristics. The storm surge causes inundation at the lateral boundary in the coastal zones, particularly in the GoT and some parts of the SCS. High-resolution model simulations based on the three-dimensional primitive equations are important for protecting local property and human life from typhoon surges. In the present study, the mathematical model is used to simulate the typhoon-induced surges in three case studies of Typhoon Linda (1997). The results of the model simulations at the tide gauge stations describe the characteristics of the storm surges in the coastal zones.
Keywords: Lateral boundary, mathematical modeling, numerical simulations, three dimensional primitive equations, storm surge.
5814 Equilibrium and Rate Based Simulation of MTBE Reactive Distillation Column
Authors: Debashish Panda, Kannan A.
Abstract:
Equilibrium and rate based models have been applied in the simulation of methyl tertiary-butyl ether (MTBE) synthesis through reactive distillation. Temperature and composition profiles were compared for the two models; the profile trends, though qualitatively similar, are significantly different quantitatively. In the rate based method (RBM), multicomponent mass transfer coefficients have been incorporated to describe interphase mass transfer. The MTBE mole fraction in the bottom stream is found to be 0.9914 for the equilibrium model (EQM) and only 0.9904 for the RBM when the same column configuration is preserved. Individual tray efficiencies were incorporated in the EQM and simulations were carried out. Dynamic simulations have also been carried out for the two column configurations and compared.
Keywords: Aspen Plus, equilibrium stage model, methyl tertiary-butyl ether, rate based model.
5813 Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Authors: Tomoaki Hashimoto
Abstract:
Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on state and control variables have not yet been taken into account for the design of feedback control systems with random dither quantization. Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial and terminal time. An important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a control method that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization. In other words, this paper provides a method for solving the optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
Keywords: Optimal control, stochastic systems, discrete-time systems, probabilistic constraints, random dither quantization.
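As a point of reference for the formulation discussed above, a generic chance-constrained finite-horizon problem for a linear discrete-time system can be written as below; this is a textbook-style sketch, not the specific problem or quantizer model treated in the paper.

\[
\begin{aligned}
\min_{u_0,\dots,u_{N-1}}\ \ & \mathbb{E}\!\left[\sum_{k=0}^{N-1}\left(x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k\right) + x_N^{\top} P\, x_N\right] \\
\text{s.t.}\ \ & x_{k+1} = A x_k + B u_k + w_k, \qquad k = 0,\dots,N-1, \\
& \Pr\left[x_k \in \mathcal{X}\right] \ge 1 - \varepsilon, \qquad u_k \in \mathcal{U},
\end{aligned}
\]

where \(w_k\) collects the stochastic disturbance (which here would include the random dither quantization error) and \(\varepsilon\) is the admissible probability of state-constraint violation.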
5812 Nongovernmental Organisations’ Sustainable Strategic Planning and Its Impact on Donors’ Loyalty
Authors: Farah Mahmoud Attallah, Sara El-Deeb
Abstract:
The non-profit sector has been growing rapidly with the rise of sustainable development in developed and developing countries. Most economies place high expectations on this sector, believing that nongovernmental organizations (NGOs) are one of the main sources of relief during crises worldwide. However, with the rising number of NGOs comes an inability to sustain their performance and fundraising. Additionally, donors, who are considered the key partners of those organizations, have become knowledgeable about this sector, which has made them more demanding and has pressured those organizations to demonstrate a valuable return to the economy in exchange for donations. This research aims to study the impact of a sustainable strategic planning model on raising loyal donors; the proposed model presents several independent variables and determines their impact on donors' intention to become loyal.
Keywords: Non-profit sector, non-governmental organizations, strategic planning, sustainable business model.
5811 Ion Thruster Grid Lifetime Assessment Based on Its Structural Failure
Authors: Juan Li, Jiawen Qiu, Yuchuan Chu, Tianping Zhang, Wei Meng, Yanhui Jia, Xiaohui Liu
Abstract:
This article develops a numerical 3D model of the sputter erosion depth of an ion thruster optic system using the immersed finite element particle-in-cell (IFE-PIC) and Monte Carlo methods, and calculates the sputter erosion rate of the downstream surface of the accelerator grid, which is compared with LIPS-200 life test data. The results of the numerical model are in reasonable agreement with the measured data. Finally, we predict the lifetime of the 20 cm diameter ion thruster from the erosion data obtained with the model. The result demonstrates that under normal operating conditions the erosion rate of the grooves worn into the downstream surface of the accelerator grid is 34.6 μm/1000 h, which corresponds to a conservative lifetime of 11,500 hours until structural failure of the accelerator grid.
Keywords: Ion thruster, accelerator grid, sputter erosion, lifetime assessment.
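As a worked check on the figures quoted above, the lifetime follows from dividing the allowable erosion depth by the erosion rate; the roughly 0.4 mm allowable groove depth used below is inferred from the quoted numbers and is an assumption, not a value stated in the abstract.

\[
t_{\text{life}} \approx \frac{d_{\text{allow}}}{\dot d} \approx \frac{0.4\ \text{mm}}{34.6\ \mu\text{m}/1000\ \text{h}} \approx 1.16 \times 10^{4}\ \text{h},
\]

consistent with the reported conservative lifetime of 11,500 hours.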
5810 Decolourization of Melanoidin Containing Wastewater Using South African Coal Fly Ash
Authors: V.O. Ojijo, M.S. Onyango, Aoyi Ochieng, F.A.O. Otieno
Abstract:
Batch adsorption of recalcitrant melanoidin using abundantly available coal fly ash was carried out. The fly ash had a low specific surface area (SBET) of 1.7287 m²/g and a pore volume of 0.002245 cm³/g, while the predominant phases in it were evaluated qualitatively by XRD analysis. Colour removal efficiency was found to depend on the various factors studied. Maximum colour removal was achieved around pH 6, whereas increasing the sorbent mass from 10 g/L to 200 g/L enhanced colour reduction from 25% to 86% at 298 K. Spontaneity of the process was suggested by the negative Gibbs free energy, while positive enthalpy-change values showed the endothermic nature of the process. Non-linear optimization of error functions showed that the Freundlich and Redlich-Peterson isotherms describe the sorption equilibrium data best. The coal fly ash had a maximum sorption capacity of 53 mg/g and could thus be used as a low-cost adsorbent for melanoidin removal.
Keywords: Adsorption, Isotherms, Melanoidin, South African coal fly ash.
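For reference, the two isotherms that best described the equilibrium data have the standard forms below, where \(q_e\) is the equilibrium sorption capacity and \(C_e\) the equilibrium melanoidin concentration; the parameter symbols follow common usage rather than the paper's own notation.

\[
\text{Freundlich:}\quad q_e = K_F\, C_e^{1/n}, \qquad
\text{Redlich--Peterson:}\quad q_e = \frac{K_R\, C_e}{1 + a_R\, C_e^{\,g}}, \quad 0 < g \le 1 .
\]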
5809 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to get a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, high-speed particle image velocimetry (HS-PIV) measurements on an ICE3 train model have been realized in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness as well as the momentum thickness and the form factor are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by using larger roughness, especially when applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factors of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: Boundary layer, high-speed PIV, ICE3, moving train model, roughness elements.
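The integral boundary-layer parameters referred to above have the standard definitions below, with \(u(y)\) the measured velocity profile and \(U\) the velocity at the boundary-layer edge; this is textbook notation, not specific to the TSG setup.

\[
\delta^{*} = \int_{0}^{\delta}\!\left(1 - \frac{u}{U}\right) dy, \qquad
\theta = \int_{0}^{\delta}\! \frac{u}{U}\left(1 - \frac{u}{U}\right) dy, \qquad
H = \frac{\delta^{*}}{\theta},
\]

where \(\delta\) is the boundary-layer thickness and \(H\) the form factor.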
5808 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is widely used for LV segmentation, but it suffers from the drawback that the initialization of the shape model is not sufficiently close to the target, especially when dealing with the abnormal shapes found in disease. In this work, a two-step framework is improved to achieve fast and efficient LV segmentation. First, a robust and efficient detection based on a Hough forest localizes cardiac feature points. These feature points are used to predict the initial fitting of the LV shape model. Second, the ASM is applied to further fit the LV shape model to the cardiac ultrasound image. With the robust initialization, the ASM is able to achieve more accurate segmentation. The performance of the proposed method is evaluated on a dataset of 810 cardiac ultrasound images that mostly show abnormal shapes. The proposed method is compared with several combinations of the ASM and existing initialization methods. Our experimental results demonstrate that the accuracy of the proposed feature point detection for initialization was 40% higher than that of the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops and thus speeds up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to unusual shapes of the heart with cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle.
5807 Low Power and Less Area Architecture for Integer Motion Estimation
Authors: C. Hisham, K. Komal, Amit K. Mishra
Abstract:
The full search block matching algorithm is widely used for the hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both the data access power and the computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows using the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, is achieved with the help of the alternate rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to the reference blocks in the same search area, which significantly reduces memory accesses.
Keywords: Sum of absolute difference, high speed DSP.
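A software sketch of the two-step SAD evaluation with early termination outlined above; the block size, search range, and NumPy formulation are illustrative and do not reflect the proposed hardware architecture itself.

```python
import numpy as np

def sad_two_step(cur, ref, best_so_far):
    """SAD in two steps: alternate rows first; finish only if still promising."""
    partial = np.abs(cur[::2].astype(int) - ref[::2].astype(int)).sum()
    if partial >= best_so_far:               # early termination
        return None
    rest = np.abs(cur[1::2].astype(int) - ref[1::2].astype(int)).sum()
    return partial + rest

def full_search(cur_block, ref_frame, top_left, search_range=8):
    """Full-search block matching seeded with a low initial SAD."""
    n = cur_block.shape[0]
    y0, x0 = top_left
    # Seed with the co-located (predicted) block so early termination bites sooner.
    ref0 = ref_frame[y0:y0 + n, x0:x0 + n]
    best_sad = np.abs(cur_block.astype(int) - ref0.astype(int)).sum()
    best_mv = (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue
            sad = sad_two_step(cur_block, ref_frame[y:y + n, x:x + n], best_sad)
            if sad is not None and sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```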
5806 An Approach in the Improvement of the Reliability of Impedance Relay
Authors: D. Ouahdi, R. Ladjeroud, I. Habi
Abstract:
Distance protection, mainly the impedance relay, which is considered the main protection for transmission lines, can be subject to impedance measurement errors due mainly to the fault resistance and to power fluctuations. Thus, the impedance relay may not operate for a short circuit at the far end of the protected line (the underreach case) or may operate for a fault beyond its protected zone (the overreach case). In this paper, an approach to fault detection by distance protection that distinguishes between fault conditions and the effect of the overload operating mode has been developed. This approach is based on the symmetrical components, mainly the negative sequence, and it takes into account both the effect of fault resistance and the overload situation, which affect the reliability of the protection in terms of dependability for the former and security for the latter.
Keywords: Distance Protection, Fault Detection, negative sequence, overload, Transmission line.
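The symmetrical-component decomposition underlying the negative-sequence criterion mentioned above is the standard one; with the operator \(a = e^{j2\pi/3}\), the zero-, positive-, and negative-sequence currents are

\[
\begin{aligned}
I_0 &= \tfrac{1}{3}\left(I_a + I_b + I_c\right), \\
I_1 &= \tfrac{1}{3}\left(I_a + a\,I_b + a^{2} I_c\right), \\
I_2 &= \tfrac{1}{3}\left(I_a + a^{2} I_b + a\,I_c\right),
\end{aligned}
\]

so a significant \(I_2\) indicates an unbalanced fault, whereas a balanced overload produces essentially only positive-sequence current.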
5805 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
The growth of the world population and the third industrial revolution have led to a high demand for fuels. At the same time, the decrease of global fossil fuel deposits and the air pollution caused by these fuels have compounded the challenges the world faces in meeting its need for energy. Therefore, new forms of environmentally friendly and renewable fuels such as biodiesel are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding, costly, and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate for model calibration and cross-validation, were used for the modelling; the data were pre-processed using mean centering and variance scaling, a square-root spectrum math transformation, and solvent subtraction. These pre-processing methods improved the performance indexes RMSEC, RMSECV, RMSEP, and cumulative R² from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999, respectively. The R² values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two models agreed closely at concentrations above 18%. The software eliminated the complexity of the partial least squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.
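A minimal scikit-learn sketch of the multivariate-calibration step described above (PLS regression on pre-processed spectra with calibration and cross-validation errors). The synthetic spectra, number of latent variables, and pre-processing shown are assumptions for illustration, not the iC Quant settings or data used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for FTIR spectra: 30 calibration samples x 400 wavenumbers,
# with absorbance loosely tied to the known methyl ester concentration.
conc = np.linspace(0, 100, 30)                      # known concentrations (%)
spectra = np.outer(conc, rng.random(400)) + rng.normal(0, 0.5, (30, 400))

# Mean centering (one of the pre-processing steps mentioned in the abstract).
X = spectra - spectra.mean(axis=0)
y = conc - conc.mean()

pls = PLSRegression(n_components=4)                 # latent variables: assumed
pls.fit(X, y)

rmsec = mean_squared_error(y, pls.predict(X)) ** 0.5
y_cv = cross_val_predict(pls, X, y, cv=5)
rmsecv = mean_squared_error(y, y_cv) ** 0.5
print(f"RMSEC = {rmsec:.3f}, RMSECV = {rmsecv:.3f}")
```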
5804 Urban Growth Prediction in Athens, Greece, Using Artificial Neural Networks
Authors: D. Triantakonstantis, D. Stathakis
Abstract:
Urban areas have been expanding throughout the globe. Monitoring and modelling urban growth have become a necessity for sustainable urban planning and decision making. Urban prediction models are important tools for analyzing the causes and consequences of urban land use dynamics. The objective of this research paper is to analyze and model the urban change that occurred from 1990 to 2000, using CORINE land cover maps. The model was developed using drivers of urban change (such as road distance, slope, etc.) under an Artificial Neural Network modelling approach. Validation was achieved using a prediction map for 2006, which was compared with the real Urban Atlas map of 2006. The accuracy produced a Kappa index of agreement of 0.639 and a Cramer's V value of 0.648. These encouraging results indicate the importance of the developed urban growth prediction model, which, using a set of commonly available biophysical drivers, could serve as a management tool for the assessment of urban change.
Keywords: Artificial Neural Networks, CORINE, Urban Atlas, Urban Growth Prediction.
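For reference, the two agreement measures quoted above are computed from the contingency table of predicted versus observed land-cover classes in the standard way:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad
V = \sqrt{\frac{\chi^2}{n\,\bigl(\min(r, c) - 1\bigr)}},
\]

where \(p_o\) is the observed agreement, \(p_e\) the agreement expected by chance, \(\chi^2\) the chi-squared statistic of the \(r \times c\) table, and \(n\) the total number of observations (e.g., pixels) compared.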
5803 Sedimentation and its Challenges for Operation and Maintenance of Hydraulic Structures using SHARC Software - A Case Study of Eastern Intake in Dez Diversion Dam in Iran
Authors: M.R. Mansoujian, N. Hedayat, M. Mashal, H. Kiamanesh
Abstract:
Analytical investigation of the sedimentation processes in river engineering and hydraulic structures is of vital importance, as sedimentation can affect the water supply for the cultivated lands in the command area: gradual sediment accumulation behind the reservoir can reduce the nominal capacity of these dams. The aim of the present paper is to analytically investigate the sedimentation process along the river course and behind storage reservoirs in general, and at the Eastern Intake of the Dez Diversion weir in particular, using the SHARC software. The results of the model indicated a water level of 115.97 m, whereas the real-time measurement from the river cross section was 115.98 m, which suggests very close agreement between them. The average size of the transported sediment in the river was measured at 0.25 mm, from which it can be concluded that nearly 100% of the suspended load in the river is moving, which suggests no sediment settling and indicates that almost all of the sediment load enters the intake. It was further shown that the average diameter of the sediment entering the intake is 0.293 mm, which in turn suggests that about 85% of the suspended sediments in the river enter the intake. Comparison of the results from the SHARC model with those obtained from the SSIIM software shows quite similar outputs, but distinguishes the SHARC model as more appropriate for the analysis of simpler problems.
Keywords: SHARC, Eastern Intake, Dez Diversion Weir.
5802 Performance Enhancement of Cellular OFDM Based Wireless LANs by Exploiting Spatial Diversity Techniques
Authors: S. Ali Tajer, Babak H. Khalaj
Abstract:
This paper presents an investigation of how the exploitation of multiple transmit antennas by OFDM based wireless LAN subscribers can reduce the physical layer error rate. By comparing wireless LANs that utilize spatial diversity techniques with conventional ones, it then shows how the PHY and TCP throughput behaviors are improved. In the next step, the same issues are assessed in a cellular operation context, which is introduced as an innovative solution that, besides a multi-cell operation scenario, benefits from spatio-temporal signaling schemes as well. The presented simulations shed light on the improved performance of the wide-range, high-quality wireless LAN services provided by the proposed approach.
Keywords: Multiple Input Multiple Output (MIMO), Orthogonal Frequency Division Multiplexing (OFDM), Wireless Local Area Network (WLAN).
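The abstract does not name the spatio-temporal signaling scheme it uses; a common two-transmit-antenna example of the class it refers to is the Alamouti space-time block code, which transmits, over two symbol periods,

\[
\mathbf{S} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
\]

where row \(k\) holds the symbols sent in period \(k\) and the columns correspond to the two antennas; the orthogonal structure provides full transmit diversity with a simple linear receiver.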
5801 An Improved Preprocessing for Biosonar Target Classification
Authors: Turgay Temel, John Hallam
Abstract:
An improved processing description to be employed in biosonar signal processing in a cochlea model is proposed and examined. It is compared to conventional models using a modified discriminant analysis, and both are tested. Their performances are evaluated with echo data captured from natural targets (trees). Results indicate that the phase characteristics of the low-pass filters employed in the echo processing have a significant effect on class separability for these data.
Keywords: Cochlea model, discriminant analysis, neuro-spike coding, classification.
5800 Balancing Neural Trees to Improve Classification Performance
Authors: Asha Rani, Christian Micheloni, Gian Luca Foresti
Abstract:
In this paper, a neural tree (NT) classifier having a simple perceptron at each node is considered. A new concept for producing a balanced tree is applied in the learning algorithm of the tree. At each node, if the perceptron classification is not accurate and is unbalanced, the perceptron is replaced by a new one. This separates the training set in such a way that almost equal numbers of patterns fall into each of the classes. Moreover, each perceptron is trained only on the classes which are present at the respective node, ignoring the other classes. Splitting nodes are employed in the neural tree architecture to divide the training set when the current perceptron node repeats the same classification as its parent node. A new error function based on the depth of the tree is introduced to reduce the computational time for the training of a perceptron. Experiments are performed to check the efficiency, and encouraging results are obtained in terms of accuracy and computational cost.
Keywords: Neural Tree, Pattern Classification, Perceptron, Splitting Nodes.
5799 Effect of the Seasonal Variation in the Extrinsic Incubation Period on the Long Term Behavior of the Dengue Hemorrhagic Fever Epidemic
Authors: Puntani Pongsumpun, I-Ming Tang
Abstract:
The incidence of dengue hemorrhagic fever (DHF) over the long term exhibits a seasonal behavior. It has been hypothesized that this behavior is due to seasonal climate changes, which in turn induce a seasonal variation in the incubation period of the virus while it is developing in the mosquito. Standard dynamical analysis is applied to the Susceptible-Exposed-Infectious-Recovered (SEIR) model, which here includes an annual variation in the length of the extrinsic incubation period (EIP). The presence of both asymptomatic and symptomatic infections is allowed in the present model. We find that the dynamic behavior of the endemic state changes as the influence of the seasonal variation of the EIP becomes stronger. As the influence is further increased, the trajectory exhibits sustained oscillations when it leaves the chaotic region.
Keywords: Chaotic behavior, dengue hemorrhagic fever, extrinsic incubation period, SEIR model.
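A minimal numerical sketch of an SEIR-type model in which the incubation-rate parameter varies sinusoidally over the year. The equations and parameter values are a generic single-population illustration of the mechanism discussed above, not the two-population (human-mosquito, symptomatic/asymptomatic) model analyzed in the paper.

```python
import numpy as np
from scipy.integrate import odeint

# Generic SEIR with a seasonally varying incubation rate sigma(t).
# All rates are per day; values are illustrative, not fitted to DHF data.
beta, gamma, mu = 0.4, 0.1, 1.0 / (70 * 365)
sigma0, eps = 1.0 / 10.0, 0.3          # mean incubation period 10 days, 30% seasonal swing

def sigma(t):
    return sigma0 * (1.0 + eps * np.sin(2.0 * np.pi * t / 365.0))

def seir(y, t):
    S, E, I, R = y
    N = S + E + I + R
    dS = mu * N - beta * S * I / N - mu * S
    dE = beta * S * I / N - (sigma(t) + mu) * E
    dI = sigma(t) * E - (gamma + mu) * I
    dR = gamma * I - mu * R
    return [dS, dE, dI, dR]

t = np.linspace(0, 20 * 365, 20000)            # simulate 20 years
y0 = [0.99, 0.0, 0.01, 0.0]                    # fractions of the population
S, E, I, R = odeint(seir, y0, t).T
print("long-term infectious fraction range:", I[-5000:].min(), I[-5000:].max())
```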
5798 Power System with PSS and FACTS Controller: Modelling, Simulation and Simultaneous Tuning Employing Genetic Algorithm
Authors: Sidhartha Panda, Narayana Prasad Padhy
Abstract:
This paper presents a systematic procedure for the modelling and simulation of a power system installed with a power system stabilizer (PSS) and a flexible AC transmission system (FACTS)-based controller. For design purposes, the model of the example power system, a single-machine infinite-bus power system installed with the proposed controllers, is developed in MATLAB/SIMULINK. In the developed model, the synchronous generator is represented by Model 1.1, which includes both the generator main field winding and the damper winding in the q-axis, so as to evaluate the impact of the PSS and FACTS-based controller on power system stability. The model can be used for teaching power system stability phenomena, and also for research work, especially the development of generator controllers using advanced technologies. Further, to avoid adverse interactions, the PSS and FACTS-based controller are simultaneously designed employing a genetic algorithm (GA). Non-linear simulation results are presented for the example power system under various disturbance conditions to validate the effectiveness of the proposed modelling and simultaneous design approach.
Keywords: Genetic algorithm, modelling and simulation, MATLAB/SIMULINK, power system stabilizer, thyristor controlled series compensator, simultaneous design, power system stability.