Search results for: forecasting accuracy.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1937


227 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

Under the circumstances of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give an accurate and timely forecast of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city's AQI for the next day, this study used MATLAB and constructed a mathematical PCA-GABP model. Specifically, the study first performed principal component analysis (PCA) on the factors influencing tomorrow's AQI, including today's weather, industrial waste gas and IAQI data. A back propagation neural network (BP) optimized by a genetic algorithm (GA) was then used to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GABP model's forecasting capability, the study uses two statistical indices, normalized mean square error and fractional bias, to evaluate the AQI forecast results. Finally, the study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back propagation model. In conclusion, the performance of the model in forecasting AQI is convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
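
A minimal sketch (in Python rather than the MATLAB implementation used in the study) of the two evaluation statistics named above, normalized mean square error and fractional bias, assuming obs and pred are arrays of observed and forecast AQI values; the sample values are illustrative only, and whether the study uses exactly these normalizations is an assumption:

```python
import numpy as np

def nmse(obs, pred):
    """Normalized mean square error of forecast vs. observed AQI."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))

def fractional_bias(obs, pred):
    """Fractional bias; 0 indicates no systematic over- or under-prediction."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))

# Illustrative observed vs. forecast AQI for five days
obs = [85, 92, 110, 74, 60]
pred = [80, 95, 120, 70, 65]
print("NMSE:", nmse(obs, pred), "FB:", fractional_bias(obs, pred))
```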

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model.

226 Application of Mapping and Superimposing Rule for Solution of Parabolic PDE in Porous Medium under Cyclic Loading

Authors: Mohammad M. Toufigh, Ahad Ouria

Abstract:

This paper presents an analytical method to solve the governing consolidation parabolic partial differential equation (PDE) for an inelastic porous medium (soil) under cyclic loading, taking into account the variation of the equation coefficient. Since soil skeleton parameters change under cyclic loads, the parabolic PDE has a variable coefficient, and classical theory cannot rationalize the consolidation phenomenon in such conditions. In this research, a method based on mapping the time space to a virtual time space, together with a superimposing rule, is employed to solve the consolidation of inelastic soils under cyclic conditions. Changes in the consolidation coefficient are applied in the solution by modifying the loading and unloading durations through the introduction of a virtual time. The mapping function is calculated from the results of the consolidation partial differential equation. Based on the superimposing rule, a set of continuous static loads applied at specified times is used instead of the cyclic load. A set of laboratory consolidation tests under cyclic loading, along with numerical calculations, was performed in order to verify the presented method. The numerical solutions and laboratory test results confirmed the accuracy of the presented method.

Keywords: Mapping, Consolidation, Inelastic porous medium, Cyclic loading, Superimposing rule.

225 BeamGA Median: A Hybrid Heuristic Search Approach

Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte

Abstract:

The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but different gene order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before to the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and improve the convergence time of the genetic approach. We were able to handle permutations with a large number of genes within acceptable time and with the same or better accuracy compared to existing algorithms.
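
The median objective described above is easy to state in code. The sketch below is a toy illustration only: it uses the simpler breakpoint distance between unsigned permutations as a stand-in for the reversal distance on signed permutations used in the paper, and exhaustive search instead of the BeamGA search:

```python
from itertools import permutations

def breakpoint_distance(p, q):
    """Number of adjacencies of p that are not adjacencies of q (a simple
    rearrangement distance; the paper uses reversal distance instead)."""
    adj_q = {frozenset(pair) for pair in zip(q, q[1:])}
    return sum(1 for pair in zip(p, p[1:]) if frozenset(pair) not in adj_q)

def median_score(candidate, genomes):
    """Sum of distances from a candidate permutation to the input genomes."""
    return sum(breakpoint_distance(candidate, g) for g in genomes)

genomes = [(1, 2, 3, 4, 5), (2, 1, 3, 5, 4), (1, 3, 2, 4, 5)]
# Exhaustive search is feasible only for tiny instances; BeamGA median replaces
# it with local beam search followed by a genetic algorithm refinement.
best = min(permutations(range(1, 6)), key=lambda c: median_score(c, genomes))
print(best, median_score(best, genomes))
```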

Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.

224 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use the existing data and the results of comparisons between the units under investigation to obtain an estimate of their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, which considers technological development and technical efficiency at the same time. This article proposes a model based on the multistage MPI that accounts for limited data (Grey's theory). The model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. The results of solving the model showed an improvement in the accuracy of the future performance of the units under investigation when Grey's system theory is used. This model can be used in any case study in which the MPI is applied and the data are limited or uncertain.
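
For reference, the Malmquist Productivity Index referred to above is conventionally written as the geometric mean of two distance-function ratios estimated by the DEA (CCR) model; this is the textbook form on which the multistage grey extension builds:

$$\mathrm{MPI} = \left[\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}\cdot\frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}\right]^{1/2}$$

where $D^{t}$ is the distance function evaluated against the period-$t$ frontier, and MPI > 1 indicates productivity growth between periods $t$ and $t+1$.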

Keywords: Malmquist Index, Grey's Theory, Charnes Cooper & Rhodes (CCR) Model, network data envelopment analysis, Iran electricity power chain.

223 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths in women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in its early stages. In this regard, datasets from the University Hospital Centre of Coimbra were chosen, and different machine learning (ML)-based and neural network (NN) classifiers have been studied. For this purpose, we selected favorable features among the nine attributes provided in the clinical dataset by using a random forest algorithm. This dataset consists of both healthy controls and BC patients, and it was noted that glucose, BMI, resistin, and age are the most important features, in that order. Moreover, we analyzed these features with various ML-based classifiers, including Decision Tree (DT), K-Nearest Neighbors (KNN), eXtreme Gradient Boosting (XGBoost), Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machine (SVM), along with an NN-based Multi-Layer Perceptron (MLP) classifier. The results revealed that, among the different techniques, the SVM and MLP classifiers are the most accurate, with accuracies of 96% and 92%, respectively. These results indicate that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigations with more collected data for other types of cancer.
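
A minimal sketch of the pipeline described above (random-forest feature ranking followed by an SVM), assuming the Coimbra dataset has been saved locally as a CSV with the nine clinical attributes and a binary Classification label; the file name is hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("coimbra_breast_cancer.csv")        # hypothetical file name
X, y = df.drop(columns=["Classification"]), df["Classification"]

# Rank the nine attributes by random-forest importance
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1])
top4 = [name for name, _ in ranking[:4]]             # e.g. glucose, BMI, resistin, age
print(ranking)

# Cross-validated SVM on the selected features (scaling matters for the RBF kernel)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("SVM accuracy:", cross_val_score(svm, X[top4], y, cv=5).mean())
```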

Keywords: Breast cancer, health diagnosis, Machine Learning, biomarker classification, Neural Network.

222 Resilience Assessment for Power Distribution Systems

Authors: Berna Eren Tokgoz, Mahdi Safa, Seokyon Hwang

Abstract:

Power distribution systems are essential and crucial infrastructures for the development and maintenance of a sustainable society. These systems are extremely vulnerable to various types of natural and man-made disasters. The assessment of resilience focuses on preparedness and mitigation actions under pre-disaster conditions. It also concentrates on response and recovery actions under post-disaster situations. The aim of this study is to present a methodology to assess the resilience of electric power distribution poles against wind-related events. The proposed methodology can improve the accuracy and rapidity of the evaluation of the conditions and the assessment of the resilience of poles. The methodology provides a metric for the evaluation of the resilience of poles under pre-disaster and post-disaster conditions. The metric was developed using mathematical expressions for physical forces that involve various variables, such as physical dimensions of the pole, the inclination of the pole, and wind speed. A three-dimensional imaging technology (photogrammetry) was used to determine the inclination of poles. Based on expert opinion, the proposed metric was used to define zones to visualize resilience. Visual representation of resilience is helpful for decision makers to prioritize their resources before and after experiencing a wind-related disaster. Multiple electric poles in the City of Beaumont, TX were used in a case study to evaluate the proposed methodology.  

Keywords: Photogrammetry, power distribution systems, resilience metric, system resilience, wind-related disasters.

221 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using a knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an Artificial Neural Network classifier, trained with the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
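
A simplified sketch of the sliding-window feature extraction and MLP classification steps, with a placeholder signal and labels; the four window statistics are illustrative and are not the feature set produced by the Training Builder tool:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

fs = 256                          # assumed sampling rate (Hz)
win, step = 2 * fs, fs            # 2-second windows with 50% overlap

def window_features(eeg):
    """Simple per-window statistics (mean, std, line length, spectral energy)."""
    feats = []
    for start in range(0, len(eeg) - win + 1, step):
        seg = eeg[start:start + win]
        feats.append([seg.mean(), seg.std(),
                      np.sum(np.abs(np.diff(seg))),            # line length
                      np.sum(np.abs(np.fft.rfft(seg)) ** 2)])  # spectral energy
    return np.array(feats)

eeg = np.random.randn(60 * fs)               # placeholder 1-minute EEG channel
X = window_features(eeg)
y = np.random.randint(0, 2, len(X))          # placeholder seizure/non-seizure labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```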

Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.

220 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of the injection molding process.
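
One analysis step implied above is computing a Taguchi signal-to-noise ratio for each L9 run from the shrinkage measured under the two noise conditions (material brands). The sketch below uses the standard nominal-the-best S/N form; the run values are made up for illustration:

```python
import numpy as np

def sn_nominal_the_best(values):
    """Taguchi nominal-the-best S/N ratio: 10*log10(mean^2 / variance)."""
    v = np.asarray(values, float)
    return 10.0 * np.log10(v.mean() ** 2 / v.var(ddof=1))

# Shrinkage (%) measured under material brand 1 and brand 2 for three of the L9 runs
runs = {1: [2.10, 2.18], 2: [2.05, 2.02], 3: [2.31, 2.40]}
for run, shrinkage in runs.items():
    print("run", run, round(sn_nominal_the_best(shrinkage), 2), "dB")
```

The preferred level of each controllable factor is then the one with the highest average S/N across the runs in which it appears.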

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

219 Finite Volume Method for Flow Prediction Using Unstructured Meshes

Authors: Juhee Lee, Yongjun Lee

Abstract:

In designing low-energy-consumption buildings, the heat transfer through a large glass area or wall becomes critical. Multiple layers of window glass and wall are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural heat convection phenomenon and is a key part of the heat transfer. As the first step of the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, multigrid method, and dual time stepping in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrarily shaped cells. The governing equations are integrated and discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique reported in other studies. The accuracy of the method is assessed through grid refinement.
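
The segregated SIMPLE iterations described above produce large sparse linear systems; a minimal illustration of handing one such system to a Krylov solver like BiCGSTAB with SciPy is shown below. The matrix here is a generic tridiagonal stand-in, not the actual unstructured-mesh coefficients:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n = 1000
# Generic sparse system standing in for one segregated-equation solve
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU preconditioner, a common companion to BiCGSTAB
ilu = spilu(A)
M = LinearOperator(A.shape, ilu.solve)

x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```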

Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.

218 Numerical Investigation of Non Fourier Heat Conduction in a Semi-infinite Body due to a Moving Concentrated Heat Source Composed with Radiational Boundary Condition

Authors: M. Akbari, S. Sadodin

Abstract:

In this paper, the melting of a semi-infinite body as a result of a moving laser beam has been studied. Because the Fourier heat transfer equation does not have sufficient accuracy at short times and large dimensions, a non-Fourier form of the heat transfer equation has been used. Since the beam moves in the x direction, the temperature distribution and the melt pool shape are not symmetric. As a result, the problem is a transient three-dimensional one. In addition, thermophysical properties such as the heat conductivity coefficient, density and heat capacity are functions of temperature and material state. The enthalpy technique, used for the solution of phase change problems, has been applied in an explicit finite volume form to the hyperbolic heat transfer equation. This technique has been used to calculate the transient temperature distribution in the semi-infinite body and the growth rate of the melt pool. In order to validate the numerical results, comparisons were made with experimental data. Finally, the results of this paper were compared with a similar problem solved using the Fourier theory. The comparison shows the influence of the infinite speed of heat propagation assumed in the Fourier theory on the temperature distribution and the melt pool size.

Keywords: Non-Fourier, Enthalpy technique, Melt pool, Radiational boundary condition

217 Flow Discharge Determination in Straight Compound Channels Using ANNs

Authors: A. Zahiri, A. A. Dehghani

Abstract:

Although many researchers have studied the flow hydraulics of compound channels, there are still many complicated problems in the determination of their flow rating curves. Many different methods have been presented for these channels, but extending them to all types of compound channels with different geometrical and hydraulic conditions is certainly difficult. In this study, with the aid of nearly 400 laboratory and field data sets of geometry and flow rating curves from 30 different straight compound sections, and using artificial neural networks (ANNs), the flow discharge in compound channels was estimated. Thirteen dimensionless input variables, including relative depth, relative roughness, relative width, aspect ratio, bed slope, main channel side slopes, floodplain side slopes and berm inclination, and one output variable (flow discharge) have been used in the ANNs. Comparison of the ANN model with the traditional method (the divided channel method, DCM) shows the high accuracy of the ANN model results. The results of a sensitivity analysis showed that the relative depth, with a 47.6 percent contribution, is the most effective input parameter for flow discharge prediction. Relative width and relative roughness have 19.3 and 12.2 percent importance, respectively. On the other hand, the shape parameter, main channel side slopes and floodplain side slopes, with 2.1, 3.8 and 3.8 percent contributions, have the least importance.

Keywords: ANN model, compound channels, divided channel method (DCM), flow rating curve

216 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical/statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications. These environments allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time has been achieved. The time has been reduced by around 96% for the best case tested, between the original serial version and the automatic parallel version.

Keywords: Algorithm optimization, Bank Failures, OpenMP, Parallel Techniques, Statistical tool.

215 Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection

Authors: Ayat E. Ali, Christeen R. Aziz, Merna A. Helmy, Mohammed M. Malek, Sherif H. El-Gohary

Abstract:

Goal: The purpose of the project was to make plastic surgery predictions from pre-operative images of plastic surgery patients and to show these predictions on a screen, so that the current appearance can be compared with the appearance after surgery. Methods: To this aim, we implemented software which uses data from the internet for facial skin diseases, skin burns, and pre- and post-operative images of plastic surgeries; the post-surgical prediction is then made using the K-nearest neighbor (KNN) algorithm. We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearances are shown at the same time. Results: We worked on some skin conditions such as vitiligo, skin burns and wrinkles. We classified the three degrees of burns using a KNN classifier with an accuracy of 60%. We also succeeded in segmenting the vitiligo area. Our future work will include working on more skin diseases, classifying them, and giving a prediction of the appearance after surgery. We will also go deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project gives a prediction that relates strongly to the real appearance after surgery and reduces differences in diagnosis among doctors. Significance: The mirror may have broad societal appeal, as it will narrow the distance between patient satisfaction and medical standards.

Keywords: K-nearest neighbor, face detection, vitiligo, bone deformity.

214 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections has been collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability by fitting the validation data well.
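
One way to fit a multilevel model of the kind described above is with a mixed-effects regression that gives each pavement section its own random intercept. The sketch below uses statsmodels on synthetic data with illustrative column names, not the study's dataset or its full set of predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per section per survey (illustrative only)
rng = np.random.default_rng(1)
sections = np.repeat(np.arange(40), 10)                     # 40 sections, 10 surveys each
age = np.tile(np.arange(10), 40)                            # pavement age (years)
cum_esa = age * rng.uniform(0.1, 0.5, size=sections.size)   # cumulative traffic loading
section_effect = rng.normal(0, 0.3, 40)[sections]           # unobserved section heterogeneity
iri = 1.8 + 0.05 * age + 0.4 * cum_esa + section_effect + rng.normal(0, 0.1, sections.size)
df = pd.DataFrame({"section_id": sections, "age": age, "cum_esa": cum_esa, "iri": iri})

# Fixed effects for age and loading, random intercept per section to capture the
# correlation among repeated observations of the same section
model = smf.mixedlm("iri ~ age + cum_esa", data=df, groups=df["section_id"])
print(model.fit().summary())
```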

Keywords: Roughness progression, empirical model, pavement performance, heavy duty pavement.

213 ECG Based Reliable User Identification Using Deep Learning

Authors: R. N. Begum, Ambalika Sharma, G. K. Singh

Abstract:

Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Therefore, automatic biometric recognition systems are the need of the hour, and electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using CNN's dense learning framework. The proposed research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, trained on a dataset of recordings collected from eight popular ECG databases. With a worst-case False Acceptance Rate (FAR) of 0.04% and a worst-case False Rejection Rate (FRR) of 5%, the best performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient. Thus, they might also be implemented in real-time ECG-based human recognition systems.

Keywords: Biometrics, dense networks, identification rate, train/test split ratio.

212 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning

Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan

Abstract:

We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with the help of non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs in the morning every day. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created a HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared the performance in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.

Keywords: Daily activity recognition, healthcare, IoT sensors, transfer learning.

211 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

We present in this work our model of road traffic emissions (line sources) and the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters, adapted to the conditions existing in Morocco and other developing countries. Although several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the uncertainty of the emission rate to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves the accuracy of previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of the line source sections; these errors are cancelled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the combined use of discretized-source and analytical line-source formulas reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
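
A compact illustration of the discretized line-source idea mentioned above: the road segment is split into point sources, and each contributes a standard ground-reflected Gaussian-plume term at the receptor. The dispersion coefficients and geometry below are simplified placeholders, not DISPOLSPEM's calibrated values:

```python
import numpy as np

def point_plume(q, u, x, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration from one point source."""
    if x <= 0:                                   # receptor upwind of the source
        return 0.0
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def line_source(total_q, src_a, src_b, receptor, u, n=200):
    """Split a road segment [src_a, src_b] into n point sources and sum them."""
    pts = np.linspace(src_a, src_b, n)
    q = total_q / n
    conc = 0.0
    for sx, sy in pts:
        dx, dy = receptor[0] - sx, receptor[1] - sy      # wind assumed along +x
        sigma_y, sigma_z = 0.08 * dx, 0.06 * dx          # crude open-country coefficients
        conc += point_plume(q, u, dx, dy, receptor[2], h=0.5,
                            sigma_y=sigma_y, sigma_z=sigma_z)
    return conc

road = [(0.0, -100.0), (0.0, 100.0)]                     # 200 m road segment (m)
print(line_source(total_q=0.05, src_a=road[0], src_b=road[1],
                  receptor=(50.0, 0.0, 1.5), u=3.0))     # g/m^3 if total_q is in g/s
```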

Keywords: Air pollution, dispersion, emissions, line sources, road traffic, urban transport.

210 Controlling of Multi-Level Inverter under Shading Conditions Using Artificial Neural Network

Authors: Abed Sami Qawasme, Sameer Khader

Abstract:

This paper describes the effects of photovoltaic voltage changes on a multilevel inverter (MLI) due to variations in solar irradiation, and methods to overcome these changes. The irradiation variation affects the generated voltage, which in turn changes the switching angles required to turn on the inverter power switches in order to obtain minimum harmonic content in the output voltage profile. A Genetic Algorithm (GA) is used to solve the harmonic elimination equations of eleven-level inverters with equal and unequal dc sources. An artificial neural network (ANN) algorithm is then proposed to generate an appropriate set of switching angles for the MLI at any level of input dc source voltage, minimizing the total harmonic distortion (THD) to an acceptable limit. The MATLAB/Simulink platform is used as the simulation tool, and Fast Fourier Transform (FFT) analyses are carried out on the output voltage profile to verify the reliability and accuracy of the applied technique for controlling the MLI harmonic distortion. According to the simulation results, the obtained THD for equal dc sources is 9.38%, while for variable or unequal dc sources it varies between 10.26% and 12.93% as the input dc voltage varies between 4.47 V and 11.43 V, respectively. The proposed ANN algorithm provides satisfactory simulation results that match those obtained by alternative algorithms.

Keywords: Multi level inverter, genetic algorithm, artificial neural network, total harmonic distortion.

209 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine

Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li

Abstract:

Machine Learning and Data Mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study aims at introducing a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection has classified daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on their respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
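
A minimal sketch of the hybrid idea above, using scikit-learn's mutual-information score (an information-gain analogue) to rank features before a cross-validated SVM. The feature table is assumed to be already assembled and numeric (categorical features would need encoding first), and the file name is hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("china_aqi_daily.csv")                  # hypothetical file
X, y = df.drop(columns=["aqi_level"]), df["aqi_level"]   # six-level quality label

# Rank features by mutual information (information-gain style score)
scores = mutual_info_classif(X, y, random_state=0)
selected = X.columns[np.argsort(scores)[::-1][:8]]       # keep the 8 best features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
print(sorted(zip(X.columns, scores), key=lambda t: -t[1]))
print("cross-validated accuracy:", cross_val_score(clf, X[selected], y, cv=5).mean())
```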

Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.

208 Earth Station Neural Network Control Methodology and Simulation

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Renewable energy resources are inexhaustible and clean compared with conventional resources. They are also used to supply regions with no grid, no telephone lines, and often difficult accessibility by common transport. Satellite earth stations located in remote areas are among the most important applications of renewable energy. Neural control is a branch of the general field of intelligent control, which is based on the concept of artificial intelligence. This paper presents the mathematical modeling of a satellite earth station power system, which is required for simulating the system. Aswan is selected as the site under consideration because it is a region rich in solar energy. The complete power system is simulated using MATLAB-SIMULINK. An artificial neural network (ANN)-based model has been developed for the optimum operation of the earth station power system. The ANN is trained using back propagation with the Levenberg-Marquardt algorithm. The best validation performance is obtained for the minimum mean square error. The regression between the network output and the corresponding target is 96%, which indicates high accuracy. The neural network controller architecture gives satisfactory results with a small number of neurons, and hence less memory and time are required for its implementation. The results indicate that the proposed control unit using an ANN can be successfully used for controlling the satellite earth station power system.

Keywords: Satellite, neural network, MATLAB, power system.

207 An Approximate Lateral-Torsional Buckling Mode Function for Cantilever I-Beams

Authors: H. Ozbasaran

Abstract:

Lateral-torsional buckling is a global buckling mode which should be considered in the design of slender structural members under flexure about their strong axis. It is possible to compute the load which causes lateral-torsional buckling of a beam by finite element analysis; however, closed-form equations, which can be obtained by the energy method, are needed in engineering practice for ease of calculation. In applications of the energy method to lateral-torsional buckling, a proper function for the critical lateral-torsional buckling mode should be chosen, which can be thought of as the variation of the twisting angle along the buckled beam. The accuracy of the results depends on how close the chosen function is to the exact mode. Since the critical lateral-torsional buckling mode of cantilever I-beams varies with material properties, section properties and loading case, the hardest step is determining a proper mode function when applying the energy method. This paper presents an approximate function for the critical lateral-torsional buckling mode of doubly symmetric cantilever I-beams. Coefficient matrices are calculated for a concentrated load at the free end, a uniformly distributed load, and a constant moment along the beam. Critical lateral-torsional buckling modes obtained by the presented function and exact solutions are compared. It is found that the modes obtained by the presented function coincide with the differential equation solutions for the considered loading cases.

Keywords: Buckling mode, cantilever, lateral-torsional buckling, I-beam.

206 Land Use/Land Cover Mapping Using Landsat 8 and Sentinel-2 in a Mediterranean Landscape

Authors: M. Vogiatzis, K. Perakis

Abstract:

Spatially explicit and up-to-date land use/land cover information is fundamental for spatial planning, land management, sustainable development, and sound decision-making. In the last decade, many satellite-derived land cover products at different spatial, spectral, and temporal resolutions have been developed, such as the European Copernicus Land Cover product. However, more efficient and detailed land use/land cover information is required at the regional or local scale. A typical Mediterranean basin with a complex landscape comprised of various forest types, crops, artificial surfaces, and wetlands was selected to test and develop our approach. In this study, we investigate the improvement of the Copernicus Land Cover product (CLC2018) using Landsat 8 and Sentinel-2 pixel-based classification based on all available existing geospatial data (forest maps, LPIS, Natura 2000 habitats, cadastral parcels, etc.). We examined and compared the performance of the Random Forest classifier for land use/land cover mapping. In total, 10 land use/land cover categories were recognized in Landsat 8 and 11 in Sentinel-2A. A comparison of the overall classification accuracies for 2018 shows that the Landsat 8 classification accuracy was slightly higher than that of Sentinel-2A (82.99% vs. 80.30%). We concluded that the main land use/land cover types of CLC2018, even within a heterogeneous area, can be successfully mapped and updated according to the CLC nomenclature. Future research should be oriented toward integrating spatiotemporal information from seasonal bands and spectral indices in the classification process.

Keywords: land use/land cover, random forest, Landsat-8 OLI, Sentinel-2A MSI, Corine land cover

205 Magneto-Thermo-Mechanical Analysis of Electromagnetic Devices Using the Finite Element Method

Authors: Michael G. Pantelyat

Abstract:

The fundamentals of pure and applied research in the area of magneto-thermo-mechanical numerical analysis and the design of innovative electromagnetic devices (modern induction heaters, novel thermoelastic actuators, rotating electrical machines, induction cookers, electrophysical devices) are elaborated. Mathematical models of magneto-thermo-mechanical processes in electromagnetic devices, taking into account the main interactions of the interrelated phenomena, are developed. In addition, a graphical representation of the coupled (multiphysics) phenomena under consideration is proposed. Numerical techniques for the solution of nonlinear problems are also developed. On this basis, effective numerical algorithms for the solution of actual problems of practical interest are proposed, validated and implemented in the applied 2D and 3D computer codes developed. Many applied problems of practical interest regarding modern electrical engineering devices are solved numerically. Investigations of the influence of various interrelated physical phenomena (temperature dependence of material properties, thermal radiation, conditions of convective heat transfer, contact phenomena, etc.) on the accuracy of the electromagnetic, thermal and structural analyses are conducted. Important practical recommendations on the choice of rational structures, materials and operating modes of the electromagnetic devices under consideration are proposed and implemented in industry.

Keywords: Electromagnetic devices, multiphysics, numerical analysis, simulation and design.

204 The Effect of Bottom Shape and Baffle Length on the Flow Field in Stirred Tanks in Turbulent and Transitional Flow

Authors: Jie Dong, Binjie Hu, Andrzej W Pacek, Xiaogang Yang, Nicholas J. Miles

Abstract:

The effect of the shape of the vessel bottom and the length of the baffles on the velocity distributions in turbulent and transitional flow has been simulated. The turbulent flow was simulated using the standard k-ε model, and the simulation was verified using LES, whereas the transitional flow was simulated using only LES. It has been found that both the shape of the tank bottom and the length of the baffles have a significant effect on the flow pattern and velocity distribution below the impeller. In the dished-bottom tank with baffles reaching the edge of the dish, a large rotating volume of liquid was formed below the impeller. The liquid in this rotating region was not fully mixed, and a dead zone formed there. The size and intensity of the circulation within this zone calculated by the k-ε model and by LES were practically identical, which reinforces the accuracy of the numerical simulations. Both types of simulations also show that employing full-length baffles can reduce the size of the dead zone formed below the impeller. LES was also used to simulate the velocity distribution below the impeller in transitional flow, and it has been found that secondary circulation loops were formed near the tank bottom in all investigated geometries. However, in this case the length of the baffles has a smaller effect on the volume of rotating liquid than in the turbulent flow.

Keywords: Baffles length, dished bottom, dead zone, flow field.

203 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, the attenuation of random noise is the basic step for improving the quality of data for further application in exploration and development in the gas and oil industries. The signal-to-noise ratio of the data also largely determines the quality of seismic data; this factor affects the reliability as well as the accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information about the seismic signal, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.

Keywords: Anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, Split Bregman Algorithm.

202 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been adopted widely for business purposes. For any software industry, the development of reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and business. Hence, there is a need to develop techniques that can be used for the early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Several studies have been carried out on this basis, but achieving desirable defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
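
A compact sketch of the GA-plus-decision-tree idea: a small binary-mask genetic algorithm whose fitness is the cross-validated decision-tree accuracy on the selected features. The fitness function here is plain accuracy rather than the improved fitness function proposed in the paper, and a bundled scikit-learn dataset stands in for a software defect dataset:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer          # stand-in for a defect dataset
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_feat, pop_size, gens = X.shape[1], 20, 15

def fitness(mask):
    """Cross-validated accuracy of a decision tree on the masked feature subset."""
    if not mask.any():
        return 0.0
    dt = DecisionTreeClassifier(max_depth=5, random_state=0)
    return cross_val_score(dt, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (pop_size, n_feat)).astype(bool)
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]    # keep the best half
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.05                  # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("features selected:", int(best.sum()), "CV accuracy:", round(fitness(best), 3))
```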

Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.

201 An Automated Test Setup for the Characterization of Antenna in CATR

Authors: Faisal Amin, Abdul Mueed, Xu Jiadong

Abstract:

This paper describes the development of fully automated measurement software for antenna radiation pattern measurements in a Compact Antenna Test Range (CATR). The CATR has a frequency range of 2-40 GHz, and the measurement hardware includes a network analyzer for transmitting and receiving the microwave signal and a positioner controller to control the motion of the Styrofoam column. The measurement process includes calibration of the CATR with a Standard Gain Horn (SGH) antenna, followed by gain-versus-angle measurement of the Antenna Under Test (AUT). The software is designed to control a variety of microwave transmitters/receivers and two-axis positioner controllers through the standard General Purpose Interface Bus (GPIB). The addition of new network analyzers is supported through a slight modification of the hardware control module. Time-domain gating is implemented to remove unwanted signals and obtain the isolated response of the AUT. The gated response of the AUT is compared with the calibration data in the frequency domain to obtain the desired results. Data acquisition and processing are implemented in Agilent VEE and Matlab. A variety of experimental measurements with SGH antennas were performed to validate the accuracy of the software. A comparison of results with existing commercial software is presented, and the measured results are found to agree within 0.2 dBm.
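
A simplified numerical illustration of the time-domain gating step described above: the measured frequency sweep is inverse-transformed, a gate isolates the direct AUT response, and the gated trace is transformed back. The frequency grid, delays and gate width are arbitrary examples, not the CATR's actual settings:

```python
import numpy as np

f = np.linspace(2e9, 18e9, 801)                 # synthetic measurement grid (Hz)
df_step = f[1] - f[0]

# Synthetic S21: direct AUT path at 8 ns plus an unwanted reflection at 20 ns
t_direct, t_reflect = 8e-9, 20e-9
s21 = np.exp(-2j * np.pi * f * t_direct) + 0.3 * np.exp(-2j * np.pi * f * t_reflect)

# To the time domain; a window limits ringing from the band edges
win = np.hanning(len(f))
h_t = np.fft.ifft(s21 * win)
t = np.fft.fftfreq(len(f), d=df_step)           # time axis of the IFFT bins

# Gate: keep only the response within +/- 5 ns of the direct path
gate = np.abs(t - t_direct) < 5e-9
h_gated = np.where(gate, h_t, 0.0)

# Back to the frequency domain: the 20 ns reflection has been suppressed. In
# practice the window/gate shaping is compensated before comparing the gated
# AUT response with the SGH calibration data.
s21_gated = np.fft.fft(h_gated)
print(np.abs(s21_gated[:5]))
```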

Keywords: Antenna measurement, calibration, time-domain gating, VNA, Positioner controller

200 Sensitivity of the SHARC Model to Variations of Manning Coefficient and Effect of "n" on the Sediment Materials Entry into the Eastern Water Intake - A Case in the Dez Diversion Weir in Iran

Authors: M.R.Mansoujian, A.Rohani, N.Hedayat , M.Qamari, M. Osroosh

Abstract:

Permanent rivers are the main sources of renewable water supply for croplands under irrigation and drainage schemes. They are also the major source of the sediment loads transported into the storage reservoirs of hydro-electric dams, diversion weirs and regulating dams. The sedimentation process results from soil erosion, which is related to poor watershed management and human intervention in the hydraulic regime of rivers. These can change the hydraulic behavior and, as such, lead to riverbed and river bank scouring, the consequence of which is sediment load transport into the dams and therefore a reduced flow discharge into the water intakes. The present paper investigates the sedimentation process by varying the Manning coefficient "n" using the SHARC software along the watercourse of the Dez River. Results indicated that the optimum "n" within that river range is 0.0315, at which the minimum quantity of sediment load is transported into the Eastern intake. Comparison of the model results with those obtained from the SSIIM software within the same river reach showed a very close agreement between them. This suggests the relative accuracy with which the model can simulate the hydraulic flow characteristics, and therefore its suitability as a powerful analytical tool for project feasibility studies and project implementation.
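
For context, the Manning relation in which the coefficient n appears is straightforward to evaluate, and a small example shows how sensitive the computed discharge is to n. The trapezoidal-section dimensions below are illustrative, not the Dez River geometry:

```python
import numpy as np

def manning_discharge(n, b, y, z, S):
    """Discharge (m^3/s) of a trapezoidal section: bottom width b (m), flow
    depth y (m), side slope z (horizontal:vertical), longitudinal slope S."""
    area = (b + z * y) * y
    wetted_perimeter = b + 2 * y * np.sqrt(1 + z ** 2)
    R = area / wetted_perimeter                          # hydraulic radius
    return area * R ** (2.0 / 3.0) * np.sqrt(S) / n      # Q = (1/n) A R^(2/3) S^(1/2)

for n in (0.025, 0.0315, 0.040):                         # 0.0315 is the optimum reported above
    Q = manning_discharge(n, b=40.0, y=3.0, z=2.0, S=0.0004)
    print(f"n = {n}:  Q = {Q:.1f} m^3/s")
```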

Keywords: Sediment transport, Manning coefficient, Eastern Intake, SHARC, Dez River.

199 Swarm Intelligence based Optimal Linear Phase FIR High Pass Filter Design using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach

Authors: Sangeeta Mandal, Rajib Kar, Durbadal Mandal, Sakti Prasad Ghoshal

Abstract:

In this paper, an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach (PSO-CFIWA) is presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and conventional gradient-based optimization techniques are not efficient for digital filter design. Given the filter specifications to be realized, the PSO-CFIWA algorithm generates a set of optimal filter coefficients and tries to meet the ideal frequency response characteristic. In this paper, optimal FIR high pass filters of different orders have been designed for the given problem. The simulation results have been compared to those obtained by well-accepted algorithms such as the Parks and McClellan algorithm (PM) and the genetic algorithm (GA). The results show that the proposed optimal filter design approach using PSO-CFIWA outperforms PM and GA, not only in the accuracy of the designed filter but also in convergence speed and solution quality.
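
A stripped-down sketch of the idea: symmetric (linear-phase) high-pass FIR taps are encoded as particles, and a constriction-factor PSO minimizes the squared deviation from an ideal high-pass response. This is a simplified constriction-factor-only variant, not the paper's combined constriction-factor-and-inertia-weight (PSO-CFIWA) update, and the specifications are arbitrary:

```python
import numpy as np
from scipy.signal import freqz

N = 20                                  # filter order (length N + 1, Type I symmetric)
wc = 0.5 * np.pi                        # cut-off frequency (rad/sample)
w = np.linspace(0, np.pi, 512)
ideal = (w >= wc).astype(float)         # ideal high-pass magnitude response

def full_taps(half):
    return np.concatenate([half, half[-2::-1]])          # mirror for linear phase

def cost(half):
    _, H = freqz(full_taps(half), worN=w)
    return np.sum((np.abs(H) - ideal) ** 2)

rng = np.random.default_rng(0)
n_particles, dim = 30, N // 2 + 1
chi, c1, c2 = 0.7298, 2.05, 2.05        # constriction factor and acceleration constants
x = rng.uniform(-0.5, 0.5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(300):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = x + v
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("final squared response error:", pbest_f.min())
```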

Keywords: FIR Filter; PSO-CFIWA; PSO; Parks and McClellan Algorithm; Evolutionary Optimization Technique; Magnitude Response; Convergence; High Pass Filter

198 A Comparative Analysis of Multiple Criteria Decision Making Analysis Methods for Strategic, Tactical, and Operational Decisions in Military Fighter Aircraft Selection

Authors: C. Ardil

Abstract:

This paper presents a comparative analysis of multiple criteria decision making analysis methods for strategic, tactical, and operational decisions in military fighter aircraft selection for air force fleet planning. The evaluation criteria governing the decision analysis process are determined from the literature for three existing military combat aircraft. The military fighter aircraft selection problem is structured using the "preference analysis for reference ideal solution (PARIS)" approach in multiple criteria decision making analysis (MCDMA). Systematic comparisons were made with existing MCDMA methods (PARIS and TOPSIS) to verify the stability and accuracy of the results obtained. The proposed integrated MCDMA systematic approach is expected to address the issues encountered in the aircraft selection process. The comparative analysis results show that the proposed method is an effective and accurate tool that can help analysts make better strategic, tactical, and operational decisions.
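
A compact sketch of the TOPSIS step used as one of the comparison methods above, with a made-up 3-aircraft by 4-criteria decision matrix and equal (mean) weights; the PARIS reference-ideal procedure itself is not reproduced here:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution (TOPSIS)."""
    M = np.asarray(matrix, float)
    R = M / np.sqrt((M ** 2).sum(axis=0))                # vector normalization
    V = R * weights                                      # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                       # closeness coefficient

# Rows: three candidate fighters; columns (illustrative): range, payload, cost, upkeep
matrix = [[3200, 9000, 85, 0.7],
          [3700, 9500, 120, 0.8],
          [2900, 7500, 70, 0.6]]
weights = np.full(4, 0.25)                               # equal "mean" weights
benefit = np.array([True, True, False, False])           # cost-type criteria are minimized
print(topsis(matrix, weights, benefit))                  # higher value = preferred aircraft
```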

Keywords: aircraft, military fighter aircraft selection, multiple criteria decision making, multiple criteria decision making analysis, mean weight, entropy weight, MCDMA, PARIS, TOPSIS, Saab Gripen, Dassault Rafale, Eurofighter Typhoon
