Search results for: linear machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2912


1742 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS) using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and that it takes less learning time than using the full feature set of the dataset with the same algorithm.
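A minimal sketch of the described pipeline, not the authors' Weka workflow: information gain is approximated with mutual information, followed by 2-cluster K-means, a 60/40 split, and true/false positive rates. Feature preprocessing of NSL-KDD and the exact number of selected features are assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def kmeans_ids(X, y, n_features=10):
    """X: numeric NSL-KDD features; y: labels (0 = normal, 1 = attack)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=int)
    # 60/40 split as in the abstract
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)
    # Information-gain-style feature selection (mutual information here)
    selector = SelectKBest(mutual_info_classif, k=n_features).fit(X_tr, y_tr)
    X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)
    # Map each cluster to the majority training label it contains
    mapping = {c: np.bincount(y_tr[km.labels_ == c]).argmax() for c in (0, 1)}
    y_pred = np.array([mapping[c] for c in km.predict(X_te)])
    tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
    return tp / (tp + fn), fp / (fp + tn)  # true positive rate, false positive rate
```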

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

1741 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been recognized by most application developers because of its open-source nature and good compatibility, which greatly enrich the categories of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which has led to the rapid growth of malware and brought serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of the application to a large extent. Since the current Android system does not restrict the number of permissions an application can request, developers tend to apply for more permissions than actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments were conducted to evaluate our methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy while maintaining a low false positive rate. The AdaboostM1 (J48) classification algorithm combined with information gain feature selection gives the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a false positive rate of 0.
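A hedged sketch of the classifier family described, not the authors' exact Weka setup: AdaBoost over a decision tree (scikit-learn's analogue of AdaBoostM1 with J48) preceded by an information-gain-style feature selection step. The feature count and tree depth are assumptions; extraction of the used-permission/API-call vectors is not shown.

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline

# X rows: binary vectors of actually used permissions and API calls; y: 0 = benign, 1 = malware
malware_detector = Pipeline([
    ("ig", SelectKBest(mutual_info_classif, k=50)),        # keep top-50 informative features
    ("ada", AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=3),     # weak learner standing in for J48
        n_estimators=100, random_state=0)),                # 'estimator=' requires scikit-learn >= 1.2
])
# malware_detector.fit(X_train, y_train); malware_detector.predict(X_test)
```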

Keywords: Android, permissions combination, API calls, machine learning.

1740 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for bank audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, as well as a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted at different specified conditions, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.

Keywords: Audit fee, heteroscedasticity, Lagrange multiplier test, periodicity.

1739 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem

Authors: Brandon Foggo, Nanpeng Yu

Abstract:

Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit’s connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit’s connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit’s topology - the secondary transformer to phase association. This topology component describes the set of phase lines that feed power to a given secondary transformer (and therefore a given group of power consumers). Recovering the documentation of this component is called Phase Identification, and it is typically performed with physical measurements. These measurements can take on the order of several months, but with supervised learning, the time required can be reduced significantly. This paper compares several such methods applied to Phase Identification for a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method which obtains high accuracy (> 96% in most cases, > 92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
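A hedged illustration of the reported setting, not the paper's exact models or preprocessing: a supervised classifier is trained on only 5% of labeled measurements and evaluated on the rest to see how well the transformer-to-phase association is recovered. The feature construction and the choice of logistic regression are assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def phase_id_accuracy(X, y):
    """X: per-transformer voltage/load features; y: phase label (e.g. A/B/C)."""
    X_lab, X_rest, y_lab, y_rest = train_test_split(
        X, y, train_size=0.05, stratify=y, random_state=0)  # only 5% labeled, as in the abstract
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_lab, y_lab)
    return accuracy_score(y_rest, clf.predict(X_rest))
```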

Keywords: Distribution network, machine learning, network topology, phase identification, smart grid.

1738 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors consist of important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Although many machine-learning algorithms exist, the classification here is done using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method which considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
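A hedged sketch of the described idea: per-pixel features are built from a neighborhood window (pixel intensity plus the neighborhood mean reflectance) and classified with an SVM. The window size, kernel and label set are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC

def neighborhood_features(band, size=3):
    """Return (intensity, neighborhood mean) features for every interior pixel of one band."""
    h, w = band.shape
    r = size // 2
    feats = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = band[i - r:i + r + 1, j - r:j + r + 1]
            feats.append([band[i, j], window.mean()])   # intensity + mean reflectance of neighbors
    return np.array(feats)

# X_train = neighborhood_features(training_band); clf = SVC(kernel="rbf").fit(X_train, y_train)
```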

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.

1737 Quality Parameters of Offset Printing Wastewater

Authors: Kiurski S. Jelena, Kecić S. Vesna, Aksentijević M. Snežana

Abstract:

Samples of tap water and wastewater were collected in three offset printing facilities in Novi Sad, Serbia. Ten physicochemical parameters were analyzed within all collected samples: pH, conductivity, m-alkalinity, p-alkalinity, acidity, carbonate concentration, hydrogen carbonate concentration, active oxygen content, chloride concentration and total alkali content. All measurements were conducted using standard analytical and instrumental methods. Comparing the obtained results for tap water and wastewater, a clear quality difference was noticeable, since all physicochemical parameters were significantly higher within the wastewater samples. The study also involves the application of simple linear regression analysis on the obtained dataset. Using the software package ORIGIN 5, the pH value was correlated with the other physicochemical parameters. Based on the obtained values of the Pearson correlation coefficient, a strong correlation between chloride concentration and pH (r = -0.943), as well as between acidity and pH (r = -0.855), was determined. In addition, a statistically significant relationship with pH was obtained only for acidity and chloride concentration, since the values of the F parameter (247.634 and 182.536) were higher than Fcritical (5.59). In this way, the results of the statistical analysis highlighted the most influential parameters of water contamination in offset printing, namely acidity and chloride concentration. The results showed that the variable dependence could be represented by the general regression model y = a0 + a1x + k, which further yielded the corresponding regression plots.
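A hedged sketch of the statistical workflow described (the study itself used ORIGIN 5): simple linear regression of pH against each physicochemical parameter, reporting the Pearson correlation coefficient and the regression F statistic for comparison with Fcritical.

```python
import numpy as np
from scipy import stats

def regress_on_ph(ph, x):
    """Simple linear regression of pH on one parameter x (e.g. chloride concentration)."""
    slope, intercept, r, p_value, stderr = stats.linregress(x, ph)
    n = len(ph)
    # F statistic of a simple regression; compare against F_critical with (1, n-2) degrees of freedom
    F = (r ** 2) / ((1 - r ** 2) / (n - 2))
    return {"a0": intercept, "a1": slope, "r": r, "F": F, "p": p_value}

# e.g. regress_on_ph(np.array(ph_values), np.array(chloride_conc))
```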

Keywords: Pollution, printing industry, simple linear regression analysis, wastewater.

1736 Swarm Intelligence based Optimal Linear Phase FIR High Pass Filter Design using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach

Authors: Sangeeta Mandal, Rajib Kar, Durbadal Mandal, Sakti Prasad Ghoshal

Abstract:

In this paper, an optimal design of a linear phase digital high pass finite impulse response (FIR) filter using Particle Swarm Optimization with Constriction Factor and Inertia Weight Approach (PSO-CFIWA) is presented. In the design process, the filter length, pass band and stop band frequencies, and feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and conventional gradient-based optimization techniques are not efficient for digital filter design. Given the filter specifications to be realized, the PSO-CFIWA algorithm generates a set of optimal filter coefficients and tries to meet the ideal frequency response characteristic. In this paper, the designs of optimal FIR high pass filters of different orders have been performed for the given problem. The simulation results have been compared to those obtained by well-accepted algorithms such as the Parks-McClellan algorithm (PM) and the genetic algorithm (GA). The results show that the proposed optimal filter design approach using PSO-CFIWA outperforms PM and GA, not only in the accuracy of the designed filter but also in convergence speed and solution quality.
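A minimal, hedged PSO sketch with a constriction factor and inertia weight, searching the coefficients of an even-symmetric (linear phase) FIR high pass filter against an ideal brick-wall response. It illustrates the search idea only; the fitness function, swarm parameters and filter specifications are assumptions, not the authors' PSO-CFIWA formulation.

```python
import numpy as np

def ideal_highpass(w, wc):
    return (w >= wc).astype(float)

def fitness(h_half, wc, n_grid=128):
    # Even-symmetric impulse response (linear phase) rebuilt from half the coefficients
    h = np.concatenate([h_half, h_half[-2::-1]])
    w = np.linspace(0, np.pi, n_grid)
    H = np.abs(np.array([np.sum(h * np.exp(-1j * wk * np.arange(len(h)))) for wk in w]))
    return np.sum((H - ideal_highpass(w, wc)) ** 2)        # squared magnitude-response error

def pso_fir_highpass(order=20, wc=0.5 * np.pi, n_particles=30, iters=200,
                     w_inertia=0.7, c1=1.5, c2=1.5, chi=0.729):
    dim = order // 2 + 1
    rng = np.random.default_rng(0)
    x = rng.uniform(-0.5, 0.5, (n_particles, dim))          # particle positions = half coefficients
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p, wc) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # velocity update combining inertia weight and constriction factor
        v = chi * (w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = x + v
        f = np.array([fitness(p, wc) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return np.concatenate([gbest, gbest[-2::-1]])           # full symmetric filter of length order+1
```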

Keywords: FIR filter, PSO-CFIWA, PSO, Parks-McClellan algorithm, evolutionary optimization technique, magnitude response, convergence, high pass filter.

1735 Topographical Image Transference Compatibility Generated Through Moiré Technique Applying Parametrical Softwares of Computer Assisted Design

Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino

Abstract:

Computer-aided design relies on the support of parametric software in the design of machine components as well as of any other pieces of interest. The complexity of the element under study sometimes poses certain difficulties for computer design, or may even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of images of an already conceived body into a matrix of points which can be visualized by the design software. The literature exhibits several techniques for obtaining the dimensional fields of machine components, such as contact instruments (MMC), calipers and optical methods such as laser scanning, holography and moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering, applied to bodies of non-complex geometry such as simple solid figures, creating matrices of points. These matrices were forwarded to a parametric software named SolidWorks to generate the virtual object. The volume data obtained by mechanical means, i.e., by caliper, the volume obtained through the moiré method and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase shifting moiré methods as an instrument of reverse engineering, serving also to support farm machinery element designs.

Keywords: Reverse engineering, Moiré technique, three dimensional image generation.

1734 Measuring the Structural Similarity of Web-based Documents: A Novel Approach

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian

Abstract:

Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics and tree measures in terms of their DOM-Trees. Other methods measure the similarity within the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM-Trees, which represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of integers, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.
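A heavily simplified, hedged sketch of the idea: each tree is represented by a string of integer structural properties, and similarity is scored by global sequence alignment. The paper's actual property strings and scoring scheme are richer; here the property is just the out-degree sequence in breadth-first order and the alignment is basic Needleman-Wunsch.

```python
from collections import deque

def degree_string(adj, root):
    """adj: dict node -> list of children; returns the BFS out-degree sequence as the property string."""
    seq, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        children = adj.get(node, [])
        seq.append(len(children))
        queue.extend(children)
    return seq

def align_score(a, b, gap=-1):
    """Global alignment score; the match reward decreases with the degree difference."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + (1 - abs(a[i - 1] - b[j - 1]))
            dp[i][j] = max(match, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

# similarity = align_score(degree_string(adj1, root1), degree_string(adj2, root2))
```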

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

1733 Power System Damping Using Hierarchical Fuzzy Multi- Input PSS and Communication Lines Active Power Deviations Input and SVC

Authors: Mohammad Hasan Raouf, Ahmad Rouhani, Mohammad Abedini, Ebrahim Rasooli Anarmarzi

Abstract:

In this paper, the application of a hierarchical fuzzy system (HFS) based on an MPSS and an SVC in a multi-machine environment is studied. The effect of the active power variance signal of the communication lines between two regions (ΔPTie-line), as one of the inputs of the hierarchical fuzzy multi-input PSS and SVC (HFMPSS & SVC), on the increase of low frequency oscillation damping is also examined. In the MPSS, to obtain better efficiency, an auxiliary signal of reactive power deviation (ΔQ) is added to the ΔP + Δω input type PSS. The number of rules grows exponentially with the number of variables in a classic fuzzy system; to reduce the number of rules, the HFS consists of a number of low-dimensional fuzzy systems in a hierarchical structure. A phasor model of the SVC is described and used in this paper. The performances of the MPSS and the ΔPTie-line based HFMPSS, as well as the proposed method, in damping the inter-area mode of oscillation are examined in response to disturbances. The efficiency of the proposed model is examined by simulating a four-machine power system. Results show that the proposed method performs satisfactorily over the whole range of disturbances and reduces the system cost.

Keywords: Communication lines active power variance signal, Hierarchical fuzzy system (HFS), Multi-input power system stabilizer (MPSS), Static VAR compensator (SVC).

1732 Compact Optical Sensors for Harsh Environments

Authors: Branislav Timotijevic, Yves Petremand, Markus Luetzelschwab, Dara Bayat, Laurent Aebi

Abstract:

Optical miniaturized sensors with remote readout are required for monitoring in harsh electromagnetic environments. For example, in turbo and hydro generators, excessively high vibrations of the end-windings can lead to dramatic damage, imposing very high additional service costs. A significant change of the generator temperature can also be an indicator of system failure. Continuous monitoring of vibrations, temperature, humidity, and gases is therefore mandatory. The high electromagnetic fields in the generators impose the use of non-conductive devices in order to prevent electromagnetic interference and to electrically isolate the sensing element from the electronic readout. Metal-free sensors are good candidates for such systems since they are immune to very strong electromagnetic fields and are non-conductive. We have realized miniature optical accelerometer and temperature sensors for remote sensing of harsh environments using the common, inexpensive silicon Micro Electro-Mechanical System (MEMS) platform. Both devices show a highly linear response. The accelerometer deviates by less than 1% from the linear fit when tested in the range 0 - 40 g. The temperature sensor provides a measurement accuracy better than 1 °C in the range 20 - 150 °C. The design of other types of sensors for environments with high electromagnetic interference is also discussed.

Keywords: Accelerometer, harsh environment, optical MEMS, pressure sensor, remote sensing, temperature sensor.

1731 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: A. Appe, B. Poluparthi, L. Kasivajjula, U. Mv, S. Bagadi, P. Modi, A. Singh, H. Gunupudi, S. Troiano, J. Paul, J. Stovall, J. Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven, machine learning-regression framework which aids strategists in formulating key decisions to improve the facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State and about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, and a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP (SHapley Additive exPlanations), is leveraged to quantify the relative importance of features impacting the market share. Typical techniques in the literature for quantifying the degree of competitiveness among facilities use an empirical method to calculate a competitive factor that interprets the severity of competition. The proposed method identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict the market share. To identify key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For relative quantification of features at a facility level, SHAP, a model-agnostic explainer, was incorporated; this helped to identify and rank the attributes impacting market share at each facility. This approach amalgamates two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias and drive strategic business decisions.
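A hedged sketch of the modeling stage only (the DAG-based competitor identification is not shown): a Random Forest regression for market share, global drivers via permutation importance, and facility-level attribution via SHAP. Feature columns, hyperparameters and split sizes are placeholders.

```python
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def market_share_drivers(X, y):
    """X: facility/demographic features (DataFrame); y: market share per record."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    # Overall key drivers of market share
    perm = permutation_importance(rf, X_te, y_te, n_repeats=20, random_state=0)
    # Facility-level attributions with the tree explainer
    shap_values = shap.TreeExplainer(rf).shap_values(X_te)
    return rf, perm.importances_mean, shap_values
```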

Keywords: Competition, DAGs, hospital, healthcare, machine learning, market share, random forest, SHAP.

1730 Optimization of Air Pollution Control Model for Mining

Authors: Zunaira Asif, Zhi Chen

Abstract:

Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy by developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly address two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open pit metal mine in Utah, USA. The method uses meteorological data in a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at a marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
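A hedged toy instance of the cost-minimization idea; the device set, costs and efficiencies below are illustrative assumptions, not the paper's data. It minimizes treatment cost subject to a required particulate removal level.

```python
from scipy.optimize import linprog

# Decision variables: fraction of flue gas routed to [baghouse, ESP, wet scrubber]
cost = [3.0, 5.0, 7.5]                # $ per unit gas treated (assumed)
removal_pm = [0.99, 0.97, 0.90]       # PM removal efficiencies (assumed)

# Require overall PM removal >= 0.95 of the emitted load, and total treated fraction <= 1
A_ub = [[-e for e in removal_pm],     # -sum(eff_i * x_i) <= -0.95
        [1.0, 1.0, 1.0]]              #  sum(x_i) <= 1
b_ub = [-0.95, 1.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
print(res.x, res.fun)                 # optimal device mix and minimum treatment cost
```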

Keywords: Air pollution, linear programming, mining, optimization, treatment technologies.

1729 Probabilistic Crash Prediction and Prevention of Vehicle Crash

Authors: Lavanya Annadi, Fahimeh Jafari

Abstract:

Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents on the road, delays in travel due to traffic congestion, and various indirect costs related to air transport. This research aims at probabilistic crash prediction for vehicles in the United States using machine learning, considering natural and structural factors and excluding spontaneous factors such as overspeeding. These factors range from meteorological elements such as weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to human-made structures such as road structure components like bumps, roundabouts, no exits, turning loops, give-ways, etc. The probabilities are categorized into ten distinct classes. All predictions are based on multiclass classification techniques, which fall under supervised learning. This study considers all crashes in all states collected by the US government. The crash probability was determined by employing the multinomial expected value, and a classification label was assigned accordingly. We applied three classification models: multiclass Logistic Regression, Random Forest and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the part played by natural and structural factors in crashes. The paper also provides in-depth insights through exploratory data analysis.
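A hedged sketch of the multiclass setup described; the feature engineering and the multinomial-expected-value labeling are not reproduced, and the hyperparameters are assumptions. It trains an XGBoost classifier over ten crash-probability classes and reports accuracy.

```python
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def crash_class_model(X, y):
    """X: weather and road-structure features; y: integer probability-class labels 0-9."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    model = XGBClassifier(objective="multi:softprob",
                          n_estimators=300, max_depth=6, learning_rate=0.1)
    model.fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))
```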

Keywords: Road safety, crash prediction, exploratory analysis, machine learning.

1728 A Neurofuzzy Learning and its Application to Control System

Authors: Seema Chopra, R. Mitra, Vijay Kumar

Abstract:

A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-type fuzzy inference system using a hybrid learning algorithm, which combines gradient descent and the least mean square algorithm. The proposed neurofuzzy system has the advantage of determining the number of rules automatically; it also reduces the number of rules, decreases computational time, learns faster and consumes less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory to design a fuzzy controller for linear and nonlinear dynamic systems modelled from a set of input/output data. A simulation analysis on a wide range of processes, to identify nonlinear components online in a control system, and a benchmark problem involving the prediction of a chaotic time series, is carried out. Furthermore, well-known examples of linear and nonlinear systems are also simulated under the Matlab/Simulink environment. The above combination is also illustrated in modeling the relationship between automobile trips and demographic factors.

Keywords: Fuzzy control, neuro-fuzzy techniques, fuzzy subtractive clustering, extraction of rules, and optimization of membership functions.

1727 Evaluation of Dynamic Behavior of a Machine Tool Spindle System through Modal and Unbalance Response Analysis

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

The spindle system is one of the most important components of a machine tool. The dynamic properties of the spindle affect the machining productivity and the quality of the workpieces. Thus, it is important and necessary to determine the dynamic characteristics of spindles in design and development in order to avoid forced resonance. The finite element method (FEM) has been adopted in order to obtain the dynamic behavior of the spindle system. For this reason, obtaining the Campbell diagrams and determining the critical speeds are very useful for evaluating the spindle system dynamics. The unbalance response of the system to the center-of-mass unbalance at the cutting tool is also calculated to investigate the dynamic behavior. In this paper, a program written in ANSYS Parametric Design Language (APDL), based on the finite element method, has been implemented to carry out the full dynamic analysis and evaluation of the results. Results show that the calculated critical speeds are far from the operating speed range of the spindle, so the spindle would not experience resonance, and the maximum unbalance response at operating speed is still within the acceptable limit. ANSYS Parametric Design Language (APDL) can be used by spindle designers as a tool to increase product quality and reduce cost and time in the design and development stages.

Keywords: ANSYS parametric design language (APDL), Campbell diagram, Critical speeds, Unbalance response, The Spindle system.

1726 Three Dimensional Dynamic Analysis of Water Storage Tanks Considering FSI Using FEM

Authors: S. Mahdi S. Kolbadi, Ramezan Ali Alvand, Afrasiab Mirzaei

Abstract:

In this study, the Finite Element Method has been used to investigate and analyze the seismic behavior of concrete in open rectangular water storage tanks in two-dimensional and three-dimensional space. Through this method, the dynamic responses of the fluid and the storage system can be investigated together. Soil behavior has been simulated using the tank's boundary conditions in linear form. In this research, in addition to wall flexibility, the effects of fluid-structure interaction on the seismic response of tanks have been investigated to account for the effects of a flexible foundation in linear boundary condition form, and the dynamic response of rectangular tanks in two-dimensional and three-dimensional space using the finite element method has been provided. The boundary conditions of both rigid and flexible walls in the two-dimensional finite element method have been considered to investigate the effect of wall flexibility on the seismic response of the fluid and storage system. Furthermore, a three-dimensional model of the fluid-structure interaction problem together with wall flexibility has been analyzed under the three components of an earthquake. The obtained results show that the two-dimensional model closely matches the three-dimensional results, and that foundation flexibility absorbs part of the received energy and relatively reduces the responses.

Keywords: Dynamic behavior, water storage tank, fluid-structure interaction, flexible wall.

1725 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate between predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research only focuses on a specific dataset; there is a lack of generic comparison between classifiers, which might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, based on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, depend strongly on the ratio of discriminators and perform better when there is a higher number of discriminators.
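A hedged sketch of the comparison protocol: a mock high-dimensional dataset with a controlled number of discriminating features is generated and the six classifiers are compared by cross-validated AUC. The sample size, feature counts and hyperparameters are placeholders, not the study's simulation design.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Mock omics-like data: 1000 features, only 20 of which discriminate between the two classes
X, y = make_classification(n_samples=200, n_features=1000, n_informative=20,
                           n_redundant=0, random_state=0)
classifiers = {
    "SVM": SVC(kernel="linear", probability=True),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=200),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```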

Keywords: Classification, High dimensional data, Machine learning

1724 Rapid Finite-Element Based Airport Pavement Moduli Solutions using Neural Networks

Authors: Kasthurirangan Gopalakrishnan, Marshall R. Thompson, Anshu Manik

Abstract:

This paper describes the use of artificial neural networks (ANN) for predicting non-linear layer moduli of flexible airfield pavements subjected to new generation aircraft (NGA) loading, based on the deflection profiles obtained from Heavy Weight Deflectometer (HWD) test data. The HWD test is one of the most widely used tests for routinely assessing the structural integrity of airport pavements in a non-destructive manner. The elastic moduli of the individual pavement layers backcalculated from the HWD deflection profiles are effective indicators of layer condition and are used for estimating the pavement's remaining life. HWD tests were periodically conducted at the Federal Aviation Administration's (FAA's) National Airport Pavement Test Facility (NAPTF) to monitor the effect of Boeing 777 (B777) and Boeing 747 (B747) test gear trafficking on the structural condition of flexible pavement sections. In this study, a multi-layer, feed-forward network which uses an error-backpropagation algorithm was trained to approximate the HWD backcalculation function. A synthetic database generated using an advanced non-linear pavement finite-element program was used to train the ANN to overcome the limitations associated with conventional pavement moduli backcalculation. The changes in the ANN-based backcalculated pavement moduli with trafficking were used to compare the relative severity effects of the aircraft landing gears on the NAPTF test pavements.

Keywords: Airfield pavements, ANN, backcalculation, new generation aircraft.

1723 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

Authors: C. Ardil

Abstract:

This paper presents an original application of multiple criteria decision making analysis theory to the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be considered a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering multiple criteria utility theory and maximal regret minimization theory methods, as well as aircraft technical, economic, and environmental aspects. Multiple criteria decision making analysis methods use different normalization techniques to allow criteria with qualitative and quantitative data to be aggregated in the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were applied to the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process by enabling the decision maker to: (i) use higher-level knowledge regarding the selection of criteria weights and the proposed technique, and (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft were considered in order to illustrate the proposed approach. The results obtained with the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to changes in criteria weights was also performed to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
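A hedged numpy sketch of the four data normalization techniques named in the abstract, applied column-wise to a benefit-type decision matrix (rows = aircraft alternatives, columns = criteria). Handling of cost-type criteria and the weighting/aggregation steps are omitted.

```python
import numpy as np

def vector_norm(X):            # vector normalization
    return X / np.sqrt((X ** 2).sum(axis=0))

def linear_sum_norm(X):        # linear (sum) normalization
    return X / X.sum(axis=0)

def linear_max_norm(X):        # linear (max) normalization
    return X / X.max(axis=0)

def linear_max_min_norm(X):    # linear (max-min) normalization
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# X = np.array([...]); each normalized matrix can then be combined with the criteria weights
# and the resulting rankings compared, e.g. with Spearman's rho.
```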

Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

1722 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual (physical) and virtual (cloud) multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the data collected in order to determine whether the differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. Results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, which were computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise from one way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.

Keywords: Cloud computing systems, multicore systems, parallel delaunay triangulation, parallel surface modeling and generation.

1721 Development of Rock Engineering System-Based Models for Tunneling Progress Analysis and Evaluation: Case Study of Tailrace Tunnel of Azad Power Plant Project

Authors: S. Golmohammadi, M. Noorian Bidgoli

Abstract:

Tunneling progress is a key parameter in the blasting method of tunneling. Taking measures to enhance tunneling advance can limit the progress distance achievable without a supporting system, subsequently reducing or eliminating the risk of damage. This paper focuses on modeling tunneling progress using three main groups of parameters (tunneling geometry, blasting pattern, and rock mass specifications) based on the Rock Engineering Systems (RES) methodology. In the proposed models, four main parameters affecting tunneling progress are considered as inputs (RMR, Q-system, specific charge of blasting, area), with progress as the output. Data from 86 blasts conducted at the tailrace tunnel of the Azad Dam, western Iran, were used to evaluate the progress value for each blast. The results indicated that, for the 86 blasts, the progress estimated by the model mostly aligns with the measured progress. This paper presents a method for building the interaction matrix (statistical base) of the RES model. Additionally, a comparison was made between the results of the new RES-based model and a Multi-Linear Regression (MLR) analysis model. In the RES-based model, the effective parameters are RMR (35.62%), Q (28.6%), q (specific charge of blasting) (20.35%), and A (15.42%), respectively, whereas for the MLR analysis the main parameters are RMR, Q (system), q, and A. These findings confirm the superior performance of the RES-based model over the other proposed models.

Keywords: Rock Engineering Systems, tunneling progress, Multi Linear Regression, Specific charge of blasting.

1720 Investigation of the Operational Principle and Flow Analysis of a Newly Developed Dry Separator

Authors: Sung Uk Park, Young Su Kang, Sangmo Kang, Yong Kweon Suh

Abstract:

Mineral production, waste concrete (fine aggregate) processing, and the optical, industrial, and construction fields employ separators to separate solids and classify them according to their size. Various sorting machines are used in industry, operating on electrical properties, centrifugal force, wind power, vibration, or magnetic force. This study on separators has been carried out to contribute to the environmental industry. We perform CFD analysis to understand the basic mechanism of the separation of waste concrete (fine aggregate) particles from air with a machine built around a bladed rotor. In the CFD work, we first performed two-dimensional particle tracking for various particle sizes, for models with 1°, 1.5°, and 2° angles between adjacent blades, to verify the boundary conditions and the rotating-domain method to be used in 3D. We then developed a 3D numerical model with ANSYS CFX to calculate the air flow and track the particles. We judged the separation capability for a given particle size by counting how many of the 10 particles issued at the inlet escape from the domain toward the exit. We confirm that particles show stagnant behavior near the exit of the rotating blades, where the centrifugal force acting on the particles is in balance with the air drag force. It was also found that the minimum particle size that can be separated by the machine with the rotor is determined by the particles' capability to stay at the outlet of the rotor channels.
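A hedged illustration of the force balance mentioned in the abstract, assuming Stokes drag on a spherical particle of diameter d_p and density rho_p at radius r in a rotor turning at angular speed omega with radial air speed u_r; this is not the paper's formula, only the standard balance it alludes to.

```latex
\[
  \underbrace{\tfrac{\pi}{6}\,\rho_p\, d_p^{3}\,\omega^{2} r}_{\text{centrifugal force}}
  \;=\;
  \underbrace{3\pi \mu\, d_p\, u_r}_{\text{Stokes drag}}
  \quad\Longrightarrow\quad
  d_{p,\min} \;=\; \sqrt{\dfrac{18\,\mu\, u_r}{\rho_p\,\omega^{2} r}}
\]
```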

Keywords: Environmental industry, Separator, CFD, Fine aggregate.

1719 Smart Power Scheduling to Reduce Peak Demand and Cost of Energy in Smart Grid

Authors: Hemant I. Joshi, Vivek J. Pandya

Abstract:

This paper discusses simulation and experimental work on a small Smart Grid containing ten consumers. A Smart Grid is characterized by a two-way flow of real-time information and energy. An RTP (Real Time Pricing) based tariff is implemented in this work to reduce peak demand, PAR (peak to average ratio) and the cost of energy consumed. In the experimental work described here, the operation of the Smart Plug, HEC (Home Energy Controller), HAN (Home Area Network) and the communication link between consumers and the utility server are explained. Algorithms for the Smart Plug, HEC, and utility server are presented and explained. After receiving the real-time price for the different time slots of the day, the HEC reacts automatically by running an algorithm based on the Linear Programming Problem (LPP) method to find the optimal energy consumption schedule. The algorithm made for the utility server can handle more than one off-peak time period during the day. Simulation and experimental work are carried out for different cases. At the end of this work, a comparison between simulation results and experimental results is presented to show the effectiveness of the adopted minimization method.
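A hedged toy version of the HEC scheduling idea, not the authors' exact algorithm: a deferrable load's energy is spread over hourly slots to minimize cost under real-time prices, subject to a total-energy requirement and a per-slot power limit. The prices and limits are assumed values.

```python
from scipy.optimize import linprog

prices = [4.2, 3.8, 3.5, 5.1, 6.4, 7.0, 6.1, 4.9]   # RTP for 8 time slots (assumed values)
total_energy = 6.0                                    # kWh the appliance must consume in total
max_per_slot = 1.5                                    # kW limit per slot

res = linprog(c=prices,                               # minimize sum(price_t * energy_t)
              A_eq=[[1.0] * len(prices)], b_eq=[total_energy],
              bounds=[(0, max_per_slot)] * len(prices), method="highs")
print(res.x)                                          # optimal per-slot consumption schedule
```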

Keywords: Smart Grid, Real Time Pricing, Peak to Average Ratio, Home Area Network, Home Energy Controller, Smart Plug, Utility Server, Linear Programming Problem.

1718 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

Performance based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected and, if it does not conform to the desired performance objective, the trial design is revised. In this context, the development of a fundamental approach for performance based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy based plastic design procedure was implemented for trial performance based seismic designs of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBD of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.

Keywords: Masonry Infilled Frame, Energy Methods, Near-fault Ground Motions, Pushover Analysis, Nonlinear Dynamic Analysis, Seismic Demand.

1717 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available on the earth, it is beneficial to produce plastic products and components, which find application in many industrial as well as household consumer products. About 50% of plastic products are manufactured by the injection moulding process. To produce better quality products, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastics; hence, control of the process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing plastic products. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene is studied. Tensile strength tests of specimens produced by the injection moulding machine are carried out on a universal testing machine. Using the Taguchi technique with the help of the MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time are obtained. We found that the process parameter packing pressure contributes most to the production of plastic products with good tensile strength.
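A hedged sketch of the Taguchi analysis step only; the orthogonal array design and the MiniTab workflow are not reproduced. It computes the larger-is-better signal-to-noise ratio for tensile strength and picks, for each factor, the level with the highest mean S/N.

```python
import numpy as np

def sn_larger_is_better(y):
    """y: replicate tensile-strength measurements for one experimental run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))   # Taguchi larger-is-better S/N ratio

def best_levels(design, responses):
    """design: run x factor matrix of level codes; responses: list of replicate arrays per run."""
    design = np.asarray(design)
    sn = np.array([sn_larger_is_better(r) for r in responses])
    best = {}
    for f in range(design.shape[1]):
        levels = np.unique(design[:, f])
        means = [sn[design[:, f] == lv].mean() for lv in levels]
        best[f] = levels[int(np.argmax(means))]       # level with highest mean S/N wins
    return best
```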

Keywords: Injection moulding, tensile strength, Taguchi method, poly-propylene.

1716 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide the sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Current surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to inner organs. The goal of this research was to design and develop a real-time Simulink based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of the microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide the sense of touch to the user controlling the forceps through a machine-computer interface.

Keywords: Haptic feedback, MATLAB, Simulink, Strain Gage, Surgical Robot.

1715 Impact of the Electricity Market Prices on Energy Storage Operation during the COVID-19 Pandemic

Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić

Abstract:

With the restructuring and deregulation of the power system, storage owners, generation companies or private producers can offer their services on various power markets and earn income in different types of markets, such as the day-ahead, real-time and ancillary services markets. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The optimization of the energy storage operation was performed using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In the numerical examples, the pumped storage hydropower plant operation has been optimized using the realized hourly electricity market prices from Nord Pool for the pre-pandemic (2019) and pandemic (2020 and 2021) years. The impact of electricity market prices during the COVID-19 pandemic on energy storage operation is shown through an analysis of income, operating hours, reserved capacity and consumed energy for each service. The results indicate the role of energy storage during significant fluctuations in electricity and service prices.
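A hedged, arbitrage-only simplification of the scheduling problem; the ancillary-service products, hydraulic constraints and actual Nord Pool prices are omitted, and all numbers below are assumed. It is a small MILP in PuLP that decides, hour by hour, whether to generate or pump, given prices and reservoir energy limits.

```python
import pulp

prices = [42.0, 38.0, 35.0, 55.0, 80.0, 95.0, 70.0, 50.0]   # EUR/MWh (assumed)
P_gen, P_pump, eff = 100.0, 100.0, 0.75                      # MW ratings, round-trip efficiency
E_max, E0 = 400.0, 200.0                                     # reservoir capacity / initial energy (MWh)

T = range(len(prices))
prob = pulp.LpProblem("pumped_storage_arbitrage", pulp.LpMaximize)
g = pulp.LpVariable.dicts("gen", T, cat="Binary")            # 1 = generating in hour t
p = pulp.LpVariable.dicts("pump", T, cat="Binary")           # 1 = pumping in hour t

prob += pulp.lpSum(prices[t] * (P_gen * g[t] - P_pump * p[t]) for t in T)   # arbitrage income
for t in T:
    prob += g[t] + p[t] <= 1                                  # cannot generate and pump at once
    stored = E0 + pulp.lpSum(eff * P_pump * p[k] - P_gen * g[k] for k in range(t + 1))
    prob += stored >= 0                                       # reservoir cannot go below empty
    prob += stored <= E_max                                   # nor exceed its capacity
prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = [(t, pulp.value(g[t]), pulp.value(p[t])) for t in T]
```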

Keywords: Electrical market prices, electricity market, energy storage optimization, mixed integer linear programming, MILP, optimization.

1714 The Effects of Shot and Grit Blasting Process Parameters on Steel Pipes Coating Adhesion

Authors: Saeed Khorasanizadeh

Abstract:

The adhesion strength of the exterior or interior coating of steel pipes is very important. Increasing coating adhesion can increase the coating lifetime and the safety factor of the transmission line pipe, and decrease the corrosion rate and the costs. Preparation of steel pipe surfaces before coating is done mechanically by shot and grit blasting. Parameters affecting this process include the particle size of the abrasives, the distance to the surface, the abrasive flow rate, the abrasive's physical properties and shape, the selection of abrasive, the kind of machine and its power, the standard of surface cleanliness, the roughness, the blasting time and the ambient humidity. This research intended to find conditions which improve surface preparation, adhesion strength and corrosion resistance of the coating. Accordingly, this paper studies the effect of varying the abrasive flow rate, the abrasive particle size and the blasting time on the steel surface roughness, including the effect of over-blasting, using a centrifugal blasting machine. A number of steel samples were prepared (according to API 5L X52) and coated with epoxy powder, and the coating adhesion strengths were compared by the pull-off test. The results show that increasing the abrasive particle size and flow rate increases the steel surface roughness and the coating adhesion strength, but increasing the blasting time can over-blast the surface and increase its temperature and hardness, thereby decreasing the steel surface roughness and the coating adhesion strength.

Keywords: Surface preparation, abrasive particles, adhesion strength.

1713 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution exists, although optimization algorithms such as genetic algorithms and ant colony optimization are available. The large scale of distributed systems makes these traditional optimization algorithms challenging to work with, so heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling. The research proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.

Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
