Search results for: simple random sampling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2287

1837 Ensemble Learning with Decision Tree for Remote Sensing Classification

Authors: Mahesh Pal

Abstract:

In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification is focused on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy of 87.43% was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with this data, whereas the performance of the boosted decision tree degrades and a classification accuracy of 79.7% is achieved, which is even lower than that achieved (i.e. 80.02%) by the unboosted decision tree classifier.
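
As a concrete illustration of the comparison described above, the following minimal sketch (not the authors' code) trains a single decision tree alongside bagging, boosting and random-subspace ensembles on synthetic stand-in data; DECORATE is omitted since it is not available in scikit-learn, and the `estimator` parameter name assumes a recent scikit-learn release.

```python
# Minimal sketch: single decision tree vs. bagging, boosting and random-subspace
# ensembles on synthetic data standing in for the remote sensing dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base = DecisionTreeClassifier(random_state=0)
models = {
    "single tree": base,
    "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
    # Random subspace = bagging on feature subsets rather than sample subsets.
    "random subspace": BaggingClassifier(estimator=base, n_estimators=50,
                                         bootstrap=False, max_features=0.5,
                                         random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```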

Keywords: Ensemble learning, decision tree, remote sensing classification.

1836 Time Series Regression with Meta-Clusters

Authors: Monika Chuchro

Abstract:

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. In this case, clustering was performed to obtain subgroups of time series data with normal distributions from inflow data of a wastewater treatment plant, which are composed of several groups differing in mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods. The Rand index was used to measure the similarity. After this simple meta-clustering, a regression model was built for each subgroup. The final model was the sum of the subgroup models. The quality of the obtained model was compared with a regression model built using the same explanatory variables but without clustering of the data. Results were compared using the coefficient of determination (R2), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. Preliminary results allow us to foresee the potential of the presented technique.
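
The per-cluster modelling step can be sketched as below, assuming K-means for clustering and ordinary least squares for the subgroup models; the synthetic two-regime inflow series, the parameter values and the use of scikit-learn are illustrative assumptions, not the paper's setup.

```python
# Sketch (assumed, not the paper's code): cluster a series, fit one linear model
# per cluster, and compare R^2 / MAPE against a single global model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(0)
t = np.arange(1000.0).reshape(-1, 1)
# Synthetic "inflow" made of two regimes with different mean levels.
y = np.where(rng.random(1000) < 0.5, 100, 200) + 0.05 * t.ravel() + rng.normal(0, 5, 1000)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))

pred_global = LinearRegression().fit(t, y).predict(t)
pred_cluster = np.empty_like(y)
for k in np.unique(labels):
    m = labels == k
    pred_cluster[m] = LinearRegression().fit(t[m], y[m]).predict(t[m])

for name, p in [("global", pred_global), ("per-cluster", pred_cluster)]:
    print(name, r2_score(y, p), mean_absolute_percentage_error(y, p))
```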

Keywords: Clustering, Data analysis, Data mining, Predictive models.

1835 The Direct Drivers of Ethnocentric Consumer, Intention and Actual Purchasing Behavior in Malaysia

Authors: Nik Kamariah Nik-Mat, Noor Hasmini Abd-Ghani, Jamal Mohammed Esmail Al-Ekam

Abstract:

The Malaysian government has consistently revived its "Buy Malaysian Goods" campaign from time to time. The purpose of the campaign is to remind consumers to be ethnocentric and patriotic when purchasing products and services. This is necessary to ensure high demand for local products and services compared to foreign products. However, the decline of domestic investment in 2012 has triggered concern for the Malaysian economy. Hence, this study attempts to determine the drivers of actual purchasing behavior, intention to purchase domestic products, and ethnocentrism. The study employs cross-sectional primary data, self-administered to households selected using stratified random sampling in four Malaysian regions. Nine drivers of actual domestic purchasing behavior (cultural openness, conservatism, collectivism, patriotism, control belief, interest in foreign travel, attitude, ethnocentrism and intention) were measured using 60 items on a 7-point Likert scale. Of 1000 questionnaires distributed, 486 were returned, representing a 48.6 percent response rate. From the fitted structural model (SEM analysis), it was found that the drivers of actual purchase behavior are collectivism, cultural openness and patriotism; the drivers of intention to purchase domestic products are attitude, control belief, collectivism and conservatism; and the drivers of ethnocentrism are cultural openness, control belief, foreign travel and patriotism. It also shows that Malaysian consumers score high in ethnocentrism and patriotism. The findings are discussed from the perspective of their implications for the Malaysian National Agenda.
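
For readers unfamiliar with the sampling scheme mentioned above, the sketch below shows proportional stratified random sampling of households by region; the region names, frame sizes and allocation are made-up illustrations, not the study's sampling frame.

```python
# Illustrative sketch only: proportional stratified random sampling of households
# by region (region sizes here are made up, not the study's sampling frame).
import random

random.seed(42)
frame = {"North": 12000, "Central": 30000, "South": 18000, "East": 10000}
total = sum(frame.values())
n = 1000  # questionnaires to distribute

for region, size in frame.items():
    n_region = round(n * size / total)             # proportional allocation
    sample = random.sample(range(size), n_region)  # simple random sample within stratum
    print(region, n_region, sample[:5])
```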

Keywords: Actual purchase, ethnocentrism, culture openness, conservatism, collectivism, patriotism.

1834 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments

Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing

Abstract:

Simulation modeling can be used to solve real-world problems and provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required to be able to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples can help capture surface curvature characteristics. A suitable number of LHS sample points should be considered in order to get an accurate nonlinear model with a minimum number of simulation experiments.
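
A minimal sketch of the augmentation idea is given below, assuming a face-centred CCD in coded units, scipy's Latin Hypercube sampler, and a toy analytic function standing in for the CO2 liquefaction simulation; it is not the authors' implementation.

```python
# Sketch under assumptions: augment central composite design points with Latin
# Hypercube samples and fit a second-order (quadratic) response surface model.
import numpy as np
from scipy.stats import qmc
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def simulator(x):                      # stand-in for the CO2 liquefaction simulation
    return 3 + 2*x[:, 0] - x[:, 1] + 1.5*x[:, 0]**2 + 0.5*x[:, 0]*x[:, 1]

# Face-centred CCD for two factors in coded units: factorial, axial and centre points.
ccd = np.array([[-1,-1],[-1,1],[1,-1],[1,1],[-1,0],[1,0],[0,-1],[0,1],[0,0]], float)
lhs = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(10), [-1, -1], [1, 1])
X = np.vstack([ccd, lhs])
y = simulator(X)

quad = PolynomialFeatures(degree=2)
model = LinearRegression().fit(quad.fit_transform(X), y)
print(model.coef_, model.intercept_)
```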

Keywords: Central composite design, CO2 liquefaction, Latin Hypercube Sampling, simulation-based optimization.

1833 CFD Modeling of PROX Microreactor for Fuel Processing

Authors: M. Vahabi, M. H. Akbari

Abstract:

In order to investigate the performance of a PROX microreactor, two-dimensional modeling of the reacting flow between two parallel plates is performed through a finite volume method using an improved SIMPLE algorithm. A three-step surface kinetics scheme including hydrogen oxidation, carbon monoxide oxidation and the water-gas shift reaction is applied for a Pt-Fe/γ-Al2O3 catalyst and operating temperatures of about 100ºC. The flow pattern, pressure field, temperature distribution, and mole fractions of species are found in the whole domain for all cases. Also, the reactive length required for reducing carbon monoxide from about 2% to less than 10 ppm is found. Furthermore, the effects of hydraulic diameter, wall temperature, and inlet mole fractions of air and water are investigated by considering carbon monoxide selectivity and conversion. It is found that air and water addition may improve the performance of the microreactor in carbon monoxide removal under such operating conditions; this is in agreement with previously published results.

Keywords: CFD, Fuel Processing, PROX, Reacting Flow, SIMPLE algorithm.

1832 Free Vibration Analysis of Functionally Graded Pretwisted Plate in Thermal Environment Using Finite Element Method

Authors: S. Parida, S. C. Mohanty

Abstract:

The free vibration behavior of a thick pretwisted cantilevered functionally graded material (FGM) plate subjected to a thermal environment is investigated numerically in the present paper. A mathematical model is developed in the framework of higher order shear deformation theory (HOST) with a C0 finite element formulation, i.e. independent displacements and rotations. The material properties are assumed to be temperature dependent and to vary continuously through the thickness according to a simple power-law rule with a volume fraction exponent. The finite element model is discretized into eight-node quadratic serendipity elements with seven degrees of freedom per node. The effects of plate geometry, temperature field, and material composition on the vibrational characteristics are examined through modal analysis. Finally, the results are verified by comparing with those available in the literature.
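
The simple power-law grading rule referred to above can be written as a one-line function; the sketch below uses illustrative ceramic and metal property values and ignores the temperature dependence considered in the paper.

```python
# Minimal sketch of the simple power-law rule assumed for the FGM plate: the
# effective property varies through the thickness with volume-fraction exponent n.
# Property values below are illustrative, not the paper's temperature-dependent data.
def fgm_property(z, h, p_ceramic, p_metal, n):
    """Effective property at thickness coordinate z in [-h/2, h/2]."""
    vc = (z / h + 0.5) ** n            # ceramic volume fraction
    return p_metal + (p_ceramic - p_metal) * vc

E_top = fgm_property(+0.5e-3, 1e-3, 380e9, 70e9, n=2.0)   # ceramic-rich face
E_bot = fgm_property(-0.5e-3, 1e-3, 380e9, 70e9, n=2.0)   # metal-rich face
print(E_top, E_bot)
```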

Keywords: FGM, pretwisted plate, thermal environment, HOST, simple power law.

1831 Analysis and Design of Offshore Triceratops under Ultra-Deep Waters

Authors: Srinivasan Chandrasekaran, R. Nagavinothini

Abstract:

Offshore platforms for ultra-deep waters are form-dominant by design; hybrid systems with large flexibility in the horizontal plane and high rigidity in the vertical plane are preferred due to functional complexities. The offshore triceratops is a relatively new-generation offshore platform whose deck is partially isolated from the supporting buoyant legs by ball joints. They allow transfer of partial displacements of the buoyant legs to the deck but restrain transfer of rotational response. The buoyant legs are in turn taut-moored to the seabed using pre-tensioned tethers. The present study discusses a detailed dynamic analysis and preliminary design of the chosen geometry, which is necessary as a proof of validation for such design applications. A detailed numerical analysis of the triceratops at 2400 m water depth under random waves is presented. The preliminary design confirms member-level design requirements under various modes of failure. The tether configuration proposed in the study confirms no pull-out of tethers, as the stress variation is considerably lower than the yield value. The presented study shall aid offshore engineers and contractors in understanding the suitability of the triceratops in terms of design and dynamic response behaviour.

Keywords: Buoyant legs, dynamic analysis, offshore structures, preliminary design, random waves, triceratops.

1830 Effect of Clustering on Energy Efficiency and Network Lifetime in Wireless Sensor Networks

Authors: Prakash G L, Chaitra K Meti, Poojitha K, Divya R.K.

Abstract:

A wireless sensor network is a multi-hop, self-configuring wireless network consisting of sensor nodes. The deployment of wireless sensor networks in many application areas, e.g., aggregation services, requires self-organization of the network nodes into clusters. An efficient way to enhance the lifetime of the system is to partition the network into distinct clusters with a high-energy node as the cluster head. Different node clustering techniques have appeared in the literature and roughly fall into two families: those based on the construction of a dominating set and those based solely on energy considerations. Energy-optimized cluster formation for a set of randomly scattered wireless sensors is presented. Sensors within a cluster are expected to communicate with the cluster head only. The energy constraint and limited computing resources of the sensor nodes present the major challenges in gathering the data. In this paper we propose a framework to study how partially correlated data affect the performance of clustering algorithms. The total energy consumption and network lifetime can be analyzed by combining random geometry techniques and rate distortion theory. We also present the relation between compression distortion and data correlation.

Keywords: Clusters, multi hop, random geometry, rate distortion.

1829 A New Design of Permanent Magnets Reluctance Generator

Authors: Andi Pawawoi, Syafii

Abstract:

The instantaneous electromagnetic torque of a simple reluctance generator can be positive at one instant and negative at another. This property is utilized to design a specific permanent magnet reluctance generator. The generator is designed by combining two simple reluctance generators; it consists of two rotors mounted on the same shaft, two output windings and a permanent magnet as the field source. By this design, the electromagnetic torques on the two rotors cancel each other, so the generator input torque can be smaller. The rotor is expected only to regulate the flux flow to the two output windings alternately, so that magnetic energy is converted into electrical energy, as occurs in transformer energy conversion. Prototype trials have been made to test this design. The test results show that the new design of permanent magnet reluctance generator is able to convert energy from permanent magnets into electrical energy; this is indicated by an output power of 167% relative to the shaft input power.

Keywords: Energy, Permanent magnet, Reluctance generator.

1828 Ensemble Approach for Predicting Student's Academic Performance

Authors: L. A. Muhammad, M. S. Argungu

Abstract:

Educational data mining (EDM) has received substantial consideration. Data mining techniques have been proposed in one way or another to uncover hidden knowledge in educational data. The results of such studies assist academic institutions in further enhancing their learning processes and methods of passing knowledge to students. Consequently, the performance of students improves and the educational products are no doubt enhanced. This study adopted a student performance prediction model premised on data mining techniques with Students' Essential Features (SEF). SEF are linked to the learner's interactivity with the e-learning management system. The performance of the student predictive model is assessed by a set of classifiers, viz. Bayes Network, Logistic Regression, and Reduced Error Pruning (REP) Tree. Ensemble methods of Bagging, Boosting, and Random Forest (RF) are then applied to improve the performance of these single classifiers. The study reveals a robust affinity between learners' behaviors and their academic attainment. Results show that the REP Tree and its ensembles record the highest accuracy of 83.33% using SEF, and in terms of the Receiver Operating Characteristic (ROC) curve, the boosting method of the REP Tree records 0.903, which is the best. This result further demonstrates the dependability of the proposed model.

Keywords: Ensemble, bagging, Random Forest, boosting, data mining, classifiers, machine learning.

1827 Analysing the Elementary Science and Technology Coursebook and Student Workbook in Terms of Constructivism

Authors: Nil Duban

Abstract:

The curriculum of the primary school science course was redesigned on the basis of constructivism in the 2005-2006 academic year in Turkey. In this context, the name of the course was changed to "Science and Technology", and the content, course books and student workbooks were redesigned in light of constructivism. The aim of this study is to determine whether the Science and Technology course book and student workbook for the 5th grade of primary school are appropriate for constructivism by evaluating them in terms of its fundamental principles. Among qualitative research methods, the documentation technique (i.e. document analysis) is applied; for sample selection, criterion sampling is used from among purposeful sampling techniques. When the Science and Technology course book and workbook for the 5th grade in primary education are examined, it is seen that both books complement each other in certain areas. Consequently, it can be claimed that, in spite of some inadequate and missing points, an attempt has been made to design the course book and workbook of the primary school Science and Technology course for 5th grade students in line with the principles of constructivism. To overcome the inadequacies in the books, it can be suggested that they be redesigned. In addition, so as not to ignore the technology dimension of the course, activities that encourage students to prepare projects using the technology cycle should be included.

Keywords: Constructivism, coursebooks, science and technology education.

1826 A Finite Difference Calculation Procedure for the Navier-Stokes Equations on a Staggered Curvilinear Grid

Authors: R. M. Barron, B. Zogheib

Abstract:

A new numerical method for solving the two-dimensional, steady, incompressible, viscous flow equations on a curvilinear staggered grid is presented in this paper. The proposed methodology is finite difference based, but essentially takes advantage of the best features of two well-established numerical formulations, the finite difference and finite volume methods. Some weaknesses of the finite difference approach are removed by exploiting the strengths of the finite volume method. In particular, the issue of velocity-pressure coupling is dealt with in the proposed finite difference formulation by developing a pressure correction equation in a manner similar to the SIMPLE approach commonly used in finite volume formulations. However, since this is purely a finite difference formulation, numerical approximation of fluxes is not required. Results obtained from the present method are based on the first-order upwind scheme for the convective terms, but the methodology can easily be modified to accommodate higher order differencing schemes.

Keywords: Curvilinear, finite difference, finite volume, SIMPLE.

1825 Computational and Experimental Investigation of Supersonic Flow and their Controls

Authors: Vasana M. Don, Eldad J. Avital, Fariborz Motallebi

Abstract:

Supersonic open and closed cavity flows are investigated experimentally and computationally. The free-stream Mach number is set to two. Schlieren imaging is used to visualise the flow behaviour, showing stark differences between the open and closed cavities. Computational Fluid Dynamics (CFD) is used to simulate the open cavity flow with an aspect ratio of 4. A rear wall treatment is implemented in order to pursue a simple passive control approach. Good qualitative agreement is achieved between the experimental flow visualisation and the CFD in terms of the expansion-shock wave system. The cavity oscillations are shown to be dominated by the first and third Rossiter modes, combining into high fluctuations of non-linear nature above the cavity rear edge. A simple rear wall treatment in the form of a hole shows a mixed effect on the flow oscillations; RMS contours and time-history density fluctuations are given and analysed.
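
For orientation, the Rossiter modes mentioned above follow the classical semi-empirical formula f_m = (U/L)(m − α)/(M + 1/κ); the sketch below evaluates it with typical constants (α ≈ 0.25, κ ≈ 0.57) and illustrative flow values, not the values of this experiment.

```python
# The classical Rossiter semi-empirical formula for cavity oscillation modes,
# with typical constants; all numbers are illustrative, not taken from the paper.
def rossiter_frequency(m, U, L, M, alpha=0.25, kappa=0.57):
    """Frequency of the m-th Rossiter mode for free-stream speed U and cavity length L."""
    return (U / L) * (m - alpha) / (M + 1.0 / kappa)

U, L, M = 600.0, 0.08, 2.0          # m/s, m, free-stream Mach number (illustrative)
for m in (1, 3):                    # first and third modes, as discussed above
    print(m, rossiter_frequency(m, U, L, M))
```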

Keywords: Supersonic, Schlieren, open-cavity, flow simulation, passive control.

1824 Microbiological Contamination of Outdoor Air in Marine Durres's Harbour, Albania

Authors: Laura Gjyli, Pirro Prifti, Lindita Mukli, Silvana Gjyli, Irida Ikonomi, Jerina Kolitari

Abstract:

Microbial contamination of the outdoor air in Marine Durres's Harbour (Durres, Albania) was estimated by the sedimentation technique in August-October 2008. The sampling areas were: Ferry Terminal (FT), Fishery Harbor (FH), East Zone (EZ), Fuel Quay (FQ) and Apollonian Beach (AB). The aim of this study was to measure the aerobic plate count (mesophilic aerobic bacteria) and fungi (yeasts and molds) in the outdoor air in these areas. The number of colonies formed determines the number of cells present in the outdoor air at the time of sampling, i.e. the numbers of mesophilic aerobic bacteria and of yeasts and molds. The measure of bacteria and fungi used is CFU (Colony Forming Units) per Petri dish. Marine harbours are said to be very polluted areas. The aim of the study was thus to determine the numbers of mesophilic aerobic bacteria and of yeasts and molds, and to compare the numbers of microorganisms across the air sampling areas.

Keywords: Air microbiology, colony forming units, Marine Durres's Harbour, mesophilic aerobic bacteria, outdoor air, yeasts and molds.

1823 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.
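
A minimal sketch of the core idea is given below: the predicted place is a log-normal CDF of changeover time scaled by an estimated number of teams, which yields the sigmoidal shape described above. The team count, log-normal parameters and data are illustrative assumptions, not the authors' fitted Jukola 2019 model.

```python
# Minimal sketch (not the authors' full model): predict a team's place from the
# log-normal CDF of its changeover time, scaled by an estimated number of teams.
import numpy as np
from scipy.stats import lognorm

N_est = 1500                                   # estimated number of teams (assumption)
times = lognorm(s=0.15, scale=7200).rvs(N_est, random_state=1)   # changeover times [s]

shape, loc, scale = lognorm.fit(times, floc=0)  # fit a log-normal to observed times

def predicted_place(t):
    # Sigmoidal in t: a few elite teams on the left tail, a dense mid-pack near the mode.
    return 1 + (N_est - 1) * lognorm(shape, loc, scale).cdf(t)

for t in (5500, 7200, 9000):
    print(t, round(predicted_place(t)))
```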

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling.

1822 A New Digital Transceiver Circuit for Asynchronous Communication

Authors: Aakash Subramanian, Vansh Pal Singh Makh, Abhijit Mitra

Abstract:

A new digital transceiver circuit for asynchronous frame detection is proposed where both the transmitter and receiver contain all digital components, thereby avoiding possible use of conventional devices like monostable multivibrators with unstable external components such as resistances and capacitances. The proposed receiver circuit, in particular, uses a combinational logic block yielding an output which changes its state as soon as the start bit of a new frame is detected. This, in turn, helps in generating an efficient receiver sampling clock. A data latching circuit is also used in the receiver to latch the recovered data bits in any new frame. The proposed receiver structure is also extended from 4-bit information to any general n data bits within a frame with a common expression for the output of the combinational logic block. Performance of the proposed hardware design is evaluated in terms of time delay, reliability and robustness in comparison with the standard schemes using monostable multivibrators. It is observed from hardware implementation that the proposed circuit achieves almost 33 percent speed-up over any conventional circuit.

Keywords: Asynchronous Communication, Digital Detector, Combinational logic output, Sampling clock generator, Hardware implementation.

1821 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Thuraya M. Qaradaghi, Newroz N. Abdulrazaq

Abstract:

The McEliece cryptosystem is an asymmetric type of cryptography based on error correcting codes. The classical McEliece scheme used an irreducible binary Goppa code, which has been considered unbreakable until now, especially with the parameters [1024, 524, 101], but it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes are introduced; the codes used have flexible parameters and dynamic error vectors. A comparison between separable and irreducible Goppa codes in the McEliece cryptosystem has been carried out. For the encryption stage, to obtain a better comparison, two types of testing were chosen: in the first, the random message is constant while the parameters of the Goppa code are changed; in the second, the parameters of the Goppa code are constant (m=8 and t=10) while the random message is changed. The results show that the time needed to calculate the parity check matrix in the separable case is higher than that for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) is calculated in the decryption process of the separable type, while the time needed to execute the error locator in the decryption stage of the separable type is better than the time needed in the irreducible type. The proposed implementation was done in Visual Studio C#.

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible.

1820 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade the performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. But Bayesian marginal models cannot model the inter-scale statistical dependencies of different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent in nature. However, wavelet coefficients have significant inter-scale dependency. This paper exploits this inter-scale dependency property through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived by the Maximum a Posteriori (MAP) estimator. A framework is proposed based on these to denoise speech signals for automatic speaker identification problems. The robustness of the proposed framework is tested for a text-independent speaker identification application on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on a popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

1819 Evaluation of Edge Configuration in Medical Echo Images Using Genetic Algorithms

Authors: Ching-Fen Jiang

Abstract:

Edge detection is usually the first step in medical image processing. However, the difficulty increases when a conventional kernel-based edge detector is applied to ultrasonic images with a textural pattern and speckle noise. We designed an adaptive diffusion filter to remove speckle noise while preserving the initial edges detected by using a Sobel edge detector. We also propose a genetic algorithm for edge selection to form complete boundaries of the detected entities. We designed two fitness functions to evaluate whether a criterion with a complex edge configuration can render a better result than a simple criterion such as the strength of gradient. The edges obtained by using a complex fitness function are thicker and more fragmented than those obtained by using a simple fitness function, suggesting that a complex edge selecting scheme is not necessary for good edge detection in medical ultrasonic images; instead, a proper noise-smoothing filter is the key.

Keywords: Edge detection, ultrasonic images, speckle noise.

1818 On Leak Localization in the Main Branched and Simple Inclined Gas Pipelines

Authors: T. Davitashvili, G. Gubelidze

Abstract:

In this paper two mathematical models for localization of accidental gas escape in gas pipelines are suggested. The first model was created for leak localization in a horizontal branched pipeline and the second one for leak detection in an inclined section of the main gas pipeline. The algorithm for leak localization in the branched pipeline does not demand knowledge of the corresponding initial hydraulic parameters at the entrance and ending points of each section of the pipeline. For detection of the damaged section and subsequent leak localization in this section, special functions and equations have been constructed. Some results of calculations for compound pipelines having two, four and five sections are presented. Also, a method and formula for leak localization in a simple inclined section of the main gas pipeline are suggested. Some results of numerical calculations defining the localization of gas escape for the inclined pipeline are presented.

Keywords: Branched and inclined gas pipelines, leak detection, mathematical modeling.

1817 Adaptive Fourier Decomposition Based Signal Instantaneous Frequency Computation Approach

Authors: Liming Zhang

Abstract:

There have been different approaches to computing the analytic instantaneous frequency, with a variety of background reasoning and applicability in practice, as well as restrictions. This paper presents an instantaneous frequency computation approach based on the adaptive Fourier decomposition and α-counting. The adaptive Fourier decomposition is a recently proposed signal decomposition approach. The instantaneous frequency can be computed through the so-called mono-components it decomposes the signal into. Due to the fast energy convergence, the highest frequency of the signal will be discarded by the adaptive Fourier decomposition; in most situations this represents the noise of the signal. A new instantaneous frequency definition for a large class of so-called simple waves is also proposed in this paper. The class of simple waves contains a wide range of signals for which the concept of instantaneous frequency has a clear physical sense. The α-counting instantaneous frequency can be used to compute the highest frequency of a signal. Combining these two approaches, one can obtain the instantaneous frequencies of the whole signal. An experiment demonstrates the computation procedure with promising results.

Keywords: Adaptive Fourier decomposition, Fourier series, signal processing, instantaneous frequency

1816 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given another predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

1815 Predicting Bridge Pier Scour Depth with SVM

Authors: Arun Goel

Abstract:

Prediction of the maximum local scour is necessary for the safe and economical design of bridges. A number of equations have been developed over the years to predict local scour depth using laboratory data, and a few pier equations have also been proposed using field data. Most of these equations are empirical in nature, as indicated by past publications. In this paper, attempts have been made to compute the local depth of scour around bridge piers in dimensional and non-dimensional form by using linear regression, simple regression and SVM (Poly & Rbf) techniques along with a few conventional empirical equations. The outcome of this study suggests that SVM (Poly & Rbf) based modeling can be employed as an alternative to linear regression, simple regression and the conventional empirical equations in predicting the scour depth of bridge piers. The results of the present study on the basis of the non-dimensional form of bridge pier scour indicate an improvement in the performance of SVM (Poly & Rbf) in comparison to the dimensional form.
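
A minimal sketch of this kind of comparison is given below, using scikit-learn's SVR with polynomial and RBF kernels on synthetic non-dimensional variables; the feature choice and data are illustrative assumptions, not the paper's dataset.

```python
# Illustrative sketch (not the paper's data): fit SVM regressors with polynomial
# and RBF kernels to predict non-dimensional scour depth from flow/pier variables.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform([0.2, 0.5, 1.0], [1.0, 3.0, 6.0], size=(200, 3))   # e.g. b/y, Fr, U/Uc
y = 2.0 * X[:, 0] ** 0.65 * X[:, 1] ** 0.4 + rng.normal(0, 0.05, 200)  # synthetic target

for kernel in ("poly", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0))
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(kernel, scores.mean())
```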

Keywords: Modeling, pier scour, regression, prediction, SVM (Poly & Rbf kernels).

1814 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector

Authors: S. Jeyarajan

Abstract:

A concept of knowledge is discovered from a study conducted on successful competition among Sri Lankan non-state higher educational institutes. The concept is discovered from knowledge management practices collected from Emerald Insight and similarly reputed literature, and from the non-state higher education sector. A test is conducted to reveal the existence of these collected practices, and the reasons behind them, in Sri Lankan non-state higher education institutes. Further, the unavailability of such studies and uncertainty about the number of participants for data collection in the Sri Lankan context contributed to the selection of a qualitative research method, which uses attributes of the Delphi method to manage such uncertainty. Data are collected under the dramaturgical method, which contributes to efficient usage of the Delphi method. Grounded theory is selected as the data analysis technique and is conducted in intermixed discourses to manage the different perspectives of the data, which are collected systematically through perspective and modified snowball sampling techniques. Consequently, agreement between the results of the grounded theories and the findings of a foreign study is discovered in the analysis, even though the present study was conducted as qualitative research and the foreign study as quantitative research. As such, the present study widens the discovery made in the foreign study. Further, having discovered the reasons behind the existence of these practices, the present results show a concept of knowledge from the Sri Lankan non-state sector for managing higher educational institutes in a successful manner.

Keywords: Adherence of snowball sampling into perspective sampling, Delphi method in qualitative method, grounded theory development in intermix discourses of analysis, knowledge management for success of higher educational institutes.

1813 Manufacturing of Twist-Free Surfaces by Magnetism Aided Machining Technologies

Authors: Zs. Kovács, Zs. J. Viharos, J. Kodácsy

Abstract:

As a well-known conventional finishing process, grinding is commonly used to manufacture seal mating surfaces and bearing surfaces, but it also creates twisted surfaces. Surfaces machined by turning or grinding usually have a twist structure, which can convey lubricants in the manner of a conveyor screw. To avoid this phenomenon, special techniques or machines have to be used, for example start-stop turning, tangential turning, ultrasonic protection or special tool geometries. All of these solutions have high cost and difficult usability. In this paper, we describe a system and summarize the results of the experimental research carried out mainly in the field of Magnetic Abrasive Polishing (MAP) and Magnetic Roller Burnishing (MRB). These technologies are simple and also green, while able to produce twist-free surfaces. During the tests, C45 normalized steel was used as the workpiece material, which was machined by simple and Wiper geometry turning inserts in a CNC turning lathe. After the turning, the MAP and MRB technologies can be used directly to reduce the twist of the surfaces. The evaluation was completed with advanced measuring and IT equipment.

Keywords: Magnetism, finishing, polishing, roller burnishing, twist-free.

1812 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions through an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, a Greedy Selection to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a Crossover Operation. The proposed strategy shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
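
To make the ingredients named above concrete, the sketch below shows a generic cuckoo-search-style loop with a Gaussian random-walk step, greedy selection and random abandonment of the worst nests; it is a toy solver on a placeholder objective, not the authors' upgraded algorithm or their economic dispatch formulation.

```python
# Toy cuckoo-search-style loop: Gaussian random walk + greedy selection +
# abandonment of the worst nests, on a placeholder objective (minimise).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_nests, iters, pa = 6, 25, 500, 0.25
nests = rng.uniform(-10, 10, size=(n_nests, dim))
fitness = np.array([sphere(x) for x in nests])

for _ in range(iters):
    # Gaussian random-distribution walk (local search around current nests).
    cand = nests + rng.normal(0.0, 0.5, nests.shape)
    cand_fit = np.array([sphere(x) for x in cand])
    better = cand_fit < fitness                       # greedy selection
    nests[better], fitness[better] = cand[better], cand_fit[better]
    # Abandon a fraction pa of the worst nests and re-initialise them randomly.
    worst = np.argsort(fitness)[-int(pa * n_nests):]
    nests[worst] = rng.uniform(-10, 10, size=(len(worst), dim))
    fitness[worst] = np.array([sphere(x) for x in nests[worst]])

print("best objective:", fitness.min())
```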

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

1811 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli

Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha

Abstract:

Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This total database served to create the same number of ensembles as there were segments in the tested beam. Statistics of these ensembles were then assigned to the given segments of the beams, and the Latin Hypercube Sampling (LHS) method was employed to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, the detailed geometrical arrangement of individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit the full-scale measurements of the timber beams, i.e. deflections, to improve their prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model when simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin's beam theory was established. The improved posterior distributions show that the most significant change of the Young's modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. The posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
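
The LHS-driven Monte Carlo step can be sketched as below under strong simplifications: segment moduli are drawn with scipy's Latin Hypercube sampler from an assumed normal prior, and a single-modulus four-point bending formula stands in for the segmented 2D FEM model; all numerical values are illustrative.

```python
# Sketch under strong simplifications: LHS sampling of segment moduli and a
# Monte Carlo ensemble of midspan deflections (single-modulus four-point bending
# formula standing in for the segmented FEM model used in the paper).
import numpy as np
from scipy.stats import qmc, norm

n_seg, n_sim = 10, 100
sampler = qmc.LatinHypercube(d=n_seg, seed=0)
u = sampler.random(n_sim)                              # uniform LHS in [0, 1]
E = norm(loc=11.5e9, scale=1.2e9).ppf(u)               # assumed prior per segment [Pa]

L, a, F, I = 4.0, 1.3, 10e3, 8.3e-5                    # span, load arm, force, inertia (illustrative)
E_eff = E.mean(axis=1)                                 # crude homogenisation of the segments
delta = F * a * (3 * L**2 - 4 * a**2) / (24 * E_eff * I)   # four-point bending midspan deflection

print(delta.mean(), delta.std())
```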

Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus.

1810 Separation of Chlorinated Plastics and Immobilization of Heavy Metals in Hazardous Automotive Shredder Residue

Authors: Srinivasa Reddy Mallampati, Chi-Hyeon Lee, Nguyen Thi Thanh Truc, Byeong-Kyu Lee

Abstract:

In the present study, the feasibility of selective surface hydrophilization of polyvinyl chloride (PVC) by microwave treatment was evaluated to facilitate its separation from automotive shredder residue (ASR) by froth flotation. With the combination of a 60 s microwave treatment and PAC, a sharp and significant decrease of about 16.5° in the contact angle of PVC was observed in ASR plastic compared with the other plastics. The microwave treatment with the addition of PAC resulted in a synergetic effect for the froth flotation, which yielded 90% selective separation of PVC from ASR plastics, with 82% purity. Meanwhile, simple mixing with a nanometallic Ca/CaO/PO4 dispersion mixture immobilized 95-100% of the heavy metals in ASR soil/residues. The quantity of heavy metals leached from thermal residues after treatment with nanometallic Ca/CaO/PO4 was lower than the Korean standard regulatory limit for hazardous waste landfills. Microwave treatment can be a simple and effective method for PVC separation from ASR plastics.

Keywords: Automotive shredder residue, microwave treatment, chlorinated plastics, separation, heavy metals, immobilization.

1809 Minimization of Non-Productive Time during 2.5D Milling

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

In modern manufacturing systems, the use of thermal cutting techniques using oxyfuel, plasma and laser has become indispensable for the shape forming of high quality complex components; however, the conventional chip removal production techniques still have their widespread place in the manufacturing industry. Both these types of machining operations require positioning of the end effector tool at the edge where the cutting process commences. This repositioning of the cutting tool in every machining operation is repeated several times and is termed non-productive time or airtime motion. Minimization of this non-productive machining time plays an important role in mass production with high speed machining. As the tool moves from one region to the other by rapid movement and visits a particular region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, this problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm approach has been applied to solve it. For improving the efficiency of the algorithm, the GA has been hybridized with a novel special heuristic and simulated annealing (SA). In the present work a novel heuristic in combination with the GA has been developed for synchronization of toolpath movements during repositioning of the tool. A comparative analysis of the new metaheuristic techniques with the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm for minimization of the non-productive toolpath length. Also, the results obtained with the help of the hybrid simulated annealing genetic algorithm (HSAGA) are found to be better than the results using the simple genetic algorithm only.
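
As a toy illustration of the TSP formulation described above (not the authors' HSAGA), the sketch below orders a set of made-up region entry points with a small genetic algorithm using order crossover and swap mutation, so that the total rapid-traverse (airtime) length is minimised.

```python
# Toy GA-for-TSP sketch: order the tool's visit sequence over machining regions
# so the total rapid-traverse (airtime) length is minimised. Points are made up.
import random, math

random.seed(0)
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]

def tour_length(order):
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

pop = [random.sample(range(len(points)), len(points)) for _ in range(60)]
for _ in range(300):
    pop.sort(key=tour_length)
    elite = pop[:20]
    children = []
    while len(children) < 40:
        c = order_crossover(*random.sample(elite, 2))
        if random.random() < 0.3:                     # swap mutation
            i, j = random.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]
        children.append(c)
    pop = elite + children

print(tour_length(min(pop, key=tour_length)))
```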

Keywords: Non-productive time, Airtime, 2.5 D milling, Laser cutting, Metaheuristic, Genetic Algorithm, Simulated Annealing.

1808 A Monte Carlo Method to Data Stream Analysis

Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham

Abstract:

Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
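
For context, the one-pass constraint mentioned above is commonly handled with reservoir sampling; the sketch below is a generic illustration of that constraint, not the paper's EMR sampling method or its Monte Carlo estimator.

```python
# Generic one-pass reservoir sampling sketch: keeps a uniform random sample of k
# items from a stream of unknown length without revisiting any element.
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)        # each item kept with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=10))
```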

Keywords: Data Stream, Monte Carlo, Sampling, Density Estimation.
