Search results for: duplex methods
3305 Investigation about Mechanical Equipment Needed to Break the Molecular Bonds of Heavy Oil by Using Hydrodynamic Cavitation
Authors: Mahdi Asghari
Abstract:
Cavitation is the formation of micro-bubbles in a liquid and their eventual collapse, which produces localized high pressures and temperatures and causes physical and chemical changes in the fluid. The pressure and temperature are predicted to reach 2000 atmospheres and 5000 °C, respectively. Because the collapse of these small bubbles raises temperature and pressure momentarily and locally, the intensity of these conditions provides the energy needed to break the molecular bonds of heavy compounds such as fuel oil. In this paper, we review the theory of cavitation, the acoustic and hydrodynamic methods of producing it, and the mechanical equipment and reactors required for industrial application of the hydrodynamic cavitation method to break the molecular bonds of fuel oil and convert it into useful and economical products.
Keywords: Cavitation, hydrodynamic cavitation, cavitation reactor, fuel oil.
3304 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting specific relationships between characters from Korean literary fiction. Existing methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult for them to extract directed relationships from text, such as one-sided love, because they consider only the weight of a relationship and not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. We expect that the proposed method could also be applied to relationship analysis between characters in other content, such as movies or TV dramas.
Keywords: Data mining, Korean linguistic feature, literary fiction, relationship extraction.
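As an illustration of the weighted directed graph output described above, here is a minimal sketch (not the authors' code) using networkx; the character names, relation labels and weights are hypothetical.

```python
# A minimal sketch of representing directed, weighted character relationships
# as a graph. Character names, weights and relation labels are hypothetical.
import networkx as nx

G = nx.DiGraph()
# One-sided relations keep their direction; weight reflects relation strength.
G.add_edge("Chulsoo", "Younghee", weight=0.8, relation="loves")      # one-sided love
G.add_edge("Younghee", "Minsoo", weight=0.5, relation="talks_to")
G.add_edge("Minsoo", "Chulsoo", weight=0.3, relation="criticizes")

for u, v, data in G.edges(data=True):
    print(f"{u} -> {v}: {data['relation']} (weight={data['weight']})")
```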
3303 Calcium Silicate Bricks – Ultrasonic Pulse Method: Effects of Natural Frequency of Transducers on Measurement Results
Authors: Jiri Brozovsky
Abstract:
Modulus of elasticity is one of the important parameters of construction materials; it considerably influences their deformation properties and can also be determined by non-destructive test methods such as the ultrasonic pulse method. However, the results of ultrasonic pulse measurements are influenced by various factors, one of which is the natural frequency of the transducers. The paper presents findings on the influence of the natural frequency of the transducers (54, 82, and 150 kHz) on ultrasonic pulse velocity and dynamic modulus of elasticity (Young's dynamic modulus of elasticity). For test specimens with the same smallest dimension in the direction of sounding and the same density, differences were found in both ultrasonic pulse velocity and dynamic modulus of elasticity: their values decrease as the natural frequency of the transducers grows.
Keywords: Calcium silicate brick, ultrasonic pulse method, ultrasonic pulse velocity, dynamic modulus of elasticity.
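For reference, a commonly used relation between pulse velocity and the dynamic modulus of elasticity for a three-dimensional specimen is shown below; the exact formula and correction coefficients applied in the paper may differ.

```latex
% Dynamic modulus of elasticity E_d from ultrasonic pulse velocity v,
% material density rho and dynamic Poisson's ratio nu (3-D body).
% This is the standard relation; the paper's exact coefficients may differ.
E_{d} = \rho \, v^{2} \cdot \frac{(1+\nu)(1-2\nu)}{1-\nu}
```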
3302 Developing Well-Being Indicators and Measurement Methods as Illustrated by Projects Aimed at Preventing Obesity in Children
Authors: E. Grochowska-Niedworok, K. Brukało, M. Hadasik, M. Kardas
Abstract:
Consumption of vegetables by school children and adolescents is essential for their normal growth, development and health, yet only a minority of the world's population consumes the recommended amount of these products. The aim of the study was to evaluate the preferences and frequency of vegetable consumption by school children and adolescents. It has been assumed that effectively implemented nutrition education programs should increase the frequency of vegetable consumption among their recipients. The study covered 514 students, aged 9 to 22 years, of five schools in the Opole Voivodeship. The research tool was the authors' own questionnaire, which consisted of closed questions on the frequency of vegetable consumption and the use of 10 methods of preparing them. Preferences and frequencies are shown in percentages, while correlations were estimated on the basis of Cramér's V and gamma coefficients. In each of the examined age groups, the relationship between sex and vegetable consumption (Cramér's V values were 0.06 to 0.38) and between sex and the culinary processing methods used (Cramér's V was 0.08 to 0.34) was determined. For both sexes, a relationship was shown between age and frequency of vegetable consumption (gamma values ranged from ~0.00 to 0.39) and between age and the cooking methods used (gamma values were 0.01 to 0.22). The most important determinants of nutritional choices are the taste and availability of products; the fact that they have a positive effect on health ranks only third. As has been shown, obesity prevention programs can address not only nutrition education but also teach about new flavors and increase the availability of healthy foods. In addition, the frequency of vegetable consumption can be a good indicator of the healthy behaviors of children and adolescents.
Keywords: Children and adolescents, frequency, welfare rate, vegetables.
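A minimal sketch (not the authors' analysis) of estimating Cramér's V, the association measure cited above, from a sex-by-consumption-frequency contingency table; the counts used here are hypothetical.

```python
# Cramér's V from a contingency table via the chi-square statistic.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: sex (girls, boys); columns: vegetable consumption frequency categories.
table = np.array([[35, 60, 45],
                  [25, 70, 50]])      # hypothetical counts

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))
print(f"Cramér's V = {cramers_v:.2f} (p = {p:.3f})")
```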
3301 Enhanced Approaches to Rectify the Noise, Illumination and Shadow Artifacts
Authors: M. Sankari, C. Meena
Abstract:
Enhancing the quality of two-dimensional signals is one of the most important tasks in video surveillance and computer vision. In real-life video surveillance, false detections often occur due to random noise, illumination variation and shadow artifacts, and detection methods based on background subtraction face several problems in accurately detecting objects in realistic environments. In this paper, we propose a noise removal algorithm using a neighborhood comparison method with thresholding. Illumination variations in the detected foreground objects are corrected using a combination of techniques: homomorphic decomposition, curvelet transformation and a gamma adjustment operator. Shadows are removed using a chromaticity estimator with a local relation estimator. Results are compared with existing methods and demonstrate high robustness in video surveillance.
Keywords: Chromaticity Estimator, Curvelet Transformation, Denoising, Gamma correction, Homomorphic, Neighborhood Assessment.
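A minimal sketch (not the authors' algorithm) of the gamma adjustment operator mentioned in the abstract, applied to an 8-bit grayscale frame; the frame and gamma value are illustrative only.

```python
# Gamma adjustment: out = 255 * (in / 255) ** gamma, applied per pixel.
import numpy as np

def gamma_adjust(frame: np.ndarray, gamma: float) -> np.ndarray:
    """gamma < 1 brightens under-illuminated regions, gamma > 1 darkens them."""
    normalized = frame.astype(np.float64) / 255.0
    return np.clip(255.0 * normalized ** gamma, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (240, 352), dtype=np.uint8)  # synthetic frame
corrected = gamma_adjust(frame, gamma=0.6)
print(frame.mean(), corrected.mean())
```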
3300 DIFFER: A Propositionalization approach for Learning from Structured Data
Authors: Thashmee Karunaratne, Henrik Böstrom
Abstract:
Logic-based methods for learning from structured data are limited with respect to handling large search spaces, preventing large substructures from being considered by the resulting classifiers. A novel approach to learning from structured data is introduced that employs a structure transformation method, called finger printing, to address these limitations. The method, which generates features corresponding to arbitrarily complex substructures, is implemented in a system called DIFFER. The method is demonstrated to perform comparably to an existing state-of-the-art method on several benchmark data sets without requiring restrictions on the search space. Furthermore, learning from the union of the features generated by finger printing and by the previous method outperforms learning from each individual set of features on all benchmark data sets, demonstrating the benefit of developing complementary, rather than competing, methods for structure classification.
Keywords: Machine learning, Structure classification, Propositionalization.
3299 Improving Injection Moulding Processes Using Experimental Design
Authors: Yousef Amer, Mehdi Moayyedian, Zeinab Hajiabolhasani, Lida Moayyedian
Abstract:
Moulded parts account for more than 70% of the components in products. However, common defects exist, particularly in plastic injection moulding, such as warpage, shrinkage, sink marks, and weld lines. In this paper, Taguchi experimental design methods are applied to reduce the warpage defect of a thin Acrylonitrile Butadiene Styrene (ABS) plate, using two tools: Taguchi orthogonal arrays and the Analysis of Variance (ANOVA). Eight trials were run, from which the optimal parameters that minimize the warpage defect in the factorial experiment were obtained. The results obtained from the ANOVA analysis, compared with those derived from MINITAB, identify the most significant factors that may cause warpage in the injection moulding process. Moreover, the ANOVA approach is more accurate than approaches such as the S/N ratio, and by accounting for the interaction of factors it is possible to achieve better outcomes.
Keywords: Analysis of variance, ANOVA, plastic injection mould, Taguchi methods, Warpage.
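A minimal sketch (not the authors' analysis) of the Taguchi smaller-the-better S/N ratio that the abstract compares ANOVA against, computed over repeated warpage measurements per trial; the warpage values are hypothetical.

```python
# Taguchi smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)).
# A larger S/N ratio corresponds to less warpage.
import numpy as np

def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

trial_warpage_mm = {
    "trial_1": [0.42, 0.45, 0.40],   # hypothetical repeated measurements
    "trial_2": [0.31, 0.29, 0.33],
}
for trial, values in trial_warpage_mm.items():
    print(trial, round(sn_smaller_is_better(values), 2), "dB")
```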
3298 Application of Transportation Linear Programming Algorithms to Cost Reduction in Nigeria Soft Drinks Industry
Authors: A. O. Salami
Abstract:
Transportation problems are primarily concerned with the optimal way in which products produced at different plants (supply origins) are transported to a number of warehouses or customers (demand destinations). The objective in a transportation problem is to fully satisfy the destination requirements, within the operating production capacity constraints, at the minimum possible cost. The objective of this study is to determine ways of minimizing transportation cost in order to maximize profit. Data were sourced from the records of the Distribution Department of 7-Up Bottling Company Plc., Ilorin, Kwara State, Nigeria. The data were computed and analyzed using the three methods of solving the transportation problem. The result shows that the three methods produced the same total transportation cost of N1,358,019, implying that any of the methods can be adopted by the company in transporting its final products to the wholesale dealers in order to minimize total transportation cost.
Keywords: Allocation problem, Cost Minimization, Distribution system, Resources utilization.
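A minimal sketch (not the company's data) of solving a balanced transportation problem by linear programming; the costs, supplies and demands below are hypothetical.

```python
# Balanced transportation problem: minimize sum(c_ij * x_ij) subject to
# supply and demand equality constraints.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],    # plant 1 -> depots 1..3
                 [5.0, 3.0, 7.0]])   # plant 2 -> depots 1..3
supply = np.array([120.0, 180.0])
demand = np.array([100.0, 110.0, 90.0])   # balanced: sum(supply) == sum(demand)

m, n = cost.shape
c = cost.ravel()                          # decision variables x_ij, row-major

A_eq, b_eq = [], []
for i in range(m):                        # each plant ships exactly its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                        # each depot receives exactly its demand
    row = np.zeros(m * n); row[j::n] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n), method="highs")
print("minimum transportation cost:", res.fun)
print("shipment plan:\n", res.x.reshape(m, n))
```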
3297 Kernel Matching versus Inverse Probability Weighting: A Comparative Study
Authors: Andy Handouyahia, Tony Haddad, Frank Eaton
Abstract:
A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods for estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare pairs of 1,080 estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
Keywords: Treatment effect, causal inference, observational studies, Propensity score based matching, Kernel Matching, Inverse Probability Weighting, Estimation methods for incremental effect.
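A minimal sketch (not HRSDC's implementation) of an inverse probability weighting estimator of the average treatment effect on the treated (ATT), with propensity scores from a logistic regression; the data are simulated.

```python
# IPW-ATT: treated units get weight 1, comparison units get ps / (1 - ps).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                       # observed covariates
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
d = rng.binomial(1, p_treat)                      # program participation
y = 2.0 * d + x @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)

ps = LogisticRegression(max_iter=1000).fit(x, d).predict_proba(x)[:, 1]

w = np.where(d == 1, 1.0, ps / (1 - ps))
att = y[d == 1].mean() - np.average(y[d == 0], weights=w[d == 0])
print(f"IPW estimate of the ATT: {att:.3f} (true effect is 2.0)")
```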
3296 Forecasting Stock Indexes Using Bayesian Additive Regression Tree
Authors: Darren Zou
Abstract:
Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, the Bayesian Additive Regression Tree (BART), is used to predict stock market indexes based on multiple economic indicators. BART can be used to model heterogeneous treatment effects and therefore works well when models are misspecified. It can also handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide a reliable prediction of day-to-day stock market activity. A comparison of the results from BART with those from the time series method shows that BART performs well and has better predictive capability than the traditional methods.
Keywords: Bayesian, Forecast, Stock, BART.
3295 A Sequential Pattern Mining Method Based On Sequential Interestingness
Authors: Shigeaki Sakurai, Youichi Kitahara, Ryohei Orihara
Abstract:
Sequential mining methods efficiently discover all frequent sequential patterns included in sequential data. These methods use the support, the conventional criterion satisfying the Apriori property, to evaluate frequency. However, the discovered patterns do not always correspond to the interests of analysts, because the patterns are commonplace and analysts cannot gain new knowledge from them. This paper proposes a new criterion, namely the sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how it is related to the support. Also, the paper proposes an efficient sequential mining method based on the proposed criterion. Lastly, the paper shows the effectiveness of the proposed method by applying it to two kinds of sequential data.
Keywords: Sequential mining, Support, Confidence, Apriori property
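A minimal sketch (not the authors' algorithm) of the support criterion discussed above: the fraction of data sequences that contain a candidate pattern as a (not necessarily contiguous) subsequence; the sequence database is hypothetical.

```python
# Support of a candidate sequential pattern over a sequence database.
def is_subsequence(pattern, sequence):
    it = iter(sequence)
    return all(item in it for item in pattern)   # consumes the iterator in order

def support(pattern, database):
    return sum(is_subsequence(pattern, seq) for seq in database) / len(database)

database = [list("abcde"), list("acde"), list("abde"), list("bcd")]
print(support(list("ad"), database))   # 0.75: "a" followed by "d" occurs in 3 of 4 sequences
```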
3294 Abrupt Scene Change Detection
Authors: Priyadarshinee Adhikari, Neeta Gargote, Jyothi Digge, B.G. Hogade
Abstract:
A number of automated shot-change detection methods for indexing a video sequence to facilitate browsing and retrieval have been proposed in recent years. This paper focuses on the simulation of video shot boundary detection using a color histogram method in which scaling of the histogram metric is an added feature. The difference between the histograms of two consecutive frames is evaluated, yielding the metric. Further scaling of the metric is performed to avoid ambiguity and to enable the choice of an apt threshold for any type of video, which involves only minor errors due to flashlights, camera motion, etc. Two sample videos with a resolution of 352 × 240 pixels are used here with the color histogram approach on uncompressed media. An attempt is made at the retrieval of color video. The simulation is performed for abrupt changes in video and yields 90% recall and precision values.
Keywords: Abrupt change, color histogram, ground-truthing, precision, recall, scaling, threshold.
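A minimal sketch (not the authors' code) of abrupt shot-change detection from histogram differences between consecutive frames; the frames are synthetic and the threshold value is illustrative only.

```python
# Declare a cut wherever the normalized histogram difference between
# consecutive frames exceeds a threshold.
import numpy as np

def histogram(frame, bins=16):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()                        # normalize to frame size

def shot_boundaries(frames, threshold=0.4):
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(histogram(frames[i]) - histogram(frames[i - 1])).sum()
        if diff > threshold:                  # large jump => abrupt change
            cuts.append(i)
    return cuts

rng = np.random.default_rng(1)
scene_a = [rng.integers(0, 100, (240, 352), dtype=np.uint8) for _ in range(5)]
scene_b = [rng.integers(150, 256, (240, 352), dtype=np.uint8) for _ in range(5)]
print(shot_boundaries(scene_a + scene_b))     # expected boundary at frame 5
```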
3293 Comparison of Neural Network and Logistic Regression Methods to Predict Xerostomia after Radiotherapy
Authors: Hui-Min Ting, Tsair-Fwu Lee, Ming-Yuan Cho, Pei-Ju Chao, Chun-Ming Chang, Long-Chang Chen, Fu-Min Fang
Abstract:
To evaluate the ability to predict xerostomia after radiotherapy, we constructed and compared neural network and logistic regression models. In this study, 61 patients who completed a questionnaire about their quality of life (QoL) before and after a full course of radiation therapy were included. Based on this questionnaire, statistical data about the condition of the patients’ salivary glands were obtained and used as the inputs of the neural network and logistic regression models in order to predict the probability of xerostomia. Seven variables were then selected from the statistical data according to Cramér’s V and point-biserial correlation values and were used to train each model; the resulting outputs were 0.88 and 0.89 for AUC, 9.20 and 7.65 for SSE, and 13.7% and 19.0% for MAPE, respectively. These results demonstrate that both neural network and logistic regression methods are effective for predicting the condition of the parotid glands.
Keywords: NPC, ANN, logistic regression, xerostomia.
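A minimal sketch (not the authors' models) of fitting a logistic regression on seven pre-selected predictors and reporting AUC, SSE and an illustrative MAPE as in the abstract; the data here are simulated, not the 61-patient QoL data.

```python
# Logistic regression baseline with the evaluation metrics named above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(61, 7))                      # seven selected variables (simulated)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=61) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]                  # predicted xerostomia probability

auc = roc_auc_score(y, p)
sse = np.sum((y - p) ** 2)
mape = np.mean(np.abs(y - p)[y == 1])             # illustrative MAPE on positive cases
print(f"AUC={auc:.2f}  SSE={sse:.2f}  MAPE={mape:.1%}")
```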
3292 2-Block 3-Point Modified Numerov Block Methods for Solving Ordinary Differential Equations
Authors: Abdu Masanawa Sagir
Abstract:
In this paper, a linear multistep technique using power series as the basis function is used to develop block methods suitable for generating direct solutions of special second order ordinary differential equations of the form y'' = f(x, y), a ≤ x ≤ b, with associated initial or boundary conditions. The continuous hybrid formulations enable us to differentiate and evaluate at some grid and off-grid points to obtain two different three-point discrete schemes, each of order (4,4,4)^T, which were used in block form for parallel or sequential solutions of the problems. The computational burden and computer time wastage involved in the usual reduction of a second order problem into a system of first order equations are avoided by this approach. Furthermore, a stability analysis and the efficiency of the block method are tested on linear and non-linear ordinary differential equations whose solutions are oscillatory or nearly periodic in nature, and the results obtained compare favourably with the exact solutions.
Keywords: Block Method, Hybrid, Linear Multistep Method, Self-starting, Special Second Order.
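For orientation, the classical (non-block) Numerov three-point relation on which Numerov-type block methods build is shown below; the modified block schemes developed in the paper are of higher complexity.

```latex
% Classical Numerov relation for y'' = f(x, y) on a uniform grid with step h.
y_{n+1} - 2y_{n} + y_{n-1} = \frac{h^{2}}{12}\left( f_{n+1} + 10 f_{n} + f_{n-1} \right),
\qquad f_{k} = f(x_{k}, y_{k}).
```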
3291 Evaluation of Physicochemical Pretreatment Methods on COD and Ammonia Removal from Landfill Leachate
Authors: M. Poveda, S. Lozecznik, J. Oleszkiewicz, Q. Yuan
Abstract:
The goal of this experiment is to evaluate the effectiveness of different leachate pre-treatment options in terms of COD and ammonia removal. This research focused on the evaluation of physical-chemical methods for pre-treatment of leachate that would be effective and rapid enough to satisfy the requirements of the sewer discharge by-laws. The four pre-treatment options evaluated were: air stripping, chemical coagulation, electrocoagulation and advanced oxidation with sodium ferrate. Chemical coagulation showed the best COD removal rate at 43%, compared to 18% for both air stripping and electrocoagulation, and 20% for oxidation with sodium ferrate. On the other hand, air stripping was far superior to the other treatment options in terms of ammonia removal, at 86%. Oxidation with sodium ferrate reached only 16%, while chemical coagulation and electrocoagulation removed less than 10%. When combined, air stripping and chemical coagulation removed up to 50% of COD and 85% of ammonia.
Keywords: Leachate pretreatment, air stripping, chemical coagulation, electrocoagulation, oxidation.
3290 Studying the Intercalation of Low Density Polyethylene/Clay Nanocomposites after Different UV Exposures
Authors: Samir Al-Zobaidi
Abstract:
This study attempts to understand the effect of different UV irradiation methods on the intercalation of LDPE/MMT nanocomposites and on their molecular behavior at a given isothermal crystallization temperature. Three different methods of UV exposure were employed using a single composition of LDPE/MMT nanocomposites. All samples were annealed for 5 hours at a crystallization temperature of 100 °C. The crystallization temperature was chosen at large supercooling to ensure quick and complete crystallization. The LDPE raw material consisted of two stable phases, monoclinic and orthorhombic, according to XRD results. The two phases behaved differently when the UV exposure method was changed: the monoclinic phase was more dependent on the method used than the orthorhombic phase. The intercalation of clay, as well as the non-isothermal crystallization temperature, also showed a clear dependency on the type of UV exposure. A third, thermally less stable phase was observed as well. Its response to UV irradiation was greater since it contains low molecular weight entities, which make it more vulnerable to any UV exposure.
Keywords: LDPE/MMT nanocomposites, crystallization, UV irradiation, intercalation.
3289 Comparison of Parameterization Methods in Recognizing Spoken Arabic Digits
Authors: Ali Ganoun
Abstract:
This paper evaluates sound parameterization methods for recognizing spoken Arabic words, namely the digits from zero to nine. Each isolated spoken word is represented by a single template based on a specific recognition feature, and recognition is based on the Euclidean distance from those templates. The performance analysis of recognition is based on four parameterization features: the Burg spectrum analysis, the Walsh spectrum analysis, the Thomson multitaper spectrum analysis and the Mel Frequency Cepstral Coefficients (MFCC) features. The main aim of this paper was to compare, analyze, and discuss the outcomes of spoken Arabic digit recognition systems based on the selected recognition features. The results acquired confirm that the use of MFCC features is a very promising method for recognizing spoken Arabic digits.
Keywords: Speech Recognition, Spectrum Analysis, Burg Spectrum, Walsh Spectrum Analysis, Thomson Multitaper Spectrum, MFCC.
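A minimal sketch (not the authors' system) of the template-matching idea: each digit is represented by a single MFCC template and a test utterance is assigned to the nearest template by Euclidean distance; the file paths are hypothetical placeholders.

```python
# Nearest-template classification of spoken digits from mean MFCC vectors.
import numpy as np
import librosa

def mfcc_template(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)              # one fixed-length vector per utterance

# Hypothetical training files, one per digit.
templates = {d: mfcc_template(f"train/digit_{d}.wav") for d in range(10)}

def recognize(path):
    feat = mfcc_template(path)
    return min(templates, key=lambda d: np.linalg.norm(feat - templates[d]))

print(recognize("test/unknown.wav"))      # hypothetical test utterance
```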
3288 Learning Object Interface Adapted to the Learner's Learning Style
Authors: Zenaide Carvalho da Silva, Leandro Rodrigues Ferreira, Andrey Ricardo Pimentel
Abstract:
Learning styles (LS) refer to the ways and forms in which a student prefers to learn during the teaching and learning process. Each student has their own way of receiving and processing information throughout the learning process. Therefore, knowing their LS is important to better understand their individual learning preferences, and also to understand why the use of some teaching methods and techniques gives better results with some students but not with others. We believe that knowledge of these styles makes it possible to adapt teaching, reorganizing teaching methods and techniques to allow learning suited to the individual needs of the student. Adapting learning would be possible through the creation of online educational resources adapted to the style of the student. In this context, this article presents the structure of a learning object interface adaptation based on the LS. The structure created should enable the creation of learning objects adapted to the student's LS and contributes to increasing the student's motivation in using a learning object as an educational resource.
Keywords: Adaptation, interface, learning object, learning style.
3287 Simulation of Complex-Shaped Particle Breakage Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure and often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason for this is that in said cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi Fracture, replace the initial particle (that is intended to break) into several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. That being the case, when simulating local failure, it is advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads – due to the failure of the bonds in those areas – with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to enable the ability to depict more realistic fracture behavior were evaluated based on the example of filter cake. The method that proved suitable for this purpose and which furthermore allows efficient and realistic simulation of breakage behavior of complex-shaped particles applicable to industrial-sized simulations is presented in this paper.
Keywords: Bonded particle model (BPM), DEM, filter cake, particle breakage, particle fracture.
3286 Application of the Central-Difference with Half-Sweep Gauss-Seidel Method for Solving First Order Linear Fredholm Integro-Differential Equations
Authors: E. Aruchunan, J. Sulaiman
Abstract:
The objective of this paper is to analyse the application of the Half-Sweep Gauss-Seidel (HSGS) method, using the half-sweep approximation equation based on central difference (CD) and repeated trapezoidal (RT) formulas, to solve first order linear Fredholm integro-differential equations. The formulation and implementation of the Full-Sweep Gauss-Seidel (FSGS) and Half-Sweep Gauss-Seidel (HSGS) methods are also presented. The HSGS method is shown to be faster than the FSGS method. Some numerical tests are presented to show that the HSGS method is superior to the FSGS method.
Keywords: Integro-differential equations, Linear Fredholm equations, Finite difference, Quadrature formulas, Half-sweep iteration.
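A minimal sketch of a full-sweep Gauss-Seidel iteration for the linear system A u = b that arises after discretization; the half-sweep variant applies the same update on alternate grid points only, roughly halving the work per iteration. This is not the authors' code, and the small system below is illustrative.

```python
# Full-sweep Gauss-Seidel iteration for A u = b.
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    u = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(len(b)):
            s = A[i, :i] @ u[:i] + A[i, i + 1:] @ u_old[i + 1:]
            u[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))                 # agrees with np.linalg.solve(A, b)
```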
3285 Graph-Based Text Similarity Measurement by Exploiting Wikipedia as Background Knowledge
Authors: Lu Zhang, Chunping Li, Jun Liu, Hui Wang
Abstract:
Text similarity measurement is a fundamental issue in many textual applications such as document clustering, classification, summarization and question answering. However, prevailing approaches based on the Vector Space Model (VSM) more or less suffer from the limitation of Bag of Words (BOW), which ignores the semantic relationships among words. Enriching document representation with background knowledge from Wikipedia has proven to be an effective way to address this problem, but most existing methods still cannot avoid similar flaws of BOW in the new vector space. In this paper, we propose a novel text similarity measurement which goes beyond VSM and can find semantic affinity between documents. Specifically, it is a unified graph model that exploits Wikipedia as background knowledge and synthesizes both document representation and similarity computation. The experimental results on two different datasets show that our approach significantly improves VSM-based methods in both text clustering and classification.
Keywords: Text classification, Text clustering, Text similarity, Wikipedia.
3284 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: Non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.
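A minimal sketch (not the authors' method) of applying NMF to a matrix of measurement power spectra and of the Hoyer sparseness index often used as a sparseness measure for pre-screening noisy, non-oscillatory measurements; the spectra here are synthetic and the paper's exact index may be defined differently.

```python
# NMF source separation on synthetic power spectra plus a Hoyer sparseness index.
import numpy as np
from sklearn.decomposition import NMF

def sparseness(x):
    """Hoyer sparseness: 1 for a single spike, 0 for a flat vector."""
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    return (np.sqrt(n) - x.sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

rng = np.random.default_rng(0)
freqs = np.linspace(0, 0.5, 200)
source1 = np.exp(-0.5 * ((freqs - 0.10) / 0.005) ** 2)     # oscillation at 0.10
source2 = np.exp(-0.5 * ((freqs - 0.23) / 0.005) ** 2)     # oscillation at 0.23
mixing = rng.uniform(0, 1, size=(8, 2))                    # 8 measurements
spectra = mixing @ np.vstack([source1, source2]) + 0.01 * rng.random((8, 200))

print("sparseness per measurement:", np.round([sparseness(s) for s in spectra], 2))

model = NMF(n_components=2, init="nndsvd", max_iter=1000, random_state=0)
W = model.fit_transform(spectra)       # contribution of each source to each measurement
H = model.components_                  # estimated spectral source signatures
print("estimated peak locations:", freqs[H.argmax(axis=1)])
```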
3283 Face Recognition with PCA and KPCA using Elman Neural Network and SVM
Authors: Hossein Esbati, Jalil Shirazi
Abstract:
In this paper, in order to classify ORL database face images, Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) feature extraction methods are used together with Elman neural network and Support Vector Machine (SVM) classification methods. The Elman network, a recurrent neural network, is proposed for modeling storage systems and is also used to examine the effect of the number of principal components on the classification accuracy and on the classification time for the database images. Classification was carried out with various numbers of components, and the results obtained from the Elman neural network classification and from the support vector machine are compared. At best, 97.41% recognition accuracy is obtained.
Keywords: Face recognition, Principal Component Analysis, Kernel Principal Component Analysis, Neural network, Support Vector Machine.
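A minimal sketch (not the authors' pipeline, and using a plain SVM stage rather than an Elman recurrent network) of PCA feature extraction followed by SVM classification on the ORL/Olivetti face images; the number of components and SVM hyperparameters are illustrative.

```python
# PCA + SVM face recognition baseline on the ORL (Olivetti) faces.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces(shuffle=True, random_state=0)   # 400 images, 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# Vary n_components to study its effect on accuracy, as in the abstract.
model = make_pipeline(PCA(n_components=60, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
```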
3282 Mechanical Properties of Pea Pods (Pisium sativum Var. Shamshiri)
Authors: M. Azadbakht, N. Tajari, R. Alimoradzade
Abstract:
Knowing the mechanical resistance of pea pods against dynamic forces is important for the design of combine harvesters. In pea combine harvesters, threshing is accomplished by the two mechanical actions of impact and friction. In this research, the effects of initial moisture content and of the impact and friction energy required for threshing of pea pods were studied. An impact device was built based on a pendulum mechanism. The experiments were done at three initial moisture content levels of 12.1, 23.5 and 39.5% (w.b.) for both the impact and friction methods. Three energy levels of 0.088, 0.126 and 0.202 J were used for the impact method, and three energy levels of 0.784, 0.930 and 1.351 J for the friction method. The threshing percentage was measured for each method. Using a friction device, the kinetic friction coefficients at the above moisture contents were measured as 0.257, 0.303 and 0.336, respectively. The analysis of variance for the two methods showed that moisture content and energy have significant effects on the threshing percentage.
Keywords: Pea pod, Energy, Friction, Impact, Initial moisture content, Threshing.
3281 Shot Boundary Detection Using Octagon Square Search Pattern
Authors: J. Kavitha, S. Sowmyayani, P. Arockia Jansi Rani
Abstract:
In this paper, a shot boundary detection method based on an octagon square search pattern is presented. The color, edge, motion and texture features of each frame are extracted and used in shot boundary detection. The motion feature is extracted using the octagon square search pattern. The transition detection method is then capable of detecting shot and non-shot boundaries in the video using the feature weight values. Experimental results are evaluated on the TRECVID video test set, which contains various types of shot transitions with lighting effects, object motion and camera movement within the shots. Further, this paper compares the experimental results of the proposed method with existing methods and shows that the proposed method outperforms state-of-the-art methods for shot boundary detection.
Keywords: Content-based indexing and retrieval, cut transition detection, discrete wavelet transform, shot boundary detection, video source.
3280 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems
Authors: Belkacem Laimouche
Abstract:
With the field of Artificial Intelligence (AI) experiencing exponential growth, fueled by technological advancements that pave the way for increasingly innovative and promising applications, there is an escalating need to develop rigorous methods for assessing their performance in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing from the principles of metrology, we propose a pioneering approach, using a concrete example, to evaluate the accuracy and precision of AI models, as well as to quantify the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.
Keywords: Artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, inter-laboratory comparison, data analysis, data reliability, bias impact assessment, bias measurement.
3279 ANP-based Intra and Inter-industry Analysis for Measuring Spillover Effect of ICT Industries
Authors: Yongyoon Suh, Yongtae Park
Abstract:
The interaction among information and communication technology (ICT) industries has recently become ubiquitous through fixed-mobile integration. To monitor the impact of this interaction, previous research has mainly focused on measuring spillover effects among ICT industries using various methods. Among others, inter-industry analysis is a useful method for examining spillover effects between industries. However, the more complex ICT industries become, the more important the impact within an industry is, and inter-industry analysis is limited in reflecting intra-relationships within an industry. Thus, this study applies the analytic network process (ANP) to measure the spillover effect, capturing all intra- and inter-relationships. Using ANP-based intra- and inter-industry analysis, the spillover effect is effectively measured, reflecting the complex structure of ICT industries. A main ICT industry and its linkages are also explored to show the current structure of ICT industries. The proposed approach is expected to allow policy makers to understand the interactions of ICT industries and their impact.
Keywords: ANP, intra and inter-industry analysis, spillover effect
3278 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating multiple signal sources. In this scenario, the antenna array output modeling involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal to noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic for establishing a large number of training data sets. Hence, we have compared two different optimization models of DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effect, and background radiation, and thereby the method may fail to achieve high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of the non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we have comparatively evaluated the performance of the two aforementioned optimization methods for the DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.
3277 The Effect of Different Pre-Treatment Methods on the Shear Bond Strength of Orthodontic Tubes: An in vitro Study
Authors: A. C. B. C. J. Fernandes, V. C. de Jesus, S. Noruziaan, O. F. G. G. Vilela, K. K. Somarin, R. França, F. H. S. L. Pinheiro
Abstract:
Objective: This in vitro study aimed to evaluate the shear bond strength (SBS) of orthodontic tubes after different enamel pre-treatments. Materials and Methods: A total of 39 crown halves were randomly divided into 3 groups (n = 13). Group I (control group) was exposed to prophy paste (PP), 37% phosphoric acid (PA), and a self-etching primer (SEP). Group II received no prophylaxis, but only PA and SEP. Group III was exposed to PP and SEP. The SBS was used to evaluate the bond strength of the orthodontic tubes one year after bonding. One-way ANOVA and Tukey’s post-hoc test were used to compare SBS values between the three groups. The statistical significance level was set at 5%. Results: The differences in the SBS values of groups I (36.672 ± 9.315 MPa), II (34.242 ± 9.986 MPa), and III (39.055 ± 5.565 MPa) were not statistically significant (P > 0.05). Conclusion: This study suggests that chairside time can be significantly reduced with the use of PP and a SEP without compromising adhesion. Further evidence is needed by means of a split-mouth design trial.
Keywords: Shear bond strength, orthodontic tubes, self-etching primer, pumice, prophy.
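A minimal sketch (not the study's data) of the one-way ANOVA and Tukey post-hoc comparison described in the abstract; the SBS values below are simulated around the reported group means and standard deviations.

```python
# One-way ANOVA followed by Tukey's HSD post-hoc test on simulated SBS data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
group_I = rng.normal(36.672, 9.315, 13)     # PP + PA + SEP
group_II = rng.normal(34.242, 9.986, 13)    # PA + SEP
group_III = rng.normal(39.055, 5.565, 13)   # PP + SEP

f_stat, p_value = f_oneway(group_I, group_II, group_III)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

sbs = np.concatenate([group_I, group_II, group_III])
labels = ["I"] * 13 + ["II"] * 13 + ["III"] * 13
print(pairwise_tukeyhsd(endog=sbs, groups=labels, alpha=0.05))
```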
3276 Techno-Economics Study to Select Optimum Desalination Plant for Asalouyeh Combined Cycle Power Plant in Iran
Authors: Z. Gomar, H. Heidary, M. Davoudi
Abstract:
This research presents a techno-economic analysis to select the most economic desalination method for the Asalouyeh combined cycle power plant. Due to the lack of fresh water, desalination of sea water is necessary to provide the demineralized (DM) water required by the power plant. The most common desalination methods are RO, MSF, MED, and MED-TVC. In this research, the RO, MED, and MED-TVC methods have been compared. Simulation results show that recovering heat from the exhaust gas of the main stack is the optimum option for providing the DM water required as injected steam for MED desalination. This is important because it improves the thermal efficiency of the power plant through extra heat recovery. It has also been shown that by adding 3 rows of finned tubes to the de-aerator evaporator, which is simple and low cost, the steam required for generating 5200 m3/day of desalinated water is obtainable.
Keywords: Desalination, MED, thermodynamic simulation, combined cycle power plant.