Search results for: neutral sets
213 Ensembling Adaptively Constructed Polynomial Regression Models
Authors: Gints Jekabsons
Abstract:
The approach of subset selection in polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed, a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach, Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague the methods of both subset selection and the ABFC, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and by model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used method of subset selection, and to other well-known regression modeling methods, using publicly available data sets.
Keywords: Basis function construction, heuristic search, model ensembles, polynomial regression.
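As a rough illustration of the post-evaluation and ensembling idea (not the authors' ABFC implementation), the following Python sketch weights polynomial models of several degrees by their cross-validated error; scikit-learn and the toy data are assumptions.

```python
# A minimal sketch: CV-weighted ensemble of polynomial regression models.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = X[:, 0] ** 3 - 2 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 100)

models, scores = [], []
for degree in (1, 2, 3, 4):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    # CV-based post-evaluation guards against the selection bias of
    # committing to a single "winning" basis-function set.
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    models.append(model.fit(X, y))
    scores.append(mse)

# Ensemble prediction: weight each candidate by its inverse CV error.
w = 1.0 / np.asarray(scores)
w /= w.sum()
y_hat = sum(wi * m.predict(X) for wi, m in zip(w, models))
```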
212 Extended Well-Founded Semantics in Bilattices
Authors: Daniel Stamate
Abstract:
One of the most widely used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from a program are considered false (a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances in which information is handled; conventionally defined, the well-founded semantics behaves inadequately in such cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information from a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information, considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of computing our semantics is polynomial time.
Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.
211 LAYMOD: A Layered and Modular Platform for CAx Collaboration Management and Supporting Product Data Integration Based on the STEP Standard
Authors: Omid F. Valilai, Mahmoud Houshmand
Abstract:
Nowadays companies strive to survive in a competitive global environment. To speed up product development and modifications, a collaborative product development approach is suggested. However, despite the advantages of new IT improvements, many CAx systems still work separately and locally. Collaborative design and manufacture require a product information model that supports the related CAx product data models. Many solutions have been proposed to this problem, of which the most successful is to adopt the STEP standard as the product data model for developing a collaborative CAx platform. However, several obstacles usually slow down the implementation of the STEP standard in collaborative data exchange, management and integration: the evolution of the STEP Application Protocols (APs) over time, the huge number of STEP APs and CCs, the high costs of implementation, the costly process of converting older CAx software files to the STEP neutral file format, and a lack of STEP knowledge. In this paper the requirements for a successful collaborative CAx system are discussed. The capability of the STEP standard for product data integration and its shortcomings, as well as the dominant platforms for supporting CAx collaboration management and product data integration, are reviewed. Finally, a platform named LAYMOD is proposed to fulfil the requirements of a collaborative CAx environment and to integrate the product data. It is a layered platform that enables global collaboration among different CAx software packages and developers. It also adopts the STEP modular architecture and XML data structures to enable collaboration between CAx software packages and to overcome the limitations of the STEP standard. The architecture and procedures of the LAYMOD platform for managing collaboration and avoiding conflicts in product data integration are introduced.
Keywords: CAx, collaboration management, STEP application modules, STEP standard, XML data structures.
210 Modeling and Simulation of an Acoustic Link Using the Mackenzie Propagation Speed Equation
Authors: Christhu Raj M. R., Rajeev Sukumaran
Abstract:
Underwater acoustic networks have attracted great attention in recent years because of their numerous applications. A high data rate can be achieved by efficiently modeling the physical layer in the network protocol stack. In the acoustic medium, the propagation speed of acoustic waves depends on many parameters, such as temperature, salinity, density, and depth. Acoustic propagation speed cannot be adequately modeled using standard empirical formulas such as the Urick and Thorp descriptions. In this paper, we model the acoustic channel using the Mackenzie speed equation together with real-time temperature, salinity, and sound-speed data for the Bay of Bengal (Indian coastal region) obtained from the National Institute of Oceanography and Technology. The acoustic propagation speed is found to vary between 1503 m/s and 1544 m/s as temperature and depth change. The simulation results show that temperature, salinity, and depth play a major role in acoustic propagation, and that the data rate increases when appropriate data sets are substituted into the simulated model.
Keywords: Underwater acoustics, Mackenzie speed equation, temperature, salinity.
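For reference, the Mackenzie (1981) nine-term sound-speed equation named in the title can be coded directly; the sketch below assumes temperature in °C, salinity in PSU and depth in metres, and the example inputs are illustrative, not the institute's data.

```python
# A minimal sketch of the nine-term Mackenzie (1981) sound-speed equation,
# valid roughly for 2-30 degC, 25-40 PSU and depths to 8000 m.
def mackenzie_speed(T, S, D):
    """Sound speed in seawater (m/s): T in degC, S in PSU, D depth in m."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35) - 7.139e-13 * T * D**3)

# Warm shallow vs. cooler deep water spans a range like the one reported.
print(mackenzie_speed(28.0, 33.0, 10.0))    # ~1539 m/s
print(mackenzie_speed(8.0, 34.5, 1000.0))   # ~1498 m/s
```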
209 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it can harness more and more computing power by increasing the number of processor cores without modifying the program. Meanwhile, in scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data, and data-parallel computing is an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. Following the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its parallelism and scalability clearly surpass those of traditional algorithms.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
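A minimal sketch of the data-parallel idea, not the authors' algorithm: once complex objects are decomposed into simple axis-aligned boxes, all pairwise overlap tests reduce to one vectorized (SIMD-friendly) expression.

```python
# Data-parallel broad-phase collision detection over axis-aligned boxes.
import numpy as np

rng = np.random.default_rng(1)
lo = rng.uniform(0, 10, size=(500, 3))          # box minima (x, y, z)
hi = lo + rng.uniform(0.1, 1.0, size=(500, 3))  # box maxima

# Two boxes overlap iff they overlap on every axis; broadcasting evaluates
# all 500 x 500 pairs in a single data-parallel pass.
overlap = np.all((lo[:, None, :] <= hi[None, :, :]) &
                 (hi[:, None, :] >= lo[None, :, :]), axis=2)
pairs = np.argwhere(np.triu(overlap, k=1))  # colliding (i, j) with i < j
print(len(pairs), "candidate collisions")
```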
208 Multivariate High Order Fuzzy Time Series Forecasting for Car Road Accidents
Authors: Tahseen A. Jilani, S. M. Aqil Burney, C. Ardil
Abstract:
In this paper, we present a new multivariate fuzzy time series forecasting method. The method assumes m factors, with one main factor of interest, and uses the history of the past three years to make new forecasts. It is applied to forecasting the total number of car accidents in Belgium using four secondary factors, and compared with existing fuzzy time series forecasting methods. Experimentally, the proposed method is shown to perform better than the existing methods. In practice, actuaries are interested in analyzing the patterns of causalities in road accidents; using fuzzy time series, they can define fuzzy premiums and fuzzy underwriting for car and life insurance. The National Institute of Statistics, Belgium provides a risk classification by region for each road, from which premium rates and the underwriting of insurance policy holders can be predicted.
Keywords: Average forecasting error rate (AFER), fuzziness of fuzzy sets, fuzzy If-Then rules, multivariate fuzzy time series.
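The AFER metric named in the keywords is commonly defined as the mean absolute percentage error between actual and forecast values; a minimal sketch with hypothetical counts follows.

```python
# Average Forecasting Error Rate (AFER), assuming the usual definition:
# the mean of |actual - forecast| / actual, expressed as a percentage.
def afer(actual, forecast):
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)

accidents = [1300, 1250, 1420, 1390]   # hypothetical yearly counts
forecasts = [1280, 1275, 1400, 1405]
print(f"AFER = {afer(accidents, forecasts):.2f}%")
```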
207 Hippocampus Segmentation Using a Local Prior Model on Its Boundary
Authors: Dimitrios Zarpalas, Anastasios Zafeiropoulos, Petros Daras, Nicos Maglaveras
Abstract:
Segmentation techniques based on Active Contour Models have benefited strongly from the use of prior information during their evolution. Shape prior information is captured from a training set and introduced into the optimization procedure to restrict the evolution to allowable shapes; in this way, the evolution converges onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little thought has been given to the way image information is combined with prior information. This paper focuses on a more natural way of incorporating prior information in the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the multivariate surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the human way of segmenting and thus improves segmentation accuracy.
Keywords: Medical imaging and processing, brain MRI segmentation, hippocampus segmentation, hippocampus-amygdala missing boundary, weak boundary segmentation, region-based segmentation, prior information, local weighting scheme in level sets, spatial distribution of labels, gradient distribution on boundary.
206 On the Efficient Implementation of a Serial and Parallel Decomposition Algorithm for Fast Support Vector Machine Training Including a Multi-Parameter Kernel
Authors: Tatjana Eitrich, Bruno Lang
Abstract:
This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows the use of an expensive generalized kernel without additional cost. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. To further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets, and we analyze the influence of the working set size on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and to improved overall support vector machine learning performance. Our method allows extensive parameter search methods to be used to optimize classification accuracy.
Keywords: Support vector machine training, multi-parameter kernels, shared memory parallel computing, large data.
205 An Energy-Efficient Distributed Unequal Clustering Protocol for Wireless Sensor Networks
Authors: Sungju Lee, Jangsoo Lee , Hongjoong Sin, Seunghwan Yoo, Sanghyuck Lee, Jaesik Lee, Yongjun Lee, Sungchun Kim
Abstract:
Wireless sensor networks have been extensively deployed and researched, and one of their major issues is the development of energy-efficient clustering protocols, since clustering provides an effective way to prolong network lifetime. In this paper, we compare several clustering protocols that significantly affect the balancing of energy consumption, and we propose an Energy-Efficient Distributed Unequal Clustering (EEDUC) algorithm that provides a new way of creating distributed clusters. In EEDUC, each sensor node sets a waiting time, considered as a function of its residual energy and the number of neighboring nodes, and this waiting time is used to distribute cluster heads. We also propose an unequal clustering mechanism to solve the hot-spot problem. Simulation results show that EEDUC distributes the cluster heads well, balances the energy consumption among them, and increases the network lifetime.
Keywords: Wireless sensor network, distributed unequal clustering, multi-hop, lifetime.
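The abstract gives the waiting-time idea but not its formula; the sketch below is a hypothetical instantiation in which nodes with more residual energy and more neighbors wait less and therefore announce themselves as cluster heads first.

```python
# Hypothetical waiting-time rule in the spirit of EEDUC (the paper's exact
# formula is not given in the abstract); t_max and alpha are made-up knobs.
def waiting_time(residual_energy, initial_energy, n_neighbors,
                 t_max=1.0, alpha=0.7):
    energy_term = 1.0 - residual_energy / initial_energy   # less energy -> wait longer
    density_term = 1.0 / (1 + n_neighbors)                 # fewer neighbors -> wait longer
    return t_max * (alpha * energy_term + (1 - alpha) * density_term)

print(waiting_time(0.9, 1.0, 12))  # energetic, dense node: short wait
print(waiting_time(0.3, 1.0, 2))   # depleted, sparse node: long wait
```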
204 Study of Equilibrium and Mass Transfer of Co-Extraction of Different Mineral Acids with Iron(III) from Aqueous Solution by Tri-n-Butyl Phosphate Using Liquid Membrane
Authors: Diptendu Das, Vikas Kumar Rahi, V. A. Juvekar, R. Bhattacharya
Abstract:
Extraction of Fe(III) from aqueous solution using tri-n-butyl phosphate (TBP) as a carrier requires a highly acidic medium (>6 N), as this favours formation of the chelating complex FeCl3·TBP. Conversely, stripping of iron(III) from the loaded organic solvent requires a neutral or alkaline medium to dissociate the same complex. It is observed that TBP co-extracts acids along with the metal, which reverses the driving force of extraction so that iron(III) is re-extracted from the strip phase back into the feed phase during Liquid Emulsion Membrane (LEM) pertraction. Therefore, the rates of extraction of the different mineral acids (HCl, HNO3, H2SO4) by TBP, with and without the metal Fe(III) present, were examined; it is revealed that in the presence of the metal, acid extraction is enhanced. Mass transfer coefficients of both the acid and the metal extraction were determined using a Bulk Liquid Membrane (BLM), with the average coefficient obtained by fitting the derived model equation to the experimental data. The mass transfer coefficients of the mineral acid extraction are in the order k(HNO3) = 3.3×10⁻⁶ m/s > k(HCl) = 6.05×10⁻⁷ m/s > k(H2SO4) = 1.85×10⁻⁷ m/s. The distribution equilibria of the above acids between aqueous feed solutions and solutions of TBP in organic solvents have been investigated. The stoichiometry of acid extraction reveals the formation of the complexes TBP·2HCl, HNO3·2TBP, and TBP·H2SO4. Moreover, extraction of iron(III) by TBP from HCl aqueous solution forms the complex FeCl3·TBP·2HCl, while in HNO3 medium the complex 3FeCl3·TBP·2HNO3 is formed.
Keywords: Bulk Liquid Membrane (BLM) transport, iron(III) extraction, tri-n-butyl phosphate, mass transfer coefficient.
203 Modeling Uncertainty in Multiple Criteria Decision Making Using the Technique for Order Preference by Similarity to Ideal Solution for the Selection of Stealth Combat Aircraft
Authors: C. Ardil
Abstract:
Uncertainty set theory is a generalization of fuzzy set theory and intuitionistic fuzzy set theory, and serves as an effective tool for dealing with inconsistent, imprecise, and vague information. The technique for order preference by similarity to ideal solution (TOPSIS) is a multiple-attribute method used to identify solutions from a finite set of alternatives; it simultaneously minimizes the distance from an ideal point and maximizes the distance from a nadir point. In this paper, an extension of the TOPSIS method for multiple attribute group decision-making (MAGDM) based on uncertainty sets is presented. In uncertainty decision analysis, decision-makers express information about attribute values and weights using uncertainty numbers to select the best stealth combat aircraft.
Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis (MCDM), uncertainty decision analysis, TOPSIS.
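For orientation, a minimal sketch of classical (crisp) TOPSIS follows; the paper's extension replaces these crisp ratings and weights with uncertainty numbers, and the decision matrix here is hypothetical, with all criteria assumed benefit-type.

```python
# Classical TOPSIS: rank alternatives by relative closeness to the ideal point.
import numpy as np

X = np.array([[8.0, 7.0, 0.70],     # decision matrix: aircraft x criteria
              [7.5, 8.5, 0.65],
              [9.0, 6.5, 0.80]])
w = np.array([0.5, 0.3, 0.2])       # criteria weights (all benefit-type here)

R = X / np.linalg.norm(X, axis=0)   # vector normalization per criterion
V = R * w                           # weighted normalized matrix
ideal, nadir = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to ideal point
d_minus = np.linalg.norm(V - nadir, axis=1)  # distance to nadir point
closeness = d_minus / (d_plus + d_minus)     # higher is better
print("ranking (best first):", np.argsort(-closeness))
```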
202 Combined Automatic Speech Recognition and Machine Translation in the Business Correspondence Domain for English-Croatian
Authors: Sanja Seljan, Ivan Dunđer
Abstract:
The paper presents combined automatic speech recognition (ASR) for English and machine translation (MT) for the English-Croatian and Croatian-English language pairs in the domain of business correspondence. The first part presents the results of training a commercial ASR system on English data sets, enriched by error analysis. The second part presents the results of machine translation performed by a free online tool for the English-Croatian and Croatian-English language pairs. Human evaluation in terms of usability is conducted, internal consistency is calculated by Cronbach's alpha coefficient, and the results are enriched by error analysis. Automatic evaluation is performed with the WER (Word Error Rate) and PER (Position-independent word Error Rate) metrics, followed by an investigation of Pearson's correlation with the human evaluation.
Keywords: Automatic machine translation, integrated language technologies, quality evaluation, speech recognition.
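The WER metric used for the automatic evaluation is word-level Levenshtein distance normalized by the reference length; a minimal sketch:

```python
# Word Error Rate via dynamic-programming edit distance over words.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

print(wer("please confirm the order", "please confirm order"))  # 0.25
```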
201 Induced Affectivity and Impact on Creativity: Personal Growth and Perceived Adjustment when Narrating an Intense Emotional Experience
Authors: S. Da Costa, D. Páez, F. Sánchez
Abstract:
We examine the causal role of positive affect on creativity; the association of creativity or innovation in the ideation phase with functional emotional regulation, successful adjustment to stress, and dispositional emotional creativity; and the predictive role of creativity for positive emotions and social adjustment. The study examines the effects of modifying positive affect on creativity. Participants wrote three poems, narrated an infatuation episode, answered a personal growth scale about this episode, performed a creativity task, answered a flow scale after the creativity task, and filled in a dispositional emotional creativity scale. High and low positive affect was induced by asking subjects to write three poems about stimuli with high or low positive connotations; in a neutral condition, the tasks were performed without prior affect induction. Subjects in the high positive affect condition reported more positive and fewer negative emotions and more personal growth (effect size r = .24), and their last poem was rated as more original by judges (effect size r = .33). Mediational analysis showed that positive emotions explain the influence of the manipulation on personal growth; positive affect correlates r = .33 with personal growth. The emotional creativity scale correlated with the creativity scores of the creative task (r = .14) and with the creativity of the narration of the infatuation episode (r = .21). Emotional creativity was also associated, during the creativity task, with flow (r = .27) and with affect balance (r = .26). The mediational analysis showed that emotional creativity predicts flow through positive affect. The results suggest that innovation in the ideation phase is associated with a positive affect balance and satisfactory performance, and that dispositional emotional creativity is adaptive.
Keywords: Affectivity, creativity, induction, innovation, psychological factors.
200 Random Projections for Dimensionality Reduction in ICA
Authors: Sabrina Gaito, Andrea Greppi, Giuliano Grossi
Abstract:
In this paper we present a technique to speed up ICA based on the idea of reducing the dimensionality of the data set while preserving the quality of the results. In particular we refer to the FastICA algorithm, which uses kurtosis as the statistical property to be maximized. By performing a Johnson-Lindenstrauss-like projection of the data set, we find the minimum dimensionality reduction rate ρ, defined as the ratio between the size k of the reduced space and the original size d, which guarantees a narrow confidence interval for this estimator at a high confidence level. The derived dimensionality reduction rate depends on a system control parameter β that is easily computed a priori from the observations alone. Extensive simulations have been carried out on different sets of real-world signals. They show that the achievable dimensionality reduction is very high, that it preserves the quality of the decomposition, and that it impressively speeds up FastICA. On the other hand, a set of signals for which the estimated reduction rate is greater than 1 exhibits poor decomposition results if reduced, thus validating the reliability of the parameter β. We are confident that our method will lead to a better approach to real-time applications.
Keywords: Independent Component Analysis, FastICA algorithm, higher-order statistics, Johnson-Lindenstrauss lemma.
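A minimal sketch of the projection-then-ICA pipeline under stated assumptions: a plain Gaussian Johnson-Lindenstrauss-style projection and scikit-learn's FastICA; the paper's rate ρ = k/d and control parameter β are not reproduced here.

```python
# Random projection to k < d dimensions, then FastICA on the reduced data.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 200))             # observations in d = 200 dims
k = 50                                       # reduced dimension, k < d
P = rng.normal(size=(200, k)) / np.sqrt(k)   # Gaussian JL-style projection
X_red = X @ P                                # pairwise geometry approx. kept
S = FastICA(n_components=10, random_state=0).fit_transform(X_red)
print(S.shape)  # (1000, 10) estimated independent components
```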
199 On the Performance of Information Criteria in Latent Segment Models
Authors: Jaime R. S. Fonseca
Abstract:
Notwithstanding the widespread application of finite mixture models in segmentation, finite mixture model selection is still an important issue. In fact, the selection of an adequate number of segments is a key issue in deriving latent segment structures, and it is desirable that the selection criteria used for this end are effective. In order to choose among several information criteria that may support the selection of the correct number of segments, we conduct a simulation study intended to determine which information criteria are more appropriate for mixture model selection when considering data sets with only categorical segmentation base variables. The analysis is supported by the generation of mixtures of multinomial data. As a result, we establish a relationship between the level of measurement of the segmentation variables and the performance of eleven information criteria. The criterion AIC3 shows the best performance (it indicates the correct number of segments in the simulated structure most often) for mixtures of multinomial segmentation base variables.
Keywords: Quantitative methods, multivariate data analysis, clustering, finite mixture models, information theoretical criteria, simulation experiments.
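For reference, AIC3 differs from AIC only in charging 3 rather than 2 per free parameter; a minimal sketch with hypothetical log-likelihoods:

```python
# Information criteria for choosing the number of latent segments.
import math

def aic(loglik, k):    return -2 * loglik + 2 * k
def aic3(loglik, k):   return -2 * loglik + 3 * k
def bic(loglik, k, n): return -2 * loglik + k * math.log(n)

# Hypothetical maximized log-likelihoods and parameter counts for 1..4 segments.
fits = {1: (-1520.4, 5), 2: (-1461.2, 11), 3: (-1450.9, 17), 4: (-1448.8, 23)}
n = 600
for s, (ll, k) in fits.items():
    print(s, "AIC3 =", round(aic3(ll, k), 1), " BIC =", round(bic(ll, k, n), 1))
# The model with the smallest criterion value is selected.
```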
198 Software Effort Estimation Models Using Radial Basis Function Network
Authors: E. Praynlin, P. Latha
Abstract:
Software effort estimation is the process of estimating the effort required to develop software; from this estimate, the cost and schedule of the project can be determined. An accurate estimate helps the developer allocate resources appropriately in order to avoid cost and schedule overruns. Several methods are available for estimating effort, among which soft-computing-based methods play a prominent role. Software cost estimation involves considerable uncertainty, and among soft computing methods, neural networks are good at handling uncertainty. In this paper a Radial Basis Function Network (RBFN) is compared with a back propagation network; the results are validated using six data sets, and the RBFN is found to be better suited to estimating the effort. The results are validated using two tests: an error test and a statistical test.
Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).
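The MMRE metric from the keywords is the mean of |actual − estimated|/actual over all projects; a minimal sketch with hypothetical effort values:

```python
# Mean Magnitude of Relative Error (MMRE) for effort-estimation models.
def mmre(actual, estimated):
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

actual_effort  = [120.0, 80.0, 300.0, 45.0]   # hypothetical person-months
rbfn_estimate  = [110.0, 90.0, 280.0, 50.0]
print(f"MMRE = {mmre(actual_effort, rbfn_estimate):.3f}")  # ~0.097
```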
197 Kinetic Model and Simulation Analysis for Propane Dehydrogenation in an Industrial Moving Bed Reactor
Authors: Chin S. Y., Radzi, S. N. R., Maharon, I. H., Shafawi, M. A.
Abstract:
A kinetic model for propane dehydrogenation in an industrial moving bed reactor is developed based on the reported reaction scheme. The kinetic parameters and activity constant are fine-tuned with several sets of balanced plant data. Plant data at different operating conditions are used to validate the model, and the results show good agreement between the model predictions and plant observations in terms of the amount of the main product, propylene. A simulation analysis of the key variables affecting process performance, such as the inlet temperature of each reactor (Tinrx) and the hydrogen to total hydrocarbon ratio (H2/THC), is performed to identify the operating condition that maximizes propylene production. Within the range of operating conditions applied in the present studies, the condition that maximizes propylene production at the same weighted average inlet temperature (WAIT) is ΔTinrx1 = -2, ΔTinrx2 = +1, ΔTinrx3 = +1, ΔTinrx4 = +2 and ΔH2/THC = -0.02. Under this condition, the surplus propylene produced is 7.07 tons/day compared with the base case.
Keywords: Kinetic model, dehydrogenation, simulation, modeling, propane.
196 A New Approach for Prioritization of Failure Modes in Design FMEA Using ANOVA
Authors: Sellappan Narayanagounder, Karuppusami Gurusami
Abstract:
The traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that various combinations of these three indexes may produce an identical RPN value. This paper seeks to address the drawbacks of traditional FMEA and proposes a new approach to overcome these shortcomings: a Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN, and a new method is proposed to prioritize failure modes when there is disagreement in the ranking scales for severity, occurrence and detection. An Analysis of Variance (ANOVA) is used to compare means of RPN values, with the SPSS (Statistical Package for the Social Sciences) package used to analyze the data. The results presented are based on two case studies. It is found that the proposed methodology resolves the limitations of the traditional FMEA approach.
Keywords: Failure mode and effects analysis, risk priority code, critical failure mode, analysis of variance.
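The tie problem is easy to reproduce: distinct (severity, occurrence, detection) triples can collapse to one RPN, as in this small sketch with hypothetical ratings.

```python
# RPN = S x O x D; different risk profiles can yield identical RPNs.
failure_modes = {
    "seal leak":    (9, 2, 4),   # (S, O, D), each rated 1-10
    "bearing wear": (4, 9, 2),
    "sensor drift": (6, 2, 3),
}
for name, (s, o, d) in failure_modes.items():
    print(f"{name:14s} RPN = {s * o * d}")
# "seal leak" and "bearing wear" tie at RPN = 72, even though a severity-9
# failure arguably deserves priority - the motivation for a Risk Priority Code.
```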
195 Intellectual Property Implications in the Context of Space Exploration with a Focus on European Space Agency Rules and Regulations
Authors: Linda Ana Maria Ungureanu
Abstract:
This article details the manner in which European law establishes protection and ownership rights over works created in off-world environments or in relation to space exploration. The analysis focuses on identifying the legal treatment applicable to creative works under the provisions of the International Space Treaties, on one side, and the international Intellectual Property (IP) Treaties and subsequent EU legislation, on the other, with special interest in the European Space Agency (ESA) Rules and Regulations. Furthermore, the article analyses the manner in which ESA regulates the ownership regime applicable to creative works, taking into account the relationship between the inventor/creator and ESA and the environment in which the creative work was developed. Moreover, the article sets out a series of de lege ferenda proposals for the regulation of IP matters in the context of space exploration, the main purpose being to identify the legal measures and steps that need to be taken in order to ensure that creative activities are fostered and understood as a significant catalyst for encouraging space exploration.
Keywords: ESA guidelines, EU legislation, intellectual property law, international IP treaties.
194 Advanced Neural Network Learning Applied to Pulping Modeling
Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Three-layer feed-forward neural networks trained with Preconditioned Conjugate Gradient (PCG) methods were used in this investigation of the pulping problem. Preconditioning improves convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner closely related to A. We focus mainly on PCG-based training methods originating in optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), with Polak-Ribiere update (PCGP), and with Powell-Beale restarts (PCGB). In the simulations, the behavior of the PCG methods proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.
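For the linear-system case, preconditioned conjugate gradient on M⁻¹Ax = M⁻¹b looks as follows; this is a generic Jacobi-preconditioned sketch, not the paper's network training code.

```python
# Preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    M_inv = 1.0 / np.diag(A)       # Jacobi preconditioner: M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = M_inv * r                  # preconditioned residual
    p = z.copy()                   # search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)  # Fletcher-Reeves-style update
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(A, b))  # approx [0.0909, 0.6364]
```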
193 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models
Authors: Danielle Shackley, Yetunde Folajimi
Abstract:
As more people turn to the internet for health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging over positive, neutral, and negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, then reproduced those same models with the feature added, and evaluated using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and a constant 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we obtained a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment process and replacing Naive Bayes with a deep learning neural network model.
Keywords: Sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model.
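A minimal sketch of the benchmark-plus-feature setup as described, assuming scikit-learn; the headlines and sentiment scores are hypothetical, with the sentiment score appended to the TF-IDF matrix as one extra column.

```python
# Complement Naive Bayes over TF-IDF features plus one sentiment column.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB

headlines = ["miracle cure reverses diabetes overnight",
             "CDC updates COVID-19 vaccination guidance",
             "doctors hide this one weird trick",
             "new trial shows modest benefit of drug X"]
labels = [1, 0, 1, 0]                 # 1 = fake, 0 = reliable (hypothetical)
sentiment = [0.8, 0.0, 0.6, 0.1]      # polarity scores from any analyzer

X_text = TfidfVectorizer().fit_transform(headlines)
X = hstack([X_text, csr_matrix(np.array(sentiment).reshape(-1, 1))])

model = ComplementNB().fit(X, labels)
print(model.predict(X))
```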
192 Aerodynamics and Optimization of Airfoil Under Ground Effect
Authors: Kyoungwoo Park, Byeong Sam Kim, Juhee Lee, Kwang Soo Kim
Abstract:
The prediction of aerodynamic characteristics and the shape optimization of an airfoil under the ground effect have been carried out by integrating computational fluid dynamics with a multi-objective Pareto-based genetic algorithm. The main flow characteristics around an airfoil of a WIG craft are the lift force, the lift-to-drag ratio, and the static height stability (H.S). However, they show a strong trade-off, so it is not easy to satisfy the design requirements simultaneously; this difficulty can be resolved by optimal design. These three characteristics are chosen as the objective functions, and the NACA0015 airfoil is considered as the baseline model in the present study. The airfoil profile is constructed from Bezier curves with fourteen control points, and these control points are adopted as the design variables. For multi-objective optimization problems, the optimal solutions are not unique but form a set of non-dominated optima, called Pareto frontiers or Pareto sets. As the result of the optimization, forty non-dominated Pareto optima are obtained after thirty evolutions.
Keywords: Aerodynamics, shape optimization, airfoil on WIG craft, genetic algorithm, computational fluid dynamics (CFD).
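The Bezier parameterization is straightforward to reproduce; a minimal sketch, with illustrative control points rather than the paper's fourteen design variables:

```python
# Evaluate a Bezier curve from its control points via the Bernstein basis:
# B_{i,n}(t) = C(n,i) * t^i * (1-t)^(n-i).
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Sample a Bezier curve defined by (x, y) control points."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.stack([comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)], axis=1)
    return basis @ P   # (n_samples, 2) curve coordinates

# Hypothetical upper-surface control points for an airfoil-like shape.
upper = bezier([(0, 0), (0.1, 0.08), (0.5, 0.10), (1, 0)])
print(upper[:3])
```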
191 Automatic Detection and Classification of Microcalcification, Mass, Architectural Distortion and Bilateral Asymmetry in Digital Mammogram
Authors: S. Shanthi, V. Muralibhaskaran
Abstract:
Mammography has been one of the most reliable methods for early detection of breast cancer. Several types of lesion are characteristic of breast cancer, namely microcalcifications, masses, architectural distortions and bilateral asymmetry. One of the major challenges in analysing digital mammograms is how to extract effective features for accurate cancer classification. In this paper we propose a hybrid feature extraction method to detect and classify all four signs of breast cancer. The proposed method is based on the multiscale surrounding region dependence method, Gabor filters, multifractal analysis, and directional and morphological analysis. The extracted features are input to a self-adaptive resource allocation network (SRAN) classifier. The validity of our approach is extensively demonstrated using the two benchmark data sets, the Mammographic Image Analysis Society (MIAS) database and the Digital Database for Screening Mammography (DDSM), and the results prove promising.
Keywords: Feature extraction, fractal analysis, Gabor filters, multiscale surrounding region dependence method, SRAN.
190 On Pattern-Based Programming towards the Discovery of Frequent Patterns
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop
Abstract:
The problem of frequent pattern discovery is defined as the process of searching for patterns, such as sets of features or items, that appear frequently in data. Finding such frequent patterns has become an important data mining task because it reveals associations, correlations, and many other interesting relationships hidden in a database. Most of the proposed frequent pattern mining algorithms have been implemented in imperative programming languages; such a paradigm is inefficient when the set of patterns is large and the frequent patterns are long. We suggest applying a high-level declarative style of programming to the problem of frequent pattern discovery, considering two languages: Haskell and Prolog. Our intuition is that the problem of finding frequent patterns should be implemented efficiently and concisely in a declarative paradigm, since pattern matching is a fundamental feature supported by most functional languages and by Prolog. Our frequent pattern mining implementations in Haskell and Prolog confirm this hypothesis about the conciseness of the program. Comparative performance studies on lines of code, speed, and memory usage of declarative versus imperative programming are reported in the paper.
Keywords: Frequent pattern mining, functional programming, pattern matching, logic programming.
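The paper's implementations are in Haskell and Prolog; for illustration only, the same level-wise (Apriori-style) discovery can be sketched in Python as follows.

```python
# Level-wise frequent itemset discovery: count k-itemsets, keep the frequent
# ones, join them into (k+1)-candidates, repeat until no candidates remain.
from itertools import combinations

def frequent_patterns(transactions, min_support):
    items = {i for t in transactions for i in t}
    k, current, result = 1, [frozenset([i]) for i in sorted(items)], {}
    while current:
        # count the support of each candidate in one pass over the data
        counts = {c: sum(c <= t for t in transactions) for c in current}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        # generate (k+1)-candidates by joining frequent k-itemsets
        keys = list(frequent)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == k + 1})
        k += 1
    return result

data = [frozenset(t) for t in (["a", "b", "c"], ["a", "b"],
                               ["a", "c"], ["b", "c"])]
print(frequent_patterns(data, min_support=2))
```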
189 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models
Authors: Yoonsuh Jung
Abstract:
As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed, and in such models the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the 'optimal' value of the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with weights that depend on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show on real and simulated data sets that the value selected by the suggested method often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
Keywords: Cross-validation, parameter averaging, parameter selection, regularization parameter search.
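A minimal sketch of the candidate-averaging idea under stated assumptions (a lasso path with inverse-CV-error weights; the paper's exact weighting scheme is not reproduced):

```python
# Average several well-performing lambda candidates instead of committing
# to the single CV minimizer.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 200))                  # n << p, microarray-like
beta = np.zeros(200); beta[:5] = 2.0            # few truly active genes
y = X @ beta + rng.normal(size=60)

lambdas = np.logspace(-2, 1, 20)
cv_err = np.array([
    -cross_val_score(Lasso(alpha=l, max_iter=5000), X, y,
                     cv=5, scoring="neg_mean_squared_error").mean()
    for l in lambdas])

top = np.argsort(cv_err)[:5]                    # several candidates, not one
w = 1.0 / cv_err[top]; w /= w.sum()             # performance-based weights
lam_avg = float(np.sum(w * lambdas[top]))       # averaged tuning value
print(lam_avg)
```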
188 An Improved Performance of the SRM Drives Using Z-Source Inverter with the Simplified Fuzzy Logic Rule Base
Authors: M. Hari Prabhu
Abstract:
This paper studies the performance of Switched Reluctance Motor (SRM) drives fed by a Z-source inverter (ZSI) and proposes a Fuzzy Logic Controller (FLC) with a simplified rule base and a self-tuning mechanism for the output scaling factor (SF). The aim is to reduce the program complexity of the controller by reducing the number of fuzzy sets of the membership functions (MFs) without losing system performance and stability, thanks to the adjustable controller gain. The ZSI exhibits both voltage-buck and voltage-boost capability; it reduces line harmonics, improves reliability, and extends the output voltage range. The output SF of the controller can be tuned continuously by a gain updating factor whose value is derived from fuzzy logic, with the plant error and the error change ratio as input variables. Experiments carried out on a four-phase 6/8-pole SRM on the dSPACE DS1104 platform show the feasibility and effectiveness of the devised methods, and the performance of the proposed controllers is compared with a conventional counterpart.
Keywords: Fuzzy logic controller, scaling factor (SF), switched reluctance motor (SRM), variable-speed drives.
187 Seismic Behavior of Steel Moment-Resisting Frames for Uplift Permitted in Near-Fault Regions
Authors: M. Tehranizadeh, E. Shoushtari Rezvani
Abstract:
The seismic performance of steel moment-resisting frame structures is investigated considering nonlinear soil-structure interaction (SSI) effects. 10-, 15-, and 20-story planar building frames with an aspect ratio of 3 are designed in accordance with current building codes. Inelastic seismic demands of the superstructure are modeled using a concentrated plasticity model, and the raft foundation system is designed for different soil types. A beam-on-nonlinear-Winkler-foundation (BNWF) model is used to represent the dynamic impedance of the underlying soil. Two sets of near-fault earthquakes, pulse-like and non-pulse-like, are used as input ground motions. The results show that the reduction in drift demands due to nonlinear SSI is characterized by a more uniform distribution pattern along the height when compared to the fixed-base and linear SSI conditions. It is also concluded that the beneficial effects of nonlinear SSI on displacement demands are more significant for pulse-like ground motions, and that the performance level of steel moment-resisting frames can thereby be enhanced.
Keywords: Soil-structure interaction, uplifting, soil plasticity, near-fault earthquake, tall building.
186 Shoreline Change Estimation from Survey Image Coordinates and Neural Network Approximation
Authors: Tienfuan Kerh, Hsienchang Lu, Rob Saunders
Abstract:
Shoreline erosion caused by global warming and sea level rise may result in the loss of land areas, so shorelines should be examined regularly to reduce possible negative impacts. In this study, three sets of survey images, from the years 1990, 2001, and 2010, are first digitized using graphical software to establish the spatial coordinates of six major beaches around the island of Taiwan. By overlaying the known multi-period images, the change of shoreline can be observed from the distribution of coordinates. In addition, neural network approximation is used to develop a model for predicting shoreline variation in the years 2015 and 2020. The comparison results show no significant change in total sandy area for any of the beaches over the three periods. However, the prediction results show that two beaches may exhibit an increase in total sandy area within a statistical 95% confidence interval. The proposed method may be applicable to other shorelines of interest around the world.
Keywords: Digitalized shoreline coordinates, survey image overlaying, neural network approximation, total beach sandy areas.
185 Sulphur-Mediated Precipitation of Pt/Fe/Co/Cr Ions in Liquid-Liquid and Gas-Liquid Chloride Systems
Authors: J. Siame, H. Kasaini
Abstract:
Proof-of-concept experiments were conducted to determine the feasibility of using small amounts of Dissolved Sulphur (DS) from the gaseous phase to precipitate platinum ions in chloride media. Two sets of precipitation experiments were performed in which the source of sulphur atoms was either a thiosulphate solution (Na2S2O3) or sulphur dioxide gas (SO2). In the liquid-liquid (L-L) system, complete precipitation of Pt was achieved at small dosages of Na2S2O3 (0.01-1.0 M) within 3-5 minutes. On the basis of this result, gas absorption tests were carried out, mainly to achieve a sulphur solubility equivalent to 0.018 M. The idea that large amounts of precious metals could be recovered selectively from dilute solutions by utilizing waste SO2 streams at low pressure is attractive from both the economic and the environmental point of view. Therefore, the mass transfer characteristics of SO2 associated with reactive absorption across the gas-liquid (G-L) interface were evaluated under different conditions of pressure (0.5-2 bar), solution temperature (20-50 °C) and acid strength (1-4 M HCl). This paper concludes with information about the selective precipitation of Pt in the presence of cations (Fe2+, Co2+, and Cr3+) in a CSTR and a recommendation for scaling up the laboratory data to industrial pilot-scale operations.
Keywords: CSTR, diffusivity, platinum, selective precipitation, sulphur dioxide, thiosulphate.
184 Soft-Sensor for Estimation of Gasoline Octane Number in Platforming Processes with Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
Authors: Hamed Vezvaei, Sepideh Ordibeheshti, Mehdi Ardjmand
Abstract:
The gasoline octane number is the standard measure of the anti-knock properties of a motor fuel. In platforming processes, one of the important unit operations in oil refineries, it can be determined by online measurement or with CFR (Cooperative Fuel Research) engines. Online measurement of the octane number can be done with direct octane number analyzers, but these are very expensive, so a feasible alternative such as an ANFIS estimator is needed. ANFIS is a system in which a neural network is incorporated into a fuzzy inference system, so that the fuzzy system is tuned automatically from data by the learning algorithms of neural networks. ANFIS constructs an input-output mapping based both on human knowledge and on generated input-output data pairs. In this research, 31 industrial data sets are used (21 for training and the rest for generalization). According to this simulation, the hybrid training algorithm in ANFIS gives good agreement between the industrial data and the simulated results.
Keywords: Adaptive Neuro-Fuzzy Inference Systems, gasoline octane number, soft-sensor, catalytic naphtha reforming.