Search results for: Data mining and Information Extraction
7711 Role and Effect of Temperature on LPG Sweetening Process
Authors: Ali Samadi Afshar, Sayed Reaza Hashemi
Abstract:
In the gas refineries of Iran's South Pars Gas Complex, the Sulfrex demercaptanization process is used to remove volatile and corrosive mercaptans from liquefied petroleum gas with a caustic solution. The process consists of two steps: removing low molecular weight mercaptans and regenerating the exhausted caustic. Parameters such as LPG feed temperature, caustic concentration and feed mercaptan content in the extraction step, and sodium mercaptide content in the caustic, catalyst concentration, caustic temperature and air injection rate in the regeneration step, are the effective factors. This paper focuses on the temperature factor, which plays a key role in mercaptan extraction and caustic regeneration. The experimental results demonstrated that, by optimizing temperature, the sodium mercaptide content in the caustic was minimized owing to good oxidation and the sulfur impurities in the product were reduced.
Keywords: Caustic regeneration, demercaptanization, LPG sweetening, mercaptan extraction, temperature.
7710 A Preference-Based Multi-Agent Data Mining Framework for Social Network Service Users' Decision Making
Authors: Ileladewa Adeoye Abiodun, Cheng Wai Khuen
Abstract:
Multi-Agent Systems (MAS) emerged in the pursuit of improving our standard of living, and can manifest complex human behaviors such as communication, decision making, negotiation and self-organization. Social Network Services (SNSs) have attracted millions of users, many of whom have integrated these sites into their daily practices. The domains of MAS and SNS share many similarities, such as architecture, features and functions. Exploring social network users' behavior through a multi-agent model is therefore our research focus, in order to generate more accurate and meaningful information for SNS users. An application of MAS is the e-Auction and e-Rental services of the Universiti Cyber AgenT (UniCAT), a social network for students in Universiti Tunku Abdul Rahman (UTAR), Kampar, Malaysia, built around the Belief-Desire-Intention (BDI) model. However, in spite of its various advantages, the BDI model has also been found to have some shortcomings. This paper therefore proposes a multi-agent framework utilizing a modified BDI model, Belief-Desire-Intention in Dynamic and Uncertain Situations (BDIDUS), using the UniCAT system as a case study.
Keywords: Distributed Data Mining, Multi-Agent Systems, Preference-Based, SNS.
7709 Feasibility Study for a Castor oil Extraction Plant in South Africa
Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee
Abstract:
A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both short and long term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway; the potential raw material supply proves to be reliable, since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible, with a high chance of success, while contributing to socio-economic development. It was recommended that lab tests be conducted to establish the process kinetics to be used in the initial design of the plant.
Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction.
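As a rough illustration of how NPV and IRR figures of this kind are computed, the sketch below discounts a cash-flow schedule at 10.5% and finds the IRR by bisection; the cash flows used here are hypothetical placeholders, not the project's actual schedule.

```python
# Hypothetical sketch of the NPV/IRR arithmetic behind a feasibility analysis.
# The cash flows below are illustrative placeholders, NOT the project's data.

def npv(rate, cash_flows):
    """Net Present Value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal Rate of Return found by bisection on the NPV sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Placeholder schedule: initial investment followed by 10 years of net income (Rand).
flows = [-9_000_000] + [1_971_000] * 10
print(f"NPV @ 10.5%: R{npv(0.105, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```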
7708 Multi-Dimensional Concerns Mining for Web Applications via Concept-Analysis
Authors: Carlo Bellettini, Alessandro Marchetto, Andrea Trentini
Abstract:
Web applications have become very complex and crucial, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering); the scientific community has therefore focused its attention on Web application design, development, analysis and testing, by studying and proposing methodologies and tools. This paper proposes an approach to automatic multi-dimensional concern mining for Web applications, based on concept analysis, impact analysis and token-based concern identification. The approach lets the user analyse and traverse the Web software relevant to a particular concern (concept, goal, purpose, etc.) via multi-dimensional separation of concerns, in order to document, understand and test Web applications. The technique was developed in the context of the WAAT (Web Applications Analysis and Testing) project. A semi-automatic tool to support it is currently under development.
Keywords: Concepts Analysis, Concerns Mining, Multi-Dimensional Separation of Concerns, Impact Analysis.
7707 Mining Implicit Knowledge to Predict Political Risk by Providing Novel Framework with Using Bayesian Network
Authors: Siavash Asadi Ghajarloo
Abstract:
Nowadays, predicting the political risk level of a country has become a critical issue for investors who intend to obtain accurate information concerning the stability of business environments. Since investors are often laymen rather than professional IT personnel, this paper proposes a framework named GECR to help non-expert users discover political risk stability over time based on political news and events. To achieve this goal, the Bayesian network approach was applied to a sample dataset of 186 political news items from Pakistan. Bayesian networks, an artificial intelligence approach, were employed in the presented framework because they are a powerful technique for modeling uncertain domains. The results showed that our framework, with Bayesian networks as the decision-support tool, predicted the political risk level with a high degree of accuracy.
Keywords: Bayesian Networks, Data mining, GECR framework, Predicting political risk.
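The GECR framework itself is not reproduced here; as a minimal illustration of how a Bayesian network can score risk from observed evidence, the sketch below uses a hypothetical two-node structure (risk level generating news tones) and computes the posterior by enumeration. All probabilities are placeholders.

```python
# Minimal, hypothetical sketch of Bayesian inference over a country's risk level
# from the tone of political news items. The structure (Risk -> Tone) and all
# probability tables are illustrative placeholders, not the GECR framework.

prior_risk = {"high": 0.3, "medium": 0.4, "low": 0.3}

# P(news tone | risk level): each observed item's tone is modelled as generated by the risk level.
p_tone_given_risk = {
    "high":   {"negative": 0.70, "neutral": 0.20, "positive": 0.10},
    "medium": {"negative": 0.35, "neutral": 0.40, "positive": 0.25},
    "low":    {"negative": 0.10, "neutral": 0.35, "positive": 0.55},
}

def posterior_risk(observed_tones):
    """P(risk | tones): prior times product of likelihoods, then normalised."""
    scores = {}
    for risk, prior in prior_risk.items():
        likelihood = 1.0
        for tone in observed_tones:
            likelihood *= p_tone_given_risk[risk][tone]
        scores[risk] = prior * likelihood
    total = sum(scores.values())
    return {risk: s / total for risk, s in scores.items()}

print(posterior_risk(["negative", "negative", "neutral", "negative", "positive"]))
```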
7706 Satellite Interferometric Investigations of Subsidence Events Associated with Groundwater Extraction in Sao Paulo, Brazil
Authors: B. Mendonça, D. Sandwell
Abstract:
The Metropolitan Region of Sao Paulo (MRSP) has suffered from serious water scarcity. Consequently, the most convenient solution has been drilling wells to extract groundwater from local aquifers. However, this requires constant vigilance to prevent over-extraction and future events that can pose a serious threat to the population, such as subsidence. Radar imaging techniques (InSAR) allow continuous investigation of such phenomena. The analysis in the present study consists of 23 SAR images dated from October 2007 to March 2011, obtained by the ALOS-1 spacecraft. Data processing was done with the software GMTSAR, using the InSAR technique to create pairs of interferograms covering ground displacement over different time spans. First results show a correlation between the location of 102 wells registered in 2009 and signals of ground displacement equal to or lower than -90 millimeters (mm) in the region. The longest time span interferogram obtained covers October 2007 to March 2010. From that interferogram, it was possible to detect the average velocity of displacement in millimeters per year (mm/y) and to identify the areas where strong signals have persisted in the MRSP. Four specific areas with subsidence signals of 28 mm/y to 40 mm/y were chosen to investigate the phenomenon: Guarulhos (Sao Paulo International Airport), the Greater Sao Paulo, Itaquera and Sao Caetano do Sul. The signals covered areas between 0.6 km and 1.65 km in length. All areas are located above a sedimentary type of aquifer. Itaquera and Sao Caetano do Sul showed signals varying from 28 mm/y to 32 mm/y. On the other hand, the places most likely to be suffering from stronger subsidence are those in the Greater Sao Paulo and in Guarulhos, right beside the International Airport of Sao Paulo. The rate of displacement observed in both regions goes from 35 mm/y to 40 mm/y. Previous investigations of the water use at the International Airport highlight the risks of the excessive water extraction that was being done through 9 deep wells. Therefore, subsidence events are likely to occur and to cause serious damage in the area. This study reveals a situation that has not been explored with proper importance in the city, given its social and economic consequences. Since the data were only available until 2011, the question that remains is whether the situation still persists. It can be reaffirmed, however, that there is a scenario of risk at the International Airport of Sao Paulo that needs further investigation.
Keywords: Ground subsidence, interferometric satellite aperture radar (InSAR), metropolitan region of Sao Paulo, water extraction.
7705 A Hybrid Approach for Thread Recommendation in MOOC Forums
Authors: Ahmad. A. Kardan, Amir Narimani, Foozhan Ataiefard
Abstract:
Recommender systems have been developed to provide contents and services compatible with users' behaviors and interests. Due to information overload in online discussion forums and users' diverse interests, recommending relevant topics and threads is considered helpful for improving the ease of forum usage. In order to help learners find relevant information in educational forums, recommendations are even more necessary. We present a hybrid thread recommender system for MOOC forums that applies social network analysis and association rule mining techniques. Initial results indicate that the proposed recommender system performs comparatively well with regard to the limited data available from users' previous posts in the forum.
Keywords: Association rule mining, hybrid recommender system, massive open online courses, MOOCs, social network analysis.
7704 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The statistical pattern recognition method, which comprises feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on the improved Holder cloud feature, and an extreme learning machine (ELM), which addresses the real-time requirements of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment, and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. The algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature is difficult to use for recognition at low SNR, and it also has better recognition accuracy. Simulation results show that the approach in this paper still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.
Keywords: Communication signal, feature extraction, Holder coefficient, improved cloud model.
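As a rough sketch of the classification stage only, the snippet below implements a basic extreme learning machine: random hidden-layer weights, a sigmoid activation, and output weights solved in one step with a pseudo-inverse. The feature vectors and labels are random placeholders, not the improved Holder cloud features described above.

```python
import numpy as np

# Minimal extreme learning machine (ELM) sketch: random hidden weights, sigmoid
# activation, output weights solved in closed form via the Moore-Penrose pseudo-inverse.
# Features and labels here are random placeholders, not Holder cloud features.

rng = np.random.default_rng(0)

def one_hot(y, n_classes):
    out = np.zeros((y.size, n_classes))
    out[np.arange(y.size), y] = 1.0
    return out

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def fit(self, X, y, n_classes):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ one_hot(y, n_classes)       # single least-squares solve
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Placeholder data: 200 samples, 8 features, 4 modulation classes.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)
model = ELM(n_hidden=64).fit(X, y, n_classes=4)
print("training accuracy:", (model.predict(X) == y).mean())
```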
7703 Learning and Evaluating Possibilistic Decision Trees using Information Affinity
Authors: Ilyes Jenhani, Salem Benferhat, Zied Elouedi
Abstract:
This paper investigates the issue of building decision trees from data with imprecise class values, where imprecision is encoded in the form of possibility distributions. The Information Affinity similarity measure is introduced into the well-known gain ratio criterion in order to assess the homogeneity of a set of possibility distributions representing the classes of instances belonging to a given training partition. For the experimental study, an information-affinity-based performance criterion is proposed and used to show the performance of the approach on well-known benchmarks.
Keywords: Data mining from uncertain data, Decision Trees, Possibility Theory.
7702 A Study on Finding Similar Document with Multiple Categories
Authors: R. Saraçoğlu, N. Allahverdi
Abstract:
Searching for similar documents and document management have an important place in text mining. One of the most important parts of similar-document research is the process of classifying or clustering the documents. In this study, a similar-document search approach that also addresses the case of documents belonging to multiple categories (the multiple categories problem) has been carried out. The proposed method, based on Fuzzy Similarity Classification (FSC), has been compared with the Rocchio algorithm and the naive Bayes method, which are widely used in text mining. Empirical results show that the proposed method is quite successful and can be applied effectively. For the second stage, a multiple-categories vector method based on how frequently categories are seen together has been used. Empirical results show that the achievement is almost doubled when the proposed method is compared with the classical approach.
Keywords: Document similarity, Fuzzy classification, Multiple categories, Text mining.
7701 Mining User-Generated Contents to Detect Service Failures with Topic Model
Authors: Kyung Bae Park, Sung Ho Ha
Abstract:
Online user-generated contents (UGC) significantly change the way customers behave (e.g., shop, travel), and handling the overwhelming amount of diverse UGC is one of the paramount issues for management. However, a current approach (e.g., sentiment analysis) is often ineffective for leveraging textual information to detect the problems or issues that a given organization suffers from. In this paper, we apply text mining with Latent Dirichlet Allocation (LDA) to a popular online review site dedicated to complaints from users. We find that LDA efficiently detects customer complaints, and that further inspection with a visualization technique is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them accordingly in a timely manner, given the limited amount of resources. The findings provide managerial insights into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also highlights several insights from applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in the R program from beginning (data collection in R) to end (LDA analysis in R), since such instruction is still largely undocumented. In this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
Keywords: Latent Dirichlet allocation, R program, text mining, topic model, user generated contents, visualization.
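The paper implements LDA in R; purely as an illustration of the same pipeline, the sketch below fits an LDA topic model with scikit-learn in Python and prints the top words per topic. The complaint texts are toy placeholders.

```python
# Illustrative LDA topic-model sketch (the paper itself uses R; this is a Python equivalent).
# The review texts are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "late delivery and damaged package, no refund offered",
    "billing error charged twice, customer service never answered",
    "delivery delayed again, tracking information missing",
    "refund request ignored, support ticket closed without reply",
    "overcharged on invoice, payment portal keeps failing",
]

vectorizer = CountVectorizer(stop_words="english")      # document-term counts
dtm = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top)}")   # each topic groups related complaint themes
```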
7700 Lead and Cadmium Spatial Pattern and Risk Assessment around Coal Mine in Hyrcanian Forest, North Iran
Authors: Mahsa Tavakoli, Seyed Mohammad Hojjati, Yahya Kooch
Abstract:
In this study, the effect of coal mining activities on lead and cadmium concentrations and distribution in soil was investigated in the Hyrcanian forest, North Iran. 16 plots (20×20 m²) were established systematic-randomly (on a 60×60 m² grid) in an area of 4 ha (200×200 m², with the mine entrance at the center). An adjacent area not affected by the mining activity was considered as the control area. In order to investigate soil lead and cadmium concentrations, one sample was taken from the 0-10 cm layer in each plot. To study the spatial pattern of soil properties and lead and cadmium concentrations in the mining area, an area of 80×80 m² (with the mine at the center) was considered and 80 soil samples were taken systematic-randomly (at 10 m intervals). Geostatistical analysis was performed via the kriging method and GS+ software (version 5.1). In order to estimate the impact of coal mining activities on soil quality, a pollution index was measured. Lead and cadmium concentrations were significantly higher in the mine area (Pb: 10.97±0.30, Cd: 184.47±6.26 mg kg⁻¹) in comparison with the control area (Pb: 9.42±0.17, Cd: 131.71±15.77 mg kg⁻¹). The mean values of the PI index indicate that Pb (1.16) and Cd (1.77) presented slight pollution. Results of the NIPI index showed that Pb (1.44) and Cd (2.52) presented slight pollution and moderate pollution, respectively. Results of variography and the kriging method showed that it is possible to prepare interpolation maps of lead and cadmium around the mining areas in the Hyrcanian forest. According to the results of the pollution and risk assessments, the forest soil was contaminated by heavy metals (lead and cadmium); therefore, using reclamation and remediation techniques in these areas is necessary.
Keywords: Traditional coal mining, heavy metals, pollution indicators, geostatistics, Caspian forest.
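The abstract does not state the index formulas; assuming the commonly used definitions (single pollution index PI = measured concentration divided by a reference value, and the Nemerow integrated index combining the mean and maximum PI), the calculation looks roughly like the sketch below. The reference value and per-plot concentrations are placeholders, so the outputs are illustrative only.

```python
import math

# Sketch of the usual soil pollution indices (the abstract does not state its formulas).
# Reference value and per-plot concentrations below are placeholders for illustration.

def single_pollution_index(concentration, reference):
    """PI = measured concentration / background (reference) concentration."""
    return concentration / reference

def nemerow_index(pi_values):
    """Common Nemerow formulation combining the mean and the maximum single index."""
    mean_pi = sum(pi_values) / len(pi_values)
    return math.sqrt((max(pi_values) ** 2 + mean_pi ** 2) / 2)

reference_pb = 9.42                      # stand-in reference: the control-area mean quoted above
plot_pb = [10.2, 11.5, 9.8, 12.1, 10.9]  # hypothetical per-plot Pb concentrations (mg/kg)

pi_pb = [single_pollution_index(c, reference_pb) for c in plot_pb]
print("Pb single indices:", [round(v, 2) for v in pi_pb])
print("Pb Nemerow index:", round(nemerow_index(pi_pb), 2))
```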
7699 Solvent Extraction and Spectrophotometric Determination of Palladium(II) Using P-Methylphenyl Thiourea as a Complexing Agent
Authors: Shashikant R. Kuchekar, Somnath D. Bhumkar, Haribhau R. Aher, Bhaskar H. Zaware, Ponnadurai Ramasami
Abstract:
A precise, sensitive, rapid and selective method for the solvent extraction and spectrophotometric determination of palladium(II) using para-methylphenyl thiourea (PMPT) as an extractant is developed. Palladium(II) forms a yellow colored complex with PMPT which shows an absorption maximum at 300 nm. The colored complex obeys Beer's law up to 7.0 µg mL⁻¹ of palladium. The molar absorptivity and Sandell's sensitivity were found to be 8.486 × 10³ L mol⁻¹ cm⁻¹ and 0.0125 μg cm⁻², respectively. The optimum conditions for the extraction and determination of palladium have been established by monitoring the various experimental parameters. The precision of the method has been evaluated and the relative standard deviation has been found to be less than 0.53%. The proposed method is free from interference from a large number of foreign ions. The method has been successfully applied to the determination of palladium in alloys and in synthetic mixtures corresponding to alloy samples.
Keywords: Para-methylphenyl thiourea, palladium, spectrophotometry.
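Assuming the standard definitions (molar absorptivity ε = A / (c·l) from Beer's law, and Sandell's sensitivity as the metal's molar mass divided by ε), the arithmetic behind constants of this kind looks like the sketch below; the absorbance/concentration reading used here is a hypothetical example, not the paper's data.

```python
# Sketch of the spectrophotometric constants, assuming the standard definitions.
# The absorbance/concentration pair below is a hypothetical reading, not the paper's data.

MOLAR_MASS_PD = 106.42          # g/mol

def molar_absorptivity(absorbance, molar_conc, path_cm=1.0):
    """Beer's law: A = epsilon * c * l  ->  epsilon = A / (c * l), in L mol^-1 cm^-1."""
    return absorbance / (molar_conc * path_cm)

def sandell_sensitivity(molar_mass, epsilon):
    """Common definition: molar mass / molar absorptivity, in ug cm^-2 (per 0.001 absorbance)."""
    return molar_mass / epsilon

# Hypothetical reading: A = 0.424 for a 5.0e-5 mol/L Pd(II)-PMPT solution in a 1 cm cell.
eps = molar_absorptivity(0.424, 5.0e-5)
print(f"molar absorptivity ~ {eps:.3e} L mol^-1 cm^-1")
print(f"Sandell sensitivity ~ {sandell_sensitivity(MOLAR_MASS_PD, eps):.4f} ug cm^-2")
```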
7698 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process
Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek
Abstract:
As big data analysis becomes more important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Much research has been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. In the third phase, patterns that affect the final test result are found. Finally, the proposed three steps are applied to actual industrial data, and the experimental results showed the potential for field application.
Keywords: Die-Map Clustering, Feature Extraction, Pattern Recognition, Semiconductor Manufacturing Process.
7697 Unsupervised Outlier Detection in Streaming Data Using Weighted Clustering
Authors: Yogita, Durga Toshniwal
Abstract:
Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times and new concepts may keep evolving. Irrelevant attributes can be termed noisy attributes, and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. The scheme is based on clustering, as clustering is an unsupervised data mining task that does not require labeled data; both density-based and partitioning clustering are combined for outlier detection. In this scheme, partitioning clustering is also used to assign weights to attributes depending upon their respective relevance, and the weights are adaptive. Weighted attributes help reduce or remove the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution. Experimental results on synthetic and real world data sets show that our proposed approach outperforms another existing approach (CORM) in terms of outlier detection rate, false alarm rate, and increasing percentages of outliers.
Keywords: Concept Evolution, Irrelevant Attributes, Streaming Data, Unsupervised Outlier Detection.
7696 Clustering Unstructured Text Documents Using Fading Function
Authors: Pallav Roxy, Durga Toshniwal
Abstract:
Clustering unstructured text documents is an important issue in the data mining community and has a number of applications such as document archive filtering, document organization, topic detection and subject tracing. In the real world, some of the already clustered documents may lose importance while new documents of greater significance may evolve. Most of the work done so far in clustering unstructured text documents overlooks this aspect of clustering. This paper addresses the issue by using a Fading Function. The unstructured text documents are clustered, and for each cluster a statistics structure called a Cluster Profile (CP) is maintained. The cluster profile incorporates the Fading Function, which keeps an account of the time-dependent importance of the cluster. The work proposes a novel algorithm, the Clustering n-ary Merge Algorithm (CnMA), for unstructured text documents, which uses the Cluster Profile and the Fading Function. Experimental results illustrating the effectiveness of the proposed technique are also included.
Keywords: Clustering, Text Mining, Unstructured Text Documents, Fading Function.
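The abstract does not give the fading function's exact form; a common choice in stream clustering is an exponential (damped-window) decay f(Δt) = 2^(−λ·Δt). The sketch below shows a hypothetical Cluster Profile whose importance weight fades this way between updates, which is the general idea rather than the paper's specific function.

```python
# Hypothetical Cluster Profile with an exponential fading function, a common
# damped-window choice in stream clustering; the paper's exact function is not given here.

class ClusterProfile:
    def __init__(self, created_at, decay_lambda=0.1):
        self.decay_lambda = decay_lambda
        self.weight = 1.0            # time-dependent importance of the cluster
        self.size = 0
        self.last_update = created_at

    def fade(self, now):
        """Apply f(dt) = 2^(-lambda * dt) to the accumulated weight."""
        dt = now - self.last_update
        self.weight *= 2.0 ** (-self.decay_lambda * dt)
        self.last_update = now

    def add_document(self, now):
        """Fade the old weight first, then absorb the new document."""
        self.fade(now)
        self.weight += 1.0
        self.size += 1

cp = ClusterProfile(created_at=0)
for t in (1, 2, 10, 30):
    cp.add_document(now=t)
print(f"documents={cp.size}, faded importance weight={cp.weight:.3f}")

# A cluster left unused keeps losing importance and can be pruned below a threshold.
cp.fade(now=80)
print(f"after a long idle period: weight={cp.weight:.3f}")
```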
7695 Improvement of a Label Extraction Method for a Risk Search System
Authors: Shigeaki Sakurai, Ryohei Orihara
Abstract:
This paper proposes a method for improving classification efficiency in a classification model. The model is used in a risk search system and extracts specific labels from articles posted on bulletin board sites, so that the system can analyze the important discussions composed of those articles. The improvement method introduces ensemble learning methods that use multiple classification models, and it also introduces expressions related to the specific labels into the generation of word vectors. The paper applies the improvement method to articles collected from three bulletin board sites selected by users and verifies its effectiveness.
Keywords: Text mining, Risk search system, Corporate reputation, Bulletin board site, Ensemble learning.
7694 Skyline Extraction using a Multistage Edge Filtering
Authors: Byung-Ju Kim, Jong-Jin Shin, Hwa-Jin Nam, Jin-Soo Kim
Abstract:
Skyline extraction in mountainous images can be used for navigation of vehicles or UAVs (unmanned air vehicles), but it is very hard to extract the skyline shape because of clutter such as clouds, sea lines and field borders in the images. We developed an edge-based skyline extraction algorithm using a proposed multistage edge filtering (MEF) technique. In this method, the characteristics of clutter in the image are first defined and then the lines classified as clutter are eliminated in stages using the proposed MEF technique. After this processing, we select the final line using skyline measures among the remaining lines. The proposed algorithm is robust in severe environments with clutter and performs well even for low-resolution infrared sensor images. We tested the proposed algorithm on images obtained in the field with an infrared camera and confirmed that it produced better performance and faster processing times than conventional algorithms.
Keywords: MEF, mountainous image, navigation, skyline.
7693 A Hybrid Recommendation System Based On Association Rules
Authors: Ahmed Mohammed K. Alsalama
Abstract:
Recommendation systems are widely used in e-commerce applications. The engine of a current recommendation system recommends items to a particular user based on user preferences and previous high ratings. Various recommendation schemes, such as collaborative filtering and content-based approaches, are used to build a recommendation system. Most current recommendation systems were developed to fit a certain domain such as books, articles, or movies. We propose a hybrid recommendation framework to be applied to two-dimensional spaces (User × Item) with a large number of users and a small number of items. Moreover, our proposed framework makes use of both the favorite and non-favorite items of a particular user. The framework is built upon the integration of association rule mining and the content-based approach. The results of experiments show that our proposed framework can provide accurate recommendations to users.
Keywords: Data Mining, Association Rules, Recommendation Systems, Hybrid Systems.
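A minimal sketch of the association-rule half of such a framework, using the mlxtend library on toy user-item transactions (liked-item sets); the transactions, support and confidence thresholds are placeholders rather than the paper's settings.

```python
# Sketch of mining association rules over users' liked-item sets with mlxtend.
# Transactions, support and confidence thresholds are toy placeholders.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["item_a", "item_b", "item_c"],
    ["item_a", "item_b"],
    ["item_b", "item_c"],
    ["item_a", "item_c"],
    ["item_a", "item_b", "item_c"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit_transform(transactions), columns=encoder.columns_)

frequent = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)

# Recommend the consequents of rules whose antecedents the target user already likes.
user_items = {"item_a", "item_b"}
for _, rule in rules.iterrows():
    if set(rule["antecedents"]) <= user_items:
        print(set(rule["consequents"]), f"(confidence={rule['confidence']:.2f})")
```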
7692 Analysis of Diverse Cluster Ensemble Techniques
Authors: S. Sarumathi, N. Shanthi, P. Ranjetha
Abstract:
Data mining is the procedure of discovering interesting patterns from huge amounts of data. One of the most supportive processes for accessing the data faster is clustering. Clustering is the process of identifying similarity between data according to the characteristics present in the data and grouping associated data objects into clusters. A cluster ensemble is the technique of combining various runs of different clustering algorithms to obtain a general partition of the original dataset, aiming at the consolidation of outcomes from a collection of individual clustering results. The performance of clustering ensembles is mainly affected by two principal factors: diversity and quality. This paper presents an overview of different cluster ensemble algorithms, along with the methods used in cluster ensembles to improve diversity and quality in several cluster-ensemble-related papers, shows a comparative analysis of different cluster ensembles, and summarizes various cluster ensemble methods. This analysis will be very useful for clustering experts and will also help in deciding the most appropriate method for the problem at hand.
Keywords: Cluster Ensemble, Consensus Function, CSPA, Diversity, HGPA, MCLA.
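As a small illustration of one well-known consensus strategy (evidence accumulation via a co-association matrix, in the spirit of CSPA), the sketch below combines several k-means runs; the data and parameters are synthetic placeholders, not any of the surveyed algorithms in full.

```python
# Sketch of a co-association (evidence-accumulation) cluster ensemble: several k-means
# runs vote on whether two points belong together, and the averaged votes are
# re-clustered with hierarchical linkage. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n = X.shape[0]

co_assoc = np.zeros((n, n))
n_runs = 20
for seed in range(n_runs):
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= n_runs                      # fraction of runs agreeing on each pair

distance = 1.0 - co_assoc               # agreement -> distance
np.fill_diagonal(distance, 0.0)
consensus = fcluster(linkage(squareform(distance), method="average"), t=3, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```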
7691 On-line Speech Enhancement by Time-Frequency Masking under Prior Knowledge of Source Location
Authors: Min Ah Kang, Sangbae Jeong, Minsoo Hahn
Abstract:
This paper presents a source extraction system which can extract only target signals, with constraints on source localization, in on-line systems. The proposed system is one of several methods for enhancing a target signal and suppressing other interference signals, but its performance is superior to other methods and the extraction of the target source is comparatively complete. The method has a beamforming concept and uses an improved time-frequency (TF) mask-based BSS algorithm to separate a target signal from multiple noise sources. The target sources are assumed to be in front, and the test data were recorded in a reverberant room. The experimental results of the proposed method were evaluated by the PESQ scores of real-recorded sentences and showed a noticeable speech enhancement.
Keywords: Beam forming, Non-stationary noise reduction, Source separation, TF mask.
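The full localization-constrained BSS algorithm is beyond a snippet; as a simplified illustration of time-frequency masking itself, the sketch below builds an "ideal" binary mask from STFTs of a target and an interferer and applies it to the mixture. The signals are synthetic placeholders.

```python
# Simplified time-frequency masking sketch (ideal binary mask), not the paper's
# localization-constrained BSS algorithm. Signals are synthetic placeholders.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
target = np.sin(2 * np.pi * 440 * t)                 # stand-in "speech" source
interferer = 0.8 * np.sin(2 * np.pi * 2500 * t)      # stand-in noise source
mixture = target + interferer

_, _, S_target = stft(target, fs=fs, nperseg=512)
_, _, S_interf = stft(interferer, fs=fs, nperseg=512)
_, _, S_mix = stft(mixture, fs=fs, nperseg=512)

# Keep a TF cell only where the target dominates the interferer.
mask = (np.abs(S_target) > np.abs(S_interf)).astype(float)
_, enhanced = istft(mask * S_mix, fs=fs, nperseg=512)

n = min(len(enhanced), len(target))
err = enhanced[:n] - target[:n]
snr_before = 10 * np.log10(np.sum(target**2) / np.sum(interferer**2))
snr_after = 10 * np.log10(np.sum(target[:n]**2) / np.sum(err**2))
print(f"input SNR {snr_before:.1f} dB -> masked output SNR {snr_after:.1f} dB")
```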
7690 Data Extraction of XML Files using Searching and Indexing Techniques
Authors: Sushma Satpute, Vaishali Katkar, Nilesh Sahare
Abstract:
XML files contain data in a well-formatted manner, and studying the format and semantics of the grammar helps in fast retrieval of the data. There are many algorithms that describe searching for data in XML files. A number of approaches either use data structures or are related to the contents of the document. In the former case, the user must know the structure of the document, while information retrieval techniques using NLP relate to the content of the document; hence the results may be irrelevant or unsuccessful, and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, the user is not constrained to a fixed query format.
Keywords: XML Retrieval, Indexed Search, Information Retrieval.
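The RXML indexing scheme itself is not reproduced here; as a generic illustration of tag-aware indexing, the sketch below builds an inverted index over element paths and text tokens using Python's standard library and answers a simple query.

```python
# Generic sketch of indexing an XML file by tag path and text tokens
# (illustrative only; not the paper's RXML index). Uses only the standard library.
import xml.etree.ElementTree as ET
from collections import defaultdict

xml_doc = """
<library>
  <book id="1"><title>Data Mining Basics</title><author>Lee</author></book>
  <book id="2"><title>XML Retrieval Methods</title><author>Rao</author></book>
</library>
"""

index = defaultdict(set)          # token -> set of element paths containing it

def build_index(element, path=""):
    here = f"{path}/{element.tag}"
    if element.text and element.text.strip():
        for token in element.text.lower().split():
            index[token].add(here)
    for child in element:
        build_index(child, here)

root = ET.fromstring(xml_doc)
build_index(root)

# Query: paths of elements whose text mentions "retrieval".
print(sorted(index["retrieval"]))
```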
7689 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule
Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
The instance selection (IS) technique is used to reduce the data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets within the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
Keywords: Instance selection, data reduction, MapReduce, kNN.
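FCNN-MR itself runs inside MapReduce; as a much-simplified illustration of the underlying condensation idea, the sketch below implements the classic sequential condensed-nearest-neighbor selection on toy data (not the FCNN acceleration or the parallel version).

```python
# Much-simplified sequential condensed nearest neighbor (CNN) sketch, illustrating the
# condensation idea that FCNN accelerates; not the parallel FCNN-MR algorithm itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Start with one instance per class, then keep absorbing misclassified instances
# until a full pass over the training set adds nothing new.
selected = [int(np.where(y == c)[0][0]) for c in np.unique(y)]
changed = True
while changed:
    changed = False
    knn = KNeighborsClassifier(n_neighbors=1).fit(X[selected], y[selected])
    for i in range(len(X)):
        if i not in selected and knn.predict(X[i : i + 1])[0] != y[i]:
            selected.append(i)
            knn = KNeighborsClassifier(n_neighbors=1).fit(X[selected], y[selected])
            changed = True

reduction = 1 - len(selected) / len(X)
print(f"kept {len(selected)} of {len(X)} instances (reduction rate {reduction:.0%})")
```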
7688 Verification of the Simultaneous Local Extraction Method of Base and Thermal Resistance of Bipolar Transistors
Authors: Robert Setekera, Luuk Tiemeijer, Ramses van der Toorn
Abstract:
In this paper an extensive verification of the extraction method (published earlier) that consistently accounts for self-heating and the Early effect to accurately extract both the base and thermal resistance of bipolar junction transistors is presented. The verification is demonstrated on advanced RF SiGe HBTs, where the extracted results for the thermal resistance are compared with those from another published method that ignores the influence of the Early effect on the internal base-emitter voltage, and the extracted results for the base resistance are compared with those determined from noise measurements. A self-consistency check of the extracted base resistance and thermal resistance, using compact model simulation results, is also carried out in order to study the level of accuracy of the method.
Keywords: Avalanche, Base resistance, Bipolar transistor, Compact modeling, Early voltage, Thermal resistance, Self-heating, parameter extraction.
7687 Video Shot Detection and Key Frame Extraction Using Faber Shauder DWT and SVD
Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi
Abstract:
Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by the dominant blocks; these dominant blocks are located on the contours and their nearby textures. When the video frames change noticeably, their dominant blocks change, and a key frame can then be extracted. The dominant blocks of every frame are computed, and then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced and key frames of the video are extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
Keywords: Key Frame Extraction, Shot detection, FSDWT, Singular Value Decomposition.
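As a small illustration of the rank-tracking step, the sketch below computes the numerical rank of sliding windows of a feature matrix from its singular values; the frame feature vectors are random placeholders, not dominant-block features.

```python
# Sketch of tracking sliding-window ranks of a frame-feature matrix with SVD.
# Feature vectors are random placeholders, not the dominant-block features of the paper.
import numpy as np

rng = np.random.default_rng(1)
n_frames, dim = 60, 32
features = np.repeat(rng.normal(size=(3, dim)), 20, axis=0)     # three near-static shots
features += 0.01 * rng.normal(size=(n_frames, dim))             # small per-frame noise

def window_rank(matrix, tol=1e-1):
    """Numerical rank: number of singular values above a relative tolerance."""
    s = np.linalg.svd(matrix, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

window = 8
ranks = [window_rank(features[i : i + window]) for i in range(n_frames - window + 1)]
print(ranks)   # rank rises where a window straddles a shot change - candidate key frames
```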
7686 A New Evolutionary Algorithm for Cluster Analysis
Authors: B.Bahmani Firouzi, T. Niknam, M. Nayeripour
Abstract:
Clustering is a very well known technique in data mining. One of the most widely used clustering techniques is the k-means algorithm. Solutions obtained from this technique depend on the initialization of cluster centers, and the final solution converges to local minima. In order to overcome the shortcomings of the K-means algorithm, this paper proposes a hybrid evolutionary algorithm based on the combination of PSO, SA and K-means, called PSO-SA-K, which can find better cluster partitions. The performance is evaluated through several benchmark data sets. The simulation results show that the proposed algorithm outperforms previous approaches, such as PSO, SA and K-means, for the partitional clustering problem.
Keywords: Data clustering, Hybrid evolutionary optimization algorithm, K-means algorithm, Simulated Annealing (SA), Particle Swarm Optimization (PSO).
7685 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines
Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma
Abstract:
Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, for avoiding possible accidents in rural and urban areas. This analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines and multinomial goodness. The entire datasets were imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn avoid unnecessary memory lapses. Since the data are expected to grow continuously over a period of time, this work primarily proposes a new framework model which can be trained on, and adapt itself to, new data and make accurate predictions. This work also throws some light on the use of SVM methodology for text classifiers built from the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
Keywords: Road accident, machine learning, support vector machines.
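As a minimal sketch of the SVM step, the snippet below trains a support vector classifier on a synthetic stand-in for accident records (the UK dataset itself is not reproduced here) and reports hold-out accuracy.

```python
# Minimal SVM sketch on synthetic stand-in features (e.g. encoded weather, light,
# road-type attributes); the actual UK traffic dataset is not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                                     # placeholder accident attributes
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # severity proxy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("hold-out accuracy:", round(model.score(X_test, y_test), 3))
```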
7684 The Utility of Wavelet Transform in Surface Electromyography Feature Extraction - A Comparative Study of Different Mother Wavelets
Authors: Farzaneh Akhavan Mahdavi, Siti Anom Ahmad, Mohd Hamiruce Marhaban, Mohammad-R. Akbarzadeh-T
Abstract:
Electromyography (EMG) signal processing has been investigated extensively for various applications, such as rehabilitation systems. In particular, the wavelet transform has served as a powerful technique for scrutinizing EMG signals, since it is consistent with the nature of EMG as a non-stationary signal. In this paper, the efficiency of the wavelet transform in surface EMG feature extraction is investigated over four levels of wavelet decomposition, and a comparative study between different mother wavelets has been done. To identify the best function and level of wavelet analysis, two evaluation criteria, the scatter plot and the RES index, are used. Four wavelet families, namely Daubechies, Coiflets, Symlets and Biorthogonal, are studied in the wavelet decomposition stage. The results show that only features from the first and second levels of wavelet decomposition yield good performance, and that some functions of the various wavelet families can improve the class separability of different hand movements.
Keywords: Electromyography signal, feature extraction, wavelet transform, mean absolute value.
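As a brief sketch of this kind of feature extraction, the snippet below decomposes a synthetic EMG-like signal with PyWavelets and takes the mean absolute value of the detail coefficients at each of four levels, for a few of the wavelet families compared in the paper; the signal and feature choices are placeholders.

```python
# Sketch of wavelet-based feature extraction: mean absolute value (MAV) of the detail
# coefficients at four decomposition levels. The signal is a synthetic EMG-like placeholder.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
emg = rng.normal(size=t.size) * (0.5 + 0.5 * np.sin(2 * np.pi * 2 * t))  # bursty noise stand-in

for wavelet in ("db4", "coif3", "sym5", "bior3.3"):
    coeffs = pywt.wavedec(emg, wavelet, level=4)   # [cA4, cD4, cD3, cD2, cD1]
    details = coeffs[1:][::-1]                     # reorder as detail levels 1..4
    mav = [float(np.mean(np.abs(d))) for d in details]
    print(wavelet, [round(v, 3) for v in mav])     # one MAV feature per decomposition level
```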
7683 Appraisal of Methods for Identifying, Mapping, and Modelling of Fluvial Erosion in a Mining Environment
Authors: F. F. Howard, I. Yakubu, C. B. Boye, J. S. Y. Kuma
Abstract:
Natural and human activities, such as mining operations, expose the natural soil to adverse environmental conditions, leading to contamination of soil, groundwater and surface water, which has negative effects on humans, flora and fauna. Bare or partly exposed soil is most liable to fluvial erosion. This paper enumerates various methods used to identify, map and model fluvial erosion in a mining environment; classical, Artificial Intelligence (AI) and GIS methods have been reviewed. One of the many classical methods used to estimate fluvial erosion is the Revised Universal Soil Loss Equation (RUSLE) model. The RUSLE model is easy to use, but its reliance on empirical relationships that may not always be applicable to specific circumstances or locations is a flaw. Other classical models for estimating fluvial erosion are the Soil and Water Assessment Tool (SWAT) and the Universal Soil Loss Equation (USLE). These models offer a more complete understanding of the underlying physical processes and encompass a wider range of situations; although more difficult to use, their correctness depends on the availability and reliability of input data. AI can help deal with multivariate and complex problems and predict soil loss with higher accuracy than traditional methods, and it can also be used to build custom models for identifying degraded areas. AI techniques have therefore become popular as an alternative predictor for degraded environments. Hence, this research proposes a hybrid of classical, AI and GIS methods for efficient and effective modelling of fluvial erosion.
Keywords: Fluvial erosion, classical methods, Artificial Intelligence, Geographic Information System.
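For reference, the RUSLE estimate mentioned above is the product of five factors, A = R·K·LS·C·P; the sketch below shows the calculation with placeholder factor values (real values come from rainfall records, soil surveys, terrain analysis and cover maps).

```python
# RUSLE soil-loss estimate A = R * K * LS * C * P. Factor values below are placeholders;
# real values come from rainfall records, soil surveys, terrain analysis and cover maps.

def rusle(R, K, LS, C, P):
    """Average annual soil loss (e.g. t/ha/yr) as the product of the five RUSLE factors."""
    return R * K * LS * C * P

A = rusle(
    R=550.0,   # rainfall-runoff erosivity
    K=0.030,   # soil erodibility
    LS=1.8,    # slope length and steepness
    C=0.45,    # cover-management (high for bare, mine-disturbed soil)
    P=1.0,     # support practice (none)
)
print(f"estimated soil loss: {A:.1f} t/ha/yr")
```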
7682 GC and GCxGC-MS Composition of Volatile Compounds from Carum carvi by Using Techniques Assisted by Microwaves
Authors: F. Benkaci-Ali, R. Mékaoui, G. Scholl, G. Eppe
Abstract:
A new method, accelerated steam distillation assisted by microwave (ASDAM), is a combination of microwave heating and steam distillation performed at atmospheric pressure with a very short extraction time; isolation and concentration of the volatile compounds are performed in a single stage. ASDAM has been compared with ASDAM applied to cryoground seeds (ASDAM-CG) and with conventional techniques, hydrodistillation assisted by microwave (HDAM) and hydrodistillation (HD), for the extraction of essential oil from aromatic herbs such as caraway and cumin seeds. The essential oils extracted by ASDAM for 1 min were quantitatively (yield) and qualitatively (aromatic profile) not similar to those obtained by ASDAM-CG (1 min) and HD (3 h). The accelerated microwave extraction with cryogrinding inhibits numerous enzymatic reactions, such as hydrolysis of oils. Microwave radiation constitutes an adequate means for extraction operations, from the point of view of yield and high content of the major components, and allows considerable reductions in energy consumption and, especially, heating time, which is one of the essential parameters in artifact formation. ASDAM and ASDAM-CG are green techniques that yield an essential oil with higher amounts of the more valuable oxygenated compounds, comparable to the biosynthesized compounds, and allow substantial cost savings in terms of time, energy and plant material.
Keywords: Microwave, steam distillation, caraway, cumin, cryogrinding, GC-MS, GCxGC-MS.