Search results for: context-based fuzzy clustering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1254

564 TheAnalyzer: A Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human-Computer Interaction

Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, businesses must understand user behavior, preferences, and needs on these platforms. This paper recommends that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales.
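
A minimal sketch of this kind of user grouping, assuming five hypothetical analytics (the paper does not enumerate its exact features here), with standardization before K-means as the keywords suggest:

```python
# Hypothetical five user analytics; TheAnalyzer's real features are
# an assumption here, not taken from the paper.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.gamma(2.0, 2.0, size=(500, 5))  # 500 users x 5 analytics

X = StandardScaler().fit_transform(profiles)   # data standardization
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Domain experts could then attach one business-rule set per group.
rules = {g: f"rule-set-{g}" for g in np.unique(groups)}
print(rules[groups[0]])  # rule set applied to the first user
```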

Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling

Procedia PDF Downloads 75
563 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, improving the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that integrates RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, categorizes and analyzes electrical components based on their failure recency, frequency, and monetary impact. K-means clustering segments electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM yields a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks toward that goal. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
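
As a hedged illustration of the RFM-plus-clustering stage only (the LSTM stage is omitted, and the failure log below is invented for the sketch):

```python
# RFM-style scoring of components from a made-up failure log,
# followed by K-means segmentation.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

log = pd.DataFrame({
    "component": ["T1", "T1", "T2", "T3", "T3", "T3"],
    "days_since_failure": [12, 40, 200, 5, 9, 30],  # recency proxy
    "repair_cost": [1200, 800, 300, 2500, 1900, 2100],
})

rfm = log.groupby("component").agg(
    recency=("days_since_failure", "min"),   # most recent failure
    frequency=("component", "size"),         # number of failures
    monetary=("repair_cost", "sum"),         # total cost impact
)

X = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(rfm)
```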

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 58
562 Modified Fuzzy Delphi Method to Incorporate Healthcare Stakeholders’ Perspectives in Selecting Quality Improvement Projects’ Criteria

Authors: Alia Aldarmaki, Ahmad Elshennawy

Abstract:

There is a global shift in healthcare systems toward engaging different stakeholders in selecting quality improvement initiatives and incorporating their preferences to improve healthcare efficiency and outcomes. Although experts bring scientific knowledge based on the scientific model and their personal experience, other stakeholders can bring new insights and information into the decision-making process. This study explores the impact of incorporating different stakeholders' preferences when identifying the most significant criteria for selecting improvement projects in healthcare. A framework based on a modified Fuzzy Delphi Method (FDM) was built. In addition to subject matter experts, groups of doctors/physicians, nurses, administrators, and managers contribute to the selection process. The research identifies potential criteria for evaluating projects in healthcare, then utilizes FDM to capture expert knowledge. The first round of FDM validates the identified list of criteria with experts, including collecting additional criteria that the literature might have overlooked. When an acceptable level of consensus has been reached, a second round is conducted to obtain experts' and other related stakeholders' opinions on the appropriate weight of each criterion's importance, using linguistic variables. FDM analysis then eliminates or retains criteria to produce a final list of the critical criteria for selecting improvement projects in healthcare. Finally, reliability and validity were investigated using Cronbach's alpha and factor analysis, respectively. Two case studies were carried out in a public hospital in the United Arab Emirates to test the framework. Both cases demonstrate that even though there were common criteria between the experts and the other stakeholders, stakeholders' perceptions bring additional critical criteria into the evaluation process, which can affect the outcomes. Experts selected criteria related to strategic and managerial aspects, while the other participants preferred criteria related to social aspects such as health and safety and patient satisfaction. The health and safety criterion had the highest importance weight in both cases. The analysis showed that the Cronbach's alpha value is 0.977 and all criteria have factor loadings greater than 0.3. In conclusion, the inclusion of stakeholders' perspectives is intended to enhance stakeholder engagement, improve transparency throughout the decision process, and support robust decisions.
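
A minimal Fuzzy Delphi sketch under an assumed three-level linguistic scale (not the study's actual scale or threshold): each rating maps to a triangular fuzzy number, the ratings are aggregated, and a criterion is retained only if the defuzzified score clears the threshold.

```python
from math import prod

SCALE = {"low": (0.0, 0.25, 0.5), "medium": (0.25, 0.5, 0.75),
         "high": (0.5, 0.75, 1.0)}  # assumed triangular fuzzy numbers

def fdm_screen(ratings, threshold=0.5):
    tfns = [SCALE[r] for r in ratings]
    l = min(t[0] for t in tfns)                      # pessimistic bound
    m = prod(t[1] for t in tfns) ** (1 / len(tfns))  # geometric mean
    u = max(t[2] for t in tfns)                      # optimistic bound
    score = (l + m + u) / 3          # centroid defuzzification
    return score, score >= threshold  # retain or eliminate

print(fdm_screen(["high", "medium", "high", "low"]))
```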

Keywords: Fuzzy Delphi Method, fuzzy number, healthcare, stakeholders

Procedia PDF Downloads 129
561 The Use of Geographic Information System for Selecting Landfill Sites in Osogbo

Authors: Nureni Amoo, Sunday Aroge, Oluranti Akintola, Hakeem Olujide, Ibrahim Alabi

Abstract:

This study investigated the optimum landfill site in Osogbo in order to identify a suitable solid waste dumpsite for proper waste management in the capital city. Despite an increase in alternative techniques for disposing of waste, landfilling remains the primary means of waste disposal, and changes in attitudes in many parts of the world have been supported by changes in laws and policies regarding the environment and waste disposal. Selecting the most suitable landfill site can help avoid adverse ecological and socio-economic effects. Industrial and economic development, along with population growth in Osogbo town, generates a tremendous amount of solid waste within the region. Factors such as the scarcity of land, the lifespan of the landfill, and environmental considerations warrant that scientific and fundamental studies be carried out to determine the suitability of a landfill site. The analysis of spatial data and the consideration of regulations and accepted criteria are important elements in site selection. This paper presents a multi-criteria decision-making method using a geographic information system (GIS) integrated with the fuzzy logic multi-criteria decision-making (FMCDM) technique for landfill site suitability evaluation. Using the fuzzy logic method (classification of suitable areas on a 0 to 1 scale), information layers for drainage, soil, land use/land cover, slope, and geology were superposed in the study. Based on the results obtained in this study, five (5) potential sites suitable for the construction of a landfill are proposed, two of which belong to the most suitable zone, while the existing waste disposal site falls in the unsuitable zone.
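
A toy fuzzy-overlay sketch of the superposition step, assuming each criterion layer has already been rescaled to a 0-1 suitability grade (the study's membership functions and weights are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
layers = {  # one 0-1 suitability grid per criterion (4x4 toy rasters)
    "drainage": rng.random((4, 4)),
    "soil": rng.random((4, 4)),
    "landcover": rng.random((4, 4)),
    "slope": rng.random((4, 4)),
    "geology": rng.random((4, 4)),
}
weights = {name: 0.2 for name in layers}  # illustrative equal weights

suitability = sum(w * layers[name] for name, w in weights.items())
print(np.round(suitability, 2))  # cells near 1 are candidate sites
```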

Keywords: fuzzy logic multi-criteria decision making, geographic information system, landfill, suitable site, waste disposal

Procedia PDF Downloads 144
560 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups based on similarity in the video. Applying K-means clustering to all frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then apply K-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and suppresses the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. This vector of filter responses is the input to our K-means algorithm, whose output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then compute a cluster score indicating how near clusters are to each other and plot a signal of frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We mark the breakpoints at which the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly select a 16-frame clip and extract spatiotemporal features using a pre-trained convolutional 3D network (C3D). The final C3D output is a 512-dimensional feature vector, so we use principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model and detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
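
A sketch of the filter-response clustering step on toy frames: per-pixel Gaussian and Laplacian-of-Gaussian responses feed K-means, and each pixel is mapped to its nearest cluster centre to form the visual map (the filter sigma and cluster count are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.random((10, 32, 32))  # 10 toy grayscale frames

features = np.stack([
    np.column_stack([
        gaussian_filter(f, sigma=2).ravel(),   # low-pass response
        gaussian_laplace(f, sigma=2).ravel(),  # edge/blob response
    ])
    for f in frames
]).reshape(-1, 2)                  # one 2-D response vector per pixel

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
visual_map = km.labels_.reshape(10, 32, 32)  # pixel -> cluster id
print(visual_map[0, :3, :3])
```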

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 79
559 Spatiotemporal Propagation and Pattern of Epileptic Spike Predict Seizure Onset Zone

Authors: Mostafa Mohammadpour, Christoph Kapeller, Christy Li, Josef Scharinger, Christoph Guger

Abstract:

Interictal spikes provide valuable information on electrocorticography (ECoG), which aids in surgical planning for patients who suffer from refractory epilepsy. However, the shape and temporal dynamics of these spikes remain unclear. The purpose of this work was to analyze the shape of interictal spikes and measure their distance to the seizure onset zone (SOZ) for use in epilepsy surgery. Data from thirteen patients on the iEEG portal were retrospectively studied. For analysis, half an hour of ECoG data was used from each patient, truncated before the onset of a seizure. Spikes were first detected and grouped into sequences, then clustered into interictal epileptiform discharge (IED) and non-IED groups using two-step clustering. The distances of the spikes from the IED and non-IED groups to the SOZ were quantified and compared using the Wilcoxon rank-sum test. Spikes in the IED group tended to be in or close to the SOZ, while spikes in the non-IED group were distant from the SOZ or in non-SOZ areas. At the group level, the distributions of sharp wave, positive baseline shift, slow wave, and slow-wave-to-sharp-wave ratio differed significantly between the IED and non-IED groups. The IED cluster lay at 10.00 mm from the SOZ, significantly closer than the 17.65 mm for non-IEDs. These findings provide insights into the shape and spatiotemporal dynamics of spikes that could influence the network mechanisms underlying refractory epilepsy.
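
A short sketch of the group comparison with the Wilcoxon rank-sum test, on synthetic spike-to-SOZ distances centred on the reported group values (the patients' actual data are not reproduced):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
ied_dist = rng.normal(10.0, 3.0, 100)       # IED spikes, mm to SOZ
non_ied_dist = rng.normal(17.65, 5.0, 100)  # non-IED spikes, mm to SOZ

stat, p = ranksums(ied_dist, non_ied_dist)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.2e}")
```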

Keywords: spike propagation, spike pattern, clustering, SOZ

Procedia PDF Downloads 70
558 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Bioinformatics is an active research area that combines biology and computer science. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. Computation of the LCSS plays a vital role in biomedicine and is an essential task in DNA sequence analysis in genetics, including a wide range of disease-diagnosis steps. The objective of the proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self-Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated for each separated k-mer. These values are fed as input to the Self-Organizing Map for clustering. The obtained clusters are then given to the Longest Common Subsequence (LCSS) algorithm to find the common subsequences present in every cluster. It returns n×(n−1)/2 subsequences for each cluster, where n is the number of k-mers in a specific cluster. Experimental outcomes of the proposed system produce the possible longest common subsequences of normal and disease-affected DNA data. The proposed system is thus a useful initial aid for finding disease-causing sequences. Finally, performance analysis is carried out for different DNA sequences; the obtained values show that LCSS retrieval is done in a shorter time than in the existing system.
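
A sketch of the k-mer feature step, assuming non-overlapping k-mers and a simple numeric base encoding (neither is specified in the abstract): the sequence is split into k-mers, and the mean and median per k-mer become the SOM input vector.

```python
import numpy as np

CODE = {"A": 1, "C": 2, "G": 3, "T": 4}  # assumed base encoding

def kmer_features(seq, k=8):
    # Non-overlapping k-mers (an assumption about the separation rule).
    mers = [seq[i:i + k] for i in range(0, len(seq) - k + 1, k)]
    feats = []
    for mer in mers:
        vals = np.array([CODE[b] for b in mer])
        feats.append((vals.mean(), np.median(vals)))  # SOM inputs
    return np.array(feats)  # one (mean, median) row per k-mer

print(kmer_features("ATGCGTACCTGAATCG"))
```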

Keywords: clustering, k-mers, longest common subsequence, SOM

Procedia PDF Downloads 267
557 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. The adaptation of traditional energy and chemical technologies to the effects of climate change in sensitive environments also represents a critical concern for safety and risk management. In this regard, the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition, considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Pipeline operators and regulators have therefore been performing risk assessments in attempts to accurately evaluate failure probabilities of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. This paper therefore introduces a well-trained deep learning algorithm for risk assessment capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can serve in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which determines the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
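
A minimal GMDH-flavoured sketch, assuming a single selection layer (the paper's full configuration search is not reproduced): quadratic two-input Ivakhnenko polynomials are fitted for every variable pair, and the pair with the lowest validation error is kept.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # 4 toy risk factors
y = 2 * X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(0, 0.05, 200)
Xt, Xv, yt, yv = X[:150], X[150:], y[:150], y[150:]

def design(a, b):  # Ivakhnenko polynomial terms for one input pair
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

def val_mse(i, j):  # fit on the train split, score on validation
    coef = np.linalg.lstsq(design(Xt[:, i], Xt[:, j]), yt, rcond=None)[0]
    return np.mean((yv - design(Xv[:, i], Xv[:, j]) @ coef) ** 2)

best = min(itertools.combinations(range(4), 2), key=lambda ij: val_mse(*ij))
print("selected input pair:", best)  # expected (0, 1)
```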

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
556 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data of colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the framework: to evaluate cancer cell mortality under treatment, and to minimize side effects on normal cells, namely toxicity-induced tumorigenesis and metabolic perturbations. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. Nested hybrid differential evolution was applied to solve the trilevel MDM problem using two nutrient media to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco's Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway, and the sphingolipid pathway. However, using Ham's medium, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions for DMEM and Ham's medium revealed that no cholesterol uptake reaction was included in DMEM. Two additional media, i.e., DMEM with a cholesterol uptake reaction included and Ham's medium with it excluded, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis proved identifiable if no cholesterol uptake reaction was active in the culture medium, but became unidentifiable if such a reaction was active.

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 81
555 A Sustainable Supplier Selection and Order Allocation Based on Manufacturing Processes and Product Tolerances: A Multi-Criteria Decision Making and Multi-Objective Optimization Approach

Authors: Ravi Patel, Krishna K. Krishnan

Abstract:

In global supply chains, appropriate and sustainable suppliers play a vital role in supply chain development and feasibility. In a large organization with a huge number of suppliers, it is necessary to segment suppliers based on their past history of quality and delivery for each product category. Since the performance of any organization depends heavily on its suppliers, well-evaluated selection criteria and decision-making models lead to improved supplier assessment and development. In this paper, the SCOR® performance evaluation approach and ISO standards are used to determine selection criteria for better supplier assessment, using a hybrid model of the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS). AHP determines the global weights of the criteria, which FTOPSIS then uses to score suppliers via triangular fuzzy set theory. Both qualitative and quantitative criteria are taken into consideration in the proposed model. In addition, a multi-product, multi-time-period model is used for order allocation. The optimization model integrates multi-objective integer linear programming (MOILP) for order allocation with the hybrid approach for supplier selection. The proposed MOILP model optimizes order allocation based on manufacturing processes and product tolerances, per the manufacturer's requirements for a quality product. The integrated model and solution approach are tested on different scenarios. The detailed analysis shows the superiority of the proposed model over solutions that consider individual decision-making models.
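
A hedged FTOPSIS sketch on a toy two-supplier, two-criterion matrix of triangular fuzzy ratings; the weights are assumed to come from an upstream AHP stage, and all numbers are illustrative:

```python
import numpy as np

R = np.array([  # suppliers x criteria x (l, m, u), already normalized
    [[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]],
    [[0.3, 0.5, 0.7], [0.7, 0.9, 1.0]],
])
w = np.array([0.6, 0.4])             # AHP global criterion weights
V = R * w[None, :, None]             # weighted fuzzy decision matrix

fpis, fnis = V.max(axis=0), V.min(axis=0)  # fuzzy ideal / anti-ideal

def dist(a, b):  # vertex distance between triangular fuzzy numbers
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_plus = dist(V, fpis[None]).sum(axis=1)
d_minus = dist(V, fnis[None]).sum(axis=1)
print("closeness coefficients:", d_minus / (d_plus + d_minus))
```

Suppliers would then be ranked by closeness coefficient before the MOILP order-allocation stage.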

Keywords: AHP, fuzzy set theory, multi-criteria decision making, multi-objective integer linear programming, TOPSIS

Procedia PDF Downloads 172
554 Wind Velocity Climate Zonation Based on Observation Data in Indonesia Using Cluster and Principal Component Analysis

Authors: I Dewa Gede Arya Putra

Abstract:

Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of uncorrelated components. This can support the clustering of wind speed characteristics in Indonesia. This study uses 30 years of daily wind speed observations from the site meteorological station network. Multicollinearity tests were performed on all of these data before clustering with PCA. The results show that the four leading components explain more than 80% of the total variance and were used for clustering. Clustering with Ward's method yielded three cluster types. Cluster 1 covers the central part of Sumatra Island, northern Kalimantan, northern Sulawesi, and northern Maluku, with a wind speed climatology that has no annual cycle and weak speeds throughout the year, ranging from 0 to 1.5 m/s. Cluster 2 covers the northern part of Sumatra Island, South Sulawesi, Bali, and northern Papua, with a wind speed climatology that has annual cycle variations and low speeds ranging from 1 to 3 m/s. Cluster 3 covers the eastern part of Java Island, the Southeast Nusa Islands, and the southern Maluku Islands, with a wind speed climatology that has annual cycle variations and high speeds ranging from 1 to 4.5 m/s.
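
A compact sketch of the zonation pipeline on stand-in data: standardize the station series, project onto four principal components, and cut a Ward-linkage tree into three clusters:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
stations = rng.random((40, 365))  # 40 toy stations x daily wind means

X = StandardScaler().fit_transform(stations)
pcs = PCA(n_components=4).fit_transform(X)      # 4 leading components

Z = linkage(pcs, method="ward")
zones = fcluster(Z, t=3, criterion="maxclust")  # 3 climate clusters
print(np.bincount(zones)[1:])                   # stations per zone
```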

Keywords: PCA, cluster, Ward's method, wind speed

Procedia PDF Downloads 197
553 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure

Authors: V. Nagammai

Abstract:

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors on a single chip. Advances in technology increase the complexity of IC manufacturing, which can affect power consumption and increase size and latency. Topology defines the pattern of connections in a network. In this project, the NoC topology is generated using the Atlas tool, which increases performance and makes the determination of constraints effective. Routing is performed with the XY routing algorithm and wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest-path evaluation were performed using an algorithm called floor planning with cluster reconstruction and path allocation algorithm (FCRPA), which used four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. The paper uses IPRCA, which consists of three steps: placement, clustering, and shortest-path evaluation. Placement is performed using min-cut placement, clustering is performed using an algorithm called cluster generation, and the shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that area, power, and wire length improve simultaneously.
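
A plain Dijkstra sketch for the shortest-path step; the small switch graph below is illustrative, not the paper's actual 8x8/3x3 NoC netlist:

```python
import heapq

def dijkstra(adj, src):
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

noc = {"s8x8": [("s3x3a", 2), ("s3x3b", 4)],
       "s3x3a": [("s3x3b", 1), ("s3x3c", 7)],
       "s3x3b": [("s3x3c", 3)],
       "s3x3c": []}
print(dijkstra(noc, "s8x8"))  # path costs from the central switch
```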

Keywords: application-specific NoC, B*-tree representation, floor planning, T-tree representation

Procedia PDF Downloads 394
552 Shear Strength Evaluation of Ultra-High-Performance Concrete Flexural Members Using Adaptive Neuro-Fuzzy System

Authors: Minsu Kim, Hae-Chang Cho, Jae Hoon Chung, Inwook Heo, Kang Su Kim

Abstract:

For the safe design of UHPC flexural members, accurate estimates of their shear strengths are very important. However, since the shear strengths are significantly affected by various factors, such as the tensile strength of concrete, shear span-to-depth ratio, volume ratio of steel fiber, and steel fiber factor, accurate estimation is very challenging. In this study, therefore, the Adaptive Neuro-Fuzzy Inference System (ANFIS), which has been widely used to solve many complex problems in engineering fields, was introduced to estimate the shear strengths of UHPC flexural members. A total of 32 experimental results were collected from previous studies for training the ANFIS algorithm, and the well-trained ANFIS algorithm provided good estimates of the shear strengths of the UHPC test specimens. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1A2B2010277).

Keywords: ultra-high-performance concrete, ANFIS, shear strength, flexural member

Procedia PDF Downloads 189
551 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (normal, attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time compared with using the full feature set of the dataset with the same algorithm.
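
A sketch of the IG-then-cluster pipeline on stand-in traffic features, using mutual information as the information-gain score (the dataset and feature semantics below are invented, not NSL-KDD):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((300, 10))                  # 10 toy traffic features
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)  # toy normal/attack labels

ig = mutual_info_classif(X, y, random_state=0)
top = np.argsort(ig)[::-1][:4]             # keep 4 most informative
print("selected features:", top)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, top])
# Clusters are then matched to {normal, attack} by majority label.
```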

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 296
550 Web Proxy Detection via Bipartite Graphs and One-Mode Projections

Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo

Abstract:

With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked using web proxies for illegal purposes, such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become increasingly challenging due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavioral similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs to discover the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users from the derived one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent web proxy user behavior clusters. Web proxy URLs may vary from time to time, but the users' inherent interests do not. Based on this intuition, using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones.
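
A sketch of the projection-and-cluster steps: a hosts-by-URLs bipartite matrix is projected onto hosts (counting shared URLs) and the resulting similarity matrix is fed to spectral clustering; the matrix values are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

B = np.array([  # rows: hosts, cols: URLs (1 = host visited URL)
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])

S = B @ B.T              # one-mode projection: shared-URL counts
np.fill_diagonal(S, 0)   # drop self-similarity

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(S)
print("host behavior clusters:", labels)
```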

Keywords: bipartite graph, one-mode projection, clustering, web proxy detection

Procedia PDF Downloads 246
549 Short Association Bundle Atlas for Lateralization Studies from dMRI Data

Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara

Abstract:

Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for identifying short association bundles of the superficial white matter (SWM), based on whole-brain inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers belonging to a group of subjects, according to a distance measure between fibers, using non-linear registration (DTI-TK). The algorithm performs automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then applied: the symmetric horizontal reflection of the right bundles was calculated to obtain their position in the left hemisphere, and the intersection between similar bundles was computed. Pairs of bundles with a fiber intersection percentage higher than 50% were considered similar; the similar bundles between both hemispheres were fused and symmetrized, yielding 30 common bundles between hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold of 6-8 mm according to bundle length. Finally, a laterality index was calculated based on bundle volume. Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); there is no tendency of lateralization according to brain region. Many factors can affect the results, such as tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary to establish the influence of these factors and evaluate SWM laterality.

Keywords: dMRI, hierarchical clustering, lateralization index, tractography

Procedia PDF Downloads 331
548 Streamlining the Fuzzy Front-End and Improving the Usability of the Tools Involved

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Researchers have spent decades developing tools and techniques to aid teams in the new product development (NPD) process. Despite this, there is a huge gap between their academic prevalence and their industry adoption. For the fuzzy front-end in particular, there is a wide range of tools to choose from, including the Kano Model, the House of Quality, and many others. In fact, there are so many tools that it can often be difficult for teams to know which ones to use and how they interact with one another. Moreover, while the benefits of using these tools are obvious to industrialists, they are rarely used, as they carry a learning curve that is too steep and they become too complex to manage over time. In essence, it is commonly believed that they are simply not worth the effort required to learn and use them. This research explores a streamlined process for the fuzzy front-end, assembling the most effective tools and making them accessible to everyone. The process was developed iteratively over the course of three years, following over 80 final-year NPD teams from engineering, design, technology, and construction as they carried a product from concept through to production specification. Questionnaires, focus groups, and observations were used to understand the usability issues with the tools involved, and a human-centred design approach was adopted to produce a solution to these issues. The solution takes the form of a physical toolkit, similar to a board game, which allows a team to play through an example of a new product development in order to understand the process and the tools before using it for their own product development efforts. A complementary website enhances the physical toolkit, providing more examples of the tools in use as well as deeper discussions of each topic, allowing teams to adapt the process to their skills, preferences, and product type. Teams found the solution very useful and intuitive and experienced significantly less confusion and fewer mistakes with the process than teams who did not use it. Those with a design background found it especially useful for engineering principles like Quality Function Deployment, while those with an engineering or technology background found it especially useful for design and customer requirements acquisition principles, like Voice of the Customer. Products developed using the toolkit are added to the website as further examples of how it can be used, creating a loop that helps future teams understand how the toolkit can be adapted to their project, whether it be a small consumer product or a large B2B service. The toolkit unlocks the potential of these beneficial tools for those in industry, both for large, experienced teams and for inexperienced start-ups. It allows users to assess the market potential of their product concept faster and more effectively, arriving at the product design stage with technical requirements prioritized according to their customers' needs and wants.

Keywords: new product development, fuzzy front-end, usability, Kano model, quality function deployment, voice of customer

Procedia PDF Downloads 108
547 An Adaptive Neuro-Fuzzy Inference System (ANFIS) Modelling of Bleeding

Authors: Seyed Abbas Tabatabaei, Fereydoon Moghadas Nejad, Mohammad Saed

Abstract:

The bleeding prediction of asphalt is one of the most complex subjects in pavement engineering. In this paper, an Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling the effect of important parameters on bleeding is trained and tested with experimental results. The bleeding index, based on the asphalt film thickness differential, is the target parameter; the input parameters are asphalt content, temperature at a depth of two centimeters, heavy traffic, dust-to-effective-binder ratio, Marshall strength, and the fractions passing the 3/4, 3/8, and 3/16 sieves and the No. 8, No. 50, No. 100, and No. 200 sieves. We randomly divided the empirical data into training and testing sections, training the ANFIS network with 72 percent of the data; the remaining 28 percent, reserved for testing the appropriateness of the model, were entered into the ANFIS model. Results were compared with the empirical ones using two statistical criteria (R², RMSE). The results show that the proposed ANFIS model is efficient and valid and can be generalized to broader cases.

Keywords: bleeding, asphalt film thickness differential, ANFIS modeling

Procedia PDF Downloads 270
546 Landfill Site Selection Using Multi-Criteria Decision Analysis: A Case Study for Gulshan-e-Iqbal Town, Karachi

Authors: Javeria Arain, Saad Malik

Abstract:

The management of solid waste is a crucial aspect of urban environmental management, especially in a city with an ever-increasing population such as Karachi. The total amount of municipal solid waste generated from Gulshan-e-Iqbal Town averages 444.48 tons per day, and landfill sites are a widely accepted solution for the final disposal of this waste. However, an improperly selected site can have immense environmental, economic, and ecological impacts. To select an appropriate landfill site, a number of factors should be considered to minimize the potential hazards of solid waste. The purpose of this research is to analyse the study area for the construction of an appropriate landfill site for the disposal of municipal solid waste generated from Gulshan-e-Iqbal Town, using geospatial techniques and considering hydrological, geological, social, and geomorphological factors. This was achieved using the analytic hierarchy process (AHP) and fuzzy analysis as decision support tools, integrated with geographic information science (GIS) techniques. Eight critical parameters relevant to the study area were selected. After generating thematic layers for each parameter, overlay analysis was performed in ArcGIS 10.0. The results produced by both methods were then compared. The final suitability map using AHP shows that 19% of the total area is Least Suitable, 6% is Suitable but Avoided, 46% is Moderately Suitable, 26% is Suitable, 2% is Most Suitable, and 1% is Restricted. In comparison, the output map from fuzzy set theory is not crisp; rather, it ranges from 0 to 1, where 0 indicates the least suitable and 1 the most suitable site. Considering the results, the northern part of the city is appropriate for constructing the landfill site, though a final decision on an optimal site should be made after a field survey and consideration of economic and political factors.
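
A short AHP sketch of the weighting step: criterion weights come from the principal eigenvector of a pairwise comparison matrix, with a consistency check (the 3x3 matrix is illustrative, not the study's eight parameters):

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])  # toy pairwise comparisons

vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                       # normalized priority weights

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.58                     # random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```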

Keywords: Analytical Hierarchy Process (AHP), fuzzy set theory, Geographic Information Sciences (GIS), Multi-Criteria Decision Analysis (MCDA)

Procedia PDF Downloads 506
545 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates effective computational tools to explore it, which has led to intense growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing text so that a higher level of understanding can be gained in a shorter time. The two important tasks here are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words of a text, which familiarizes us with its general topic. In text summarization, we are interested in producing a short text that includes the important information of the document. The TextRank algorithm, an unsupervised method extending PageRank (the algorithm underlying the Google search engine's page ranking), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
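
A TextRank-style sketch: sentences are ranked by a damped power iteration over a pairwise similarity graph. TF-IDF cosine similarity stands in here for the semantic similarity the work adds (e.g., embedding-based scores):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sents = ["Keyword extraction finds important words.",
         "Text summarization produces a short text.",
         "Graphs rank sentences by their similarity.",
         "Semantic similarity improves the ranking."]

S = cosine_similarity(TfidfVectorizer().fit_transform(sents))
np.fill_diagonal(S, 0)
S = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic

r = np.full(len(sents), 1 / len(sents))
for _ in range(50):                      # damped PageRank iteration
    r = 0.15 / len(sents) + 0.85 * S.T @ r

print(sents[int(np.argmax(r))])          # top-ranked sentence
```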

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 72
544 The Survey Research and Evaluation of Green Residential Building Based on the Improved Group Analytical Hierarchy Process Method in Yinchuan

Authors: Yun-na Wu, Zhen Wang

Abstract:

Due to the economic downturn and the deterioration of the living environment, the development of residential buildings, as high-energy-consuming buildings, is gradually shifting from "extensive" development to green building in China. The evaluation system for green building is continuously being improved, but current evaluation work has the following problems: (1) there are gaps between actual investment costs and residents' purchasing power, and the construction targets of green residential buildings are single-objective, lacking multi-objective performance development; (2) green building evaluation lacks regional characteristics and cannot reflect the demands of residents in different regions; (3) in the process of determining criteria weights, the experts' judgment matrices often fail to meet the consistency requirement. To address these problems, questionnaires about green residential building in the Ningxia area were distributed; their results reflect residents' purchasing power and acceptance of green building costs. Secondly, combined with the geographical features of Ningxia minority areas, an evaluation criteria system for green residential building was constructed. Finally, using the improved group AHP method and the grey clustering method, the criteria weights were determined and a real case was evaluated, located in Xing Qing district, Ningxia. The conclusion is that the professional evaluation of this project and its good social recognition are basically consistent.

Keywords: evaluation, green residential building, grey clustering method, group AHP

Procedia PDF Downloads 397
543 Active Islanding Detection Method Using Intelligent Controller

Authors: Kuang-Hsiung Tan, Chih-Chan Hu, Chien-Wu Lan, Shih-Sung Lin, Te-Jen Chang

Abstract:

An active islanding detection method using disturbance signal injection with an intelligent controller is proposed in this study. First, a DC/AC power inverter is emulated in the distributed generator (DG) system to implement the tracking control of active power and reactive power outputs and the islanding detection. The proposed active islanding detection method is based on injecting a disturbance signal into the power inverter system through the d-axis current, which leads to a frequency deviation at the terminal of the RLC load when the utility power is disconnected. Moreover, in order to improve the transient and steady-state responses of the active and reactive power outputs of the power inverter, and to further improve the performance of the islanding detection method, two probabilistic fuzzy neural networks (PFNN) are adopted to replace the traditional proportional-integral (PI) controllers for the tracking control and the islanding detection. Furthermore, the network structure and the online learning algorithm of the PFNN are introduced in detail. Finally, the feasibility and effectiveness of the tracking control and the proposed active islanding detection method are verified with experimental results.

Keywords: distributed generators, probabilistic fuzzy neural network, islanding detection, non-detection zone

Procedia PDF Downloads 390
542 Prioritization in a Maintenance, Repair and Overhaul (MRO) System Based on Fuzzy Logic at Iran Khodro (IKCO)

Authors: Izadi Banafsheh, Sedaghat Reza

Abstract:

Maintenance, Repair, and Overhaul (MRO) of machinery is a key recent issue in the automotive industry. What order or priority should be adopted for the MRO of machinery has always been a debated question. This study examines several criteria, including process sensitivity, average time between machine failures, average duration of repair, availability of parts, availability of maintenance personnel, and workload, through a literature review and an expert survey, so as to determine the condition of the machinery. According to these criteria, the machinery was classified into the four modes below: A) need for inspection, B) need for minor repair, C) need for part replacement, and D) need for major repair. Fuzzy AHP was employed to determine the criteria weights, which were then used to rank three groups of machinery (shaving, assembly, and painting machines) in the four modes. The statistical population comprises the elite of the Iranian automotive industry at IKCO, covering operations managers, CEOs, and maintenance professionals who are highly specialized in MRO and thoroughly knowledgeable about how the machinery functions. The information required for this study was collected from both desk research and field review, which led to the construction of a questionnaire handed out to the sample respondents. The AHP weighting results revealed that the availability of maintenance personnel was the top priority, with a coefficient of 0.206, while process sensitivity was the lowest priority, with a coefficient of 0.066. Furthermore, the TOPSIS results for prioritizing the IKCO machinery suggested that in the inspection mode, the assembly machines took the top priority and the painting machines the third. In the minor repair mode, the assembly machines took the top priority and the shaving machines the third. In the part replacement mode, the assembly machines took the top priority and the painting machines the third. Finally, in the major repair mode, the assembly machines took the top priority and the painting machines the third.

Keywords: maintenance, repair, overhaul, MRO, prioritization of machinery, fuzzy logic, AHP, TOPSIS

Procedia PDF Downloads 288
541 Comparison Analysis of Fuzzy Logic Controller Based PV-Pumped Hydro and PV-Battery Storage Systems

Authors: Seada Hussen, Frie Ayalew

Abstract:

Integrating different energy resources, such as solar PV and hydro, is used to ensure reliable power for rural communities like Hara village in Ethiopia. A hybrid power system offers a power supply for rural villages by compensating for the intermittent nature of renewable energy resources, which is otherwise a challenge to electrifying rural communities sustainably with solar resources alone. Major rural villages in Ethiopia suffer from a lack of electrification, which causes people to endure deforestation, travel long distances to fetch water, and lack adequate services such as clinics and schools. The main objective of this project is to provide a balanced, stable, and reliable supply for Hara village, Ethiopia, using solar power with a pumped-hydro energy storage system. The design starts by collecting data from the villages and taking solar irradiance data from NASA; geographical arrangement and location are also taken into consideration. Data analysis, cost estimation, optimal sizing of the system, and comparison of solar with pumped hydro versus solar with battery storage are then done using HOMER software. Since solar power is available only in the daytime while pumped hydro can supply the night and morning, the two sources share the load demand; this requires a controller for multi-switch control and scheduling, and in this project a fuzzy logic controller is used for this purpose. The simulation results show that the solar with pumped-hydro energy storage system achieves better results than the battery storage system, with the comparison considering storage reliability, cost, storage capacity, life span, and efficiency.

Keywords: pumped hydro storage, solar energy, solar PV, battery energy storage, fuzzy logic controller

Procedia PDF Downloads 80
540 Global Low Carbon Transitions in the Power Sector: A Machine Learning Archetypical Clustering Approach

Authors: Abdullah Alotaiq, David Wallom, Malcolm McCulloch

Abstract:

This study presents an archetype-based approach to designing effective strategies for low-carbon transitions in the power sector. To achieve global energy transition goals, a renewable energy transition is critical, and understanding diverse energy landscapes across different countries is essential to design effective renewable energy policies and strategies. Using a clustering approach, this study identifies 12 energy archetypes based on the electricity mix, socio-economic indicators, and renewable energy contribution potential of 187 UN countries. Each archetype is characterized by distinct challenges and opportunities, ranging from high dependence on fossil fuels to low electricity access, low economic growth, and insufficient contribution potential of renewables. Archetype A, for instance, consists of countries with low electricity access, high poverty rates, and limited power infrastructure, while Archetype J comprises developed countries with high electricity demand and installed renewables. The study findings have significant implications for renewable energy policymaking and investment decisions, with policymakers and investors able to use the archetype approach to identify suitable renewable energy policies and measures and assess renewable energy potential and risks. Overall, the archetype approach provides a comprehensive framework for understanding diverse energy landscapes and accelerating decarbonisation of the power sector.

Keywords: fossil fuels, power plants, energy transition, renewable energy, archetypes

Procedia PDF Downloads 53
539 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, which may be irregular or of digit or character shape. Objects and internal objects are difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are readily obtained by identifying the sub-regional objects using the SASK algorithm. The main focus is recognizing the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally hull detection. Detecting the sub-regional hulls can increase machine learning capability in character detection, and the approach can also be extended to hull recognition in irregularly shaped objects, such as intensity-based detection of black holes in space exploration. Layered hulls are those having structured layers inside; they are useful in military services and traffic analysis to identify the number of vehicles or persons. The proposed SASK algorithm helps identify such regions and can support decision processes (e.g., clearing traffic or identifying the number of persons on an opposing side in war).

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 350
538 Genetic Trait Analysis of RIL Barley Genotypes to Sort-out the Top Ranked Elites for Advanced Yield Breeding Across Multi Environments of Tigray, Ethiopia

Authors: Hailekiros Tadesse Tekle, Yemane Tsehaye, Fetien Abay

Abstract:

Barley (Hordeum vulgare L.) is one of the most important cereal crops in the world, grown by poor farmers in Tigray with low yield production. The purpose of this research was to estimate the performance of 166 barley genotypes with respect to quantitative traits, with detailed analysis of variance components, heritability, genetic advance, and genetic usefulness parameters. The ANOVA revealed highly significant variation (p ≤ 0.01) among the genotypes. We found significant differences in coefficient of variation (CV of 15%) for 5 of the 12 quantitative traits. The highest broad-sense heritability (H²) was recorded for seeds per spike (98.8%), followed by thousand seed weight (96.5%), with GAM of 79.16% and 56.25%, respectively. Traits with H² ≥ 60% and GAM ≥ 20% are the least influenced by the environment and are governed by additive genes, supporting direct selection for the improvement of such beneficial traits in the studied genotypes. Hence, the 20 outstanding recombinant inbred line (RIL) barley genotypes simultaneously exhibiting early maturity, high yield, and high 1000-seed weight formed the top-ranked group of the 166 genotypes. These are G5, G25, G33, G118, G36, G123, G28, G34, G14, G10, G3, G13, G11, G32, G8, G39, G23, G30, G37, and G26; they were early in maturity with high TSW and GYP (TSW ≥ 55 g, GYP ≥ 15.22 g/plant, and DTM below 106 days). In general, the 166 genotypes were classified by clustering into high (group 1), medium (group 2), and low (group 3) yield production in terms of yield and yield component traits, complemented by genotype parameter analysis of heritability, genetic advance, and genetic usefulness in this investigation.
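
The heritability and genetic-advance arithmetic can be sketched directly, assuming variance components from a genotype ANOVA (the numbers below are illustrative, not the trial's estimates):

```python
vg, ve = 40.0, 8.0       # genotypic / environmental variance (assumed)
vp = vg + ve             # phenotypic variance
h2 = vg / vp             # broad-sense heritability H^2
k = 2.06                 # selection intensity at 5% selection
mean = 55.0              # trait mean, e.g. TSW in grams (assumed)

ga = k * h2 * vp ** 0.5  # genetic advance under selection
gam = 100 * ga / mean    # genetic advance as % of mean (GAM)
print(f"H2 = {h2:.1%}, GA = {ga:.2f}, GAM = {gam:.1f}%")
```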

Keywords: barley, clustering, genetic advance, heritability, usefulness, variability, yield

Procedia PDF Downloads 90
537 The Development of Leisure and Endowment Characteristic Villages from the Perspective of Balancing Dwellers and Aged Visitors: A Case Study of Villages in Hangzhou Metropolitan Area

Authors: Zijiao Chai, Wangming Li

Abstract:

Against the background of an increasingly aging population, the shortage of urban elderly-care (endowment) resources has gradually become apparent. Many villages in metropolitan areas, with good natural ecological environments and a leisure tourism base, have become major destinations for urban elderly people seeking off-site retirement. This paper is based on a survey of more than ten villages characterized by leisure and endowment in the Hangzhou metropolitan area, China. The satisfaction of the two main groups in the villages, dwellers and aged visitors, is studied using the method of fuzzy comprehensive evaluation. The statistics are obtained from 535 questionnaires and qualitative interviews. According to the satisfaction scores, it can be determined whether the dwellers and aged visitors have reached an equilibrium state. The equilibrium state is the development target of the villages; it is defined by environmental friendliness, suitability for both employment and elderly care, facility sharing, and harmonious coexistence. Furthermore, this paper proposes planning countermeasures to avoid an "imbalance between dwellers and aged visitors" and to achieve sustainable development while maintaining economic benefits.
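
A minimal fuzzy comprehensive evaluation sketch: criterion weights are combined with a membership matrix over satisfaction grades and defuzzified to a single score (weights, memberships, and grade scores below are illustrative, not the questionnaire results):

```python
import numpy as np

w = np.array([0.3, 0.4, 0.3])  # criterion weights (assumed)
R = np.array([                 # membership over 4 grades per criterion
    [0.10, 0.30, 0.40, 0.20],  # grades: poor, fair, good, excellent
    [0.05, 0.25, 0.50, 0.20],
    [0.20, 0.30, 0.30, 0.20],
])

B = w @ R                      # composite membership vector
grades = np.array([25, 50, 75, 100])  # assumed grade scores
print("satisfaction score:", float(B @ grades))
```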

Keywords: aged visitors, balance between dwellers and aged visitors, dwellers, fuzzy comprehensive evaluation, Hangzhou metropolitan area, leisure and endowment characteristic villages

Procedia PDF Downloads 292
536 A High Efficiency Reduced Rules Neuro-Fuzzy Based Maximum Power Point Tracking Controller for Photovoltaic Array Connected to Grid

Authors: Lotfi Farah, Nadir Farah, Zaiem Kamar

Abstract:

This paper presents a maximum power point tracking (MPPT) controller using a high-efficiency reduced-rules neuro-fuzzy inference system (HE2RNF) for a 100 kW stand-alone photovoltaic (PV) system connected to the grid. The suggested HE2RNF-based MPPT seeks the optimal duty cycle for the boost DC-DC converter, making the designed PV system work at the maximum power point (MPP) and then transferring this power to the grid via a three-level voltage source converter (VSC). PV current variation and voltage variation are chosen as the HE2RNF-based MPPT controller inputs. Using these inputs with the duty cycle as the single output, a six-rule ANFIS is generated. The high performance of the proposed HE2RNF is shown numerically in the MATLAB/Simulink environment. The 0.006% steady-state error, 0.006 s tracking time, and 0.088 s starting time prove the robustness of these six reduced rules against the widely used twenty-five.

Keywords: PV, MPPT, ANFIS, HE2RNF-based MPPT controller, VSC, grid connection

Procedia PDF Downloads 185
535 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring: the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for this problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. This paper studies the problem of mixing matrix estimation and proposes an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach not only improves the accuracy of estimation but also applies to any mixing matrix.
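
A hedged sketch of the clustering stage in the 4-source, 2-mixture setting: high-energy observation vectors are normalized to half-plane directions and clustered (DBSCAN is used below as a stand-in; the paper's improved potential function is not reproduced):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
A = np.array([[0.90, 0.60, 0.20, -0.40],
              [0.44, 0.80, 0.98, 0.92]])  # toy 2x4 mixing matrix
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) < 0.05)  # sparse sources
X = A @ S                                 # 2 observed mixtures

obs = X[:, np.abs(X).sum(axis=0) > 0.5]   # keep high-energy samples
obs = obs * np.sign(obs[0])               # fold onto one half-plane
dirs = (obs / np.linalg.norm(obs, axis=0)).T

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(dirs)
A_hat = np.array([dirs[labels == k].mean(axis=0)
                  for k in range(labels.max() + 1)]).T
# Estimated mixing-column directions, up to sign and permutation.
print(np.round(A_hat, 2))
```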

Keywords: DBSCAN, potential function, speech signal, UBSS model

Procedia PDF Downloads 135