Search results for: type-2 fuzzy relational database.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1692

162 Centralized Monitoring and Self-protected against Fiber Fault in FTTH Access Network

Authors: Mohammad Syuhaimi Ab-Rahman, Boonchuan Ng, Kasmiran Jumari

Abstract:

This paper presents a new approach for centralized monitoring of and self-protection against fiber faults in fiber-to-the-home (FTTH) access networks using Smart Access Network Testing, Analyzing and Database (SANTAD). SANTAD is installed with the optical line terminal (OLT) at the central office (CO) for in-service transmission surveillance and fiber fault localization within an FTTH network with a point-to-multipoint (P2MP) configuration, working downstream from the CO towards customer residential locations and built on the graphical user interface (GUI) processing capabilities of MATLAB. SANTAD is able to detect any fiber fault as well as identify the failure location in the network. It displays the status of every connected optical network unit (ONU) line on a single screen, with the capability to configure the attenuation and detect failures simultaneously. The analysis results and information are delivered to the field engineer for prompt action, while the failed line is diverted to a protection line to keep traffic flowing. This approach has a bright prospect of improving the survivability and reliability of FTTH as well as increasing its efficiency and monitoring capabilities.

Keywords: Fiber fault, FTTH, SANTAD, transmission surveillance, MATLAB.

161 Enhanced Clustering Analysis and Visualization Using Kohonen's Self-Organizing Feature Map Networks

Authors: Kasthurirangan Gopalakrishnan, Siddhartha Khaitan, Anshu Manik

Abstract:

Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g., individuals, quadrats, species). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as a classification tool to various problem domains, including speech recognition, image data compression, image or character recognition, robot control and medical diagnosis, their potential as a robust substitute for clustering analysis remains relatively unexplored. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid and provide a powerful tool for data visualization. In this paper, SOM is used to create a toroidal mapping of a two-dimensional lattice to perform cluster analysis on the results of a chemical analysis of wines produced in the same region in Italy but derived from three different cultivars, the "wine recognition data" in the University of California-Irvine database. The results are encouraging, and it is believed that SOM would make an appealing and powerful decision-support tool for clustering tasks and for data visualization.
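
As an illustration of the clustering step described above, the following is a minimal self-organizing map sketch in Python (NumPy only), assuming a feature matrix X such as the 13-attribute wine data; the grid size, learning-rate schedule and epoch count are illustrative assumptions, and the toroidal (wrap-around) topology used in the paper is not implemented here.

import numpy as np

def train_som(X, rows=10, cols=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights = rng.normal(size=(rows, cols, d))            # codebook vectors
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                    # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)              # shrinking neighborhood
        for x in X[rng.permutation(n)]:
            dist = np.linalg.norm(weights - x, axis=2)    # distance to every map unit
            bmu = np.unravel_index(dist.argmin(), dist.shape)  # best-matching unit
            g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)  # pull units toward the sample
    return weights

def map_samples(X, weights):
    """Return the (row, col) best-matching-unit coordinates for each sample."""
    flat = weights.reshape(-1, weights.shape[-1])
    idx = np.linalg.norm(X[:, None, :] - flat[None], axis=2).argmin(axis=1)
    return np.column_stack(np.unravel_index(idx, weights.shape[:2]))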

Keywords: Artificial neural networks, cluster analysis, Kohonen maps, wine recognition.

160 Content-Based Image Retrieval Using HSV Color Space Features

Authors: Hamed Qazanfari, Hamid Hassanpour, Kazem Qazanfari

Abstract:

In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches an image database using the visual content of a query image in order to retrieve similar images. With the aim of simulating the sensitivity of the human visual system to an image's edges and color features, the concept of the color difference histogram (CDH) is used. The CDH captures the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features capture the content of images most efficiently. The proposed method has been evaluated on three standard databases, Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
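
A rough sketch of a simplified color difference histogram in HSV space is given below; the binning scheme, the neighborhood and the difference measure are illustrative assumptions rather than the exact formulation used by the authors.

import numpy as np
import cv2  # OpenCV, used only for the BGR-to-HSV conversion

def simple_cdh(bgr_image, h_bins=18, s_bins=4, v_bins=4):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    # Quantize each pixel into a single HSV bin index.
    h = np.minimum((hsv[..., 0] / 180.0 * h_bins).astype(int), h_bins - 1)
    s = np.minimum((hsv[..., 1] / 256.0 * s_bins).astype(int), s_bins - 1)
    v = np.minimum((hsv[..., 2] / 256.0 * v_bins).astype(int), v_bins - 1)
    labels = (h * s_bins + s) * v_bins + v
    hist = np.zeros(h_bins * s_bins * v_bins)
    # Accumulate color differences between horizontally and vertically
    # neighboring pixels into the bin of the first pixel of each pair.
    for dy, dx in ((0, 1), (1, 0)):
        a = hsv[:hsv.shape[0] - dy, :hsv.shape[1] - dx]
        b = hsv[dy:, dx:]
        diff = np.linalg.norm(a - b, axis=2)
        idx = labels[:labels.shape[0] - dy, :labels.shape[1] - dx]
        np.add.at(hist, idx.ravel(), diff.ravel())
    return hist / (hist.sum() + 1e-12)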

Keywords: Content-based image retrieval, color difference histogram, efficient features selection, entropy, correlation.

159 Illumination Invariant Face Recognition using Supervised and Unsupervised Learning Algorithms

Authors: Shashank N. Mathur, Anil K. Ahlawat, Virendra P. Vishwakarma

Abstract:

In this paper, a comparative study of supervised and unsupervised learning algorithms applied to illumination-invariant face recognition is carried out. The supervised learning is performed with a bi-layered artificial neural network having one input, two hidden and one output layer. The gradient descent with momentum and adaptive learning rate backpropagation algorithm is used to implement the supervised learning, in which both the inputs and the corresponding outputs are provided during training; there is thus an inherent clustering and optimized learning of weights that yields efficient results. The unsupervised learning is implemented with a modified counterpropagation network, which performs clustering followed by application of the outstar rule to obtain the recognized face. The face recognition system has been developed for recognizing faces with varying illumination intensities, where the database images vary in lighting with respect to the angle of illumination relative to the horizontal and vertical planes. The supervised and unsupervised learning algorithms have been implemented and tested exhaustively, with and without histogram equalization, to obtain efficient results.
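
A hedged sketch of the supervised branch follows, using scikit-learn as a stand-in for the gradient-descent-with-momentum, adaptive-learning-rate training described above; the layer sizes, learning-rate settings and data handling are illustrative assumptions, not the paper's exact configuration.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def build_face_classifier():
    return make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(128, 64),   # two hidden layers
                      solver="sgd",
                      momentum=0.9,                    # gradient descent with momentum
                      learning_rate="adaptive",        # shrink the rate when loss plateaus
                      learning_rate_init=0.01,
                      max_iter=500,
                      random_state=0),
    )

# X: flattened (and optionally histogram-equalized) face images, y: person labels.
# clf = build_face_classifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)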

Keywords: Artificial Neural Networks, back propagation, Counterpropagation networks, face recognition, learning algorithms.

158 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic (GDL) on which business has primarily been based in the past. A business model canvas is one of the most cited and used tools in defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suits product-based business. However, the basic differences between goods and services necessitate changes in business model representations when proceeding in servitization. Therefore, new knowledge is needed on how the conception of the business model and the business model canvas as its representation should be altered in servitized firms in order to better serve business developers and interfirm co-creation. That is to say, compared to products, services are intangible and they are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well the interaction succeeds between the actors. The role of service experience is even stronger in service business compared to product business, as services are co-produced with the customer. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts. This study utilizes qualitative data gathered in workshops with ten companies from various industries. In particular, key differences between GDL- and SDL-based business models are identified when an industrial firm proceeds in servitization. As the result of the study, an updated version of the business model canvas is provided based on service-dominant logic. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company's current service business model or for designing a new business model. It facilitates customer-focused new service design and service development. It aids in the identification of development needs and facilitates the creation of a common view of the business model. Therefore, the service business model canvas can be regarded as a boundary object, which facilitates the creation of a common understanding of the business model between the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies' business models.

Keywords: Boundary object, business model canvas, managerial tool, service-dominant logic.

157 Application of RP Technology with Polycarbonate Material for Wind Tunnel Model Fabrication

Authors: A. Ahmadi Nadooshan, S. Daneshmand, C. Aghanajafi

Abstract:

Traditionally, wind tunnel models are made of metal and are very expensive. In recent years, everyone has been looking for ways to do more with less. Under the right test conditions, a rapid prototype part can be tested in a wind tunnel. Using rapid prototype manufacturing techniques and materials in this way significantly reduces the time and cost of producing wind tunnel models. This study examined fused deposition modeling (FDM) and its ability to produce components for wind tunnel models in a timely and cost-effective manner. The paper discusses the application of a wind tunnel model configuration constructed using FDM for transonic wind tunnel testing. A study was undertaken comparing a rapid prototyping model constructed with FDM technology using polycarbonate to a standard machined steel model. Testing covered the Mach range from Mach 0.3 to Mach 0.75 at an angle-of-attack range of −2° to +12°. Results from this study show relatively good agreement between the two models, and the rapid prototyping method reduces the time and cost of producing wind tunnel models. It can be concluded from this study that wind tunnel models constructed using rapid prototyping methods and materials can be used in wind tunnel testing for initial baseline aerodynamic database development.

Keywords: Polycarbonate, Fabrication, FDM, Model, Rapid Prototyping, Wind Tunnel.

156 Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images

Authors: Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, Goutam Kumar Ghorai, Gautam Sarkar, Ashis K. Dhara

Abstract:

Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. The paper presents an accurate and fast optic disc detection and segmentation method using an attention based fully convolutional network. The network is trained from scratch using the fundus images of extended MESSIDOR database and the trained model is used for segmentation of optic disc. The false positives are removed based on morphological operation and shape features. The result is evaluated using three-fold cross-validation on six public fundus image databases such as DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention based fully convolutional network is robust and effective for detection and segmentation of optic disc in the images affected by diabetic retinopathy and it outperforms existing techniques.

Keywords: Ocular diseases, retinal fundus image, optic disc detection and segmentation, fully convolutional network, overlap measure.

155 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as transportation of materials within the construction site, formation and preparation of materials on-site and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, environmental consequences of the construction process should be tackled by focusing on construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing on-site performances of subcontractors based on the construction technology options which they can supply. First, construction technologies, which constitute information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to environmental priorities of decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide an insight to decision-maker(s) about the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
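
As a concrete illustration of the ranking step described above, the following is a minimal TOPSIS sketch in Python; the decision matrix layout, the FAHP-derived weight vector and the criterion directions (cost versus benefit) are illustrative assumptions.

import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: (m, n) subcontractor-by-criterion scores; weights: (n,);
    benefit: bool per column (True means larger is better).
    Returns relative closeness to the ideal solution, in [0, 1]."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)             # vector normalization
    v = norm * np.asarray(weights, dtype=float)      # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # higher = closer to the ideal

# Example (hypothetical criteria: resource use, ecosystem impact, health risk,
# all cost-type): scores = topsis(decision_matrix, fahp_weights, benefit=[False, False, False])
# ranking = scores.argsort()[::-1]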

Keywords: Construction process, construction technology, decision making, environmental performance, subcontractors.

154 Tool Tracker: A Toolkit Ensembling Useful Online Networking Tools for Efficient Management and Operation of a Network

Authors: Onkar Bhat Kodical, Sridhar Srinivasan, N.K. Srinath

Abstract:

Tool Tracker is a client-server based application. It is essentially a catalogue of various network monitoring and management tools that are available online. A database maintained on the server side contains the information about the various tools; several clients can access and utilize this information simultaneously. The categories of tools considered for the development of this application include packet sniffers, port mappers, port scanners, encryption tools and vulnerability scanners. The application provides a front end through which the user can invoke any tool from a central repository for the purpose of packet sniffing, port scanning, network analysis, and so on. Apart from the tool itself, its description and the help files associated with it are also stored in the central repository. This facility enables the user to view the documentation pertaining to a tool without having to download and install it. The application updates the central repository with the latest versions of the tools, informs the user when a newer version of the tool currently being used is available, and gives the user the choice of installing the newer version. Thus, Tool Tracker provides any network administrator with the much-needed abstraction and ease of use with respect to the tools that can be used to efficiently monitor a network.

Keywords: Network monitoring, single platform, client/server application, version management.

153 An Empirical Assessment of Sustainability of an Urban Water Supply Service Delivery

Authors: Olayinka Gafar Okeola, Akinola Muyiwa Moore

Abstract:

The urban population of Ilorin (the capital of Kwara State, Nigeria) is rapidly increasing, along with the associated water demand. The inadequacies of water supply services have forced the populace to depend on dug wells, boreholes, water tankers, street vendors, etc. for their water needs. People spend hours daily carrying jerry cans around to collect and queue for water at public taps, with a high opportunity cost in both time and money. This situation motivated this study to assess the sustainability of the urban water supply service and to unravel the factors undermining effective service delivery. The contingent valuation method was used to place a value on water supply services, using the double-bounded dichotomous choice format for willingness-to-pay elicitation. A database was created with Microsoft Excel and Stata 12 software to model and evaluate the variables that affect household willingness to pay. The results of the study reveal that about 92% of the households surveyed were connected to the government water supply, of which 87% reported that they were not satisfied with the existing services. The results further revealed that respondents are willing to pay ₦2500 monthly to enjoy sustainable water supply service delivery.

Keywords: Willingness-to-pay, contingent valuation method, Nigeria, service delivery.

152 Triangular Geometric Feature for Offline Signature Verification

Authors: Zuraidasahana Zulkarnain, Mohd Shafry Mohd Rahim, Nor Anita Fairos Ismail, Mohd Azhar M. Arsad

Abstract:

Handwritten signatures are widely accepted as a biometric characteristic for personal authentication. The use of appropriate features plays an important role in determining the accuracy of signature verification; therefore, this paper presents a feature based on a geometrical concept. To achieve this aim, triangle attributes are exploited to design a new feature, since a triangle possesses orientation, angles and transformations that can improve accuracy. The proposed feature uses a triangulation geometric set comprising the sides, angles and perimeter of a triangle derived from the center of gravity of a signature image. For classification, a Euclidean classifier along with a voting-based classifier is used to verify whether a signature is a forgery. This classification process is experimented with using the triangular geometric feature and selected global features. Based on an experiment validated using the Grupo de Senales 960 (GPDS-960) signature database, the proposed triangular geometric feature achieves a lower Average Error Rate (AER), 34% compared to 43% for the selected global feature. In conclusion, the proposed triangular geometric feature proves to be a more reliable feature for accurate signature verification.
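
The triangle attributes themselves are straightforward to compute; the sketch below is illustrative, and how the three vertices are chosen from the signature image (here the center of gravity plus two reference points supplied by the caller) is an assumption rather than the paper's exact construction.

import numpy as np

def centre_of_gravity(binary_img):
    """Center of gravity of the signature pixels in a binary image."""
    ys, xs = np.nonzero(binary_img)
    return np.array([xs.mean(), ys.mean()])

def triangle_features(p1, p2, p3):
    """Sides, interior angles (degrees) and perimeter of the triangle p1-p2-p3."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)          # side opposite p1
    b = np.linalg.norm(p1 - p3)          # side opposite p2
    c = np.linalg.norm(p1 - p2)          # side opposite p3
    def angle(opp, s1, s2):              # law of cosines
        cos_val = (s1 ** 2 + s2 ** 2 - opp ** 2) / (2 * s1 * s2)
        return np.degrees(np.arccos(np.clip(cos_val, -1.0, 1.0)))
    angles = (angle(a, b, c), angle(b, a, c), angle(c, a, b))
    return {"sides": (a, b, c), "angles": angles, "perimeter": a + b + c}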

Keywords: Biometrics, Euclidean classifier, feature extraction, offline signature verification, voting-based classifier.

151 Multivariate Output-Associative RVM for Multi-Dimensional Affect Predictions

Authors: Achut Manandhar, Kenneth D. Morton, Peter A. Torrione, Leslie M. Collins

Abstract:

The current trends in affect recognition research are to consider continuous observations from spontaneous natural interactions, using multiple feature modalities, to represent affect in terms of continuous dimensions, to incorporate spatio-temporal correlation among affect dimensions, and to provide fast affect predictions. These research efforts have been propelled by a growing effort to develop affect recognition systems that can be deployed to enable seamless real-time human-computer interaction in a wide variety of applications. Motivated by these desired attributes of an affect recognition system, in this work a multi-dimensional affect prediction approach is proposed by integrating the multivariate Relevance Vector Machine (MVRVM) with the recently developed Output-associative Relevance Vector Machine (OARVM) approach. The resulting approach can provide fast continuous affect predictions by jointly modeling the multiple affect dimensions and their correlations. Experiments on the RECOLA database show that the proposed approach performs competitively with the OARVM while providing faster predictions during testing.

Keywords: Dimensional affect prediction, Output-associative RVM, Multivariate regression.

150 Random Subspace Neural Classifier for Meteor Recognition in the Night Sky

Authors: Carlos Vera, Tetyana Baydyk, Ernst Kussul, Graciela Velasco, Miguel Aparicio

Abstract:

This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night, between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are performed for future applications by scientists. The image database was collected from different websites. We worked with RGB images with dimensions of 220x220 pixels stored in the bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scanning window from which the characteristics were extracted had a size of 20x20 pixels with a scanning step of 10 pixels. Brightness, contrast and contour-orientation histograms were used as inputs for the RSC. The RSC worked with two classes and classified images into: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images used for training and recognition. The percentage error of the neural classifier was calculated. The results show a good RSC classifier response, with 89% correct recognition. The results of these experiments are presented and discussed.
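
The sliding-window feature extraction described above can be sketched as follows; the exact descriptors used by the RSC are richer, so the brightness, contrast and gradient-orientation histogram below are only an illustrative approximation.

import numpy as np

def window_features(gray, win=20, step=10, n_orient_bins=8):
    """Scan a grayscale image with win x win windows (stride = step) and
    return one feature vector per window: mean brightness, contrast (std),
    and a magnitude-weighted gradient-orientation histogram."""
    feats = []
    gy, gx = np.gradient(gray.astype(float))
    orient = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0   # 0..180 degrees
    mag = np.hypot(gx, gy)
    for y in range(0, gray.shape[0] - win + 1, step):
        for x in range(0, gray.shape[1] - win + 1, step):
            patch = gray[y:y + win, x:x + win].astype(float)
            o = orient[y:y + win, x:x + win]
            m = mag[y:y + win, x:x + win]
            hist, _ = np.histogram(o, bins=n_orient_bins, range=(0, 180), weights=m)
            feats.append(np.concatenate(([patch.mean(), patch.std()], hist)))
    return np.array(feats)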

Keywords: Contour orientation histogram, meteors, night sky, RSC neural classifier, stars.

149 The Application of Queuing Theory in Multi-Stage Production Lines

Authors: Hani Shafeek, Muhammed Marsudi

Abstract:

The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation: the throughput rate, the number of operators, and the number of parts that arrive at and leave the workstation during processing. At least ten samples of the arriving and leaving part counts were collected so that the data could be analyzed with the chi-squared goodness-of-fit test and queuing theory. Measures from this model served as the comparison with the standard data available in the company; the task time values were validated by comparing them with the task time values in the company database. Several performance factors for the multi-product, multi-stage battery production line are shown, as is the efficiency of each workstation. The total production time for each part can be determined by adding the task times across all workstations. Based on the analysis, improvements should be made to reduce queuing time and increase efficiency; one possible action is to increase the number of operators at manually operated workstations.
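
A hedged example of the queuing calculations implied above treats each workstation as an M/M/1 queue with arrival rate lam and service rate mu (parts per unit time); the actual study fits the observed distributions with a chi-squared goodness-of-fit test before applying such formulas, and the rates below are illustrative.

def mm1_metrics(lam, mu):
    """Standard M/M/1 performance measures for one workstation."""
    if lam >= mu:
        raise ValueError("Unstable workstation: arrival rate must be below service rate")
    rho = lam / mu                     # utilization (workstation efficiency)
    lq = rho ** 2 / (1 - rho)          # mean number of parts waiting
    wq = lq / lam                      # mean waiting time in queue
    w = wq + 1 / mu                    # mean total time at the workstation
    return {"utilization": rho, "queue_length": lq, "wait_time": wq, "time_in_system": w}

# Example: 20 parts/hour arriving at a station that processes 25 parts/hour gives
# mm1_metrics(20, 25) -> utilization 0.8, queue length 3.2, wait 0.16 h, total 0.2 h.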

Keywords: Production line, manufacturing, performance measurement, queuing theory.

148 Comparison of Real-Time PCR and FTIR with Chemometrics Technique in Analysing Halal Supplement Capsules

Authors: Mohd Sukri Hassan, Ahlam Inayatullah Badrul Munir, M. Husaini A. Rahman

Abstract:

Halal authentication and verification of supplement capsules are highly required, as the gelatine available in the market can be from halal or non-halal sources, and it is an obligation for Muslims to consume and use halal consumer goods. At present, real-time polymerase chain reaction (RT-PCR) is the most common technique used for the detection of porcine and bovine DNA in gelatine, due to the high sensitivity of the technique and the higher stability of DNA compared to protein. In this study, twenty samples of supplement capsules from different products with different halal logos were analyzed for porcine and bovine DNA using RT-PCR. Standard bovine and porcine gelatine from Eurofins, at concentrations ranging from 10⁻¹ to 10⁻⁵ ng/µl, were used to determine the linearity range, limit of detection and specificity of RT-PCR (SYBR Green method). RT-PCR detected porcine DNA (two samples), bovine DNA (four samples) and a mixture of porcine and bovine DNA (six samples). The samples were also tested using the FT-IR technique, where the normalized IR spectra were pre-processed using the Savitzky-Golay method before Principal Component Analysis (PCA) was performed on the database. The PCA scores plot shows three clusters of samples: bovine, porcine and mixture (bovine and porcine). The RT-PCR and FT-IR with chemometrics techniques were found to give the same results for porcine gelatine samples, which can be used for halal authentication.
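
The FTIR chemometrics pipeline described above can be sketched in a few lines; the window length, polynomial order and number of components below are illustrative assumptions.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def ftir_scores(spectra, n_components=2, window=11, polyorder=2):
    """spectra: (n_samples, n_wavenumbers) array of normalized absorbance values.
    Returns the PCA scores (points of the scores plot) and explained variance."""
    smoothed = savgol_filter(spectra, window_length=window, polyorder=polyorder, axis=1)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(smoothed)
    return scores, pca.explained_variance_ratio_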

Keywords: Halal, real-time PCR, gelatin, FTIR and chemometrics.

147 CFD Prediction of the Round Elbow Fitting Loss Coefficient

Authors: Ana Paula P. dos Santos, Claudia R. Andrade, Edson L. Zaparoli

Abstract:

Pressure loss in ductwork is an important factor to be considered in the design of engineering systems such as power plants, refineries and HVAC systems, in order to reduce energy costs. Ductwork can be composed of straight ducts and different types of fittings (elbows, transitions, converging and diverging tees and wyes). Duct fittings are significant sources of pressure loss in fluid distribution systems; fitting losses can be even more significant than equipment components such as coils, filters, and dampers. In the present work, a conventional 90° round elbow under turbulent incompressible airflow is studied. The mass, momentum, and k-ε turbulence model equations are solved employing the finite volume method. The SIMPLE algorithm is used for the pressure-velocity coupling. In order to validate the numerical tool, the elbow pressure loss coefficient is determined under the same conditions and compared with the ASHRAE database. Furthermore, the effect of Reynolds number variation on the elbow pressure loss coefficient is investigated. These results can be useful for better preliminary design of air distribution ductwork in air conditioning systems.
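
For reference, the loss coefficient follows from the computed total-pressure drop as K = Δp / (0.5 ρ V²); the short sketch below shows this post-processing step with illustrative values.

def loss_coefficient(delta_p_total, rho, velocity):
    """Fitting loss coefficient from the total-pressure drop [Pa], air density
    [kg/m^3] and mean duct velocity [m/s]."""
    return delta_p_total / (0.5 * rho * velocity ** 2)

def reynolds_number(rho, velocity, diameter, mu):
    """Duct Reynolds number from density, velocity, diameter and dynamic viscosity."""
    return rho * velocity * diameter / mu

# Example: a 35 Pa total-pressure drop for air (1.2 kg/m^3) at 10 m/s in a 0.2 m duct:
# K = loss_coefficient(35, 1.2, 10.0)            # about 0.58
# Re = reynolds_number(1.2, 10.0, 0.2, 1.8e-5)   # about 1.3e5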

Keywords: Duct fitting, Pressure loss, Elbow.

146 Organization Model of Semantic Document Repository and Search Techniques for Studying Information Technology

Authors: Nhon Do, Thuong Huynh, An Pham

Abstract:

Nowadays, organizing a repository of documents and resources for learning in a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or document content, is an urgent need in the practice of teaching, learning and research. There have been several works related to methods of organization and search by content; however, the results are still limited and insufficient to meet users' demand for semantic document retrieval. This paper presents a solution for the organization of a repository that supports semantic representation and processing in search. The proposed solution is a model that integrates components such as an ontology describing domain knowledge, a database serving as the document repository, a semantic representation for documents and a file system, together with semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution is applied to build an IT learning-materials management system for a university, with a semantic search function serving students, teachers and managers alike. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, and has achieved good results.

Keywords: Document retrieval system, knowledge representation, document representation, semantic search, ontology.

145 VoIP and Database Traffic Co-existence over IEEE 802.11b WLAN with Redundancy

Authors: Rizik Al-Sayyed, Colin Pattinson, Tony Dacre

Abstract:

This paper presents the findings of two experiments performed on the Redundancy in Wireless Connection Model (RiWC) using the IEEE 802.11b standard. The experiments were simulated using OPNET 11.5 Modeler software. The first was aimed at finding the maximum number of simultaneous Voice over Internet Protocol (VoIP) users the model would support under the G.711 and G.729 codec standards when the packetization interval was 10 milliseconds (ms). The second experiment examined the model's VoIP user capacity using the G.729 codec standard along with background traffic, using the same packetization interval as in the first experiment. To determine the capacity of the model under the various experiments, we checked three metrics: jitter, delay and data loss. When background traffic was added, we also checked the response time in addition to the previous three metrics. The findings of the first experiment indicated that the maximum number of simultaneous VoIP users the model was able to support was 5, which is consistent with recent research findings. When using the G.729 codec, the model was able to support up to 16 VoIP users; similar experiments in the current literature have indicated a maximum of 7 users. The findings of the second experiment demonstrated that the maximum number of VoIP users the model was able to support was 12 in the presence of background traffic.

Keywords: WLAN, IEEE 802.11b, Codec, VoIP, OPNET, Background traffic, and QoS.

144 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification

Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh

Abstract:

Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data form one of the important large genomic, non-coding datasets representing the genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing miRNA datasets has been a challenging problem for researchers: the number of features is high relative to the number of samples, and the data suffer from class imbalance. A feature selection method is used to select the features with the greatest ability to distinguish between classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for the classification of cancer types is utilized, which employs a genetic algorithm to find optimized hyper-parameters of the CNN. In order to make the classification by the CNN faster, a Graphics Processing Unit (GPU) is recommended for carrying out the computations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.

Keywords: Cancer classification, feature selection, deep learning, genetic algorithm.

143 Using HMM-based Classifier Adapted to Background Noises with Improved Sounds Features for Audio Surveillance Application

Authors: Asma Rabaoui, Zied Lachiri, Noureddine Ellouze

Abstract:

Discrimination between different classes of environmental sounds is the goal of our work. The use of a sound recognition system can offer concrete potential for surveillance and security applications. The paper's first contribution to this research field is a thorough investigation of the applicability of state-of-the-art audio features in the domain of environmental sound recognition. Additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated by an HMM-based classifier, to which particular attention was paid. In fact, we propose to use a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the system's robustness by reducing the environmental variability, we explore different adaptation algorithms, including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP) and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed the appropriate set of features.

Keywords: Sounds recognition, HMM classifier, Multi-style training, Environmental Adaptation, Feature combinations.

142 Spatio-Temporal Data Mining with Association Rules for Lake Van

Authors: T. Aydin, M. F. Alaeddinoglu

Abstract:

Throughout history, people have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make Apriori a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as a result of changes in the amount of water it holds. In this study, the evaporation, humidity, lake altitude, rainfall and temperature parameters recorded in the Lake Van region throughout the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible reasons for overflows and underflows may be used to alert experts to take precautions and make the necessary investments.
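
To make the embedding of space and time features as ordinary items concrete, here is a toy Apriori sketch in Python; the item names (e.g. "rain=high", "overflow") and the support threshold are illustrative assumptions, not the study's actual encoding.

from itertools import combinations

def apriori(transactions, min_support=0.3):
    """transactions: list of frozensets of discretized items.
    Returns a dict mapping each frequent itemset to its support."""
    n = len(transactions)
    support = lambda s: sum(1 for t in transactions if s <= t) / n
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    current = {s for s in items if support(s) >= min_support}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        # Candidate generation: unions of frequent (k-1)-itemsets of size k.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Prune candidates with any infrequent (k-1)-subset, then test support.
        current = {c for c in candidates
                   if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))
                   and support(c) >= min_support}
    return frequent

# Hypothetical usage with weather/overflow items embedded as transactions:
# txns = [frozenset({"rain=high", "overflow"}),
#         frozenset({"rain=high", "evap=low", "overflow"}),
#         frozenset({"evap=high"})]
# apriori(txns, min_support=0.5)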

Keywords: Apriori algorithm, association rules, data mining, spatio-temporal data.

140 Speaker Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression

Authors: Ehab Mourtaga, Ahmad Sharieh, Mousa Abdallah

Abstract:

An automatic speech recognition system for formal Arabic is needed. The Quran is the most formal spoken book in Arabic, and it is spoken all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic recitation was developed and tested. The system was developed based on the tri-phone Hidden Markov Model and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations that reduce the mismatch between an initial model set and the adaptation data. It uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian mixture HMM system. The 30th chapter of the Quran, read by five of the most famous readers of the Quran, was used for training and testing; the chapter includes about 2000 distinct words. The advantages of using the Quranic verses as the database for this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the test data ranged from 68% to 85%.

Keywords: Hidden Markov Model (HMM), Maximum Likelihood Linear Regression (MLLR), Quran, regression class tree, speech recognition, speaker-independent.

140 Image Indexing Using a Color Similarity Metric based on the Human Visual System

Authors: Angelo Nodari, Ignazio Gallo

Abstract:

The novelty proposed in this study is twofold, consisting of a new color similarity metric based on the human visual system and a new color indexing scheme based on a textual approach. The proposed color similarity metric is based on the color perception of the human visual system; consequently, the results returned by the indexing system can fulfill user expectations as much as possible. We developed a web application to collect users' judgments about the similarities between colors, and these results are used to estimate the metric proposed in this study. In order to index an image's colors, we used a text indexing engine, which facilitates the integration of visual features into a database of text documents. The textual signature is built by weighting the image's colors according to their occurrence in the image. The use of a textual indexing engine provides a simple, fast and robust solution for indexing images. A typical usage of the proposed system is the development of applications whose data are both visual and textual. In order to evaluate the proposed method, we chose a price comparison engine as a case study, collecting a series of commercial offers containing the textual description and the image representing each specific commercial offer.

Keywords: Color Extraction, Content-Based Image Retrieval, Indexing

139 Vehicle Risk Evaluation in Low Speed Accidents: Consequences for Relevant Test Scenarios

Authors: Philip Feig, Klaus Gschwendtner, Julian Schatz, Frank Diermeyer

Abstract:

Accident research projects are mostly focused on accidents involving personal injury. Property-damage-only accidents, however, occur frequently and have a high economic impact. This paper describes the main parameters influencing the extent of damage and presents a repair cost model. For a prospective method of evaluating the monetary effect of advanced driver assistance systems (ADAS), it is necessary to be aware of and quantify all influencing parameters. Furthermore, this method allows the evaluation of vehicle concepts in combination with an ADAS at an early point in the product development process. In combination with a property damage database and the introduced repair cost model, relevant test scenarios for specific vehicle configurations and their individual property damage risk may be determined. Currently, equipment rates of ADAS are low, and a purchase incentive for customers would be beneficial. The next ADAS generation will prevent property damage to a large extent or at least reduce damage severity. Both effects may be a purchasing incentive for the customer and furthermore contribute to increased traffic safety.

Keywords: Property damage analysis, effectiveness, ADAS, damage risk, accident research, accident scenarios.

138 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have been facing a significant challenge in predicting rail infrastructure maintenance work over long periods of time. Generally, maintenance monitoring and prediction are conducted manually. With economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models that minimize the human error strongly associated with manual prediction. Such models will help them understand how track degradation occurs over time under changes in different conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize the maintenance cost and time and keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks; accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.

Keywords: ANFIS, MGT, Prediction modeling, rail track degradation.

137 Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern

Authors: R. Vishnu Priya, A. Vadivel

Abstract:

Sequential pattern mining is a challenging task in the data mining area, with many applications. One of those applications is mining patterns from weblogs. Nowadays, weblogs are highly dynamic, and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and consume considerable time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan over all items. While mining sequential patterns, the links related to non-frequent items are not considered; hence, it is not necessary to delete or maintain the information of nodes while revising the tree to mine updated transactions. The algorithm supports both incremental and interactive mining. It is not necessary to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we have used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared with some recently proposed approaches.

Keywords: Sequential pattern mining, weblog, frequent and non-frequent items, incremental and interactive mining.

136 A Formal Approach for Proof Constructions in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this article we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning correctness or security of some cryptographic algorithms are of great interest. Beside some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this article shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
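
For concreteness, a standard probabilistic Miller-Rabin primality test is sketched below in Python; the paper's contribution is the formal Isabelle/HOL verification of such an implementation, which this sketch does not attempt to reproduce.

import random

def miller_rabin(n, rounds=20):
    """Return False if n is composite, True if n is (probably) prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a is a witness that n is composite
    return True                # probably prime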

Keywords: Prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' formula, Miller-Rabin primality test.

135 Computer Verification in Cryptography

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning correctness or security of some cryptographic algorithms are of great interest. Beside some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer prove Bayes' formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.

Keywords: Prime numbers, primality tests, (conditional) probability distributions, formal proof system, higher-order logic, formal verification, Bayes' formula, Miller-Rabin primality test.

134 Comparative Advantage of Mobile Agent Application in Procuring Software Products on the Internet

Authors: Michael K. Adu, Boniface K. Alese, Olumide S. Ogunnusi

Abstract:

This paper brings to the fore the inherent advantages of applying mobile agents to procure software products, rather than downloading software content over the Internet. It proposes a system whereby the products come on compact disk with a mobile agent as a deliverable. The client/user purchases a software product but must connect to the remote server of the software developer before installation. The user provides an activation code that activates the mobile agent, which is part of the software product on the compact disk. The validity of the activation code is checked on connection at the developer's end to ascertain authenticity and prevent piracy. The system is evaluated by downloading two different software products and comparing this with installing the same products from compact disk using the mobile agent. Downloading software content from the developer's database, as in the traditional method, requires a continuously open connection between the client and the developer's end, and such a fixed network connection is not economically or technically feasible. A mobile agent, after being dispatched into the network, becomes independent of the creating process and can operate asynchronously and autonomously; it can reconnect later, after completing its task, and return to deliver its results. Response time and network load are minimal with the mobile agent approach.

Keywords: Activation code, internet, mobile agent, software developer, software products.

133 Rotation Invariant Face Recognition Based on Hybrid LPT/DCT Features

Authors: Rehab F. Abdel-Kader, Rabab M. Ramadan, Rawya Y. Rizk

Abstract:

The recognition of human faces, especially those with different orientations, is a challenging and important problem in image analysis and classification. This paper proposes an effective scheme for rotation-invariant face recognition using combined Log-Polar Transform and Discrete Cosine Transform features. The rotation-invariant feature extraction for a given face image involves applying the log-polar transform to eliminate the rotation effect and to produce a row-shifted log-polar image. The discrete cosine transform is then applied to eliminate the row-shift effect and to generate the low-dimensional feature vector. A PSO-based feature selection algorithm is utilized to search the feature vector space for the optimal feature subset. Evolution is driven by a fitness function defined in terms of maximizing the between-class separation (scatter index). Experimental results, based on the ORL face database with test sets of images at different orientations, show that the proposed system outperforms other face recognition methods. The overall recognition rate for the rotated test images is 97%, demonstrating that the extracted feature vector is an effective rotation-invariant feature set with a minimal number of selected features.
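
A hedged sketch of the feature pipeline (log-polar transform turning rotation into a row shift, followed by a 2-D DCT whose low-frequency block forms the feature vector) is given below; the output size and the kept block size are assumptions, and the PSO-based feature selection stage is omitted.

import cv2
import numpy as np
from scipy.fft import dctn

def lpt_dct_features(gray, out_size=64, keep=8):
    """gray: 2-D grayscale face image. Returns a keep*keep feature vector."""
    h, w = gray.shape
    center = (w / 2.0, h / 2.0)
    max_radius = min(center)
    # Log-polar remapping: image rotation becomes a cyclic row shift.
    logpolar = cv2.warpPolar(gray.astype(np.float32), (out_size, out_size),
                             center, max_radius,
                             cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)
    # 2-D DCT; keep only the low-frequency block as the compact feature vector.
    coeffs = dctn(logpolar, norm="ortho")
    return coeffs[:keep, :keep].ravel()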

Keywords: Discrete Cosine Transform, Face Recognition, Feature Extraction, Log Polar Transform, Particle Swarm Optimization.
