Search results for: thermochemical database.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 725

185 Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language

Authors: Khalid A. Darabkh, Ala F. Khalifeh, Baraa A. Bathech, Saed W. Sabah

Abstract:

Despite the fact that Arabic is currently one of the most common languages worldwide, there has been relatively little research on Arabic speech recognition compared to other languages such as English and Japanese. Generally, digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, and fast automatic speech recognition systems. The speech recognition process carried out in this paper is divided into three stages. First, the signal is preprocessed to reduce noise effects and digitized, and the voice activity regions are segmented using a voice activity detection (VAD) algorithm. Second, features are extracted from the speech signal using the Mel-frequency cepstral coefficients (MFCC) algorithm; delta and acceleration (delta-delta) coefficients are added to improve the recognition accuracy. Finally, each test word's features are compared to the training database using the dynamic time warping (DTW) algorithm. Using the best setup found for all parameters affecting the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5%, outperforming other HMM- and ANN-based approaches available in the literature.
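
The final matching stage lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation, of how a test utterance's MFCC feature sequence could be compared against stored word templates with classical DTW; the feature matrices and the template dictionary are hypothetical placeholders.

```python
import numpy as np

def dtw_distance(a, b):
    """Classical dynamic time warping between two feature sequences.

    a, b: arrays of shape (frames, n_mfcc); the local cost is Euclidean."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognize(test_features, templates):
    """Return the template word whose DTW distance to the test word is smallest."""
    return min(templates, key=lambda w: dtw_distance(test_features, templates[w]))

# Hypothetical usage: templates maps each vocabulary word to its MFCC matrix.
templates = {"wahid": np.random.rand(40, 13), "ithnan": np.random.rand(45, 13)}
test = np.random.rand(42, 13)
print(recognize(test, templates))
```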

Keywords: Arabic speech recognition, MFCC, DTW, VAD.

184 Exploring the Destination Image of Mainland China Tourists to Taiwan by Word-of-Mouth on Web

Authors: Y. R. Li, Y. Y. Wang

Abstract:

After direct flights from Mainland China to Taiwan were allowed, the number of Chinese tourists increased according to Tourism Bureau statistics, growing from 0.19 million in 2008 to 2 million in 2011. Mainland China has become the main source market for Taiwan's developing tourism industry. The Taiwanese government should know more about the comments Chinese tourists make about Taiwan in order to properly market Taiwan tourism and enhance the overall quality of tourism. In order to understand Chinese visitors' comments, this study adopts content analysis to analyze electronic word-of-mouth on the Web. This study collects 375 blog articles written by Chinese tourists on Ctrip.com during 2009 to 2011 as a database. Through qualitative data analysis, the travel destination image is divided into seven dimensions: scenic spots, shopping, food and beverages, accommodation, transportation, festivals, and recreation activities. Finally, this study proposes practical managerial implications covering both positive and negative images of the seven dimensions from Chinese tourists, providing marketing strategies and suggestions to the travel agency industry.

Keywords: Destination Image, Content Analysis, Electronic Word-of-Mouth.

183 Protein Graph Partitioning by Mutual Maximization of Cycle Distributions

Authors: Frank Emmert Streib

Abstract:

The classification of protein structure is commonly performed not for the whole protein but for structural domains, i.e., compact functional units preserved during evolution. Hence, a first step toward a protein structure classification is the separation of the protein into its domains. We approach the problem of protein domain identification by proposing a novel graph-theoretical algorithm. We represent the protein structure as an undirected, unweighted and unlabeled graph whose nodes correspond to the secondary structure elements of the protein. This graph is called the protein graph. The domains are then identified as partitions of the graph corresponding to vertex sets obtained by the maximization of an objective function, which mutually maximizes the cycle distributions found in the partitions of the graph. Our algorithm does not utilize any information besides the cycle distribution to find the partitions. If a partition is found, the algorithm is iteratively applied to each of the resulting subgraphs. As a stopping criterion, we calculate numerically a significance level which indicates the stability of the predicted partition against a random rewiring of the protein graph; hence, our algorithm terminates its iterative application automatically. We present results for one- and two-domain proteins, compare them with the domains manually assigned in the SCOP database, and discuss the differences.

Keywords: Graph partitioning, unweighted graph, protein domains.

182 Research Trends on Magnetic Graphene for Water Treatment: A Bibliometric Analysis

Authors: J. C. M. Santos, J. C. A. Sousa, A. J. Rubio, L. S. Soletti, F. Gasparotto, N. U. Yamaguchi

Abstract:

Magnetic graphene has received widespread attention for its capability in water and wastewater treatment, which has attracted many researchers to this field. A bibliometric analysis based on the Web of Science database was employed to analyze the global scientific outputs on magnetic graphene for water treatment up to the present time (2012 to 2017), in order to improve the understanding of the research trends. The publication year, place of publication, institutes, funding agencies, journals, most cited articles, distribution of outputs in thematic categories, and applications were analyzed. Three further aspects, namely type of pollutant, treatment process, and composite composition, contributed to revealing the research trends. The most relevant research aspects of the main technologies using magnetic graphene for water treatment are summarized in this paper. The results showed that research on magnetic graphene for water treatment is going through a period of decline that might be related to a saturated field and a lack of bibliometric studies. Thus, the results of the present work will lead researchers to establish future directions in further studies using magnetic graphene for water treatment.

Keywords: Composite, graphene oxide, nanomaterials, scientometrics.

181 A Novel In-Place Sorting Algorithm with O(n log z) Comparisons and O(n log z) Moves

Authors: Hanan Ahmed-Hosni Mahmoud, Nadia Al-Ghreimil

Abstract:

In-place sorting algorithms play an important role in many fields such as very large database systems, data warehouses, data mining, etc. Such algorithms maximize the size of data that can be processed in main memory without input/output operations. In this paper, a novel in-place sorting algorithm is presented. The algorithm comprises two phases: the first rearranges the unsorted input array in place, producing segments that are ordered relative to each other but whose elements are yet to be sorted. The first phase requires linear time, while, in the second phase, the elements of each segment are sorted in place in O(z log z) time, where z is the size of the segment, using O(1) auxiliary storage. For an array of size n, the algorithm performs, in the worst case, O(n log z) element comparisons and O(n log z) element moves. Further, no auxiliary arithmetic operations with indices are required. Besides these theoretical achievements, the algorithm is of practical interest because of its simplicity. Experimental results also show that it outperforms other in-place sorting algorithms. Finally, the analysis of time and space complexity and of the required number of moves is presented, along with the auxiliary storage requirements of the proposed algorithm.

Keywords: Auxiliary storage sorting, in-place sorting, sorting.

180 Forensic Speaker Verification in Noisy Environments by Enhancing the Speech Signal Using an ICA Approach

Authors: Ahmed Kamil Hasan Al-Ali, Bouchra Senadji, Ganesh Naik

Abstract:

We propose a system to address real environmental noise and channel mismatch for forensic speaker verification. This method is based on suppressing various types of real environmental noise using the independent component analysis (ICA) algorithm. The enhanced speech signal is then passed to mel frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated using an Australian forensic voice comparison database, combined with car, street and home noises from QUT-NOISE at a signal-to-noise ratio (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of about 48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noises at -10 dB SNR.
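
Since the system is scored by equal error rate (EER), a small helper like the following can compute it from verification scores; this is a generic sketch with synthetic genuine/impostor scores, not the evaluation code used in the paper.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Compute EER by sweeping a threshold over all observed scores.

    genuine: scores of same-speaker trials; impostor: scores of different-speaker trials."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    eer, best_gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(impostor >= t)   # false acceptance rate
        frr = np.mean(genuine < t)     # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Hypothetical scores for illustration only.
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 500)
impostor = rng.normal(0.0, 1.0, 5000)
print(f"EER = {equal_error_rate(genuine, impostor):.3f}")
```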

Keywords: Noisy forensic speaker verification, ICA algorithm, MFCC, MFCC feature warping.

179 An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining

Authors: J. Alamelu Mangai, Satej Wagle, V. Santhosh Kumar

Abstract:

The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment and monitoring. With the advent of data mining and machine learning techniques, it is possible to automate and/or assist physicians in clinical diagnosis. In this research, a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization and classification. In the classification phase, the performance of the traditional k nearest neighbor (kNN) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
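
As an illustration of the classification step, the sketch below combines per-feature weights with distance-weighted voting in a plain kNN; the feature weights here are hypothetical inputs (in the paper they come from association-rule interestingness measures), and the data are synthetic.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, feature_weights, k=5):
    """kNN with feature weighting and distance-weighted voting.

    Distances use per-feature weights; each neighbor votes with weight 1/(d + eps)."""
    diffs = X_train - x
    dists = np.sqrt(((diffs ** 2) * feature_weights).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    votes = {}
    for idx in nearest:
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + 1.0 / (dists[idx] + 1e-9)
    return max(votes, key=votes.get)

# Synthetic example: 2 classes, 4 features, hypothetical interestingness-based weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
weights = np.array([0.4, 0.4, 0.1, 0.1])
print(weighted_knn_predict(X, y, rng.normal(size=4), weights, k=7))
```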

Keywords: Medical Image Mining, Data Mining, Feature Weighting, Association Rule Mining, k nearest neighbor classifier.

178 Family History of Obesity and Risk of Childhood Overweight and Obesity: A Meta-Analysis

Authors: Martina Kanciruk, Jac W. Andrews, Tyrone Donnon

Abstract:

The purpose of this study was to determine the significance of a family history of obesity for the development of childhood overweight and/or obesity. Accordingly, a systematic literature review of English-language studies published from 1980 to 2012 was conducted using the following databases: MEDLINE, PsychINFO, Cochrane Database of Systematic Reviews, and Dissertation Abstracts International. The following terms were used in the search: pregnancy, overweight, obesity, family history, parents, childhood, risk factors. Eleven studies of family history and obesity conducted in Europe, Asia, North America, and South America met the inclusion criteria. A meta-analysis of these studies indicated that family history of obesity is a significant risk factor for overweight and/or obesity in offspring; that the risk for offspring overweight and/or obesity associated with family history varies depending on the family members included in the analysis; and that when a family history of obesity is present, the offspring are at greater risk of developing obesity or overweight. In addition, the results from moderator analyses suggest that part of the heterogeneity found between the studies can be explained by the region of the world in which the study was conducted and the age of the child at the time of weight assessment.
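
For readers unfamiliar with how such studies are pooled, the following is a generic inverse-variance fixed-effect meta-analysis sketch over log odds ratios; the study values are made up for illustration and are not the eleven studies analyzed in the paper.

```python
import numpy as np

def pooled_odds_ratio(odds_ratios, ci_lower, ci_upper):
    """Fixed-effect (inverse-variance) pooling of study odds ratios.

    Standard errors are recovered from 95% confidence intervals on the log scale."""
    log_or = np.log(odds_ratios)
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
    w = 1.0 / se ** 2                       # inverse-variance weights
    pooled_log = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled_log - 1.96 * pooled_se, pooled_log + 1.96 * pooled_se])
    return np.exp(pooled_log), ci

# Hypothetical study results (OR with 95% CI), for illustration only.
or_hat, ci = pooled_odds_ratio(np.array([2.1, 1.8, 2.5]),
                               np.array([1.4, 1.2, 1.6]),
                               np.array([3.2, 2.7, 3.9]))
print(f"pooled OR = {or_hat:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```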

Keywords: Childhood obesity, overweight, family history, risk factors, meta-analysis.

177 An Improved Fast Video Clip Search Algorithm for Copy Detection using Histogram-based Features

Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we present an improved fast and robust search algorithm for copy detection of short MPEG video clips from a large video database, using histogram-based features. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been reliably applied to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The other is an ordinal histogram feature, which is robust to color distortion. Furthermore, by combining these with a temporal division method, the spatial and temporal features of the video sequence are integrated to realize fast and robust video search for copy detection. Experimental results show that the proposed algorithm can detect similar video clips more accurately and robustly than a conventional fast video search algorithm.
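
A generic sketch of the kind of frame-level histogram matching involved is shown below; it uses plain intensity histograms and histogram intersection between two equal-length clips, which is a simplification of the APIDQ and ordinal features described in the abstract.

```python
import numpy as np

def frame_histogram(frame, bins=32):
    """Normalized intensity histogram of a grayscale frame (2-D uint8 array)."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical histograms."""
    return np.minimum(h1, h2).sum()

def clip_similarity(clip_a, clip_b):
    """Average frame-wise histogram intersection between two equal-length clips."""
    return np.mean([histogram_intersection(frame_histogram(a), frame_histogram(b))
                    for a, b in zip(clip_a, clip_b)])

# Hypothetical clips: lists of grayscale frames; the second is a brightness-shifted copy.
rng = np.random.default_rng(2)
clip = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(8)]
copy = [np.clip(f.astype(int) + 10, 0, 255).astype(np.uint8) for f in clip]
print(f"similarity = {clip_similarity(clip, copy):.3f}")
```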

Keywords: Fast search, Copy detection, Adjacent pixel intensity difference quantization (APIDQ), DC image, Histogram feature.

176 A New Face Detection Technique using 2D DCT and Self Organizing Feature Map

Authors: Abdallah S. Abdallah, A. Lynn Abbott, Mohamad Abou El-Nasr

Abstract:

This paper presents a new technique for detection of human faces within color images. The approach relies on image segmentation based on skin color, features extracted from the two-dimensional discrete cosine transform (DCT), and self-organizing maps (SOM). After candidate skin regions are extracted, feature vectors are constructed using DCT coefficients computed from those regions. A supervised SOM training session is used to cluster feature vectors into groups, and to assign "face" or "non-face" labels to those clusters. Evaluation was performed using a new image database of 286 images, containing 1027 faces. After training, our detection technique achieved a detection rate of 77.94% during subsequent tests, with a false positive rate of 5.14%. To our knowledge, the proposed technique is the first to combine DCT-based feature extraction with a SOM for detecting human faces within color images. It is also one of a few attempts to combine a feature-invariant approach, such as color-based skin segmentation, together with appearance-based face detection. The main advantage of the new technique is its low computational requirements, in terms of both processing speed and memory utilization.
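
To make the feature-construction step concrete, the sketch below computes a low-frequency 2-D DCT feature vector from a candidate skin-region patch; the patch size and the number of retained coefficients are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(patch, keep=8):
    """Feature vector from the top-left (low-frequency) keep x keep block
    of the 2-D DCT of a grayscale image patch."""
    coeffs = dctn(patch.astype(float), norm="ortho")
    return coeffs[:keep, :keep].ravel()

# Hypothetical 32x32 candidate skin region.
rng = np.random.default_rng(3)
patch = rng.integers(0, 256, (32, 32))
vec = dct_features(patch)
print(vec.shape)  # (64,) -> input vector for the SOM clustering stage
```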

Keywords: Face detection, skin color segmentation, self-organizing map.

175 JENOSYS: Application of a Web-Based Online Energy Performance Reporting Tool for Government Buildings in Malaysia

Authors: Norhayati Mat Wajid, Abdul Murad Zainal Abidin, Faiz Fadzil, Mohd Yusof Aizad Mukhtar

Abstract:

One of the areas that present an opportunity to reduce the national carbon emission is the energy management of public buildings. To our present knowledge, there is no easy-to-use and centralized mechanism that enables the government to monitor the overall energy performance, as well as the carbon footprint, of Malaysia's public buildings. Therefore, the Public Works Department Malaysia, or PWD, has developed a web-based energy performance reporting tool called JENOSYS (JKR Energy Online System), which incorporates a database of utility account numbers acquired from the utility service provider for analysis and reporting. For test case purposes, 23 buildings under PWD were selected and monitored for their monthly energy performance (in kWh), carbon emission reduction (in tCO₂eq) and utility cost (in MYR), against the baseline. This paper demonstrates the simplicity with which buildings without energy metering can be monitored centrally and the benefits that can be accrued by the government in terms of building energy disclosure, and concludes with a recommendation to expand the system to all public buildings in Malaysia.

Keywords: Energy-efficient buildings, energy management systems, government buildings, JENOSYS.

174 Elections Management Information Communication System Voter Ballot

Authors: Zaza Tabagari, Zaza Sanikidze, George Giorgobiani

Abstract:

The presented work deals with a new scope of application of information and communication technologies for the improvement of the election process in a biased environment. We introduce a new concept for the construction of an information-communication system for the election participant. It consists of four main components: software, physical infrastructure, structured information and trained staff. The structured information is the basis of the whole system; it is the collection of all possible events (irregularities among them) at the polling stations, which are structured in special templates and forms and integrated in mobile devices. The software represents a package of analytic modules which operate on a dynamic database. The application of modern communication technologies facilitates the immediate exchange of information and of relevant documents between the polling stations and the server of the participant. No less important is the training of the staff for the proper functioning of the system; an e-training system with various modules should be applied in this respect. The presented methodology is primarily focused on election processes in countries of emerging democracies. It can be regarded as a tool for the monitoring of the election process by political organization(s) and as one of the instruments to foster the spread of democracy in these countries.

Keywords: ICT, elections, structured information, dynamic databases, e-training.

173 Developing a Coronavirus Academic Paper Sorting Application

Authors: Christina A. van Hal, Xiaoqian Jiang, Luyao Chen, Yan Chu, Robert D. Jolly, Yaobin Lin, Jitian Zhao, Kang Lin Hsieh

Abstract:

The COVID-19 Literature Summary App, now live on the university website, was created for the primary purpose of enabling academicians and clinicians to quickly sort through the vast array of recent coronavirus publications by topics of interest. Multiple methods of summarizing and sorting the manuscripts were created. A summary page introduces the application's functions and capabilities, while an interactive map provides daily updates on infection, death, and recovery rates. A page with a pivot table allows publication sorting by topic, with an interactive data table that allows sorting topics by columns, as well as the capability to view abstracts. Additionally, publications may be sorted by the medical topics they cover. We used the CORD-19 database to compile lists of publications. The data table can sort binary variables, allowing the user to pick desired publication topics, such as papers that describe COVID-19 symptoms. The application is primarily designed for use by researchers but can be used by anybody who wants a faster and more efficient means of locating papers of interest.

Keywords: COVID-19, literature summary, information retrieval, snorkel.

172 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chang, Chun-Lang, Liu, Chun-Kai

Abstract:

This study uses, as its research subjects, patients who had undergone total knee replacement surgery, drawn from the database of the National Health Insurance Administration. Through the review of the literature and interviews with physicians, important factors are selected after careful screening. Then, using the Cross Entropy Method, Genetic Algorithm Logistic Regression, and Particle Swarm Optimization, the weight of each factor is calculated. In the meantime, Excel VBA and Case Based Reasoning are combined and adopted to evaluate the system. Results show no significant difference between Genetic Algorithm Logistic Regression and Particle Swarm Optimization, with over 97% accuracy for both methods; both ROC areas are above 0.87. This study can provide a critical clinical assessment reference for medical personnel to effectively enhance medical care quality and efficiency, prevent unnecessary waste, and support resource allocation in medical institutions.

Keywords: Total knee replacement, Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization.

171 Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

Authors: V. V. Singh, Yusuf Ibrahim Gwanda, Rajesh Prasad

Abstract:

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one server shared with the client computers in the CC (called the local server). Observing the different possibilities of the functioning of the CC, an analysis has been done to evaluate various popular measures of reliability such as availability, reliability, mean time to failure (MTTF), and the profit generated by the operation of the system. The system can ultimately fail due to the failure of the router, failure of the redundant server before the mail server is repaired, or switch failure. The system can also partially fail when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server, an electricity failure, or some natural calamity like an earthquake, fire, tsunami, etc. All failure rates are assumed to be constant and to follow an exponential time distribution, while the repair follows two types of distributions: general and the Gumbel-Hougaard family copula distribution.

Keywords: Reliability, availability, Gumbel-Hougaard family copula, MTTF, internet data center.

170 Satellite Imagery Classification Based on Deep Convolution Network

Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu

Abstract:

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we designed a deep convolution neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduced the inception module, which has multiple filters with different sizes at the same level, as the building block of our DCNN model. Second, we proposed a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the found hyper-parameters, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
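
The hyper-parameter search can be illustrated with a bare-bones genetic algorithm loop like the one below; the hyper-parameter ranges and the fitness function (here a deterministic stand-in for validation accuracy of a trained network) are hypothetical, since the paper does not publish its exact search space.

```python
import random

# Hypothetical search space: learning rate, batch size, number of inception blocks.
SPACE = {"lr": [1e-4, 3e-4, 1e-3, 3e-3], "batch": [16, 32, 64], "blocks": [2, 3, 4, 5]}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(ind):
    """Stand-in for validation accuracy after training a DCNN with these settings."""
    return 0.9 - abs(ind["lr"] - 1e-3) * 50 + 0.01 * ind["blocks"]

def evolve(pop_size=10, generations=5, mutation_rate=0.2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in SPACE}   # uniform crossover
            if random.random() < mutation_rate:                       # point mutation
                k = random.choice(list(SPACE))
                child[k] = random.choice(SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```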

Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.

169 Comparison of Irradiance Decomposition and Energy Production Methods in a Solar Photovoltaic System

Authors: Tisciane Perpetuo e Oliveira, Dante Inga Narvaez, Marcelo Gradella Villalva

Abstract:

Installations of solar photovoltaic systems have increased considerably in the last decade. Therefore, monitoring of meteorological data (solar irradiance, air temperature, wind velocity, etc.) is important to predict the potential of a given geographical area for solar energy production. In this sense, the present work compares two computational tools that are capable of estimating the energy generation of a photovoltaic system through correlation analyses of solar radiation data: the PVsyst software and an algorithm based on the PVlib package implemented in MATLAB. In order to achieve this objective, it was necessary to obtain solar radiation data (measured and from a solarimetric database), analyze the decomposition of global solar irradiance into direct normal and horizontal diffuse components, and analyze the modeling of the devices of a photovoltaic system (solar modules and inverters) for energy production calculations. Simulated results were compared with experimental data in order to evaluate the performance of the studied methods. Errors in the estimation of energy production were less than 30% for the MATLAB algorithm and less than 20% for the PVsyst software.
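
As an example of the irradiance decomposition step, the sketch below applies the Erbs correlation to split global horizontal irradiance (GHI) into diffuse and direct components; it is a generic textbook correlation, not necessarily the exact decomposition model used by PVsyst or PVlib, and the sample inputs are made up.

```python
import numpy as np

def erbs_decomposition(ghi, zenith_deg, extra_rad=1367.0):
    """Split GHI [W/m^2] into diffuse horizontal (DHI) and direct normal (DNI)
    irradiance using the Erbs diffuse-fraction correlation."""
    cos_z = np.cos(np.radians(zenith_deg))
    kt = np.clip(ghi / (extra_rad * np.maximum(cos_z, 0.065)), 0, 1)  # clearness index
    df = np.where(kt <= 0.22, 1 - 0.09 * kt,
         np.where(kt <= 0.80,
                  0.9511 - 0.1604 * kt + 4.388 * kt**2 - 16.638 * kt**3 + 12.336 * kt**4,
                  0.165))
    dhi = df * ghi
    dni = (ghi - dhi) / np.maximum(cos_z, 0.065)
    return dhi, dni

# Hypothetical measurements: a midday and a late-afternoon sample.
dhi, dni = erbs_decomposition(np.array([800.0, 300.0]), np.array([30.0, 70.0]))
print(dhi.round(1), dni.round(1))
```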

Keywords: Energy production, meteorological data, irradiance decomposition, solar photovoltaic system.

168 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered a promising phase-changing secondary fluid for air-conditioning, packaging or cooling industrial processes. An experimental study has been carried out here to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase; the latter is composed of a mixture of liquid water and an additive, here either (1) propylene-glycol (PG) or (2) ethylene-glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14% and 24% of both additives are investigated with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 vane-concentric cylinder with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an artificial neural network model.
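
To show how the consistency and flow index are typically extracted, the following is a minimal sketch fitting the Herschel-Bulkley model tau = tau0 + K * gamma_dot^n to synthetic shear data with SciPy; the data points are invented and the fitting choices (initial guesses, bounds) are illustrative, not those of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress [Pa] as a function of shear rate [1/s]."""
    return tau0 + K * gamma_dot ** n

# Synthetic rheometer data: a shear-thinning fluid (n < 1) with a yield stress.
gamma_dot = np.linspace(1, 200, 30)
tau_true = herschel_bulkley(gamma_dot, tau0=5.0, K=2.0, n=0.6)
tau_meas = tau_true + np.random.default_rng(4).normal(0, 0.3, gamma_dot.size)

popt, _ = curve_fit(herschel_bulkley, gamma_dot, tau_meas,
                    p0=[1.0, 1.0, 1.0], bounds=(0, np.inf))
tau0, K, n = popt
print(f"yield stress = {tau0:.2f} Pa, consistency K = {K:.2f}, flow index n = {n:.2f}")
```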

Keywords: Ice slurry, propylene-glycol, ethylene-glycol, rheology, artificial neural network.

167 Grouping and Indexing Color Features for Efficient Image Retrieval

Authors: M. V. Sudhamani, C. R. Venugopal

Abstract:

Content-based image retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image, based on matching features derived from the image content. This paper focuses on a low-dimensional color-based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. The cluster (region) mode is then used as the representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on an R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance, and only the representative (centroid) features of these clusters are indexed using the R*-tree, thus improving the efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java-based query engine supporting query-by-example is built to retrieve images by color.
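
The color-feature extraction step can be sketched as below with scikit-learn's MeanShift: cluster the pixels of an image in RGB space and keep the cluster modes as low-dimensional representative colors. The R*-tree indexing itself is omitted, the image data are synthetic, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import MeanShift

def representative_colors(image, bandwidth=25.0):
    """Cluster pixels in RGB space with mean shift and return the cluster modes.

    image: array of shape (H, W, 3) with values in [0, 255]."""
    pixels = image.reshape(-1, 3).astype(float)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    ms.fit(pixels)
    return ms.cluster_centers_            # one representative color per region

def nearest_image(query_color, database_colors):
    """Index of the database entry whose representative color is closest (Euclidean)."""
    dists = [np.min(np.linalg.norm(colors - query_color, axis=1)) for colors in database_colors]
    return int(np.argmin(dists))

# Hypothetical tiny image made of two dominant colors.
rng = np.random.default_rng(5)
img = np.where(rng.random((16, 16, 1)) < 0.5, [200, 30, 30], [20, 20, 180]).astype(float)
print(representative_colors(img).round(0))
```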

Keywords: Content-based, indexing, cluster, region.

166 Speaker Identification Using Admissible Wavelet Packet Based Decomposition

Authors: Mangesh S. Deshpande, Raghunath S. Holambe

Abstract:

Mel Frequency Cepstral Coefficient (MFCC) features are widely used as acoustic features for speech recognition as well as speaker recognition. In the MFCC feature representation, the Mel frequency scale is used to get a high resolution in the low frequency region and a low resolution in the high frequency region. This kind of processing is good for obtaining stable phonetic information, but is not suitable for speaker features that are located in high frequency regions. The speaker-individual information, which is non-uniformly distributed in the high frequencies, is equally important for speaker recognition. Based on this fact, we propose an admissible wavelet packet based filter structure for speaker identification. The multiresolution capabilities of the wavelet packet transform are used to derive the new features. The proposed scheme differs from previous wavelet based works mainly in the design of the filter structure: unlike others, the proposed filter structure does not follow the Mel scale. Closed-set speaker identification experiments performed on the TIMIT database show improved identification performance compared to other commonly used Mel scale based filter structures using wavelets.

Keywords: Speaker identification, Wavelet transform, Feature extraction, MFCC, GMM.

165 A New Approach to Face Recognition Using Dual Dimension Reduction

Authors: M. Almas Anjum, M. Younus Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient with better recognition results and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with increasing resolution up to a certain extent; consequently, face recognition results change with the face image resolution and are optimal at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level which provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A tradeoff between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL, Yale and EME color databases.

Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.

164 Searching for Forensic Evidence in a Compromised Virtual Web Server against SQL Injection Attacks and PHP Web Shell

Authors: Gigih Supriyatno

Abstract:

SQL injection is one of the most common types of attacks and has a very critical impact on web servers. In the worst case, an attacker can perform post-exploitation after a successful SQL injection attack. In web server forensics, web server analysis is closely related to log file analysis, but large file sizes and different log types sometimes make it difficult for investigators to look for traces of attackers on the server. The purpose of this paper is to help investigators take appropriate steps when a web server is attacked. We use attack scenarios based on SQL injection attacks, including PHP backdoor injection as post-exploitation. We perform post-mortem analysis of web server logs based on Hypertext Transfer Protocol (HTTP) POST and HTTP GET method approaches that are characteristic of SQL injection attacks. In addition, we also propose a structured analysis method that correlates the web server application log file, the database application log, and other additional logs that exist on the web server. This method gives the investigator a more structured way to analyze the log files so as to produce evidence of the attack within an acceptable time. There is also the possibility that other attack techniques can be detected with this method. On the other side, it can help web administrators prepare their systems for forensic readiness.
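
A simplified version of the log-scanning idea is sketched below: a regular-expression pass over an Apache/Nginx-style access log flags requests whose query strings contain common SQL injection signatures. The patterns and log lines are illustrative examples only, not the paper's full structured method (and POST bodies, which the paper also analyzes, do not appear in standard access logs).

```python
import re

# Common SQL injection fragments seen in URL query strings (illustrative, not exhaustive).
SQLI_PATTERNS = [
    r"union(\s|%20)+select", r"or(\s|%20)+1=1", r"information_schema",
    r"sleep\(\d+\)", r"('|%27)(\s|%20)*--",
]
SQLI_RE = re.compile("|".join(SQLI_PATTERNS), re.IGNORECASE)

def suspicious_requests(log_lines):
    """Yield (client_ip, request) for access-log lines whose request matches a signature."""
    line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "([^"]*)"')
    for line in log_lines:
        m = line_re.match(line)
        if m and SQLI_RE.search(m.group(2)):
            yield m.group(1), m.group(2)

# Hypothetical access-log excerpt.
log = [
    '10.0.0.5 - - [12/Mar/2019:10:01:44 +0000] "GET /item.php?id=3 HTTP/1.1" 200 512',
    '10.0.0.9 - - [12/Mar/2019:10:02:10 +0000] "GET /item.php?id=3%20UNION%20SELECT%20user,pass%20FROM%20users HTTP/1.1" 200 1024',
]
for ip, req in suspicious_requests(log):
    print(ip, "->", req)
```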

Keywords: Web forensic, SQL injection, web shell, investigation.

163 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018

Authors: M. Sitoe, O. Zacarias

Abstract:

University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the students' own academic performance. This work uses data mining techniques to develop a predictive model to identify students with a tendency toward evasion (dropout) and retention. To this end, a database of real students' data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely: k-nearest neighbor, random forest, and logistic regression. To allow training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of Bagging and Stacking were used. After comparing the results obtained by the three classifiers, Logistic Regression using Bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
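
The winning configuration (bagged logistic regression evaluated with cross-validation) can be sketched as follows with scikit-learn; the synthetic data, the number of bootstrap models, and the single accuracy metric are placeholders for the Weka setup actually used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

def bagged_logreg_cv(X, y, n_models=10, n_folds=7, seed=0):
    """Cross-validated accuracy of a simple bagging ensemble of logistic regressions."""
    rng = np.random.default_rng(seed)
    accuracies = []
    splitter = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, test_idx in splitter.split(X, y):
        votes = np.zeros(len(test_idx))
        for _ in range(n_models):
            boot = rng.integers(0, len(train_idx), len(train_idx))   # bootstrap sample
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[train_idx][boot], y[train_idx][boot])
            votes += clf.predict(X[test_idx])
        y_pred = (votes / n_models >= 0.5).astype(int)               # majority vote
        accuracies.append(np.mean(y_pred == y[test_idx]))
    return float(np.mean(accuracies))

# Synthetic stand-in for the student records (binary outcome: retained vs. dropped out).
rng = np.random.default_rng(1)
X = rng.normal(size=(388, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 388) > 0).astype(int)
print(f"mean CV accuracy = {bagged_logreg_cv(X, y):.3f}")
```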

Keywords: Evasion and retention, cross validation, bagging, stacking.

162 On the Computation of a Common n-finger Robotic Grasp for a Set of Objects

Authors: Avishai Sintov, Roland Menassa, Amir Shapiro

Abstract:

Industrial robotic arms utilize multiple end-effectors, each for a specific part and for a specific task. We propose a novel algorithm that defines a single end-effector configuration able to grasp a given set of objects with different geometries. The algorithm will be of great benefit in production lines, allowing a single robot to grasp various parts and hence reducing the number of end-effectors needed. Moreover, the algorithm will reduce end-effector design and manufacturing time and final product cost. The algorithm searches for a common grasp over the set of objects. It maps all possible grasps for each object that satisfy a quality criterion and takes into account possible external wrenches (forces and torques) applied to the object. The mapped grasps are represented by high-dimensional feature vectors which describe the shape of the gripper. We generate a database of all possible grasps for each object in the feature space, and then use a search and classification algorithm to intersect all possible grasps over all parts and find a single common grasp suitable for all objects. We present simulations of planar and spatial objects to validate the feasibility of the approach.
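
The core intersection idea can be illustrated very simply: if each object's feasible grasps are discretized into feature vectors, a common grasp is any vector present in every object's grasp set. The sketch below does this with set intersection over quantized vectors; the discretization step and the random grasp databases are assumptions made purely for illustration, not the paper's search and classification algorithm.

```python
import numpy as np

def quantize(grasps, step=0.05):
    """Discretize continuous grasp feature vectors so they can be compared exactly."""
    return {tuple(np.round(g / step).astype(int)) for g in grasps}

def common_grasps(grasp_sets, step=0.05):
    """Return quantized grasp feature vectors that appear in every object's grasp set."""
    quantized = [quantize(g, step) for g in grasp_sets]
    common = set.intersection(*quantized)
    return [np.array(c) * step for c in common]

# Hypothetical feasible-grasp databases for three objects (4-D feature vectors):
# each object shares 20 identical feasible grasps plus 200 of its own.
rng = np.random.default_rng(6)
shared = rng.random((20, 4))
sets = [np.vstack([shared, rng.random((200, 4))]) for _ in range(3)]
print(f"{len(common_grasps(sets))} common grasp candidates found")
```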

Keywords: Common Grasping, Search Algorithm, Robotic End-Effector.

161 Classifying Biomedical Text Abstracts based on Hierarchical 'Concept' Structure

Authors: Rozilawati Binti Dollah, Masaki Aono

Abstract:

Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles should be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. Then, we enrich the OHSUMED disease hierarchy before adapting it to the ontology alignment process for finding probable concepts or categories. Subsequently, we compute the cosine similarity between the vectors in the probable concepts (in the "enriched" OHSUMED disease hierarchy) and the vectors in the Medline abstract disease hierarchies. Finally, we assign a category to the new Medline abstracts based on the similarity score. The results obtained from the experiments show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
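
The final assignment step, choosing the category whose concept vector is most similar to an abstract's vector, reduces to a cosine-similarity argmax. The sketch below uses toy term-frequency vectors; the vocabulary and category names are invented, and the vector construction (weighting, hierarchy enrichment) is omitted.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def assign_category(abstract_vec, concept_vectors):
    """Return the category whose concept vector has the highest cosine similarity."""
    return max(concept_vectors, key=lambda c: cosine(abstract_vec, concept_vectors[c]))

# Toy vectors over a hypothetical 5-term vocabulary.
concepts = {
    "cardiovascular diseases": np.array([3.0, 0.0, 1.0, 0.0, 0.0]),
    "neoplasms":               np.array([0.0, 4.0, 0.0, 2.0, 0.0]),
}
new_abstract = np.array([2.0, 0.0, 1.0, 0.0, 1.0])
print(assign_category(new_abstract, concepts))
```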

Keywords: Biomedical literature, hierarchical text classification, ontology alignment, text mining.

160 Application of Geographic Information Systems (GIS) in the History of Cartography

Authors: Bangbo Hu

Abstract:

This paper discusses applications of a revolutionary information technology, Geographic Information Systems (GIS), in the field of the history of cartography, by means of examples including assessing the accuracy of early maps, establishing databases of places and historical administrative units, integrating early maps into GIS or digital images, and analyzing social, political, and economic information related to the production of early maps. GIS provides a new means to evaluate the accuracy of early maps; four basic steps using GIS for this type of study are discussed. In addition, several historical geographical information systems are introduced, including the China Historical Geographic Information System (CHGIS), the United States National Historical Geographic Information System (NHGIS), and the Great Britain Historical Geographical Information System. GIS also provides digital means to display and analyze the spatial information on early maps or to layer them with modern spatial data. How the GIS relational data structure may be used to analyze social, political, and economic information related to the production of early maps is also discussed. Through the discussion of these examples, this paper reveals the value of GIS applications in this field.

Keywords: Cartography, GIS, history, maps.

159 An Approach for Data Analysis, Evaluation and Correction: A Case Study from Man-Made River Project in Libya

Authors: Nasser M. Amaitik, Nabil A. Alfagi

Abstract:

The world's largest Pre-stressed Concrete Cylinder Pipe (PCCP) water supply project had a series of pipe failures which occurred between 1999 and 2001. This led the Man-Made River Authority (MMRA), the authority in charge of the implementation and operation of the project, to set up a rehabilitation plan for the conveyance system while maintaining the uninterrupted flow of water to consumers. At the same time, MMRA recognized the need for a long term management tool that would facilitate repair and maintenance decisions and enable taking the appropriate preventive measures through continuous monitoring and estimation of the remaining life of each pipe. This management tool is known as the Pipe Risk Management System (PRMS) and is now in operation at MMRA. Both the rehabilitation plan and the PRMS require the availability of complete and accurate pipe construction and manufacturing data. This paper describes a systematic approach to data collection, analysis, evaluation and correction for the construction and manufacturing data files of phase I pipes, which are the platform for the PRMS database and any other related decision support system.

Keywords: As-built, history, IMD, MMRA, PDBMS, PRMS.

158 When Construction Material Traders Go Electronic: Analysis of SMEs in the Malaysian Construction Industry

Authors: Dzul Fahmi Nordin, Rosmini Omar

Abstract:

This paper analyzes the perception of e-commerce application services by construction material traders in Malaysia. Five attributes were tested: usability, reputation, trust, privacy and familiarity. The study methodology consists of a survey questionnaire and statistical analysis that includes reliability analysis, factor analysis, ANOVA and regression analysis. The respondents were construction material traders, including hardware stores, in Klang Valley, Kuala Lumpur. Findings support that usability and familiarity with e-commerce services in Malaysia have an insignificant influence on the acceptance of e-commerce applications, whereas the reputation, trust and privacy attributes have a significant influence on e-commerce acceptance by construction material traders. The e-commerce applications studied include customer databases, e-selling, e-marketing, e-payment, e-buying and online advertising. It is assumed that traders have basic knowledge of and exposure to ICT services, i.e., internet service and computers. The study concludes that reputation, privacy and trust are the three website attributes that influence the acceptance of e-commerce by construction material traders.

Keywords: Electronic Commerce (e-Commerce), Information and Communications Technology (ICT), Small and Medium Enterprise (SME).

157 Centralized Monitoring and Self-Protection against Fiber Fault in FTTH Access Network

Authors: Mohammad Syuhaimi Ab-Rahman, Boonchuan Ng, Kasmiran Jumari

Abstract:

This paper presents a new approach for centralized monitoring and self-protection against fiber faults in fiber-to-the-home (FTTH) access networks, using Smart Access Network Testing, Analyzing and Database (SANTAD). SANTAD is installed with the optical line terminal (OLT) at the central office (CO) for in-service transmission surveillance and fiber fault localization within FTTH with a point-to-multipoint (P2MP) configuration, downstream from the CO towards customer residential locations, based on the graphical user interface (GUI) processing capabilities of MATLAB software. SANTAD is able to detect any fiber fault as well as identify the failure location in the network system. SANTAD enables the status of each connected optical network unit (ONU) line to be displayed on one screen, with the capability to configure the attenuation and detect failures simultaneously. The analysis results and information are delivered to the field engineers for prompt action, while the failed line is diverted to a protection line to ensure that traffic flows continuously. This approach has a bright prospect of improving survivability and reliability as well as increasing the efficiency and monitoring capabilities of FTTH.

Keywords: Fiber fault, FTTH, SANTAD, transmission surveillance, MATLAB.

156 Enhanced Clustering Analysis and Visualization Using Kohonen's Self-Organizing Feature Map Networks

Authors: Kasthurirangan Gopalakrishnan, Siddhartha Khaitan, Anshu Manik

Abstract:

Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g. individuals, quadrats, species, etc.). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as a classification tool to various problem domains, including speech recognition, image data compression, image or character recognition, robot control and medical diagnosis, their potential as a robust substitute for cluster analysis remains relatively unresearched. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid, and provide a powerful tool for data visualization. In this paper, SOM is used to create a toroidal mapping of a two-dimensional lattice to perform cluster analysis on the results of a chemical analysis of wines produced in the same region in Italy but derived from three different cultivars, referred to as the "wine recognition data" located in the University of California-Irvine database. The results are encouraging, and it is believed that SOM would make an appealing and powerful decision-support tool for clustering tasks and for data visualization.
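
A compact illustration of the competitive-learning update at the heart of a SOM is given below; it trains a small rectangular (non-toroidal) map on synthetic data with plain NumPy, so it is a didactic sketch rather than the toroidal SOFM configuration or wine data used in the paper.

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal 2-D SOM with a Gaussian neighborhood and decaying rates."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # learning-rate decay
        sigma = sigma0 * np.exp(-t / epochs)    # neighborhood shrinkage
        for x in data[rng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)        # best matching unit
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)         # pull neighborhood toward x
    return weights

# Synthetic 3-cluster data standing in for the 13-feature wine measurements.
rng = np.random.default_rng(7)
data = np.vstack([rng.normal(c, 0.1, (30, 3)) for c in ([0, 0, 0], [1, 1, 0], [0, 1, 1])])
som = train_som(data)
print(som.shape)  # (6, 6, 3): one prototype vector per map node
```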

Keywords: Artificial neural networks, cluster analysis, Kohonen maps, wine recognition.
