Search results for: Relational Databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 359

59 Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images

Authors: Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, Goutam Kumar Ghorai, Gautam Sarkar, Ashis K. Dhara

Abstract:

Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. The paper presents an accurate and fast optic disc detection and segmentation method using an attention-based fully convolutional network. The network is trained from scratch on fundus images from the extended MESSIDOR database, and the trained model is used for segmentation of the optic disc. False positives are removed based on morphological operations and shape features. The results are evaluated using three-fold cross-validation on six public fundus image databases: DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention-based fully convolutional network is robust and effective for detection and segmentation of the optic disc in images affected by diabetic retinopathy, and it outperforms existing techniques.
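
A rough sketch of the attention mechanism named here may help readers: the block below (assuming PyTorch; the additive-gate form and the layer sizes are illustrative, not taken from the paper) shows how an attention gate can re-weight encoder features so the network focuses on the optic disc region.

```python
# A minimal attention-gate sketch (assuming PyTorch); sizes are illustrative.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, kernel_size=1)  # encoder (skip) features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)  # gating signal from decoder
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)        # per-pixel attention score

    def forward(self, x, g):
        # x and g are assumed to share spatial dimensions here
        att = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * att  # suppress background, emphasize the optic disc area

x = torch.randn(1, 64, 32, 32)
g = torch.randn(1, 128, 32, 32)
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```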

Keywords: Ocular diseases, retinal fundus image, optic disc detection and segmentation, fully convolutional network, overlap measure.

58 Riemannian Manifolds for Brain Extraction on Multi-modal Magnetic Resonance Images

Authors: Mohamed Gouskir, Belaid Bouikhalene, Hicham Aissaoui, Benachir Elhadadi

Abstract:

In this paper, we present an application of Riemannian geometry to the processing of non-Euclidean image data. We consider the image as residing in a Riemannian manifold in order to develop a new method for brain edge detection and brain extraction. Automating this process is a challenge due to the high diversity in the appearance of brain tissue among different patients and sequences. The main contribution of this paper is the use of an edge-based anisotropic diffusion tensor for the segmentation task, integrating both image edge geometry and the Riemannian manifold (geodesics, metric tensor) to regularize the converging contour and extract complex anatomical structures. We check the accuracy of the segmentation results on simulated brain MRI scans of single T1-weighted, T2-weighted and Proton Density sequences. We validate our approach using two different databases: the BrainWeb database and the MRI Multiple Sclerosis Database (MRI MS DB). We have compared our approach, qualitatively and quantitatively, with well-known brain extraction algorithms. We show that applying Riemannian manifolds to medical image analysis improves brain extraction results, in real time, outperforming standard techniques.
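
For readers unfamiliar with the image-as-manifold view, one standard construction (the Beltrami framework; not necessarily the exact metric used by the authors) treats a gray-level image I(x, y) as an embedded surface whose induced metric penalizes paths that cross strong edges:

```latex
% Induced metric and geodesic length under a Beltrami-style embedding
g_{ij} \;=\; \delta_{ij} + \beta^{2}\,\partial_i I\,\partial_j I ,
\qquad
L(\gamma) \;=\; \int_{0}^{1} \sqrt{\dot{\gamma}(t)^{\top} g\, \dot{\gamma}(t)}\, dt
```

Since g grows with |∇I|², geodesic length penalizes contours crossing edges, which is the sense in which the manifold regularizes the converging contour.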

Keywords: Riemannian manifolds, Riemannian Tensor, Brain Segmentation, Non-Euclidean data, Brain Extraction.

57 A Medical Image-Based Retrieval System Using Soft Computing Techniques

Authors: Pardeep Singh, Sanjay Sharma

Abstract:

Content-Based Image Retrieval (CBIR) has been one of the most active research areas in the field of computer vision over the last 10 years. Many programs and tools have been developed to formulate and execute queries based on visual or audio content and to help browse large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretation remain unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. Several articles have proposed content-based access to medical images to support clinical decision making, which would ease the management of clinical data, and scenarios have been created for the integration of content-based access methods into Picture Archiving and Communication Systems (PACS). This paper gives an overview of soft computing techniques, and new research directions are being defined that can prove to be useful. Still, there are very few systems that seem to be used in clinical practice. It should be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment.

Keywords: CBIR, GA, Rough sets, CBMIR

56 Optic Disc Detection by Earth Mover's Distance Template Matching

Authors: Fernando C. Monteiro, Vasco Cadavez

Abstract:

This paper presents a method for the detection of the optic disc (OD) in the retina which takes advantage of powerful preprocessing techniques such as contrast enhancement, the Gabor wavelet transform for vessel segmentation, and mathematical morphology, with the Earth Mover's distance (EMD) as the matching process. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. The vessel segmentation method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and 2D Gabor wavelet transform responses taken at multiple scales. A simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity using the EMD. The minimum distance provides an estimate of the OD center coordinates. The method's performance is evaluated on the publicly available DRIVE and STARE databases. On the DRIVE database the OD center was detected correctly in all of the 40 images (100%), and on the STARE database the OD was detected correctly in 76 out of the 81 images, even in rather difficult pathological situations.
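
As a concrete illustration of the matching step, the sketch below (with a hypothetical direction histogram, not the paper's template) uses SciPy's one-dimensional Earth Mover's distance to compare a candidate patch's vessel-direction histogram with the expected OD pattern.

```python
# EMD matching sketch; the template values are made up for illustration.
import numpy as np
from scipy.stats import wasserstein_distance

bins = np.linspace(0.0, np.pi, 9)[:-1]  # 8 direction bins in [0, pi)
template = np.array([.30, .20, .05, .05, .05, .05, .10, .20])  # expected OD vessel pattern

def emd_to_template(patch_hist):
    # The candidate patch minimizing this distance estimates the OD center.
    return wasserstein_distance(bins, bins, patch_hist, template)

candidate = np.array([.28, .22, .06, .04, .06, .04, .12, .18])
print(emd_to_template(candidate))
```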

Keywords: Diabetic retinopathy, Earth Mover's distance, Gabor wavelets, optic disc detection, retinal images

55 Extraction of Craniofacial Landmarks for Preoperative to Intraoperative Registration

Authors: M. Gooroochurn, D. Kerr, K. Bouazza-Marouf, M. Vloeberghs

Abstract:

This paper presents the automated methods employed for extracting craniofacial landmarks in white light images as part of a registration framework designed to support three neurosurgical procedures. The intraoperative space is characterised by white light stereo imaging, while the preoperative plan is performed on CT scans. The registration aims at aligning these two modalities to provide a calibrated environment for image-guided solutions. The neurosurgical procedures can then be carried out by mapping the entry and target points from CT space onto the patient's space. The registration basis adopted consists of natural landmarks (eye corner and ear tragus). An accuracy of 5 mm is deemed sufficient for these three procedures, and the validity of the selected registration basis in achieving this accuracy has been assessed by simulation studies. The registration protocol is briefly described, followed by a presentation of the automated techniques developed for the extraction of the craniofacial features and results obtained from tests on the AR and FERET databases. Since the three targeted neurosurgical procedures are routinely used for head injury management, the effect of bruised or swollen faces on the automated algorithms is assessed. A user-interactive method is proposed to deal with such unpredictable circumstances.

Keywords: Face Processing, Craniofacial Feature Extraction, Preoperative to Intraoperative Registration, Registration Basis.

54 The Use and Management of Knowledge Management and Information Technologies in the Competitive Strategy of the Automotive Industry

Authors: Guerrero Ramírez Sandra, Ramos Salinas Norma Maricela, Muriel Amezcua Vanesa

Abstract:

This article presents the beginning of a wider study that intends to demonstrate how, within organizations of the automotive industry in the city of Querétaro, knowledge management and technological management are required, together with people's initiative and the interaction embedded within the organization, in an appropriate environment that facilitates the conversion of information across a wide range of information technology management (ITM). A company was identified for the pilot study of this research, from which descriptive and inferential research information was obtained. The results of the pilot suggest that some respondents did not identify with the knowledge management topic even though staff have access to information technology (IT) that serves to enhance access to knowledge (through the internet, email, databases, and data from external and internal company personnel, suppliers, customers and competitors); this indicates that there are Knowledge Management (KM) problems. The data show that academically well-prepared organizations often do not recognize the importance of knowledge in the business, nor how to implement it, which ultimately has a great influence on how it is managed; this should guide the company toward greater insight in its search for a competitive strategy, given that the company has an excellent technological infrastructure and yet KM was not exploited. Cultural diversity is another factor that was observed among the staff.

Keywords: Knowledge management, technological knowledge management, technology information management.

53 M2LGP: Mining Multiple Level Gradual Patterns

Authors: Yogi Satrya Aryadinata, Anne Laurent, Michel Sala

Abstract:

Gradual patterns have been studied for many years as they contain precious information. They have been integrated in many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been considered as a rule: “the greater the number of turns → the greater the number of car crashes”. Historically, work has thus been focused on the representation of such rules, studying how implication could be defined, especially fuzzy implication. These rules were defined by experts who were in charge of describing the systems they were working on in order to make them operate automatically. More recently, approaches have been proposed for mining databases to automatically discover such knowledge. Several approaches have been studied, the main scientific topics being: how to determine what constitutes a relevant gradual pattern, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases, end-users are not interested in raw low-level knowledge, and are rather interested in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches in order to consider multiple-level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
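
A minimal sketch of how the support of such a pattern can be scored (my own illustrative formulation; the paper's exact definition may differ): count the fraction of object pairs ordered concordantly on both attributes.

```python
# Support of "the greater A, the greater B" as the share of concordant pairs.
from itertools import combinations

def gradual_support(rows, a, b):
    pairs = list(combinations(range(len(rows)), 2))
    concordant = sum(1 for i, j in pairs
                     if (rows[i][a] - rows[j][a]) * (rows[i][b] - rows[j][b]) > 0)
    return concordant / len(pairs)

data = [{"turns": 2, "crashes": 5}, {"turns": 4, "crashes": 9},
        {"turns": 7, "crashes": 12}, {"turns": 9, "crashes": 11}]
print(gradual_support(data, "turns", "crashes"))  # 5 of 6 pairs are concordant
```

A multiple-level variant would first aggregate rows to a coarser granularity (e.g. from city to county) before scoring the same way.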

Keywords: Gradual Pattern.

52 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve addressing due to their direct impact on the I/O cost. In particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and on the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We reiterated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on the comparison of our strategy with well-known page-replacement techniques, like LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium size buffers under all used query distributions.
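
MONPAR's own policy is not spelled out in this abstract, but the experimental harness it is compared against can be sketched: below is a minimal LRU page buffer that counts disk accesses (misses), the metric used in the simulations.

```python
# Minimal LRU baseline of the kind MONPAR is compared with (illustrative only).
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity, self.pages = capacity, OrderedDict()
        self.disk_accesses = 0

    def request(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)  # hit: refresh recency
            return
        self.disk_accesses += 1              # miss: fetch page from disk
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict the least recently used page
        self.pages[page_id] = True

buf = LRUBuffer(capacity=3)
for page in [1, 2, 3, 1, 4, 2, 5, 1]:        # a toy range-query access trace
    buf.request(page)
print(buf.disk_accesses)
```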

Keywords: Buffer Management, Spatiotemporal databases.

51 GIS-based Non-point Sources of Pollution Simulation in Cameron Highlands, Malaysia

Authors: M. Eisakhani, A. Pauzi, O. Karim, A. Malakahmad, S.R. Mohamed Kutty, M. H. Isa

Abstract:

Cameron Highlands is a mountainous area subject to torrential tropical showers. It extracts 5.8 million liters of water per day for drinking supply from its rivers at several intake points. The water quality of rivers in Cameron Highlands, however, has deteriorated significantly due to land clearing for agriculture, excessive use of pesticides and fertilizers, and construction activities in rapidly developing urban areas. These pollution sources, known as non-point pollution sources, are diverse and hard to identify, and therefore they are difficult to estimate. Hence, Geographical Information Systems (GIS) were used to provide an extensive approach to evaluating land use and other mapping characteristics in order to explain the spatial distribution of non-point sources of contamination in Cameron Highlands. The method to assess pollution sources was developed by using the Cameron Highlands Master Plan (2006-2010), integrating GIS, databases, and pollution loads in the study area. The results show that the highest annual runoff is created by forest, 3.56 × 10⁸ m³/yr, followed by urban development, 1.46 × 10⁸ m³/yr. Furthermore, urban development causes the highest BOD load (1.31 × 10⁶ kg BOD/yr), while agricultural activities and forest contribute the highest annual loads of phosphorus (6.91 × 10⁴ kg P/yr) and nitrogen (2.50 × 10⁵ kg N/yr), respectively. Therefore, best management practices (BMPs) are suggested to reduce the pollution level in the area.
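
Load figures of this kind typically come from export-coefficient arithmetic: annual load is the sum over land uses of area times a per-area coefficient. A minimal sketch with hypothetical areas and coefficients (not the study's values):

```python
# Export-coefficient load estimate; all numbers below are assumptions.
areas_km2 = {"forest": 500.0, "agriculture": 120.0, "urban": 40.0}
bod_coeff_kg_per_km2_yr = {"forest": 200.0, "agriculture": 1500.0, "urban": 9000.0}

bod_load = sum(areas_km2[lu] * bod_coeff_kg_per_km2_yr[lu] for lu in areas_km2)
print(f"annual BOD load ~ {bod_load:.3e} kg/yr")
```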

Keywords: Cameron Highlands, Land use, Non-point Sources of Pollution

50 Speaker Identification using Neural Networks

Authors: R. V. Pawar, P. P. Kajave, S. N. Mali

Abstract:

The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions and information retrieval from speech databases, the utility of automatically identifying a speaker based solely on vocal characteristics grows. This paper emphasizes text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance and identifies the user by comparing the codebook of the utterance with those stored in the database, listing the most likely speakers who could have given that utterance. The speech signal is recorded for N speakers, and features are then extracted. Feature extraction is done by means of LPC coefficients, calculation of the AMDF, and the DFT. The neural network is trained by applying these features as input parameters, and the features are stored in templates for further comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the backpropagation algorithm. Here the trained network provides the output, with the extracted features of the speaker to be identified as the input. The network adjusts its weights, and the best match is found to identify the speaker. The number of epochs required to reach the target determines the network performance.
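
The AMDF feature mentioned above is easy to state concretely; a minimal sketch (the test tone and lag range are illustrative) is shown below. The function dips at lags equal to multiples of the pitch period.

```python
# Average Magnitude Difference Function (AMDF) sketch.
import numpy as np

def amdf(x, max_lag):
    n = len(x)
    return np.array([np.mean(np.abs(x[:n - k] - x[k:])) for k in range(1, max_lag + 1)])

fs = 8000
t = np.arange(0, 0.03, 1 / fs)
x = np.sin(2 * np.pi * 200 * t)  # 200 Hz test tone
d = amdf(x, max_lag=80)
print(np.argmin(d) + 1)          # ~40 samples, i.e. fs / 200
```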

Keywords: Average Mean Distance Function, Backpropagation, Linear Predictive Coding, Multilayered Perceptron.

49 Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition

Authors: Hariton N. Costin, Iulian Ciocoiu, Tudor Barbu, Cristian Rotariu

Abstract:

In this paper three different approaches for person verification and identification, i.e. by means of fingerprint, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach. The assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Component Analysis (ICA); c) NMF with sparseness constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching, through image segmentation using a structural approach and a neural network as the decision block. As to voice / speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods, in order to construct a supervised speaker-dependent voice recognition system. The final decision (e.g. "accept/reject" for a verification task) is taken by using a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the utilization of our system for a future complex biometric card.
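
The fusion rule is simple enough to state in a few lines; a minimal sketch of the majority vote over the three modalities for a verification task:

```python
# Majority-voting fusion of three per-modality accept/reject decisions.
def majority_vote(face_ok: bool, finger_ok: bool, voice_ok: bool) -> str:
    return "accept" if sum([face_ok, finger_ok, voice_ok]) >= 2 else "reject"

print(majority_vote(True, False, True))  # accept: two of three modalities agree
```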

Keywords: Biometry, image processing, pattern recognition, speech analysis.

48 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT) is then applied to the face image, owing to its computational speed and feature extraction potential. A subset of DCT coefficients, from low to mid frequencies, that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, including the ORL database, the Yale database, and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) appropriate DCT coefficients are retained to achieve the best recognition results under varying image poses, intensity, and illumination levels.
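
A minimal sketch of the double reduction (sizes are illustrative, not the paper's tuned values): decimate the face image, take its 2-D DCT, and keep a low-to-mid-frequency block of coefficients as the feature vector.

```python
# Decimation followed by DCT coefficient retention (illustrative sizes).
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
face = rng.random((112, 92))            # an ORL-sized stand-in image
decimated = face[::2, ::2]              # decimation by a factor of 2
coeffs = dctn(decimated, norm="ortho")
features = coeffs[:12, :12].ravel()     # retained low/mid-frequency block (assumed size)
print(decimated.shape, features.shape)  # (56, 46) (144,)
```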

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

47 Practical Method for Digital Music Matching Robust to Various Sound Qualities

Authors: Bokyung Sung, Jungsoo Kim, Jinman Kwun, Junhyung Park, Jihye Ryeo, Ilju Ko

Abstract:

In this paper, we propose a practical digital music matching system that is robust to variation in sound quality. The proposed system is subdivided into two parts: client and server. The client part consists of the input, preprocessing and feature extraction modules. The preprocessing module, which includes a music onset module, corrects the gap occurring on the time axis between identical songs stored in different formats. The proposed method uses delta-grouped Mel frequency cepstral coefficients (MFCCs) to extract music features that are robust to changes in sound quality. According to the number of sound quality formats (SQFs) used, a music server is constructed with a feature database (FD) that contains different sub-feature databases (SFDs). When the proposed system receives a music file, the selection module selects an appropriate SFD from the feature database; the selected SFD is subsequently used by the matching module. In this study, we used 3,000 queries for matching experiments in three cases with different FDs. In each case, we used 1,000 queries constructed by mixing 8 SQFs and 125 songs. The success rate of music matching improved from 88.6% when using a single SFD to 93.2% when using quadruple SFDs. These experiments demonstrate that the proposed method is robust to various sound qualities.
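
A minimal sketch of delta-grouped MFCC extraction of the kind described (assuming the librosa library; the bundled example clip and coefficient count are illustrative):

```python
# MFCCs plus first-order deltas as a sound-quality-robust feature matrix.
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))        # any audio signal
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
delta = librosa.feature.delta(mfcc)                # first-order deltas
features = np.vstack([mfcc, delta])                # shape (26, n_frames)
print(features.shape)
```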

Keywords: Digital Music, Music Matching, Variation in Sound Qualities, Robust Matching method.

46 A New Fast Skin Color Detection Technique

Authors: Tarek M. Mahmoud

Abstract:

Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, people retrieval in databases and on the Internet, etc. The major problem with such skin color detection algorithms is that they are time-consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection which can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The reason for the skipping process is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the detection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel-skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
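
A minimal sketch of the skipping idea (the Cb/Cr thresholds below are a widely used range, not necessarily the paper's): only every s-th pixel is tested, on the assumption that its neighbors share its label.

```python
# Pixel-skipping skin detection in YCbCr; thresholds are a common assumption.
import numpy as np

def skin_mask_skipping(ycbcr, s=2):
    mask = np.zeros(ycbcr.shape[:2], dtype=bool)
    for r in range(0, ycbcr.shape[0], s):             # skip rows
        for c in range(0, ycbcr.shape[1], s):         # skip columns
            _, cb, cr = ycbcr[r, c]
            if 77 <= cb <= 127 and 133 <= cr <= 173:  # common Cb/Cr skin range
                mask[r, c] = True                     # neighbors are likely skin too
    return mask

img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in YCbCr image
print(skin_mask_skipping(img, s=3).sum())
```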

Keywords: Adult image filtering, image resizing, skin color detection, YCbCr color space.

45 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review

Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha

Abstract:

The main purpose and focus of this paper are to determine which Interoperability Maturity Models to consider when using School Management Systems (SMS). The importance of this is to inform schools and help them know which Interoperability Maturity Model is best suited for their SMS. To address this purpose, the paper applies a scoping review to ensure that all aspects are covered. The scoping review includes papers published between 2012 and 2019, and the different types of Interoperability Maturity Models are compared in detail, including the background information, the levels of interoperability, and the areas for consideration in each Maturity Model. The literature was obtained from the following databases: IEEE Xplore and Scopus; the following search engines were used: Harzing's and Google Scholar. The topic of the paper was used as a search term for the literature, and the term 'Interoperability Maturity Models' was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table that shows the focus area of concern for each Maturity Model (based on the scoping review, in which only 24 papers out of the 740 publications initially identified in the field were found to be best suited for the paper). This resulted in the most discussed Interoperability Maturity Models for consideration: the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).

Keywords: Interoperability, Interoperability Maturity Model, School Management System, scoping review.

44 A Two-Stage Adaptation towards Automatic Speech Recognition System for Malay-Speaking Children

Authors: Mumtaz Begum Mustafa, Siti Salwah Salim, Feizal Dani Rahman

Abstract:

Recently, Automatic Speech Recognition (ASR) systems have been used to assist children in language acquisition, as they have the ability to detect human speech signals. Despite the benefits offered by ASR systems, there is a lack of ASR systems for Malay-speaking children. One of the contributing factors for this is the lack of a continuous speech database for the target users. Though cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as there are very limited speech databases to serve as a source model. In this research, we propose a two-stage adaptation for the development of an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (the first stage) and cross-age adaptation (the second stage). For the first stage, a well-known speech database that is phonetically rich and balanced is adapted to a medium-sized database of Malay adults using supervised MLLR. The second-stage adaptation uses the acoustic model generated from the first adaptation, and the target database is a small database of the target users. We measured the performance of the proposed technique using the word error rate and compared it with a conventional benchmark adaptation. The two-stage adaptation proposed in this research has better recognition accuracy than the benchmark adaptation in recognizing children's speech.
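
For reference, supervised MLLR conventionally re-estimates every Gaussian mean of the source acoustic model through a shared affine transform learned from the adaptation data (the standard formulation, not anything specific to this paper):

```latex
% MLLR mean transformation with extended mean vector \xi
\hat{\mu} \;=\; A\,\mu + b \;=\; W\,\xi ,
\qquad
\xi = \begin{bmatrix} 1 \\ \mu \end{bmatrix},
\quad
W = \begin{bmatrix} b & A \end{bmatrix}
```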

Keywords: Automatic speech recognition system, children speech, adaptation, Malay.

43 A Multiple Linear Regression Model to Predict the Price of Cement in Nigeria

Authors: Kenneth M. Oba

Abstract:

This study investigated factors affecting the price of cement in Nigeria and developed a mathematical model that can predict future cement prices. Cement is key to the Nigerian construction industry. Changes in price caused by certain factors could affect economic and infrastructural development; hence there is a need for proper proactive planning. Secondary data were collected from published information on cement between 2014 and 2019. In addition, questionnaires were sent to some domestic cement retailers in Port Harcourt in Nigeria to obtain the actual prices of cement over the same period. The study revealed that the most critical factors affecting the price of cement in Nigeria are the inflation rate, the population growth rate, and the Gross Domestic Product (GDP) growth rate. With the use of data from the United Nations, International Monetary Fund, and Central Bank of Nigeria databases, amongst others, a multiple linear regression model was formulated. The model was used to predict the price of cement for 2020-2025. The model was then tested at the 95% confidence level, using a two-tailed t-test and an F-test, resulting in an R² of 0.8428 and an adjusted R² of 0.6069. The results of the tests and the correlation factors confirm the model to be fit and adequate. This study will equip researchers and stakeholders in the construction industry with information for the planning, monitoring, and management of present and future construction projects that involve the use of cement.
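
The model form is ordinary multiple linear regression; a minimal sketch (with clearly made-up numbers, since the study's data are not reproduced here) of fitting price on the three factors named above:

```python
# Multiple linear regression sketch; all values below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: inflation %, population growth %, GDP growth % (assumed values)
X = np.array([[11.4, 2.6, 6.3], [9.0, 2.6, 2.7], [15.7, 2.6, -1.6],
              [16.5, 2.6, 0.8], [12.1, 2.6, 1.9], [11.4, 2.6, 2.2]])
y = np.array([1650, 1700, 1900, 2150, 2350, 2500])  # price per bag (assumed)

model = LinearRegression().fit(X, y)
print("R^2 =", model.score(X, y))
print("prediction:", model.predict([[13.2, 2.6, 1.5]])[0])  # assumed future inputs
```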

Keywords: Cement price, multiple linear regression model, Nigerian Construction Industry, price prediction.

42 Towards a Broader Understanding of Journal Impact: Measuring Relationships between Journal Characteristics and Scholarly Impact

Authors: X. Gu, K. L. Blackmore

Abstract:

The impact factor was introduced to measure the quality of journals, and various impact measures exist across multiple bibliographic databases. In this research, we aim to provide a broader understanding of the relationship between scholarly impact and other characteristics of academic journals. Data used for this research were collected from Ulrich's Periodicals Directory (Ulrichs), Cabell's (Cabells), and the SCImago Journal & Country Rank (SJR) from 1999 to 2015. A master journal dataset was consolidated via journal title and ISSN. We adopted a two-step analysis process to study the quantitative relationships between scholarly impact and other journal characteristics. Firstly, we conducted a correlation analysis over the data attributes, with results indicating no correlations between any of the identified journal characteristics. Secondly, we examined the quantitative relationship between scholarly impact and other characteristics using quartile analysis. The results show interesting patterns, including some expected and others less anticipated. Higher-quartile journals publish more, in both frequency and quantity, and charge more for subscriptions. Top-quartile journals also have the lowest acceptance rates. Non-English journals are more likely to be categorized in lower quartiles, and lower-quartile journals are more likely to stop publishing than higher-quartile ones. Future work is suggested, including analysis of the relationship between scholars and their publications, based on the quartile ranking of the journals in which they publish.
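
A minimal sketch of the quartile analysis step (toy values, not the study's data): journals are binned into impact quartiles, and another characteristic is summarized per quartile.

```python
# Quartile analysis sketch with made-up SJR and acceptance-rate values.
import pandas as pd

df = pd.DataFrame({
    "sjr":             [0.2, 0.5, 0.9, 1.4, 2.1, 3.0, 4.8, 7.5],
    "acceptance_rate": [0.62, 0.55, 0.48, 0.40, 0.31, 0.25, 0.15, 0.08],
})
df["quartile"] = pd.qcut(df["sjr"], 4, labels=["Q4", "Q3", "Q2", "Q1"])
print(df.groupby("quartile", observed=True)["acceptance_rate"].mean())
```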

Keywords: Academic journal, acceptance rate, impact factor, journal characteristics.

41 Selecting Negative Examples for Protein-Protein Interaction

Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae

Abstract:

Proteomics is one of the largest areas of research for bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interactions (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, many challenges for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy; hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed which is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the original negative pair selection method.
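
The selection rule lends itself to a short sketch (assuming the networkx library; the toy graph and distance threshold are illustrative): pairs that are far apart, or disconnected, in the interaction graph become negative examples.

```python
# Graph-based negative example selection sketch.
import networkx as nx
from itertools import combinations

G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P5", "P6")])  # toy PPI graph

def negative_pairs(G, min_distance=3):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in combinations(G.nodes, 2):
        d = dist[u].get(v)                 # None if u and v are disconnected
        if d is None or d >= min_distance:
            yield u, v                     # long shortest path: unlikely to interact

print(list(negative_pairs(G)))
```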

Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.

40 ECG Based Reliable User Identification Using Deep Learning

Authors: R. N. Begum, Ambalika Sharma, G. K. Singh

Abstract:

Identity theft has serious ramifications beyond data and personal information loss, which necessitates the implementation of robust and efficient user identification systems. Automatic biometric recognition systems are therefore the need of the hour, and electrocardiogram (ECG)-based systems are unquestionably the best choice due to their appealing inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grow. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using a densely connected CNN (DenseNet) learning framework. The proposed research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, trained on a dataset of recordings collected from eight popular ECG databases. With a highest False Acceptance Rate (FAR) of 0.04% and a highest False Rejection Rate (FRR) of 5%, the best-performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient; thus, they might also be implemented in real-time ECG-based human recognition systems.
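
For clarity on the two error rates quoted, a minimal sketch of how FAR and FRR follow from match scores at a fixed decision threshold (scores and threshold are made up):

```python
# FAR / FRR computation sketch with hypothetical match scores.
import numpy as np

genuine = np.array([0.91, 0.88, 0.97, 0.65, 0.93])   # same-person scores
impostor = np.array([0.12, 0.31, 0.08, 0.71, 0.22])  # different-person scores
threshold = 0.7

far = np.mean(impostor >= threshold)  # impostors wrongly accepted
frr = np.mean(genuine < threshold)    # genuine users wrongly rejected
print(f"FAR = {far:.0%}, FRR = {frr:.0%}")
```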

Keywords: Biometrics, dense networks, identification rate, train/test split ratio.

39 Modeling Spatial Distributions of Point and Nonpoint Source Pollution Loadings in the Great Lakes Watersheds

Authors: Chansheng He, Carlo DeMarchi

Abstract:

A physically based, spatially distributed water quality model is being developed to simulate the spatial and temporal distributions of material transport in the Great Lakes Watersheds of the U.S. Multiple databases of meteorology, land use, topography, hydrography, soils, agricultural statistics, and water quality were used to estimate nonpoint source loading potential in the study watersheds. Animal manure production was computed from tabulations of animals by zip code area for the census years 1987, 1992, 1997, and 2002. Relative chemical loadings for agricultural land use were calculated from fertilizer and pesticide estimates by crop for the same periods. Comparison of these estimates to the monitored total phosphorus load indicates that both point and nonpoint sources are major contributors to the total nutrient loads in the study watersheds, with nonpoint sources being the largest contributor, particularly in the rural watersheds. These estimates are used as input to the distributed water quality model for simulating pollutant transport through surface and subsurface processes to Great Lakes waters. Visualization and GIS interfaces have been developed to visualize the spatial and temporal distribution of pollutant transport in support of water management programs.

Keywords: Distributed Large Basin Runoff Model, Great Lakes Watersheds, nonpoint source pollution, point sources.

38 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks

Authors: Naghmeh Moradpoor Sheykhkanloo

Abstract:

Thousands of organisations store important and confidential information related to them, their customers, and their business partners in databases all across the world. The stored data range from less sensitive (e.g. first name, last name, date of birth) to more sensitive (e.g. password, PIN code, and credit card information). Losing data, disclosing confidential information, or even changing the value of data are the severe damages that a Structured Query Language injection (SQLi) attack can cause to a given database. It is a code injection technique where malicious SQL statements are inserted into a given SQL database simply by using a web browser. In this paper, we propose an effective pattern recognition neural network (NN) model for the detection and classification of SQLi attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator, in order to generate thousands of malicious and benign URLs; a URL classifier, in order to 1) classify each generated URL as either benign or malicious and 2) classify the malicious URLs into different SQLi attack categories; and a NN model, in order to 1) detect whether a given URL is malicious or benign and 2) identify the type of SQLi attack for each malicious URL. The model is first trained and then evaluated by employing thousands of benign and malicious URLs. The results of the experiments are presented in order to demonstrate the effectiveness of the proposed approach.
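
A minimal sketch of the detection idea (toy URLs and a small scikit-learn network standing in for the paper's generator and NN model): character-n-gram features of a URL feed a neural network classifier.

```python
# URL-based SQLi detection sketch; the URLs and model size are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

urls = ["/item.php?id=7", "/page?user=alice",                    # benign
        "/item.php?id=7' OR '1'='1",
        "/page?user=1 UNION SELECT password FROM users"]         # SQLi
labels = [0, 0, 1, 1]

vec = TfidfVectorizer(analyzer="char", ngram_range=(1, 3))
X = vec.fit_transform(urls)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print(clf.predict(vec.transform(["/item.php?id=9' OR '1'='1"])))  # expected: [1]
```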

Keywords: Neural Networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection.

37 Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data, such as books and articles held in libraries over centuries, has made it necessary to establish effective mechanisms to locate them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing users to find the documents relevant to them, that is to say, the content that matches their information needs. Most information retrieval models use a specific data structure to index a corpus, called an "inverted file" or "inverted index". This inverted file collects information on all terms over the corpus documents, specifying the identifiers of the documents that contain the term in question, the frequency of each term in the documents of the corpus, the positions of the occurrences of the word, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption when using a large volume of data.
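
For concreteness, a minimal sketch of the inverted-file structure being compared against: a mapping from each term to the documents containing it, with the positions of its occurrences (from which term frequencies follow).

```python
# Inverted file sketch: term -> {document id -> [positions]}.
from collections import defaultdict

def build_inverted_file(corpus):
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(pos)
    return index

corpus = {1: "information retrieval systems", 2: "retrieval of information"}
index = build_inverted_file(corpus)
print(index["retrieval"])  # {1: [1], 2: [0]}: document ids and positions
```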

Keywords: Information Retrieval, indexing, object-oriented database (db4o), inverted file.

36 Hospital-Pharmacy Management System: A UAE Case Study

Authors: A. Khelifi, D. Ahmed, R. Salem, N. Ali

Abstract:

Long patient queues at pharmacies and hospitals are a problem that mars the otherwise smooth and healthy environment of the United Arab Emirates. As this sometimes leads to dissatisfaction among visiting patients, we tried to solve this problem, with additional beneficial functions, by developing the Hospital-Pharmacy Management System. The primary purpose of this research is to develop a system that joins the databases of a hospital and a pharmacy together into a better integrated system that provides a more coherent working environment. Three methods were used to design the system: a detailed literature review, an extensive feasibility study, and surveys of doctors, hospital IT managers and end-users. Interviews and surveys with related stakeholders were conducted to elicit the system's requirements, design and prototype. The prototype illustrates the system's features and its client-server architecture. The system has a mobile application that visiting patients can use, mainly, to keep track of their prescriptions and access their personal information. The server side allows doctors to submit prescriptions online to pharmacists, who will process them. This system is expected to reduce the long waiting queues of patients and increase their satisfaction, while also reducing doctors' and pharmacists' stress and facilitating their work. It will initially be deployed to users of Android devices only. This limitation will be resolved, as one of the main future enhancements, once the system finds acceptance from hospitals and pharmacies in the United Arab Emirates.

Keywords: Hospital, Information System, Integration, Pharmacy.

35 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, ..., In acquired at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g. radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object, as opposed to non-urban areas (e.g. vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain deeper knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: Multi-temporal satellite image, urban growth, Non-stationarity, stochastic modeling.

34 Socio-Spatial Resilience Strategic Planning Through Understanding Strategic Perspectives on Tehran and Bath

Authors: Aynaz Lotfata

Abstract:

The planning community has long been discussing emerging paradigms within planning theory in the face of the changing conditions of the world order. The paradigm shift concept was introduced by Thomas Kuhn, in 1960, who claimed the necessity of shifting within the boundaries of scientific knowledge; following him, in 1970, Imre Lakatos also gave priority to the emergence of multi-paradigm societies [24]. Multi-paradigm conditions change our predetermined lifeworld through uncertainties. Those uncertainties are reflected on two sides: the first is uncertainty as a concept of possibility and creativity in the public sphere, and the second is uncertainty as a risk. Therefore, it is necessary to apply a resilience planning approach in order to be more dynamic in controlling uncertainties, which have the potential to transfigure present definitions of time and space. In this way, the stability of the system can be achieved. Uncertainty is not only an outcome of worldwide changes but also a place-specific issue, i.e. it changes from continent to continent, country to country, and region to region. Therefore, applying strategic spatial planning with respect to the resilience principle contributes to controlling, grasping and internalizing uncertainties through place-specific strategies. In today's fast-changing world, the planning system should follow strategic spatial projects to manage multi-paradigm societies with adaptive capacities. Here, we have selected two cases to demonstrate this: 1. Tehran (Iran), from the Middle East, and 2. Bath (United Kingdom), from Europe. The study elaborates the uncertainties and particularities in their strategic spatial planning processes in a comparative manner. Through the comparison, the study aims at assessing place-specific priorities in strategic planning. The approach is a two-way stream, where case cities from the extreme ends of the spectrum can learn from each other. The structure of this paper is, firstly, to compare the semi-periphery (Tehran) and core-periphery (Bath) cities, with the focus on revealing how they are equipped to face uncertainties according to their geographical locations and local particularities. Secondly, the key message to address is: "Each locality requires its own strategic planning approach to be resilient."

Keywords: Adaptation, Relational Network, Socio-Spatial Strategic Resiliency, Uncertainty.

33 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying and identifying objects manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification here is done using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method which considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset was created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean reflectance values. We demonstrate the benefits of using knowledge discovery and data-mining techniques on image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
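
A minimal sketch of the neighborhood-feature idea (synthetic pixels and labels; the real study uses reflectance bands): each pixel is described by its intensity plus its neighborhood mean before SVM training.

```python
# Pixel + neighborhood-mean features feeding an SVM (illustrative data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
img = rng.random((20, 20))
labels = (img > 0.5).astype(int)        # stand-in ground truth (e.g. water / non-water)

def pixel_features(img, r, c, k=1):
    patch = img[max(r - k, 0):r + k + 1, max(c - k, 0):c + k + 1]
    return [img[r, c], patch.mean()]    # intensity + neighborhood mean

X = [pixel_features(img, r, c) for r in range(20) for c in range(20)]
clf = SVC(kernel="rbf").fit(X, labels.ravel())
print(clf.score(X, labels.ravel()))
```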

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.

32 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the 'Unless C' part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm, based on an evolutionary approach, that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
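
One way to make the fitness idea concrete (my own simplified formulation, not the paper's exact function): reward a candidate CPR whose rule part holds on covered examples once the censor is accounted for, while keeping the censor rare.

```python
# Simplified fitness for a candidate CPR "If P Then D Unless C".
def cpr_fitness(rows, P, D, C):
    covered = [r for r in rows if P(r)]
    if not covered:
        return 0.0
    correct = sum(1 for r in covered if D(r) or C(r))  # C flips D to ~D
    censor_rarity = 1 - sum(1 for r in covered if C(r)) / len(covered)
    return (correct / len(covered)) * censor_rarity    # accurate, yet rarely censored

rows = [{"bird": True, "penguin": False, "flies": True}] * 9 + \
       [{"bird": True, "penguin": True, "flies": False}]
print(cpr_fitness(rows, P=lambda r: r["bird"],
                  D=lambda r: r["flies"],
                  C=lambda r: r["penguin"]))  # 1.0 * 0.9 = 0.9
```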

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

31 Effect of Dynamic Stall, Finite Aspect Ratio and Streamtube Expansion on VAWT Performance Prediction using the BE-M Model

Authors: M. Raciti Castelli, A. Fedrigo, E. Benini

Abstract:

A multiple-option analytical model is presented for the evaluation of the energy performance and the distribution of aerodynamic forces acting on a vertical-axis Darrieus wind turbine, depending on both rotor architecture and operating conditions. For this purpose, a numerical algorithm, capable of generating the desired rotor conformation from the design geometric parameters, is coupled to a Single/Double-Disk Multiple-Streamtube Blade Element - Momentum code. Both single- and double-disk configurations are analyzed, and model predictions are compared to experimental data from the literature in order to test the capability of the code for predicting rotor performance. Effective airfoil characteristics based on the local blade Reynolds number are obtained through interpolation of low-Reynolds airfoil databases from the literature. Some corrections are introduced into the original model with the aim of also simulating the effects of blade dynamic stall, rotor streamtube expansion and blade finite aspect ratio, for which a new empirical relationship is proposed to better fit the experimental data. In order to also predict open-field rotor operation, a freestream wind shear profile is implemented, reproducing the effect of the atmospheric boundary layer.
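
For reference, the classical lifting-line corrections often used when accounting for blade finite aspect ratio in BE-M codes (standard aerodynamics, shown here in place of the paper's own empirical fit, which is not reproduced in the abstract) are:

```latex
% Induced drag and effective angle of attack for a finite-aspect-ratio blade;
% e is the Oswald efficiency factor, \tau the correction for non-elliptic loading.
C_{D,i} \;=\; \frac{C_L^{2}}{\pi\, e\, AR} ,
\qquad
\alpha_{\mathrm{eff}} \;=\; \alpha \;-\; \frac{C_L}{\pi\, AR}\,(1 + \tau)
```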

Keywords: Wind turbine, BE-M, dynamic stall, streamtube expansion, airfoil finite aspect ratio

30 Realistic Simulation Methodology in Brazil’s New Medical Education Curriculum: Potentialities

Authors: Cleto J. Sauer Jr

Abstract:

Introduction: Brazil's new national curriculum guidelines (NCG) for medical education were published in 2014, presenting active learning methodologies as a cornerstone. Simulation was initially applied to the training of aviation pilots and is currently applied in the health sciences. High-fidelity simulators replicate human body anatomy in detail, also reproducing physiological functions, and their use is increasing in medical schools. Realistic Simulation (RS) has pedagogical aspects that are aligned with the teaching concepts of Brazil's NCG. The main objective of this study is to carry out a narrative review of the aspects of RS that are aligned with the teaching concepts of Brazil's new NCG. Methodology: A narrative review was conducted, with searches in three databases (PubMed, Embase and BVS) for studies published between 2010 and 2020. Results: After a systematized search, 49 studies were selected and divided into four thematic groups. RS is aligned with the new Brazilian medical curriculum, as it is an active learning methodology that provides greater patient safety, uniform teaching, and enhancement of students' emotional skills. RS is based on reflective learning, a teaching concept developed for adult education. Conclusion: RS is a methodology aligned with the NCG teaching concepts and has the potential to assist in the implementation of the new Brazilian medical school curriculum. It is an immersive and interactive methodology that provides reflective learning in a safe environment for students and patients.

Keywords: Curriculum, high-fidelity simulator, medical education, realistic simulation.
