Search results for: cancer classification
160 Pre-Malignant Breast Lesions, Methods of Treatment and Outcome
Authors: Ahmed Mostafa, Mohamed Mahmoud, Nesreen H. Hafez, Mohamed Fahim
Abstract:
This retrospective study includes 60 patients with pre-invasive breast cancer. Aim of the study: evaluation of premalignant lesions of the breast (ductal carcinoma in situ, DCIS), the different treatment methods and their outcome. Patients and methods: 60 patients with DCIS were studied over the period from 2005 to 2012. For 38 patients (63.3%) the primary surgical method was wide local excision (WLE), while the remaining 22 patients (36.7%) underwent mastectomy. Fourteen of the patients who underwent local excision received radiotherapy, whereas no adjuvant radiotherapy was given to those who underwent mastectomy. In hormone-receptor-positive DCIS lesions, hormonal treatment (tamoxifen) was given after local control. Results: There was no difference in overall survival between mastectomy and breast conserving therapy (wide local excision and adjuvant radiotherapy); however, the local recurrence rate was higher with breast conserving therapy, and axillary evacuation showed no role in DCIS. The use of hormonal therapy decreased the incidence of local recurrence by about 98%. Conclusion: The mainstay of management of DCIS is local treatment (wide local excision and radiotherapy), with hormonal treatment in hormone-receptor-positive lesions.
Keywords: Ductal carcinoma in situ, surgical treatment, radiotherapy, breast conserving therapy, hormonal treatment.
159 Optimizing Forecasting for Indonesia's Coal and Palm Oil Exports: A Comparative Analysis of ARIMA, ANN, and LSTM Methods
Authors: Mochammad Dewo, Sumarsono Sudarto
Abstract:
The Exponential Triple Smoothing algorithm currently used to forecast the export value of Indonesia's two major commodities, coal and palm oil, yields a Mean Absolute Percentage Error (MAPE) of 30-50%, which can only be considered a "reasonable" forecasting error. Forecasting errors of more than 30% have a domino effect on industrial output, as excess production adds to raw material, manufacturing and storage expenses. Conversely, reaching an "excellent" classification, with an error of less than 10%, would give new investors and exporters confidence in the commercial development of the related sectors, and industrial growth in turn has a positive impact on economic development. The approach can also be applied to other commodities as long as the forecast error stays below 10%. The purpose of this project is to create a forecasting technique that produces precise results with an error of less than 10%. This research analyzes forecasting methods such as ARIMA (Autoregressive Integrated Moving Average), ANN (Artificial Neural Network) and LSTM (Long Short-Term Memory). With a MAPE of 1%, this study finds that the ANN is the most successful strategy for forecasting Indonesia's coal and palm oil commodities.
Keywords: ANN, Artificial Neural Network, ARIMA, Autoregressive Integrated Moving Average, export value, forecast, LSTM, Long Short Term Memory.
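As an illustration of how the accuracy classes above translate into numbers, the following minimal Python sketch computes MAPE for two hypothetical forecasts of a placeholder export series; the values and the simple models are assumptions for demonstration, not the study's data or ANN.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

# Placeholder monthly export values and two hypothetical model outputs.
actual = np.array([120.0, 135.0, 128.0, 150.0, 160.0, 155.0])
naive = np.roll(actual, 1); naive[0] = actual[0]          # last-value baseline
ann_like = actual * (1 + np.random.default_rng(0).normal(0, 0.01, actual.size))

for name, pred in [("naive baseline", naive), ("ANN-like model", ann_like)]:
    print(f"{name}: MAPE = {mape(actual, pred):.2f}%")
```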
158 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves
Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira
Abstract:
Foliage diseases in plants can cause a reduction in both the quality and the quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help monitor large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital images of tomatoes collected directly in the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images with a color representation that highlights the injuries on the plant. The new images contain only green, red or black pixels, depending on whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage caused to tomato leaves by late blight.
Keywords: Artificial neural networks, digital image processing, pattern recognition.
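The following sketch illustrates the general two-network idea described above, using scikit-learn MLP classifiers on synthetic pixel data with RGB and HSL features; the color ranges and network sizes are placeholder assumptions, not the authors' trained models.

```python
import numpy as np
import colorsys
from sklearn.neural_network import MLPClassifier

def pixel_features(rgb):
    """Concatenate RGB and HSL values for one pixel (all in [0, 1])."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return np.array([*rgb, h, s, l])

# Synthetic training pixels (placeholders for field images): healthy leaf,
# injured leaf, and background classes.
rng = np.random.default_rng(1)
healthy = rng.uniform([0.1, 0.5, 0.1], [0.3, 0.9, 0.3], (200, 3))
injured = rng.uniform([0.4, 0.3, 0.1], [0.7, 0.5, 0.3], (200, 3))
background = rng.uniform(0.0, 0.1, (200, 3))
pixels = np.vstack([healthy, injured, background])
X = np.array([pixel_features(p) for p in pixels])

# Network 1 votes "healthy vs. not"; network 2 votes "injured vs. not".
y_healthy = np.r_[np.ones(200), np.zeros(400)]
y_injured = np.r_[np.zeros(200), np.ones(200), np.zeros(200)]
net_healthy = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y_healthy)
net_injured = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y_injured)

def repaint(pixel_rgb):
    """Combine both outputs: green = healthy, red = injured, black = background."""
    f = pixel_features(pixel_rgb).reshape(1, -1)
    if net_healthy.predict(f)[0] == 1:
        return (0, 255, 0)
    if net_injured.predict(f)[0] == 1:
        return (255, 0, 0)
    return (0, 0, 0)

print(repaint([0.2, 0.8, 0.2]), repaint([0.6, 0.4, 0.2]), repaint([0.05, 0.05, 0.05]))
```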
157 Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks
Authors: Levente Varga, Dávid Deritei, Mária Ercsey-Ravasz, Răzvan Florian, Zsolt I. Lázár, István Papp, Ferenc Járai-Szabó
Abstract:
One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks in order to perform cross-field normalization of scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the indicator normalization process is shown for a few indicators, such as the citation number, the P-index and a local version of the PageRank indicator. The fat-tailed trend of the article indicator distribution enables us to successfully perform the indicator normalization process.
Keywords: Citation networks, scientometric indicators, cross-field normalization, local cluster detection.
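A toy illustration of the cross-field normalization step, dividing each paper's citation count by the mean of its detected cluster, is sketched below; the counts and cluster labels are invented placeholders, not output of the community detection algorithm.

```python
import numpy as np

# Hypothetical citation counts and the cluster (domain) each paper was
# assigned to by a local community detection step (placeholder labels).
citations = np.array([12, 45, 3, 150, 8, 60, 5, 22])
cluster = np.array([0, 0, 0, 1, 1, 1, 2, 2])

# Normalize each paper's citation count by the mean of its own cluster,
# so papers from differently-citing fields become comparable.
cluster_means = {c: citations[cluster == c].mean() for c in np.unique(cluster)}
normalized = np.array([citations[i] / cluster_means[cluster[i]]
                       for i in range(len(citations))])
print(np.round(normalized, 2))
```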
156 Performance Evaluation of Neural Network Prediction for Data Prefetching in Embedded Applications
Authors: Sofien Chtourou, Mohamed Chtourou, Omar Hammami
Abstract:
Embedded systems need to respect stringent real-time constraints. Various hardware components included in such systems, such as cache memories, exhibit variability and therefore affect execution time. Indeed, a cache memory access from an embedded microprocessor may result either in a cache hit, where the data are available, or in a cache miss, where the data must be fetched from an external memory with an additional delay. It is therefore highly desirable to predict future memory accesses during execution in order to prefetch data appropriately without incurring delays. In this paper, we evaluate the potential of several artificial neural networks for the prediction of instruction memory addresses. Neural networks have the potential to tackle the nonlinear behavior observed in memory accesses during program execution, and their numerous demonstrated hardware implementations favor this choice over traditional forecasting techniques for inclusion in embedded systems. However, embedded applications execute millions of instructions and therefore produce millions of addresses to be predicted. This very challenging problem of neural-network-based prediction of large time series is approached in this paper by evaluating various neural network architectures based on the recurrent neural network paradigm, with pre-processing based on the Self-Organizing Map (SOM) classification technique.
Keywords: Address, data set, memory, prediction, recurrent neural network.
155 Recognizing an Individual, Their Topic of Conversation, and Cultural Background from 3D Body Movement
Authors: Gheida J. Shahrour, Martin J. Russell
Abstract:
The 3D body movement signals captured during human-human conversation include clues not only to the content of people’s communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects, arranged into groups according to their culture; each group was arranged into pairs, and each pair communicated about different topics. A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition, borrowing modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy for the person, culture, and topic recognition systems, respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, our person recognition system appears to perform best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., subjects' personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy for culture and topic recognition, respectively.
Keywords: Person Recognition, Topic Recognition, Culture Recognition, 3D Body Movement Signals, Variability Compensation.
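The per-class GMM scoring scheme described above can be sketched roughly as follows with scikit-learn; the feature vectors are random stand-ins for body-movement features, and the mixture settings are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for per-frame body-movement feature vectors; one GMM is
# fitted per class (e.g. per person), and a test sequence is assigned to the
# class whose GMM gives the highest total log-likelihood.
rng = np.random.default_rng(0)
train = {
    "subject_A": rng.normal(0.0, 1.0, (300, 6)),
    "subject_B": rng.normal(1.5, 1.0, (300, 6)),
}
models = {name: GaussianMixture(n_components=4, covariance_type="diag",
                                random_state=0).fit(X)
          for name, X in train.items()}

test_sequence = rng.normal(1.4, 1.0, (50, 6))             # frames from subject_B
scores = {name: m.score_samples(test_sequence).sum() for name, m in models.items()}
print(max(scores, key=scores.get))                        # expected: subject_B
```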
154 A Methodology for Automatic Diversification of Document Categories
Authors: Dasom Kim, Chen Liu, Myungsu Lim, Soo-Hyeon Jeon, Byeoung Kug Jeon, Kee-Young Kwahk, Namgyu Kim
Abstract:
Recently, numerous documents containing large volumes of unstructured data and text have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users. However, the accuracy of manual categorization cannot be guaranteed, and such categorization requires a large amount of time and incurs huge costs. Many studies on automatic categorization have been conducted to help mitigate these limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they work on the assumption that individual documents can be assigned to single categories only. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets, built using traditional multi-categorization algorithms, are provided. To overcome this limitation, in this study we review our novel methodology for extending the category of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Keywords: Big data analysis, document classification, text mining, topic analysis.
153 Artificial Neural Networks Technique for Seismic Hazard Prediction Using Seismic Bumps
Authors: Belkacem Selma, Boumediene Selma, Samira Chouraqui, Hanifi Missoum, Tourkia Guerzou
Abstract:
Natural disasters have occurred and will continue to occur, causing human and material damage; preventing them outright will never be possible. However, their prediction is becoming possible with the advancement of technology, and even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment necessary for effective disaster reduction. Earthquake prediction to prevent the loss of human lives and property damage is an important goal, which is why it is crucial to develop techniques for predicting this natural disaster. This study aims to analyze the ability of artificial neural networks (ANNs) to predict earthquakes that occur in a given area. The data used describe the problem of forecasting high-energy (higher than 10⁴ J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines have been analyzed. The results show that the ANN is able to predict earthquake parameters with high accuracy; the classification accuracy through neural networks is more than 94%, and the models developed are efficient, robust, and depend only weakly on the initial database.
Keywords: Earthquake prediction, artificial intelligence, AI, Artificial Neural Network, ANN, seismic bumps.
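A minimal sketch of training a feed-forward ANN classifier on seismic-bump-style tabular features is given below; the synthetic data and network size are assumptions, not the mine data or the authors' model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the seismic-bumps data: a few numeric shift features
# and a binary label for whether a high-energy (>10^4 J) bump followed.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```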
152 On The Analysis of a Compound Neural Network for Detecting Atrio Ventricular Heart Block (AVB) in an ECG Signal
Authors: Salama Meghriche, Amer Draa, Mohammed Boulemden
Abstract:
Heart failure is the most common cause of death nowadays, but if medical help is given promptly, the patient's life may be saved in many cases. Numerous heart diseases can be detected by analyzing electrocardiograms (ECG). Artificial Neural Networks (ANN) are computer-based expert systems that have proved useful in pattern recognition tasks, and they can be used in different phases of the decision-making process, from classification to diagnostic procedures. This work presents a review followed by a novel method. The purpose of the review is to assess the evidence of healthcare benefits from applying artificial neural networks to the clinical functions of diagnosis, prognosis and survival analysis on ECG signals. The developed method is based on a compound neural network (CNN) that classifies ECGs as normal or as carrying an atrioventricular heart block (AVB). This method uses three different feed-forward multilayer neural networks. A single output unit encodes the probability of AVB occurrence: a value between 0 and 0.1 is the desired output for a normal ECG, while a value between 0.1 and 1 indicates an occurrence of AVB. The results show that this compound network performs well in detecting AVBs, with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy is 87.9%.
Keywords: Artificial neural networks, electrocardiogram (ECG), feed-forward multilayer neural network, medical diagnosis, pattern recognition, signal processing.
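The 0.1 decision threshold described above can be illustrated with the following hedged sketch, in which a single scikit-learn network stands in for the paper's compound of three networks; the features and labels are placeholders, not ECG data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative only: one classifier stands in for the compound network, and its
# estimated AVB probability is thresholded at 0.1 as in the rule above.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))                    # placeholder ECG descriptors
y = (X[:, 0] - X[:, 5] > 0.8).astype(int)         # 1 = AVB-like pattern

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, y)
p_avb = clf.predict_proba(X[:5])[:, 1]
labels = np.where(p_avb < 0.1, "normal", "AVB suspected")
print(list(zip(np.round(p_avb, 3), labels)))
```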
151 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops the assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, following the spectroscopy approach and a Neural Network (NN). During the experiments, Batavia pineapples were used, yielding 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), which was used to label the samples as sweet or unsweet. In terms of experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so that reflectance could be read at a depth of 5 mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The reflectance values at 510 nm and 900 nm from the middle part of each pineapple were used as the features of the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two classes of pineapples with favorable results.
Keywords: Spectroscopy, soluble solid content, pineapple, neural network.
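The preprocessing chain (class balancing, shuffling, standardization) and the two-feature network can be sketched as below; the reflectance values are simulated placeholders, and oversampling-based balancing is one possible choice, not necessarily the authors' method.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder Vis-NIR data: two features per fruit (reflectance at 510 nm and
# 900 nm) with an imbalanced sweet/unsweet labelling.
rng = np.random.default_rng(0)
sweet = rng.normal([0.55, 0.70], 0.05, (70, 2))
unsweet = rng.normal([0.45, 0.60], 0.05, (30, 2))
X = np.vstack([sweet, unsweet])
y = np.r_[np.ones(70), np.zeros(30)]

# 1) class balancing by oversampling the minority class,
# 2) shuffling, 3) standardization, mirroring the steps described above.
minority = np.where(y == 0)[0]
extra = rng.choice(minority, size=40, replace=True)
X, y = np.vstack([X, X[extra]]), np.r_[y, y[extra]]
order = rng.permutation(len(y))
X, y = X[order], y[order]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    max_iter=3000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print(f"test accuracy: {clf.score(scaler.transform(X_te), y_te):.3f}")
```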
150 Design Criteria for Achieving Acceptable Indoor Radon Concentration
Authors: T. Valdbjørn Rasmussen
Abstract:
Design criteria for achieving an acceptable indoor radon concentration are presented in this paper. The paper suggests three design criteria, which have to be considered at the early stage of the building design phase to meet the latest recommendations from the World Health Organization in most countries. The three design criteria are: first, establishing a radon barrier facing the ground; second, lowering the air pressure in the lower zone of the ground-facing slab; and third, diluting the indoor air with outdoor air. The first two criteria can prevent radon from infiltrating from the ground, and the third can dilute the indoor air. By combining these three criteria, the indoor radon concentration can be lowered to an acceptable level. In addition, a cheap and reliable method for measuring the radon concentration in the indoor air is described. The provisions on radon in the Danish Building Regulations comply with the latest recommendations from the World Health Organization. Radon can cause lung cancer, and it is not known whether there is a lower limit below which it is not harmful to human beings. Therefore, it is important to reduce the radon concentration in buildings as much as possible. Airtightness is an important factor when dealing with buildings: it is important to avoid air leakages in the building envelope both towards the atmosphere, e.g. to comply with energy requirements, and towards the ground, to ensure and control the indoor environment. Infiltration of air from the ground underneath a building is the main source of radon in the indoor air.
Keywords: Radon, natural radiation, barrier, pressure lowering, ventilation.
149 Using Satellite Images Datasets for Road Intersection Detection in Route Planning
Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever
Abstract:
Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest route. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of constructing and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detecting intersections in satellite images is evaluated.
Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.
148 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification
Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka
Abstract:
This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage, guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural-network-based approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.
147 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation
Authors: Ke He, Wumaier Parezhati, Haruka Yamashita
Abstract:
Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become among the most popular shopping sites in Asia. On these shopping websites, consumers can select products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance in order to access their account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has often been used to extract semantic relationships between documents and words in document classification; here, documents represent consumers and words represent products. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor, the shops, needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis of users and shops with that of users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
Keywords: Doc2Vec, marketing, online marketplace, recommendation system.
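A rough sketch of embedding users, items, and shops in one Doc2Vec space is given below, assuming the gensim 4.x API; the purchase logs and the choice of treating item and shop IDs as the "words" of a user "document" are illustrative assumptions, not the paper's exact model.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Hypothetical purchase logs: each user is a "document" whose words are the
# item and shop identifiers they interacted with (gensim 4.x API assumed).
logs = {
    "user_1": ["item_tea", "shop_kyoto", "item_cup", "shop_kyoto"],
    "user_2": ["item_tea", "shop_osaka", "item_kettle", "shop_osaka"],
    "user_3": ["item_phone", "shop_tokyo", "item_case", "shop_tokyo"],
}
corpus = [TaggedDocument(words=w, tags=[u]) for u, w in logs.items()]

model = Doc2Vec(corpus, vector_size=16, window=3, min_count=1, epochs=200, seed=0)

# Items and shops live in the word-vector space, users in the document-vector
# space; the nearest words to a user vector suggest items/shops for that user.
print(model.wv.similar_by_vector(model.dv["user_1"], topn=3))
```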
146 Topics of Blockchain Technology to Teach at Community College
Authors: Penn P. Wu, Jeannie Jo
Abstract:
Blockchain technology has rapidly gained popularity in industry. This paper attempts to assist academia in answering four questions. First, should community colleges begin offering education to nurture blockchain-literate students for the job market? Second, what are the appropriate topical areas to cover? Third, should it be an individual course? And fourth, should it be a technical or a management course? This paper starts by identifying the knowledge domains of blockchain technology and the topical areas each domain has, continues by placing them in appropriate academic territories (Computer Science vs. Business) and subjects (programming, management, marketing, and law), and then develops an evaluation model to determine the appropriate topical areas for community colleges to teach. The evaluation is based on seven factors: maturity of technology, impact on management, real-world applications, subject classification, knowledge prerequisites, textbook readiness, and recommended pedagogies. The evaluation results point in an interesting direction: offering an introductory course is an ideal option to guide students through the learning journey of what blockchain is and how it applies to business. Such an introductory course does not need to engage students in the discussions of mathematics and sciences that make blockchain technologies possible. While it is inevitable to cover technical topics briefly to help students build a solid knowledge foundation of blockchain technologies, community colleges should avoid offering students a course centered on the development of blockchain applications.
Keywords: Blockchain, pedagogies, blockchain technologies, blockchain course, blockchain pedagogies.
145 Low Resolution Single Neural Network Based Face Recognition
Authors: Jahan Zeb, Muhammad Younus Javed, Usman Qayyum
Abstract:
This research paper deals with the implementation of face recognition using a neural network (recognition classifier) on low-resolution images. The proposed system contains two parts: preprocessing and face classification. The preprocessing part converts the original images into blurred images using an average filter and equalizes their histograms (lighting normalization). A bi-cubic interpolation function is applied to each equalized image to resize it. The resized image is a genuinely low-resolution image, providing faster processing for training and testing. The preprocessed image becomes the input to the neural network classifier, which uses the back-propagation algorithm to recognize familiar faces. The crux of the proposed algorithm is the use of a single neural network as the classifier, which yields a straightforward approach to face recognition. The single neural network consists of three layers with log-sigmoid, hyperbolic tangent sigmoid and linear transfer functions, respectively. The training function incorporated in our work is gradient descent with momentum (adaptive learning rate) back-propagation. The proposed algorithm was trained on the ORL (Olivetti Research Laboratory) database with 5 training images per subject. The empirical results give accuracies of 94.50%, 93.00% and 90.25% for 20, 30 and 40 subjects respectively, with a time delay of 0.0934 s per image.
Keywords: Average filtering, bicubic interpolation, neurons, vectorization.
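The preprocessing pipeline (average filtering, histogram equalization, bicubic resizing) can be sketched as follows with NumPy and SciPy; the filter size, scale factor, and random stand-in image are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def equalize_histogram(img):
    """Histogram equalization for an 8-bit-range grayscale image (NumPy only)."""
    hist, bins = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return np.interp(img.flatten(), bins[:-1], cdf).reshape(img.shape)

def preprocess_face(img, scale=0.25):
    """Blur -> equalize -> bicubic resize, mirroring the pipeline above."""
    blurred = uniform_filter(img.astype(float), size=3)   # average filter
    equalized = equalize_histogram(blurred)               # lighting normalization
    return zoom(equalized, scale, order=3)                # bicubic interpolation

# Placeholder 92x112 "face" image standing in for an ORL sample.
face = np.random.default_rng(0).integers(0, 256, (112, 92)).astype(np.uint8)
low_res = preprocess_face(face)
print(face.shape, "->", low_res.shape)                    # (112, 92) -> (28, 23)
```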
144 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features
Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic light detection method for night-time. First, candidate blobs of traffic lights are extracted from the RGB color image. The input image is represented in a dominant color domain using the color transform proposed by Ruta, and red- and green-dominant regions are selected as candidates. After candidate blob selection, a shape filter is applied for noise reduction using blob information such as length, area, area of the bounding box, etc. A multi-class classifier based on SVM (Support Vector Machine) is then applied to the candidates. Three kinds of features are used: basic features such as blob width, height, center coordinates and area; brightness-based stochastic features; and, in particular, geometric moment values computed between the candidate region and its adjacent region, which are proposed and used to improve the detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested with urban and rural road videos. Through these tests, we show that the proposed method using the PF, BMF, and GMF features reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.
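A simplified sketch of classifying candidate blobs by geometric moment features with an SVM is shown below; the synthetic blobs and the particular moments computed are illustrative assumptions rather than the PF/BMF/GMF features of the paper.

```python
import numpy as np
from sklearn.svm import SVC

def geometric_moment_features(blob):
    """Raw and central moments of a binary blob mask (NumPy implementation)."""
    ys, xs = np.nonzero(blob)
    m00 = len(xs)                       # area (zeroth raw moment)
    cx, cy = xs.mean(), ys.mean()       # centroid
    mu20 = ((xs - cx) ** 2).mean()      # central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return np.array([m00, cx, cy, mu20, mu02, mu11])

# Synthetic round "lamp" blobs vs. elongated "noise" blobs as placeholders for
# candidate regions extracted from night-time frames.
rng = np.random.default_rng(0)
def make_blob(elongated):
    img = np.zeros((20, 20), dtype=int)
    h, w = (4, 12) if elongated else (6, 6)
    r, c = rng.integers(0, 20 - h), rng.integers(0, 20 - w)
    img[r:r + h, c:c + w] = 1
    return img

X = np.array([geometric_moment_features(make_blob(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])          # 0 = traffic light, 1 = noise

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```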
143 Systematics of Water Lilies (Genus Nymphaea L.) Using 18S rDNA Sequences
Authors: M. Nakkuntod, S. Srinarang, K.W. Hilu
Abstract:
Water lily (Nymphaea L.) is the largest genus of Nymphaeaceae. This family is composed of six genera (Nuphar, Ondinea, Euryale, Victoria, Barclaya, Nymphaea), whose members occur nearly worldwide in tropical and temperate regions. The classification of some species in Nymphaea is ambiguous due to high variation in leaf and flower parts, such as the leaf margin and stamen appendage. Therefore, phylogenetic relationships based on 18S rDNA were reconstructed to delimit this genus. DNA of 52 specimens belonging to the water lily family was extracted using a modified conventional method containing cetyltrimethyl ammonium bromide (CTAB). The results showed that the amplified fragment is about 1600 base pairs in size. After analysis, the aligned sequences presented 9.36% variable characters, comprising 2.66% parsimony-informative sites and 6.70% singleton sites. Moreover, there are 6 insertion/deletion regions of 1-2 bases. The phylogenetic trees based on maximum parsimony and maximum likelihood, with high bootstrap support, indicated that the genus Nymphaea is a paraphyletic group because of the placement of Ondinea, Victoria and Euryale within it. Within the genus Nymphaea, subgenus Nymphaea is a basal lineage that groups with Euryale and Victoria. The other four subgenera, namely Lotos, Hydrocallis, Brachyceras and Anecphya, are included in the same large clade, in which Ondinea is placed within the Anecphya clade due to their shared geographical distribution.
Keywords: nrDNA, phylogeny, taxonomy, Waterlily.
142 Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering
Authors: Mohamed A. Mahfouz, M. A. Ismail
Abstract:
This paper introduces new algorithms, a fuzzy relative of the CLARANS algorithm (FCLARANS) and fuzzy c-medoids based on randomized search (FCMRANS), for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given the current memberships, FCLARANS minimizes the same objective function by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise, because outliers may join the computation of medoids, while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster; it is therefore less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of the Reuters-21578 and Newsgroups (20NG) collections, and with generated datasets with noise, show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient, and their outputs are almost the same as those of RFCMdd in terms of classification rate.
Keywords: Data mining, fuzzy clustering, relational clustering, medoid-based clustering, cluster analysis, unsupervised learning.
141 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording
Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy
Abstract:
Sleep stage scoring is the process of classifying the stage of sleep a subject is in. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. NREM sleep is further divided into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording, using wavelet packet decomposition, was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal, forming a feature vector of 54 features. This vector was reduced to a size of 25 using the gain ratio method and fed into a classifier of regression trees. The regression trees were trained using 67% of the available records, selected by cross-validation; the remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.
Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.
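The smoothing and wavelet-packet feature extraction steps can be sketched as follows with SciPy and PyWavelets; the epoch is simulated, and the energy features shown are only one plausible reading of the 54-feature vector described above, not the authors' exact feature set.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def epoch_features(eeg_epoch):
    """Smooth one EEG epoch and extract wavelet-packet energies (level 4, db20)."""
    smoothed = savgol_filter(eeg_epoch, window_length=11, polyorder=3)
    wp = pywt.WaveletPacket(data=smoothed, wavelet="db20",
                            mode="symmetric", maxlevel=4)
    nodes = wp.get_level(4, order="natural")          # 16 terminal nodes
    return np.array([np.sum(node.data ** 2) for node in nodes])

# Placeholder 30-second EEG epoch sampled at 100 Hz (not an MIT-BIH record).
rng = np.random.default_rng(0)
t = np.arange(0, 30, 0.01)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

features = epoch_features(epoch)
print(features.shape, np.round(features[:4], 1))
```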
140 An Elaborate Survey on Node Replication Attack in Static Wireless Sensor Networks
Authors: N. S. Usha, E. A. Mary Anita
Abstract:
Recent innovations in the field of technology have led to the use of wireless sensor networks in various applications. Such networks consist of a number of tiny, low-cost, non-tamper-proof and resource-constrained sensor nodes, which are often distributed and deployed in unattended environments to collaborate with each other and share data or information. Among its many applications, the wireless sensor network plays a major role in battlefield monitoring for military purposes. As these non-tamper-proof nodes are deployed in unattended locations, they are vulnerable to many security attacks, and among these, the node replication attack is one of the most threatening to network users. A node replication attack is carried out by an attacker who captures one genuine node, duplicates its identification and cryptographic materials, makes one or more copies of the captured node, and places them at key positions in the network to monitor or disrupt network operations. Preventing the occurrence of such node replication attacks in a network is a challenging task. In this survey article, we provide a classification of detection schemes and explore the various schemes proposed in each category. We also compare the various detection schemes against certain evaluation parameters and discuss their limitations. Finally, we provide some suggestions for future research against such attacks.
Keywords: Clone node, data security, detection schemes, node replication attack, wireless sensor networks.
139 Design of an Ensemble Learning Behavior Anomaly Detection Framework
Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia
Abstract:
Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to vault their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts: they are mainly individuals with legitimate access to company information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threat of malicious user activity effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context and present test results of several detection methods on a representative access control dataset. The use of some of the explored classifiers gives results of up to 99% accuracy.
Keywords: Cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing.
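A minimal ensemble-learning sketch in the spirit of the model above is given below using scikit-learn's soft-voting combiner; the synthetic access features and the choice of base classifiers are assumptions, not the framework's actual components.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-user access features (e.g. counts of after-hours
# logins, distinct resources touched, failed authentications).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 0.7, 2000) > 1.5).astype(int)  # 1 = anomalous

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```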
138 Brain Image Segmentation Using Conditional Random Field Based On Modified Artificial Bee Colony Optimization Algorithm
Authors: B. Thiagarajan, R. Bremananth
Abstract:
A tumor is an uncontrolled growth of tissue in any part of the body. Tumors are of different types, with different characteristics and treatments. A brain tumor is inherently serious and life-threatening because of its location in the limited space of the intracranial cavity (the space formed inside the skull). Locating the tumor within an MR (magnetic resonance) image of the brain is an integral part of the treatment of brain tumors. This segmentation task requires classifying each voxel as either tumor or non-tumor, based on the description of the voxel under consideration. Many studies in the medical field use Markov Random Fields (MRF) for the segmentation of MR images. Even though the segmentation results are good, computing the probabilities and estimating the parameters is difficult. In order to overcome these issues, a Conditional Random Field (CRF) is used in this paper for segmentation, along with a modified artificial bee colony optimization and a modified fuzzy possibilistic c-means (MFPCM) algorithm. This work mainly focuses on reducing the computational complexities found in existing methods and aims at achieving higher accuracy. The efficiency of this work is evaluated using parameters such as region non-uniformity, correlation and computation time. The experimental results are compared with existing methods such as MRF with an improved Genetic Algorithm (GA) and the MRF-Artificial Bee Colony (MRF-ABC) algorithm.
Keywords: Conditional random field, Magnetic resonance, Markov random field, Modified artificial bee colony.
137 Identifying Knowledge Gaps in Incorporating Toxicity of Particulate Matter Constituents for Developing Regulatory Limits on Particulate Matter
Authors: Ananya Das, Arun Kumar, Gazala Habib, Vivekanandan Perumal
Abstract:
Regulatory bodies have proposed limits on the concentration of particulate matter (PM) in air; however, these limits do not explicitly incorporate the toxic effects of the individual constituents of PM. This study aimed to provide a structured approach for incorporating the toxic effects of PM components when developing regulatory limits on PM. A four-step human health risk assessment framework was used, consisting of: (1) hazard identification (parameters: PM and its constituents and their associated toxic effects on health), (2) exposure assessment (parameters: concentrations of PM and its constituents; information on the size and shape of PM; fate and transport of PM and its constituents in the respiratory system), (3) dose-response assessment (parameters: reference dose or target toxicity dose of PM and its constituents), and (4) risk estimation (metric: hazard quotient and/or lifetime incremental risk of cancer, as applicable). The parameters required at every step were obtained from the literature. Using this information, an attempt was made to determine limits on PM using component-specific information. An example calculation was conducted for exposure to PM2.5 and its metal constituents in the Indian ambient environment. The identified data gaps were: (1) concentrations of PM and its constituents and their relationship with sampling regions, and (2) the relationship between the toxicity of PM and its components.
Keywords: Air, component-specific toxicity, human health risks, particulate matter.
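A worked, purely illustrative hazard-quotient calculation for a single hypothetical constituent is shown below; every input value, including the reference dose, is an assumed placeholder rather than a figure from this study.

```python
# Illustrative hazard-quotient calculation for one hypothetical PM2.5 metal
# constituent; all numbers below are placeholders, not values from the study.
concentration = 0.05       # ug/m3, ambient concentration of the constituent
inhalation_rate = 20.0     # m3/day
exposure_freq = 350        # days/year
exposure_duration = 30     # years
body_weight = 70.0         # kg
averaging_time = 30 * 365  # days (non-carcinogenic averaging time)
reference_dose = 1e-4      # mg/kg-day, assumed reference dose (RfD)

# Average daily dose (mg/kg-day); 1000 ug = 1 mg.
add = (concentration / 1000.0) * inhalation_rate * exposure_freq * exposure_duration \
      / (body_weight * averaging_time)
hazard_quotient = add / reference_dose
print(f"ADD = {add:.2e} mg/kg-day, HQ = {hazard_quotient:.2f}",
      "(HQ > 1 would flag potential concern)")
```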
136 Relevance of the Variation in the Angulation of Palatal Throat Form to the Orientation of the Occlusal Plane: A Cephalometric Study
Authors: Sanath Kumar Shetty, Sanya Sinha, K. Kamalakanth Shenoy
Abstract:
The posterior reference point for the ala-tragal line is a cause of confusion, with different authors suggesting the superior, middle or inferior part of the tragus. This study was conducted on 200 subjects to evaluate whether any correlation exists between variation in the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line at different tragal levels. A custom-made occlusal plane analyzer was used to check the parallelism between the ala-tragal line and the occlusal plane, and a lateral cephalogram was taken for each subject to measure the angulation of the palatal throat form. Fisher's exact test was used to evaluate the correlation between the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line. In addition, a classification of the palatal throat form was formulated based on confidence intervals. From the results of the study, the inferior, middle and superior parts of the tragus served as the reference point in 49.5%, 32% and 18.5% of the subjects, respectively. Class I palatal throat form (41°-50°), Class II palatal throat form (below 41°) and Class III palatal throat form (above 50°) were seen in 42%, 43% and 15% of the subjects, respectively. It was also concluded that there is no significant correlation between variation in the angulation of the palatal throat form and the relative parallelism of the occlusal plane to the ala-tragal line.
Keywords: Ala-tragal line, occlusal plane, palatal throat form, cephalometry.
135 The Coexistence of Dual Form of Malnutrition among Portuguese Institutionalized Elderly People
Authors: C. Caçador, M. J. Reis Lima, J. Oliveira, M. J. Veiga, M. Teixeira Veríssimo, F. Ramos, M. C. Castilho, E. Teixeira-Lemos
Abstract:
In the present study we evaluated the nutritional status of 214 institutionalized elderly residents of both genders, aged 65 years and older, from 11 care homes located in the district of Viseu (central Portugal). The evaluation was based on anthropometric measurements and the Mini Nutritional Assessment (MNA) score.
The mean age of the subjects was 82.3 ± 6.1 years. Most of the elderly residents were female (72.0%). The majority had 4 years of formal education (51.9%) and were widowed (74.3%) or married (14.0%).
Men presented a mean age of 81.2 ± 8.5 years, weight 69.3 ± 14.5 kg and BMI 25.33 ± 6.5 kg/m2. In women, the mean age was 84.5 ± 8.2 years, weight 61.2 ± 14.7 kg and BMI 27.43 ± 5.6 kg/m2.
The evaluation of nutritional status using the MNA score showed that 24.0% of the residents were at risk of undernutrition and 76.0% were well nourished.
There was a high prevalence of obese (24.8%) and overweight (33.2%) residents according to BMI, while 7.5% were considered underweight.
We also found that, according to their waist circumference measurements, 88.3% of the residents were at risk for cardiovascular disease (CVD), and 64.0% presented a very high risk for CVD (WC ≥ 88 cm for women and WC ≥ 102 cm for men).
The present study revealed the coexistence of a dual form of malnutrition (undernutrition and overweight) among institutionalized Portuguese elderly, concomitant with an excess of abdominal adiposity. The high prevalence of residents at high risk for CVD should not be overlooked.
Given the vulnerability of the group of institutionalized elderly, our study highlights the importance of classifying nutritional status based on both instruments: the BMI and the MNA.
Keywords: Nutritional status, MNA, BMI, elderly.
134 Hazard Identification and Sensitivity of Potential Resource of Emergency Water Supply
Authors: A. Bumbová, M. Čáslavský, F. Božek, J. Dvořák, E. Bakoš
Abstract:
The paper presents a case study of hazard identification and sensitivity analysis for a potential resource of emergency water supply, as part of the application of a methodology for classifying drinking water resources for emergency supply of the population. The case study was carried out on a selected emergency water supply resource in one region of the Czech Republic. The hazard identification and sensitivity analysis of the potential resource are based on a unique procedure and on newly developed general registers of selected types of hazards and sensitivities. The registers were developed with the help of the Fault Tree Analysis method in combination with the "What if" method. The hazards identified for the assessed resource include hailstorms and torrential rain, drought, soil erosion, and accidents involving farm machinery and agricultural production. The developed registers of hazards and vulnerabilities, together with a semi-quantitative assessment of hazards for individual parts of the hydrological structure and the technological elements of the presented drilled wells, form the basis for a semi-quantitative risk assessment of the potential resource of emergency supply of the population and the subsequent classification of such a resource within the system of crisis planning.
Keywords: Hazard identification, register of hazards, sensitivity identification, register of sensitivity, emergency water supply, state of crisis, resource of emergency water supply, ground water.
133 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn
Abstract:
Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition, employed to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: (1) a measure for feature subset evaluation, and (2) a search strategy. For the evaluation measure, we employ the fuzzy-rough dependency degree (FRFDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS), due to its effectiveness in feature selection. As the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRFDD. Nine classifiers are employed to compare the proposed approach with several existing methods over twenty-two datasets from the UCI repository, including nine high-dimensional and large ones. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
Keywords: Binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct.
132 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment in which the on-off status of each electrical appliance, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations, such as cooking, eating and dish washing, is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance statuses are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The results of our empirical study reveal that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
Keywords: Situation awareness, smart home, IoT, machine learning, classifier.
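One of the candidate classifiers, naïve Bayes over binary appliance-status vectors, can be sketched as follows; the appliance list, situations, and labels are invented placeholders, not the simulated data of the study.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row is a snapshot of appliance on/off states (placeholder order:
# kitchen light, stove, dishwasher, TV, bedroom light); the label says whether
# vacuum cleaning is allowed (1 = Yes, 0 = No) in that simulated situation.
X = np.array([
    [1, 1, 0, 0, 0],   # cooking                  -> No
    [1, 0, 1, 0, 0],   # dish washing             -> No
    [0, 0, 0, 1, 0],   # watching TV              -> No
    [0, 0, 0, 0, 0],   # house empty              -> Yes
    [0, 0, 1, 0, 0],   # only dishwasher running  -> Yes
    [0, 0, 0, 0, 1],   # reading in bedroom       -> No
])
y = np.array([0, 0, 0, 1, 1, 0])

clf = BernoulliNB().fit(X, y)
snapshot = np.array([[0, 0, 1, 0, 0]])        # dishwasher on, nobody active
print("activate vacuum:", bool(clf.predict(snapshot)[0]),
      "p(yes) =", round(clf.predict_proba(snapshot)[0, 1], 2))
```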
131 Many-Sided Self Risk Analysis Model for Information Asset to Secure Stability of the Information and Communication Service
Authors: Jin-Tae Lee, Jung-Hoon Suh, Sang-Soo Jang, Jae-Il Lee
Abstract:
Information and communication service providers (ICSPs) that are significant in size and provide Internet-based services take administrative, technical, and physical protection measures via the information security check service (ISCS). These protection measures are the minimum actions necessary to secure the stability and continuity of the information and communication services (ICS) that they provide. Information assets are essential to providing ICS, so deciding the relative importance of the target assets for protection is a critical procedure. The risk analysis model described in this study, designed to decide the relative importance of information assets, evaluates information assets from many angles in order to choose which ones should be given priority for protection. Many-sided risk analysis (MSRS) grades the importance of information assets based on an evaluation of major security check items, an evaluation of the dependency on the information and communication facility (ICF) and of the influence on potential incidents, and an evaluation of major items according to their service classification, in order to identify the ISCS target. MSRS can be an efficient risk analysis model that helps ICSPs identify their core information assets and take information protection measures first, so that the stability of the ICS can be ensured.
Keywords: Information asset, information communication facility, ISCS (Information Security Check Service), evaluation, grade.