Search results for: Military Fighter Aircraft Selection
158 Embedded Semi-Fragile Signature Based Scheme for Ownership Identification and Color Image Authentication with Recovery
Authors: M. Hamad Hassan, S.A.M. Gilani
Abstract:
In this paper, a novel scheme is proposed for Ownership Identification and Color Image Authentication by deploying cryptography and digital watermarking. The color image is first transformed from RGB to the YST color space, which is exclusively designed for watermarking. Following the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with the central 2×2 sub-block selected from each. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T, the T channel is suitable for embedding the recovery information apart from the ownership and authentication information; therefore, each 4×4 block of the T channel, along with the ownership information, is processed by SHA160 to compute a content-based hash that is unique and invulnerable to a birthday attack or hash collision, instead of using MD5, which may raise the collision condition H(m) = H(m'). For recovery, the intensity mean of each 4×4 block of each channel is computed and encoded into eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within a reasonable time, and has the ability to recover the original work with probability near one.
Keywords: Hash Collision, LSB, MD5, PSNR, SHA160
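As an illustration of the block-level authentication and recovery data described in the abstract, the sketch below computes, for each 4×4 block of one channel, an eight-bit intensity-mean recovery code and a SHA-1 (160-bit) digest of the block content concatenated with an owner identifier. The function name, toy image and owner string are illustrative assumptions; the paper's channel selection, LSB embedding and 2D Torus Automorphism mapping are not reproduced here.

```python
import hashlib
import numpy as np

def block_recovery_code(channel: np.ndarray, owner_id: bytes):
    """For each 4x4 block: an 8-bit intensity mean (recovery info) and a
    SHA-1 digest of the block content plus owner ID (authentication info)."""
    h, w = channel.shape
    means, digests = [], []
    for r in range(0, h - h % 4, 4):
        for c in range(0, w - w % 4, 4):
            block = channel[r:r + 4, c:c + 4]
            means.append(int(round(block.mean())) & 0xFF)   # 8-bit recovery code
            digests.append(hashlib.sha1(block.tobytes() + owner_id).digest())
    return means, digests

# toy usage with a random 8x8 single-channel image
img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
m, d = block_recovery_code(img, b"owner-001")
print(m[:2], d[0].hex()[:16])
```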
157 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording
Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy
Abstract:
Sleep stage scoring is the process of classifying the stage of sleep in which the subject is. Sleep is classified into two states based on the constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further classified into four stages (1-4). These states and the state of wakefulness are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording using wavelet packet decomposition was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal. A feature vector of 54 features was formed. It was reduced to a size of 25 using the gain ratio method and fed into a classifier of regression trees. The regression trees were trained using 67% of the available records, selected based on cross-validation of the records. The remaining records were used for testing the classifier. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to the techniques in the literature.
Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.
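A minimal sketch of the feature-extraction and classification chain outlined above, assuming PyWavelets, SciPy and scikit-learn: each EEG epoch is smoothed with a Savitzky-Golay filter, decomposed with a level-4 wavelet packet transform using a db20 filter, and summarized by the energy of each terminal node. The toy data, the reduced feature set and the plain decision tree (standing in for the paper's regression trees and gain-ratio reduction) are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter
from sklearn.tree import DecisionTreeClassifier

def wp_features(epoch, wavelet="db20", level=4):
    """Energy of each terminal wavelet-packet node for one EEG epoch."""
    smoothed = savgol_filter(epoch, window_length=11, polyorder=3)
    wp = pywt.WaveletPacket(smoothed, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(np.square(node.data))
                     for node in wp.get_level(level, order="freq")])

# toy data: 100 epochs of 3000 samples, 5 sleep-stage labels
rng = np.random.default_rng(0)
X = np.vstack([wp_features(rng.standard_normal(3000)) for _ in range(100)])
y = rng.integers(0, 5, 100)
clf = DecisionTreeClassifier().fit(X[:67], y[:67])   # roughly 67% of records for training
print(clf.score(X[67:], y[67:]))
```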
156 Combined Feature Based Hyperspectral Image Classification Technique Using Support Vector Machines
Authors: K. Kavitha, S. Arivazhagan
Abstract:
A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. Classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is adopted. Run length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the selected forty bands, and from the GLRLMs the run length features for individual pixels are calculated. The Principal Components are calculated for another forty bands, and the Independent Components for the next forty bands. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The run length features, Principal Components and Independent Components are combined to form the combined feature set used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated with ground truth and accuracies are calculated.
Keywords: Multi-class, Run Length features, PCA, ICA, classification and Support Vector Machines.
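The combined-feature idea can be sketched with scikit-learn as below, where PCA and FastICA are computed on two 40-band subsets and concatenated with a placeholder for the GLRLM run-length features (no off-the-shelf implementation is used here). The random data, the class labels, the interpretation of the combination as concatenation, and the plain RBF SVM (in place of the paper's binary hierarchical tree) are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
pixels = rng.random((500, 120))          # 500 pixels x 120 selected bands (toy)
labels = rng.integers(0, 4, 500)         # 4 land-cover classes (toy)

pca_feats = PCA(n_components=5).fit_transform(pixels[:, 40:80])              # PCs from 40 bands
ica_feats = FastICA(n_components=5, random_state=0).fit_transform(pixels[:, 80:])  # ICs from 40 bands
run_length = rng.random((500, 5))        # placeholder for per-pixel GLRLM run-length features
combined = np.hstack([run_length, pca_feats, ica_feats])   # combined feature set

clf = SVC(kernel="rbf").fit(combined[:400], labels[:400])
print(clf.score(combined[400:], labels[400:]))
```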
155 The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry
Authors: Abilio Avila, Orestis Terzidis
Abstract:
A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher in identifying and focusing on critical areas of a research project while preventing the formation of concepts prejudiced by the current body of literature. This approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to create a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of actions, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.
Keywords: Grounded theory, qualitative research, secondary case studies, secondary data analysis, interview guide.
154 The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions
Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani
Abstract:
To understand the relationships among traits and to determine the factors that explain the characteristics under consideration in rapeseed varieties, 10 rapeseed genotypes were evaluated in a completely randomized design with three replications under drought stress in 2009-2010 at the research field of the College of Agriculture, Islamic Azad University, Karaj branch. In this research, 11 characteristics related to growth, production and yield stages were considered. Analysis of variance showed that there is a significant difference among the rapeseed varieties for these characteristics. Simple correlation coefficients under both normal and drought stress conditions indicate that seed number and pod number per plant have a positive and significant correlation, at the 1% probability level, with seed yield, and selection based on these characteristics would be effective for improving yield. Under normal and drought stress conditions, factor analysis showed that five factors had eigenvalues greater than one under normal conditions, accounting for 82.72% of the total variance, while under drought stress four factors were identified, accounting for 76.78% of the total variance. Considering the overall results of this research, the characteristics found effective in the factor analysis, together with their different components, can be used in breeding work to select applicable and tolerant genotypes under drought stress conditions.
Keywords: Correlation, drought stress, factor analysis, rapeseed.
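A rough sketch of the factor-analysis step, assuming scikit-learn's FactorAnalysis as a stand-in for the factor analysis used in the paper; the trait matrix is random toy data, and the variance of the factor scores is only a crude proxy for the reported variance percentages.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
traits = rng.random((30, 11))            # 10 genotypes x 3 replications -> 30 observations, 11 traits (toy)

fa = FactorAnalysis(n_components=4).fit(traits)
loadings = fa.components_.T              # trait-by-factor loading matrix
score_var = np.var(fa.transform(traits), axis=0)   # rough per-factor variance of scores
print(loadings.shape, score_var.round(3))
```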
153 Automatic Detection of Breast Tumors in Sonoelastographic Images Using DWT
Authors: A. Sindhuja, V. Sadasivam
Abstract:
Breast cancer is the most common malignancy in women and the second leading cause of death for women all over the world. The earlier the cancer is detected, the better the treatment. The diagnosis and treatment of the cancer rely on segmentation of sonoelastographic images. Texture features have not previously been considered for sonoelastographic segmentation. Sonoelastographic images of 15 patients containing both benign and malignant tumors are considered for experimentation. The images are enhanced to remove noise, improve contrast and emphasize the tumor boundary. Each image is then decomposed into sub-bands using single-level Daubechies wavelets varying from one coefficient to six coefficients. Gray Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are extracted from each sub-band and then selected by ranking them using the Sequential Floating Forward Selection (SFFS) technique. The resultant images undergo K-means clustering followed by a few post-processing steps to remove false spots. The tumor boundary is detected from the segmented image. It is proposed that the Local Binary Pattern (LBP) features from the vertical coefficients of the Daubechies wavelet with two coefficients are best suited for segmentation of sonoelastographic breast images among the wavelet members using one to six coefficients for decomposition. The results are also quantified with the help of an expert radiologist. The proposed work can be used in further diagnostic processing to decide whether the segmented tumor is benign or malignant.
Keywords: Breast Cancer, Segmentation, Sonoelastography, Tumor Detection.
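The segmentation pipeline described above can be sketched roughly as follows, assuming PyWavelets, scikit-image and scikit-learn: a single-level db2 DWT, LBP texture features on the vertical sub-band, and K-means clustering into tumour and background. GLCM features, SFFS ranking and the post-processing steps are omitted, and the input image is a random stand-in for a sonoelastogram.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

img = np.random.randint(0, 256, (128, 128)).astype(float)   # stand-in sonoelastogram
cA, (cH, cV, cD) = pywt.dwt2(img, "db2")                     # single-level Daubechies-2 DWT

# rescale the vertical sub-band to 8-bit before computing LBP texture codes
cV_u8 = np.uint8(255 * (cV - cV.min()) / (np.ptp(cV) + 1e-9))
lbp = local_binary_pattern(cV_u8, P=8, R=1, method="uniform")
features = lbp.reshape(-1, 1)                                # one texture feature per coefficient

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
mask = labels.reshape(cV.shape)                              # coarse tumour / background mask
print(mask.shape, np.bincount(labels))
```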
152 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy, which is a combination of entropy and standard deviation weighting. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weightings on the network selection process and on the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to the existing methods.
Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.
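A compact sketch of the hybrid-weighted TOPSIS ranking described above: entropy and standard-deviation weights are blended by a control parameter alpha, and candidate cells are ranked by their closeness coefficient. The attribute names, the toy decision matrix and the equal alpha value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hybrid_topsis(X, alpha=0.5, benefit=None):
    """TOPSIS ranking of candidate cells with a hybrid entropy / std-dev weighting.
    X: (cells x attributes) decision matrix; alpha balances the two weightings."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                                   # column-normalised shares
    ent = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(len(X))
    w_ent = (1 - ent) / (1 - ent).sum()                     # entropy weights
    w_std = X.std(axis=0) / X.std(axis=0).sum()             # standard-deviation weights
    w = alpha * w_ent + (1 - alpha) * w_std                 # hybrid weight
    R = X / np.linalg.norm(X, axis=0) * w                   # weighted normalised matrix
    benefit = np.ones(X.shape[1], bool) if benefit is None else np.asarray(benefit)
    ideal = np.where(benefit, R.max(axis=0), R.min(axis=0))
    anti = np.where(benefit, R.min(axis=0), R.max(axis=0))
    d_pos = np.linalg.norm(R - ideal, axis=1)
    d_neg = np.linalg.norm(R - anti, axis=1)
    return d_neg / (d_pos + d_neg)                          # closeness: higher is better

# toy: 3 candidate cells x 3 attributes (e.g. signal strength, load, interference)
scores = hybrid_topsis([[80, 0.3, 5], [70, 0.5, 2], [90, 0.8, 8]],
                       benefit=[True, False, False])
print(scores.round(3))
```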
151 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms
Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat
Abstract:
In general, issues related to design and maintenance are considered in an independent manner. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design For Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in proposing dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here, the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products, where large-scale refers to complex multi-component industrial systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and the optimal product architecture.
Keywords: Availability, design for maintenance, DFM, dynamic maintenance, life cycle cost, LCC, maintenance free operating period, MFOP, simultaneous optimization.
150 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impacts on human engagement, 'immersion' and related emotional or 'affective' states is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process between the affective clusters and the emotion labels.
Keywords: Virtual reality, affective computing, affective VR, emotion-based affective physiological database.
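The classification step can be reproduced in outline with scikit-learn as below, using 10-fold cross-validation of KNN and SVM on a random stand-in for the 30 selected psychophysiological features; the feature values, number of folds and four-class labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 30))      # 30 selected EEG/GSR/HR features per window (toy)
y = rng.integers(0, 4, 300)             # four affective clusters (toy labels)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```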
149 Classifying of Maize Inbred Lines into Heterotic Groups using Diallel Analysis
Authors: Mozhgan Ziaie Bidhendi, Rajab Choukan, Farokh Darvish, Khodadad Mostafavi, Eslam Majidi
Abstract:
The selection of parents and breeding strategies for successful maize hybrid production will be facilitated by heterotic grouping of parental lines and determination of their combining abilities. Fourteen maize inbred lines used in maize breeding programs in Iran were crossed in a diallel mating design. The 91 F1 hybrids and the 14 parental lines were studied over two years at four locations in Iran to investigate the combining ability of the genotypes for grain yield and to determine heterotic patterns among germplasm sources, using both Griffing's method and the biplot approach for diallel analysis. The graphical representation offered by the biplot analysis allowed a rapid and effective overview of the general combining ability (GCA) and specific combining ability (SCA) effects of the inbred lines, their performance in crosses, as well as grouping patterns of similar genotypes. GCA and SCA effects were significant for grain yield (GY). Based on significant positive GCA effects, the lines derived from LSC could be used as parents in crosses to increase GY. The maximum best-parent heterosis values and the highest SCA effects resulted from the crosses B73 × MO17 and A679 × MO17 for GY. The best heterotic pattern was LSC × RYD, which would be potentially useful in maize breeding programs to obtain high-yielding hybrids in the same climate as Iran.
Keywords: Biplot, diallel, Griffing, heterotic pattern.
148 Theoretical and Experimental Analysis of Hard Material Machining
Authors: Rajaram Kr. Gupta, Bhupendra Kumar, T. V. K. Gupta, D. S. Ramteke
Abstract:
Machining of hard materials is a recent technology for the direct production of work-pieces. The primary challenge in machining these materials is the selection of cutting tool inserts that facilitate an extended tool life and high-precision machining of the component. These materials are widely used for making precision parts for the aerospace industry. Nickel-based alloys are typically used in extreme-environment applications where a combination of strength, corrosion resistance and oxidation resistance is required. The present paper reports the theoretical and experimental investigations carried out to understand the influence of machining parameters on the response parameters. Considering the basic machining parameters (speed, feed and depth of cut), a study has been conducted to observe their influence on material removal rate, surface roughness, cutting forces and the corresponding tool wear. Experiments were designed and conducted with the help of the Central Composite Rotatable Design technique. The results reveal that, for the given range of process parameters, higher depths of cut favour the material removal rate while low feed rates favour the cutting forces. Low feed rates and high rotational speeds are suitable for better finish and higher tool life.
Keywords: Speed, feed, depth of cut, roughness, cutting force, flank wear.
147 Selection of Best Band Combination for Soil Salinity Studies using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)
Authors: S. H. Sanaeinejad, A. Astaraei, P. Mirhoseini Mousavi, M. Ghaemi
Abstract:
One of the main environmental problems affecting extensive areas in the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies. Remote sensing data can overcome most of these problems. Although satellite images are commonly used for such studies, there is still a need to find the best calibration between the data and the real situation in each specific area. The Neyshaboor area, in the north-east of Iran, was selected as the field study area for this research. Landsat satellite images of this area were used to prepare suitable learning samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and 273 locations were finally retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken over the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that the reflective bands 7, 3, 4 and 1 are the best band composition for preparing the color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Keywords: Soil salinity, Remote sensing, Image processing, ETM+, Nyshaboor
146 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or suction/discharge recirculation, which can occur only when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging which, if excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. Efforts have been made in this paper to explain Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA, NPSHR and their significance, the possible reasons for cavitation and internal recirculation, its diagnostics, and the remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration is cavitation, caused by high specific speeds or inadequate net positive suction head available, which results in damage to the material surfaces of the impeller and suction bells and degradation of machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps to provide NPSH margin ratios above 1.5 for future projects, and limiting Nss to 8500-9000 for cavitation-free operation.
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
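The quantities discussed above can be checked with the customary US-units relation Nss = N * Q^0.5 / NPSHR^0.75 (speed in rpm, flow per impeller eye in US gpm, NPSHR in feet), alongside the NPSH margin ratio NPSHA/NPSHR. The numerical values below are illustrative, not data from the case study.

```python
def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """US-units suction specific speed: Nss = N * Q**0.5 / NPSHR**0.75
    (Q per impeller eye, in US gpm; NPSHR in feet)."""
    return rpm * flow_gpm ** 0.5 / npshr_ft ** 0.75

# illustrative duty point (not from the paper's case study)
nss = suction_specific_speed(rpm=590, flow_gpm=100000, npshr_ft=30)
margin_ratio = 45.0 / 30.0            # NPSHA / NPSHR for the same duty point
print(round(nss), round(margin_ratio, 2))   # compare against Nss <= 8500-9000 and ratio > 1.5
```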
145 A Text Mining Technique Using Association Rules Extraction
Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey
Abstract:
This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules amongst the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were applied to web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of another system that uses the Apriori algorithm, in terms of execution time and the quality of the extracted association rules.
Keywords: Text mining, data mining, association rule mining
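A toy sketch of the keyword-selection and rule-extraction idea, assuming scikit-learn for the TF-IDF step: the highest-weighted terms are kept as document keywords, and simple support/confidence rules are mined amongst them. The three example documents and the naive rule loop are stand-ins for the paper's GARW algorithm, not its implementation.

```python
import itertools
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["bird flu outbreak spreads in poultry farms",
        "health officials report new bird flu cases",
        "vaccine research for flu virus accelerates"]

# TF-IDF based keyword selection: keep the highest-weighted terms over the collection
tfidf = TfidfVectorizer(max_features=8)
X = (tfidf.fit_transform(docs).toarray() > 0)          # keyword presence per document
terms = tfidf.get_feature_names_out()

# naive association rules amongst the selected keywords (a stand-in for GARW)
min_support, min_conf = 0.5, 0.7
for i, j in itertools.permutations(range(len(terms)), 2):
    support = np.mean(X[:, i] & X[:, j])
    confidence = support / (np.mean(X[:, i]) + 1e-12)
    if support >= min_support and confidence >= min_conf:
        print(f"{terms[i]} -> {terms[j]}  support={support:.2f} confidence={confidence:.2f}")
```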
144 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is carried out manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm and using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
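A minimal scikit-learn sketch of the classification stage, with a multilayer perceptron trained on a random stand-in for the per-window EEG features; the window features, labels and network size are assumptions, and the Training Builder feature-extraction pipeline is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 40))          # features per sliding EEG window (toy)
y = rng.integers(0, 2, 2000)                 # 1 = ictal window, 0 = inter-ictal (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"test accuracy: {mlp.score(X_te, y_te):.3f}")
```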
143 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time a lot of challenges, for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even greater accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the less likely they are to interact. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the existing negative pair selection method.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
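The graph-based selection rule can be sketched with NetworkX as below: known interacting pairs form the interaction graph, and pairs whose shortest path is long (or who are disconnected) are taken as negative examples. The protein names and the distance threshold are illustrative assumptions.

```python
import itertools
import networkx as nx

# toy interaction graph built from known (positive) protein pairs
positives = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5"), ("P1", "P6")]
G = nx.Graph(positives)

def negative_pairs(G, min_distance=4):
    """Pairs whose shortest path in the interaction graph is long (or absent)
    are taken as negative (non-interacting) examples."""
    negs = []
    for u, v in itertools.combinations(G.nodes, 2):
        if G.has_edge(u, v):
            continue
        try:
            d = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d >= min_distance:
            negs.append((u, v))
    return negs

print(negative_pairs(G))
```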
142 Dynamic Features Selection for Heart Disease Classification
Authors: Walid MOUDANI
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from Coronary Heart Disease warehouses for heart attack prediction, a condition which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique associated with dynamic programming. We then propose to validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
Keywords: Multi-classifier decision tree, feature reduction, dynamic programming, rough sets.
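The validation step can be sketched with scikit-learn as below, assuming the rough-sets/dynamic-programming feature reduction has already produced a reduced feature table; the 525-by-12 matrix and the labels are random stand-ins, and a plain Random Forest is used directly in place of the paper's multi-classifier decision tree.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((525, 12))            # 525 adults x reduced medical risk factors (toy)
y = rng.integers(0, 2, 525)          # 1 = risky heart-disease case (toy labels)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, y, cv=5).mean())
```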
141 Portfolio Management for Construction Company during Covid-19 Using AHP Technique
Authors: Sareh Rajabi, Salwa Bheiry
Abstract:
In general, Covid-19 has caused many financial and non-financial damages to the economy and the community. The level and severity of Covid-19 as a pandemic vary across regions and across different types of projects. The Covid-19 virus has recently emerged as one of the most important risk management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio management level, in order to avoid a risky portfolio. Covid-19 hit South America, parts of Europe and the Middle East particularly hard, and the pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, transportation and commercial impacts. Therefore, this research proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic case like Covid-19 and its impacts on construction projects. The AHP technique uses four sub-criteria, health and safety, commercial risk, completion risk and contractual risk, to evaluate the project and program. The result provides decision makers with information on which projects carry higher or lower risk in a Covid-19 or pandemic scenario. The decision makers can therefore reach the most feasible solution, based on effectively weighted criteria, for project selection within their portfolio to match the organization's strategies.
Keywords: Portfolio management, risk management, COVID-19, analytical hierarchy process technique.
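A small sketch of the AHP computation for the four sub-criteria named above: pairwise comparisons are turned into priority weights via the principal eigenvector, and Saaty's consistency ratio is checked. The pairwise judgement values are illustrative assumptions, not elicited from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector priorities and consistency ratio for an AHP matrix."""
    A = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                            # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                        # priority weights
    n = len(A)
    ci = (vals[k].real - n) / (n - 1)                   # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)       # Saaty's random index
    return w, ci / ri                                   # weights, consistency ratio

# illustrative pairwise comparisons of the four Covid-19 sub-criteria:
# health & safety, commercial risk, completion risk, contractual risk
A = [[1,   3,   5,   7],
     [1/3, 1,   3,   5],
     [1/5, 1/3, 1,   3],
     [1/7, 1/5, 1/3, 1]]
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))   # CR < 0.1 indicates acceptable consistency
```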
140 A Security Model of Voice Eavesdropping Protection over Digital Networks
Authors: Supachai Tangwongsan, Sathaporn Kassuvan
Abstract:
The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, for using the keys in the data encryption process during conversation. The key owner can freely make his or her choice in key selection, so it is recommended to exchange a different key for each conversational party and to record the key for each case in the memory provided in the client device. The next step is to set and record another personal encryption option: encrypting either all frames or just some of the frames, the so-called 1:M figure. Using different personal secret keys and different 1:M settings for different parties, without the intervention of the service operator, poses quite a big problem for any eavesdropper who attempts to discover the key used during the conversation, especially within a short period of time. Thus, the scheme is quite safe and effective at protecting against voice eavesdropping. The results of the implementation indicate that the system performs its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change existing network systems, such as the mobile phone network and VoIP.
Keywords: Computer Security, Encryption, Key Exchange, Security Model, Voice Eavesdropping.
139 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.
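The hybrid model can be outlined with scikit-learn as below, using mutual information (the continuous analogue of information gain) to rank features before an RBF SVM with cross-validation; the pollutant features, the six AQI labels and the value of k are toy assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.random((800, 10))           # daily pollutant and weather features (toy)
y = rng.integers(0, 6, 800)         # six AQI levels, Excellent ... Serious Pollution (toy)

model = make_pipeline(SelectKBest(mutual_info_classif, k=6), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```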
138 Enhancement of Rice Straw Composting Using UV Induced Mutants of Penicillium Strain
Authors: T. N. M. El Sebai, A. A. Khattab, Wafaa M. Abd-El Rahim, H. Moawad
Abstract:
Fungal mutant strains that produce cellulase and xylanase enzymes induced enhanced hydrolysis of rice straw. The mutants were obtained by exposing a Penicillium strain to UV-light treatments. Screening and selection after treatment with UV light were carried out using the cellulolytic and xylanolytic clear zone method to select the hypercellulolytic and hyperxylanolytic mutants. These mutants were evaluated for their cellulase and xylanase enzyme production as well as their ability to biodegrade rice straw. The mutant 12 UV/1 produced 306.21% and 209.91% of the cellulase and xylanase, respectively, of the original wild-type strain. This mutant showed a high capacity for rice straw degradation. The effectiveness of the tested mutant strain and that of the wild strain were compared with respect to enhancing the composting of a rice straw and animal manure mixture. The results showed that the compost product of the mixture inoculated with the mutant strain (12 UV/1) was the best, compared to the wild strain and the un-inoculated mixture. Analysis of the composted materials showed that the characteristics of the produced compost were close to those of high-quality standard compost. The results obtained in the present work suggest that the combination of rice straw and animal manure could be used to enhance the composting of rice straw, particularly when applied with a fungal decomposer that accelerates the composting process.
Keywords: Rice straw, composting, UV mutants, Penicillium.
137 Comparative Study of Seismic Isolation as Retrofit Method for Historical Constructions
Authors: Carlos H. Cuadra
Abstract:
Seismic isolation can be used as a retrofit method for historical buildings, with the advantage that minimum intervention on the superstructure is required. However, the selection of isolation devices depends on the weight and stiffness of the upper structure. In this study, two buildings are analysed to evaluate the applicability of this retrofitting methodology. Both buildings are located in Akita prefecture, in the northern part of Japan. One building is a wooden structure that corresponds to the old council meeting hall of Noshiro city. The second building is a brick masonry structure that was used as the house of a foreign mining engineer and is located in Ani town. Ambient vibration measurements were performed on both buildings to estimate their dynamic characteristics. A target period of vibration of 3 seconds is then selected for the isolated systems in order to estimate the required stiffness of the isolation devices. For the wooden structure, which is a light construction, it was found that natural rubber isolators in combination with friction bearings are suitable for seismic isolation. In the case of the masonry building, elastomeric isolators can be used for its seismic isolation. Lumped mass systems are used for the seismic response analysis, and it is verified in both cases that seismic isolation can be used as a retrofitting method for historical constructions. However, in the case of the light building, most of the weight corresponds to the reinforced concrete slab that is required to install the isolation devices.
Keywords: Historical building, finite element method, masonry structure, seismic isolation, wooden structure.
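The link between the 3-second target period and the required isolator stiffness follows from T = 2*pi*sqrt(m/k); the sketch below evaluates k = 4*pi^2*m/T^2 for two illustrative masses, which are assumptions and not the measured weights of the Noshiro or Ani buildings.

```python
import math

def required_stiffness(mass_kg, target_period_s):
    """Total lateral stiffness of the isolation layer for a target period T,
    from T = 2*pi*sqrt(m/k)  =>  k = 4*pi**2 * m / T**2 (N/m)."""
    return 4 * math.pi ** 2 * mass_kg / target_period_s ** 2

# illustrative masses only (not the measured building weights)
for name, mass in [("wooden structure", 60_000), ("masonry structure", 250_000)]:
    k = required_stiffness(mass, target_period_s=3.0)
    print(f"{name}: k ~ {k / 1e3:.0f} kN/m")
```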
136 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels
Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva
Abstract:
In this paper, a hybrid FDMA-TDMA access technique in a cooperative, distributed fashion is analysed, implementing a modified version of the protocol introduced in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, as the system works with a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space-Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to perfectly detect the joint signals from the multiple source terminals. The performance is analysed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
Keywords: Cooperation diversity, best channel selection scheme, MIMO relay networks, V-BLAST, QR decomposition, MMSE.
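The MMSE nulling stage used at the relays can be sketched as below for a 2x2 toy channel: the weight matrix W = (H^H H + sigma^2 I)^-1 H^H is applied to the received vector and the result is sliced to BPSK symbols. The ordered successive interference cancellation of full V-BLAST and the relay gain are omitted; the channel and noise values are illustrative.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE nulling: W = (H^H H + sigma^2 I)^-1 H^H, then slice to BPSK."""
    n_tx = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return np.sign((W @ y).real)          # BPSK decision

rng = np.random.default_rng(7)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)  # Rayleigh-like channel
s = np.array([1.0, -1.0])                 # BPSK symbols from the two source terminals
noise_var = 0.1
y = H @ s + np.sqrt(noise_var / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(mmse_detect(H, y, noise_var))
```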
135 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users in order to gain the most effective use of the systems. In the present study, a new phantom was used for evaluating dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was used than with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower dose options: this limit prevented the tube current from being reduced further, and therefore the lower-dose ATCM setting resembled a fixed mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
134 Investigation of Flame and Soot Propagation in Non-Air Conditioned Railway Locomotives
Authors: Abhishek Agarwal, Manoj Sarda, Juhi Kaushik, Vatsal Sanjay, Arup Kumar Das
Abstract:
The propagation of fire through a non-air conditioned railway compartment is studied by means of numerical simulations. The coupled computational fire dynamics equations, namely Navier-Stokes, lumped species continuity, overall mass and energy conservation, and heat transfer, are solved using the Fire Dynamics Simulator (FDS), a finite volume based (for radiation) and finite difference based (for all other equations) solver. A single coupe with an eight-berth occupancy is used to establish the numerical model, followed by the selection of a three-coupe system as the fundamental unit of the locomotive compartment. The Heat Release Rate Per Unit Area (HRRPUA) of the initial fire is varied to consider a wide range of compartment fires. Parameters such as the air inlet velocity relative to the locomotive at the windows, the level of interaction with the ambiance, and the closure of the middle berth are studied through a wide range of numerical simulations. Almost all loss of life and property due to a fire breakout can be attributed to direct or indirect exposure to flames, or to the inhalation of toxic gases and the resultant suffocation due to smoke and soot. Therefore, the temporal evolution of fire and smoke is reported for each of the considered cases, which can be used, in its present or an extended form, to develop guidelines to be followed in case of a fire breakout.
Keywords: Fire dynamics, flame propagation, locomotive fire, soot flow pattern.
133 Game Theory Based Diligent Energy Utilization Algorithm for Routing in Wireless Sensor Network
Authors: X. Mercilin Raajini, R. Raja Kumar, P. Indumathi, V. Praveen
Abstract:
Many cluster-based routing protocols have been proposed in the field of wireless sensor networks, in which groups of nodes are formed as clusters. A cluster head is selected from among those nodes based on residual energy, coverage area and number of hops; that cluster head performs data gathering from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet, along with its own data packet, to the base station. Here, a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, cluster head selection is done with the help of game theory, a decision-making process, which selects a cluster head based on three parameters: residual energy (RE), Received Signal Strength Index (RSSI) and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum utilization of the available energy improves the network lifetime, and this is achieved by the proposed approach. In GTDEA, the packets are forwarded to the base station using an inter-cluster routing technique. Simulation results reveal that GTDEA improves the network performance in terms of throughput, lifetime and power consumption.
Keywords: Cluster head, energy utilization, game theory, LEACH, sensor network.
132 Privacy Protection Principles of Omnichannel Approach
Authors: Renata Mekovec, Dijana Peras, Ruben Picek
Abstract:
The advent of the Internet, mobile devices and social media is revolutionizing the experience of retail customers by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all the distribution information online and offline while shopping. Today, therefore, data are an asset more critical than ever for all organizations. Nonetheless, because of their heterogeneity across platforms, developers currently face difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center, called a hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate channels of communication for the hyper center. Consequently, a selection of widely used communication channels was identified and analyzed with regard to the requirements for optimizing the user experience. The evaluation criteria are divided into three groups: general, user profile and channel options. For each criterion, the weight of importance for omnichannel communication was defined. The most important consideration was how the hyper center can perform user identification while respecting the privacy protection requirements. The study also shows what the customer experience across digital networks would look like, based on an omnichannel approach following privacy protection principles.
Keywords: Personal data, privacy protection, omnichannel communication, retail.
131 Applying Theory of Inventive Problem Solving to Develop Innovative Solutions: A Case Study
Authors: Y. H. Wang, C. C. Hsieh
Abstract:
Good service design can increase organization revenue and consumer satisfaction while reducing labor and time costs. The problems facing consumers in the original service model of the eyewear and optical industry include the following issues: 1. insufficient information on eyewear products; 2. passive dependence on recommendations and insufficient selection; 3. incomplete records of the progression of vision conditions; 4. lack of complete customer records. This study investigates the case of Kobayashi Optical, applying the Theory of Inventive Problem Solving (TRIZ) to develop innovative solutions for the eyewear and optical industry. The analysis results lead to the following conclusions and management implications: in order to provide customers with improved professional information and recommendations, Kobayashi Optical is advised to establish customer purchasing records. Overall service efficiency can be enhanced by applying data mining techniques to analyze past consumer preferences and purchase histories. Furthermore, Kobayashi Optical should continue to develop a 3D virtual trial service which allows customers to easily browse different frame styles and colors. This 3D virtual trial service will reduce customer waiting times during peak service periods at stores.
130 An Overview of Islanding Detection Methods in Photovoltaic Systems
Authors: Wei Yee Teoh, Chee Wei Tan
Abstract:
The issue of unintentional islanding in PV grid interconnection remains a challenge in grid-connected photovoltaic (PV) systems. This paper presents an overview of the popularly used anti-islanding detection methods practically applied in grid-connected PV systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods and communication-based methods. Active methods have been the preferred detection technique over the years due to their very small non-detection zone (NDZ) in small-scale distributed generation. Passive methods are comparatively simpler than active methods in terms of circuitry and operation; however, they suffer from a large NDZ that significantly reduces their performance. Communication-based methods inherit the advantages of active and passive methods with reduced drawbacks. Hybrid methods, which evolved from the combination of active and passive methods, have been proven by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operation is analysed, while the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system-dependent elements. This study concludes that setup and operation cost is the vital factor for anti-islanding method selection in order to achieve a minimal compromise between cost and system quality.
Keywords: Active method, hybrid method, islanding detection, passive method, photovoltaic (PV), utility method.
129 Heat and Mass Transfer Modelling of Industrial Sludge Drying at Different Pressures and Temperatures
Authors: L. Al Ahmad, C. Latrille, D. Hainos, D. Blanc, M. Clausse
Abstract:
A two-dimensional finite volume axisymmetric model is developed to predict the simultaneous heat and mass transfer during the drying of industrial sludge. The simulations were run using COMSOL Multiphysics 3.5a. The input parameters of the numerical model were acquired from preliminary experimental work. The results permit correlations to be established describing the evolution of the various parameters as a function of the drying temperature and the sludge water content. The selection and coupling of the equations are validated based on the drying kinetics acquired experimentally over a temperature range of 45-65 °C and an absolute pressure range of 200-1000 mbar. The model, incorporating the heat and mass transfer mechanisms at different operating conditions, gives simulated values of temperature and water content. The simulated results are concordant with the experimental values only at the first and last drying stages, where sludge shrinkage is insignificant. The simulated and experimental results show that sludge drying is favored at high temperatures and low pressure. As observed experimentally, the drying time is reduced by 68% for drying at 65 °C compared to 45 °C under 1 atm. At 65 °C, a 200-mbar absolute pressure vacuum leads to an additional reduction in drying time estimated at 61%. However, the drying rate is underestimated in the intermediate stage. This underestimation could be improved in the model by considering the shrinkage phenomena that occur during sludge drying.
Keywords: Industrial sludge drying, heat transfer, mass transfer, mathematical modelling.