Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5682

552 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: H. Anıl, G. Kar

Abstract:

Turkey is rich in renewable energy resources and is one of the countries that show strong potential in geothermal energy production, thanks to its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts submitted to the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning and time series methods, the total generation of the power plants belonging to Zorlu Doğal Electricity Generation, which has a high installed geothermal capacity, was predicted for the first week and the first two weeks of March; the imbalance penalties were then calculated with these forecasts and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and a dataset created by extracting new features from it with feature engineering. According to the results, Support Vector Regression, from among the traditional machine learning models, outperformed the other models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than on the basic dataset. It is concluded that the imbalance penalty estimated for the selected organization is lower than the actual imbalance penalty, making the forecasts both more economical and more profitable.
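A minimal sketch (not the authors' code) of the kind of day-ahead forecast the abstract describes: a Support Vector Regression model fed with simple engineered lag features, plus a rough imbalance-penalty proxy. The CSV file, column names and imbalance price are hypothetical placeholders.

```python
# Sketch of SVR-based day-ahead generation forecasting with lag features.
import pandas as pd
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("hourly_generation.csv", parse_dates=["timestamp"])  # hypothetical file
df = df.sort_values("timestamp")

# Feature engineering: lagged generation and calendar features.
df["lag_24h"] = df["generation_mwh"].shift(24)
df["lag_168h"] = df["generation_mwh"].shift(168)
df["hour"] = df["timestamp"].dt.hour
df["dayofweek"] = df["timestamp"].dt.dayofweek
df = df.dropna()

features = ["lag_24h", "lag_168h", "hour", "dayofweek"]
train, test = df.iloc[:-168], df.iloc[-168:]          # hold out one week

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(train[features], train["generation_mwh"])
pred = model.predict(test[features])
print("MAE over the held-out week:", mean_absolute_error(test["generation_mwh"], pred))

# Rough imbalance-penalty proxy: absolute forecast error times an assumed price.
imbalance_price = 50.0  # currency units per MWh, illustrative value only
penalty = (abs(test["generation_mwh"] - pred) * imbalance_price).sum()
print("Estimated imbalance penalty:", round(penalty, 2))
```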

Keywords: Machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting.

551 Use of Fuzzy Edge Image in Block Truncation Coding for Image Compression

Authors: Amarunnishad T.M., Govindan V.K., Abraham T. Mathew

Abstract:

An image compression method has been developed that uses a fuzzy edge image within the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image was validated against classical edge detectors, using the results of the well-known Canny edge detector as the reference, before being applied in the proposed method. The bit plane generated by the conventional BTC method is replaced with a fuzzy bit plane generated by the logical OR operation between the fuzzy edge image and the corresponding conventional BTC bit plane. The input image is encoded with the block mean, the block standard deviation and the fuzzy bit plane. The proposed method has been tested on 8 bits/pixel test images of size 512×512 and found to be superior, with a better Peak Signal to Noise Ratio (PSNR), when compared to the conventional BTC and adaptive bit plane selection BTC (ABTC) methods. The raggedness, jagged appearance and ringing artifacts at sharp edges are greatly reduced in images reconstructed by the proposed method with the fuzzy bit plane.
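A minimal sketch (not the authors' implementation) of BTC in which the conventional bit plane is OR-ed with an edge bit plane, as the abstract describes. A Canny edge map stands in for the fuzzy edge image, and the test image file name is hypothetical.

```python
# BTC with an edge bit plane OR-ed into the conventional bit plane.
import numpy as np
import cv2

def btc_encode_block(block, edge_block):
    mean, std = block.mean(), block.std()
    bit_plane = block >= mean                         # conventional BTC bit plane
    fused_plane = np.logical_or(bit_plane, edge_block > 0)
    return mean, std, fused_plane

def btc_decode_block(mean, std, plane):
    n, q = plane.size, plane.sum()
    if q in (0, n):                                   # uniform block
        return np.full(plane.shape, mean)
    low = mean - std * np.sqrt(q / (n - q))           # standard BTC reconstruction levels
    high = mean + std * np.sqrt((n - q) / q)
    return np.where(plane, high, low)

img = cv2.imread("test512.png", cv2.IMREAD_GRAYSCALE).astype(float)  # hypothetical 512x512 image
edges = cv2.Canny(img.astype(np.uint8), 100, 200)     # stand-in for the fuzzy edge image
B = 4                                                 # block size
out = np.zeros_like(img)
for i in range(0, img.shape[0], B):
    for j in range(0, img.shape[1], B):
        m, s, p = btc_encode_block(img[i:i+B, j:j+B], edges[i:i+B, j:j+B])
        out[i:i+B, j:j+B] = btc_decode_block(m, s, p)

psnr = 10 * np.log10(255**2 / np.mean((img - out) ** 2))
print("PSNR (dB):", round(psnr, 2))
```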

Keywords: Image compression, Edge detection, Ground truth image, Peak signal to noise ratio

550 Data Hiding in Images in Discrete Wavelet Domain Using PMM

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Over the last two decades, owing to the hostile environment of the Internet, concerns about the confidentiality of information have increased at a phenomenal rate. Therefore, to safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial-domain data hiding techniques, the information is embedded directly on the image plane itself. In transform-domain data hiding techniques, the image is first changed from the spatial domain to some other domain, and then the secret information is embedded so that it remains more secure from any attack. Information hiding algorithms in the time or spatial domain have high capacity and relatively lower robustness. In contrast, algorithms in the transform domain, such as DCT and DWT, have a certain robustness against some multimedia processing. In this work, the authors propose a novel steganographic method for hiding information in the transform domain of a grayscale image. The proposed approach converts the gray-level image to the transform domain using a discrete integer wavelet transform implemented through the lifting scheme. This approach performs a 2-D lifting wavelet decomposition of the cover image with the Haar lifted wavelet and computes the approximation coefficients matrix CA and the detail coefficients matrices CH, CV, and CD. The next step is to apply the PMM technique to these coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
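A minimal sketch (not the authors' code) of a one-level 2-D integer Haar lifting decomposition producing the CA, CH, CV and CD coefficient matrices mentioned in the abstract; the PMM embedding step itself is not shown, and the cover image is simulated.

```python
# One-level 2-D integer Haar lifting decomposition (S-transform style).
import numpy as np

def haar_lift_1d(x):
    """Integer Haar lifting along the last axis: returns (approx, detail)."""
    even, odd = x[..., 0::2].astype(int), x[..., 1::2].astype(int)
    d = odd - even                 # predict step
    s = even + (d // 2)            # update step (integer-to-integer)
    return s, d

def haar_lift_2d(img):
    # Transform rows first, then columns of each intermediate band.
    s, d = haar_lift_1d(img)
    ca, ch = haar_lift_1d(s.T)
    cv, cd = haar_lift_1d(d.T)
    return ca.T, ch.T, cv.T, cd.T

cover = np.random.randint(0, 256, (8, 8))       # stand-in for a grayscale cover image
CA, CH, CV, CD = haar_lift_2d(cover)
print(CA.shape, CH.shape, CV.shape, CD.shape)   # each is 4x4 for an 8x8 input
```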

Keywords: Cover Image, Pixel Mapping Method (PMM), Stego Image, Integer Wavelet Transform.

549 On Mobile Checkpointing using Index and Time Together

Authors: Awadhesh Kumar Singh

Abstract:

Checkpointing is one of the commonly used techniques to provide fault tolerance in distributed systems so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, though not orthogonal, approaches to checkpointing mobile computing systems, namely time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed that uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control message. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.

Keywords: Checkpointing, forced checkpoint, mobile computing, recovery, time-coordinated.

548 Depth Controls of an Autonomous Underwater Vehicle by Neurocontrollers for Enhanced Situational Awareness

Authors: Igor Astrov, Andrus Pedai

Abstract:

This paper focuses on a critical component of situational awareness (SA): the neural control of autonomous constant-depth flight of an autonomous underwater vehicle (AUV). Autonomous constant-depth flight is a challenging but important task for AUVs to achieve a high level of autonomy under adverse conditions. The fundamental requirements for constant-depth flight are knowledge of the depth and a properly designed controller to govern the process. The AUV, named VORAM, is used as a model for the verification of the proposed hybrid control algorithm. Three neural network controllers, namely NARMA-L2 controllers, are designed for fast and stable diving maneuvers of the chosen AUV model. This hybrid control strategy has been verified by simulation of diving maneuvers using the Simulink software package and demonstrated good performance for fast SA in real-time search-and-rescue operations.

Keywords: Autonomous underwater vehicles, depth control, neurocontrollers, situational awareness.

547 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho

Abstract:

Robust face recognition under various illumination environments is very difficult and needs to be accomplished for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. The illumination normalization algorithm based on anisotropic smoothing is well known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and produces less sharp edges. The proposed method improves the previous anisotropic-smoothing-based illumination normalization method so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through experiments on face recognition based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
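A minimal sketch, not the authors' improved method, of illumination normalization by anisotropic (Perona-Malik) smoothing: the smoothed image is treated as the illumination field and divided out to keep the reflectance detail. The face image and all parameter values below are assumptions.

```python
# Illumination normalization via Perona-Malik anisotropic smoothing.
import numpy as np

def anisotropic_smooth(img, iters=20, kappa=30.0, lam=0.2):
    u = img.astype(float)
    for _ in range(iters):
        # Finite differences towards the four neighbours.
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        c = lambda g: np.exp(-(g / kappa) ** 2)   # edge-stopping conductance
        u = u + lam * (c(n) * n + c(s) * s + c(e) * e + c(w) * w)
    return u

def normalize_illumination(img, eps=1.0):
    smooth = anisotropic_smooth(img)
    return img / (smooth + eps)                   # reflectance-like output

face = np.random.rand(64, 64) * 255               # stand-in for a face image
print(normalize_illumination(face).shape)
```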

Keywords: Illumination Normalization, Face Recognition, Anisotropic smoothing, Gabor feature vector.

546 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction

Authors: Kefaya Qaddoum

Abstract:

Most greenhouse growers desire a predetermined amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness, which is producing redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, which utilizes an L1-penalty, is capable of clearing redundant predictors, and a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors and to test the method on a WSG dataset of tomato yields, where there are many more predictors than data points and the urgent need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN) and is shown to perform fairly well.

Keywords: Tomato yield prediction, naive Bayes, redundancy.

545 Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading

Authors: Harshit Mehrotra, Gaurav Agrawal, M.C. Srivastava

Abstract:

Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and its invariance to the acoustic environment. However, several factors such as fast speech, bad pronunciation, poor illumination, movement of the face, moustaches and beards make lip reading difficult. In the present work, we propose a solution for automatically tracking the lip contour and recognizing letters of the English language spoken by speakers using the information available from lip movements. The level set method is used for tracking the lip contour using a contour velocity model, and a feature vector of lip movements is then obtained. Character recognition is performed using a modified k nearest neighbor algorithm which assigns more weight to nearer neighbors. The proposed system has been found to have an accuracy of 73.3% for character recognition with speaker lip movements as the only input and without using any speech recognition system in parallel. The approach used in this work significantly serves the purpose of lip reading when the size of the database is small.
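A minimal sketch (not the authors' code) of the modified k-nearest-neighbour idea described in the abstract: nearer neighbours get a larger vote through inverse-distance weighting. The lip-movement feature vectors below are simulated.

```python
# Distance-weighted kNN: closer neighbours carry heavier votes.
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-9):
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for idx in nearest:
        votes[y_train[idx]] += 1.0 / (dists[idx] + eps)   # closer => heavier weight
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 12))        # 12-D lip-movement feature vectors (simulated)
y_train = rng.integers(0, 26, size=100)     # 26 letter classes
print(weighted_knn_predict(X_train, y_train, rng.normal(size=12)))
```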

Keywords: Contour Velocity Model, Lip Contour Tracking, Lip Reading, Visual Character Recognition.

544 A New Hybrid RMN Image Segmentation Algorithm

Authors: Abdelouahab Moussaoui, Nabila Ferahta, Victor Chen

Abstract:

The development of computer-aided systems for medical diagnosis is not an easy task because of the presence of inhomogeneities in MRI, the variability of the data from one sequence to another, as well as other source distortions that accentuate this difficulty. A new automatic, contextual, adaptive and robust segmentation procedure based on MRI brain tissue classification is described in this article. A first phase consists in estimating the probability density of the data by the Parzen-Rosenblatt method. The classification procedure is completely automatic and makes no assumptions about either the number of clusters or their prototypes, since the latter are detected automatically by a mathematical morphology operator called skeleton by influence zones (SKIZ) detection. The problem of initializing the prototypes, as well as choosing their number, is transformed into an optimization problem; moreover, the procedure is adaptive since it takes into consideration the contextual information present in every voxel through an adaptive and robust non-parametric Markov field (MF) model. The number of misclassifications is reduced by the use of the Maximum Posterior Marginal (MPM) minimization criterion.
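A minimal sketch (not the authors' pipeline) of the first phase only: a Parzen-Rosenblatt (kernel) estimate of the probability density of voxel intensities, here with a Gaussian kernel on simulated 1-D intensity data.

```python
# Parzen-Rosenblatt density estimation with a Gaussian kernel.
import numpy as np

def parzen_density(samples, query, h=2.0):
    """Kernel density estimate evaluated at the `query` points."""
    diff = (query[:, None] - samples[None, :]) / h
    kernel = np.exp(-0.5 * diff**2) / np.sqrt(2 * np.pi)
    return kernel.mean(axis=1) / h

intensities = np.concatenate([                 # three simulated tissue classes
    np.random.normal(60, 5, 500),
    np.random.normal(110, 8, 500),
    np.random.normal(170, 6, 500)])
grid = np.linspace(0, 255, 256)
density = parzen_density(intensities, grid)
print("Strongest mode near intensity:", grid[np.argmax(density)])
```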

Keywords: Clustering, Automatic Classification, SKIZ, Markov Fields, Image segmentation, Maximum Posterior Marginal (MPM).

543 Laboratory Analysis of Stormwater Runoff Hydraulic and Pollutant Removal Performance of Pervious Concrete Based on Seashell By-Products

Authors: Jean-Jacques Randrianarimanana, Nassim Sebaibi, Mohamed Boutouil

Abstract:

In order to solve problems associated with stormwater runoff in urban areas and their effects on natural and artificial water bodies, the integration of new technical solutions into rainwater drainage becomes even more essential. Permeable pavement systems are one of the most widely used techniques. This paper presents a laboratory analysis of the stormwater runoff hydraulic and pollutant removal performance of a permeable pavement system using pervious pavements based on seashell products. The laboratory prototype is a square column of 25 cm side length and consists of a pervious concrete surface, a bedding of 3 cm in height, a geotextile and a subbase layer of 50 cm in height. A series of constant simulated rain events using semi-synthetic runoff, varied in intensity and duration, were carried out. The initial vertical saturated hydraulic conductivity of the entire pervious pavement system was 0.25 cm/s (148 L/m2/min). The hydraulic functioning was influenced by both the inlet flow rate and the test duration. The total water losses, including evaporation, ranged between 9% and 20% for all hydraulic experiments. The temporal and vertical variability of the pollutant removal efficiency (PRE) of the system were studied for total suspended solids (TSS). The results showed that the PRE along the vertical profile was influenced by the size of the suspended solids, and that, after the geotextile, the pervious paver has the highest capacity to trap pollutants among the porous layers of the permeable pavement system. The TSS removal efficiency was about 80% for the entire system. A first-flush effect of TSS was observed, but it appeared only at the beginning (2 to 6 min) of the experiments. It has been shown that the permeable pavement system can capture the first flush. The project in which this study is integrated aims to contribute both to the valorization of shellfish waste and to the sustainable management of rainwater.

Keywords: Hydraulic, pervious concrete, pollutant removal efficiency, seashell by-products, stormwater runoff.

542 Effect of Non Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model

Authors: Gururaj S Punekar, N K Kishore Senior, H S Y Shastry

Abstract:

The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High voltage fields of varying non-uniformity are encountered in practice. Since CSM programs are case specific, the simulation accuracy heavily depends on the user's (programmer's) experience. This is an effort to understand CSM errors and evolve some guidelines for setting up accurate CSM models, relating non-uniformities with assignment factors. The results are for the six-point-charge model of the sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, the optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).

Keywords: Assignment factor, Charge Simulation Method, High Voltage, Numerical field computation, Non uniformity factor, Simulation errors.

541 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and therefore enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR value with two labels: less than, or greater than or equal. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 dB to 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70% and prediction convergence one order of magnitude faster than that of traditional estimation.
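A minimal sketch (not the authors' networks) of the classifier-fusion idea: each binary classifier answers "is the SNR at or above threshold t?" for one threshold, and the composite estimate is the highest threshold still answered "yes". Real CNN outputs are replaced here by a toy noisy decision function.

```python
# Fusing per-threshold binary decisions into a composite SNR estimate.
import numpy as np

thresholds = np.arange(-20, 34, 2)          # one binary classifier per SNR step

def binary_decision(true_snr, t, noise_std=1.5):
    """Stand-in for one trained CNN: a noisy 'SNR >= t' decision."""
    return (true_snr + np.random.normal(0, noise_std)) >= t

def fused_snr_estimate(true_snr):
    votes = np.array([binary_decision(true_snr, t) for t in thresholds])
    if not votes.any():
        return thresholds[0]
    return thresholds[votes.nonzero()[0].max()]   # highest threshold still answered "yes"

errors = [abs(fused_snr_estimate(s) - s) for s in np.random.uniform(-20, 32, 1000)]
print("Mean absolute SNR error (dB):", round(float(np.mean(errors)), 2))
```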

Keywords: Classification, classifier fusion, CNN, Deep Learning, prediction, SNR.

540 OCR for Script Identification of Hindi (Devnagari) Numerals using Feature Sub Selection by Means of End-Point with Neuro-Memetic Model

Authors: Banashree N. P., R. Vasanta

Abstract:

Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), a character or symbol to be recognized can be a machine-printed or handwritten character/numeral. There are several approaches that deal with the problem of recognizing numerals/characters, depending on the type of feature extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devanagari) numerals, the script most admired in the Indian subcontinent. Our work focuses on a feature extraction technique, i.e., a global approach using end-point information, which is extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model [18] that has been trained to recognize Hindi numerals. The prototype of the system has been tested on a variety of numeral images. In the proposed scheme, the data sets are fed to a neuro-memetic algorithm, which identifies the rule with the highest fitness value of nearly 100%, and the template associated with this rule is the identified numeral. Experimental results show a recognition rate of 92-97% compared to other models.

Keywords: OCR, Global Feature, End-Points, Neuro-Memetic model.

539 Voice Disorders Identification Using Hybrid Approach: Wavelet Analysis and Multilayer Neural Networks

Authors: L. Salhi, M. Talbi, A. Cherif

Abstract:

This paper presents a new strategy for the identification and classification of pathological voices using a hybrid method based on the wavelet transform and neural networks. After speech acquisition from a patient, the speech signal is analysed in order to extract acoustic parameters such as the pitch, the formants, jitter and shimmer. The obtained results are compared to normal and standard values thanks to a programmable database. Sounds are collected from normal people and patients, and then classified into two different categories. The speech database consists of several pathological and normal voices collected from the national hospital "Rabta-Tunis". The speech processing algorithm is conducted in a supervised mode for discrimination between normal and pathological voices and then for classification between neural and vocal pathologies (Parkinson, Alzheimer, laryngeal, dyslexia...). Several simulation results are presented as a function of the disease and are compared with the clinical diagnosis in order to obtain an objective evaluation of the developed tool.

Keywords: Formants, Neural Networks, Pathological Voices, Pitch, Wavelet Transform.

538 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T.Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with the increase in wafer size and the decrease in the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify the AMCs cross-contamination processes. This work investigates the numerical tools required to study the AMCs cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the calibration of the physical constants was performed. The least-squares distance between the model (analytical 1D solution) and the experimental data is minimized. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also translated into a boundary condition using the Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first using optimization methods with the analytical solution to define the physical constants, and second using the finite element method including the adsorption kinetics and the Dirichlet-to-Neumann switch condition.

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite elements methods, Fick’s law, optimization.

537 Optimization of Machining Parametric Study on Electrical Discharge Machining

Authors: Rakesh Prajapati, Purvik Patel, Hardik Patel

Abstract:

Productivity and quality are two important aspects that have become great concerns in today's competitive global market. Every production/manufacturing unit mainly focuses on these areas in relation to the process as well as the product developed. The electrical discharge machining (EDM) process, even now, is an experience-driven process wherein the selected parameters are still often far from optimal, while selecting optimal parameters is costly and time consuming. The Material Removal Rate (MRR) during the process has been considered as a productivity estimate with the aim of maximizing it, while surface roughness, taken as the most important output parameter, is to be minimized. These two requirements, opposite in nature, have been simultaneously satisfied by selecting an optimal process environment (optimal parameter setting). The objective function is obtained by Regression Analysis and Analysis of Variance, and is then optimized using the Genetic Algorithm technique. The model is shown to be effective; MRR and surface roughness improved using the optimized machining parameters.
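A minimal sketch (not the authors' study) of optimizing a regression-based EDM objective with a small genetic algorithm. The fitted response surface, parameter names and bounds below are invented placeholders, not the regression model from the paper.

```python
# Small genetic algorithm maximizing a regression-derived EDM objective.
import numpy as np

def objective(x):
    current, pulse_on = x
    mrr = 0.8 * current + 0.05 * pulse_on - 0.01 * current**2   # assumed fitted MRR model
    roughness = 0.3 * current + 0.02 * pulse_on                  # assumed fitted roughness model
    return mrr - 2.0 * roughness              # maximize MRR, penalize roughness

bounds = np.array([[2.0, 20.0], [10.0, 200.0]])   # current (A), pulse-on time (us)
rng = np.random.default_rng(1)
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for _ in range(100):                               # generations
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-20:]]       # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)   # uniform crossover
        child += rng.normal(0, 0.5, size=2)           # mutation
        children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(children)

best = pop[np.argmax([objective(ind) for ind in pop])]
print("Best parameter setting (current, pulse-on):", best.round(2))
```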

Keywords: Material removal rate, TWR, OC, DOE, ANOVA, MINITAB.

536 Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration

Authors: Junjie Gao, Guishi Deng

Abstract:

In order to integrate knowledge in heterogeneous case-based reasoning (CBR) systems, the ontology-based CBR system has become a hot topic. To solve the problems facing ontology-based CBR systems (for example, their architecture is nonstandard, reuse of knowledge in legacy CBR is deficient, and ontology construction is difficult), we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture is based on a two-layer ontology. Domain knowledge implied in legacy case bases can be mapped from relational database schemas and knowledge items to the relevant OWL local ontology automatically by a mapping algorithm with low time complexity. Through concept clustering based on formal concept analysis, and by computing a concept equivalence measure and a concept inclusion measure, suggestions about enriching or amending the concept hierarchy of the OWL local ontologies are made automatically, which can aid designers in the semi-automatic construction of the OWL domain ontology. The approach is validated with an application example.

Keywords: OWL ontology, Case-based Reasoning, Formal Concept Analysis, Knowledge Integration.

535 A New Reliability Based Channel Allocation Model in Mobile Networks

Authors: Anujendra, Parag Kumar Guha Thakurta

Abstract:

The data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. So, efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, the channels must be allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, known as the Resource Reuse Factor (RRF) and the number of blocked calls, are optimized under a reliability constraint. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.

Keywords: Base station, channel, GA, Pareto-optimal, reliability.

534 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images and to obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. The YCbCr image storage format is used as a tool to convert grayscale images to the RGB image format. The presented algorithm is tested on various types of degraded documents such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document from a given set of degraded documents of the same source.

Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.

533 Comparison of Different Neural Network Approaches for the Prediction of Kidney Dysfunction

Authors: Ali Hussian Ali AlTimemy, Fawzi M. Al Naima

Abstract:

This paper presents the prediction of kidney dysfunction using different neural network (NN) approaches. Self-Organizing Maps (SOM), a Probabilistic Neural Network (PNN) and a Multi-Layer Perceptron Neural Network (MLPNN) trained with the Back Propagation Algorithm (BPA) are used in this study. Six hundred and sixty-three sets of analytical laboratory tests were collected from one of the private clinical laboratories in Baghdad. For each subject, serum urea and serum creatinine levels were analyzed and tested using clinical laboratory measurements. The collected urea and creatinine levels are then used as inputs to the three NN models, in which the training is done by the different neural approaches. SOM is a class of unsupervised network, whereas PNN and BPNN are considered classes of supervised networks. These networks are used as classifiers to predict whether a kidney is normal or will have a dysfunction. The accuracy of prediction, sensitivity and specificity were found for each type of the proposed networks. We conclude that PNN gives faster and more accurate prediction of kidney dysfunction and works as a promising tool for predicting routine kidney dysfunction from clinical laboratory data.

Keywords: Kidney Dysfunction, Prediction, SOM, PNN, BPNN, Urea and Creatinine levels.

532 A Distance Function for Data with Missing Values and Its Application

Authors: Loai AbdAllah, Ilan Shimshoni

Abstract:

Missing values in data are common in real-world applications. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, we define in this paper a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, there is a missing value in one of the coordinates, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because its performance depends strongly on the chosen distance measure, we opted for the k nearest neighbor (kNN) classifier to evaluate its ability to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
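A minimal sketch of the general idea, not the exact Bhattacharyya-based construction in the paper: for fully observed pairs the Mahalanobis distance is used, and when a coordinate is missing its squared difference is replaced by its expectation under the empirical distribution of that coordinate (mean and variance estimated from the data).

```python
# Simplified distance for vectors with missing (NaN) coordinates.
import numpy as np

def fit_stats(X):
    mean = np.nanmean(X, axis=0)
    var = np.nanvar(X, axis=0)
    cov_inv = np.linalg.inv(np.cov(X[~np.isnan(X).any(axis=1)].T))  # from complete rows
    return mean, var, cov_inv

def distance(a, b, mean, var, cov_inv):
    miss = np.isnan(a) | np.isnan(b)
    if not miss.any():
        d = a - b
        return float(np.sqrt(d @ cov_inv @ d))        # Mahalanobis for complete pairs
    sq = np.empty(len(a))
    for j in range(len(a)):
        if np.isnan(a[j]) and np.isnan(b[j]):
            sq[j] = 2 * var[j]                         # E[(X - X')^2] for independent draws
        elif np.isnan(a[j]) or np.isnan(b[j]):
            known = b[j] if np.isnan(a[j]) else a[j]
            sq[j] = (known - mean[j]) ** 2 + var[j]    # E[(known - X)^2]
        else:
            sq[j] = (a[j] - b[j]) ** 2
    return float(np.sqrt(sq.sum()))

X = np.random.rand(200, 4)
X[np.random.rand(*X.shape) < 0.1] = np.nan             # simulate missing values
m, v, ci = fit_stats(X)
print(distance(X[0], X[1], m, v, ci))
```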

Keywords: Missing values, Distance metric, Bhattacharyya distance.

531 Fungi Associated with Decline of Kikar (Acacia nilotica) and Red River Gum (Eucalyptus camaldulensis) in Faisalabad

Authors: I. Ahmad, A. Hannan, S. Ahmad, M. Asif, M. F. Nawaz, M. A. Tanvir, M. F. Azhar

Abstract:

During this research, a comprehensive survey of the tree-growing areas of the Faisalabad district of Pakistan was conducted to observe the symptoms, spectrum, occurrence and severity of A. nilotica and E. camaldulensis decline. The objective of the current research was to investigate the specific fungal pathogens involved in the decline of A. nilotica and E. camaldulensis. For this purpose, infected roots, bark, neck portions, stems, branches, leaves and infected soils were collected to identify the associated fungi. Potato dextrose agar (PDA) and Czapek dox agar media were used for isolations. Identification of the isolated fungi was done microscopically, and different fungi were identified. During the survey of urban locations of Faisalabad, disease incidence on kikar and eucalyptus was recorded as 3.9-7.9% and 2.6-7.1%, respectively. The survey of the agroforest zones of Faisalabad revealed a decline incidence on kikar of 7.5% along Sargodha road, while along Satiana and Jhang roads it was not planted. In eucalyptus trees, 4%, 8% and 0% disease incidence was observed along Jhang road, Sargodha road and Satiana road, respectively. The fungus most frequently isolated from the kikar trees was Drechslera australiensis (5.00%), from the stem. Aspergillus flavus also gave a maximum value of 3.05% from the bark, and Alternaria alternata gave a maximum value of 2.05% from the leaves. Rhizopus and Mucor spp. were recorded at a minimum compared to Drechslera, Alternaria and Aspergillus. The fungus most frequently isolated from the eucalyptus trees was Armillaria luteobubalina (5.00%), from the stem. The other fungi isolated were Macrophomina phaseolina and A. niger.

Keywords: Decline, frequency of mycoflora, A. nilotica, E. camaldulensis, Drechslera australiensis, Armillaria luteobubalina.

530 Application of Novel Conserving Immersed Boundary Method to Moving Boundary Problem

Authors: S. N. Hosseini, S. M. H. Karimian

Abstract:

A new conserving approach in the context of the Immersed Boundary Method (IBM) is presented to simulate one-dimensional, incompressible flow in a moving boundary problem. The method employs a control volume scheme to simulate the flow field. The concept of the ghost node is used at the boundaries to conserve the mass and momentum equations. The present method implements the conservation laws in all cells, including boundary control volumes. The application of the method is studied in a test case with a moving boundary. Comparison between the results of this new method and a sharp-interface IBM algorithm (the Image Point Method) shows a well-distinguished improvement in both the pressure and velocity fields of the present method. Fluctuations in the pressure field are fully resolved by the proposed method. This approach expands the IBM capability to simulate flow fields for a variety of problems by implementing the conservation laws on a fully Cartesian grid, compared to other conserving methods.

Keywords: Immersed Boundary Method, conservation of mass and momentum laws, moving boundary, boundary condition.

529 The Influence of Beta Shape Parameters in Project Planning

Authors: Alexios Kotsakis, Stefanos Katsavounis, Dimitra Alexiou

Abstract:

Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project and identifies the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and to measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimates.
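A minimal sketch (not the authors' experiment) of estimating a project completion time distribution by Monte Carlo, drawing each activity duration from a scaled Beta distribution; the tiny four-activity network and its shape parameters are invented for illustration.

```python
# Monte Carlo simulation of project completion time with Beta-distributed durations.
import numpy as np

# activity: (alpha, beta shape parameters, min duration, max duration, predecessors)
activities = {
    "A": (2.0, 5.0, 2.0, 8.0, []),
    "B": (4.0, 2.0, 3.0, 9.0, ["A"]),     # right-skewed vs. left-skewed shapes
    "C": (2.0, 2.0, 1.0, 6.0, ["A"]),
    "D": (3.0, 3.0, 2.0, 7.0, ["B", "C"]),
}

def simulate_once(rng):
    finish = {}
    for name, (a, b, lo, hi, preds) in activities.items():  # insertion order is topological
        dur = lo + (hi - lo) * rng.beta(a, b)
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + dur
    return max(finish.values())

rng = np.random.default_rng(42)
samples = np.array([simulate_once(rng) for _ in range(20000)])
print("Mean completion:", samples.mean().round(2),
      "| 95th percentile:", np.percentile(samples, 95).round(2))
```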

Keywords: Beta distribution, PERT, Monte Carlo Simulation, skewness, project completion time distribution.

528 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents

Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid

Abstract:

Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared among multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerant Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper, we have modified the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We have used Triple DES for encryption/decryption and the MD5 algorithm for verifying the validity of the message.
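A minimal sketch (not the FTIMA code) of the message-protection step described in the abstract: Triple DES in CBC mode for confidentiality plus an MD5 digest for checking that the request was not altered. It assumes the PyCryptodome package is installed, and the payload is a hypothetical example.

```python
# Triple DES encryption plus MD5 integrity check (PyCryptodome).
import hashlib
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = DES3.adjust_key_parity(get_random_bytes(24))   # 3-key Triple DES
request = b"transfer 100.00 from ACC-1 to ACC-2"     # hypothetical agent payload
digest = hashlib.md5(request).hexdigest()            # integrity fingerprint sent alongside

cipher = DES3.new(key, DES3.MODE_CBC)
ciphertext = cipher.encrypt(pad(request, DES3.block_size))

# Receiver side: decrypt, then recompute and compare the MD5 digest.
decrypted = unpad(DES3.new(key, DES3.MODE_CBC, cipher.iv).decrypt(ciphertext),
                  DES3.block_size)
assert hashlib.md5(decrypted).hexdigest() == digest
print("request delivered unmodified:", decrypted == request)
```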

Keywords: Distributed Transaction, Security, Mobile Agents, FTIMA Architecture.

527 Information Gain Ratio Based Clustering for Investigation of Environmental Parameters Effects on Human Mental Performance

Authors: H. Mehdi, Kh. S. Karimov, A. A. Kavokin

Abstract:

Methods of clustering developed in data mining theory can be successfully applied to the investigation of different kinds of dependencies between environmental conditions and human activities. It is known that environmental parameters such as temperature, relative humidity, atmospheric pressure and illumination have significant effects on human mental performance. To investigate the effect of these parameters, the data mining technique of clustering using entropy and the Information Gain Ratio (IGR), K(Y/X) = (H(X) − H(Y/X))/H(Y), where H(Y) = −Σ Pi ln(Pi), is used. This technique allows the boundaries of the clusters to be adjusted. It is shown that the information gain ratio (IGR) grows monotonically with the degree of connectivity between two variables. This approach has some advantages over, for example, correlation analysis, due to its relatively smaller sensitivity to the shape of the functional dependencies. A variant of an algorithm to implement the proposed method, with some analysis of the above problem of environmental effects, is also presented. It was shown that the proposed method converges in a finite number of steps.
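A minimal sketch (not the authors' tool) of the entropy quantities behind this criterion: Shannon entropy, conditional entropy over discretized environmental bins, and a normalized gain ratio. The normalization by H(Y) follows the abstract's denominator, but the exact definition in the paper may differ, and the temperature/performance data below are invented.

```python
# Entropy, conditional entropy and a normalized information gain ratio.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def conditional_entropy(y, x_bins):
    h = 0.0
    for b in np.unique(x_bins):
        mask = x_bins == b
        h += mask.mean() * entropy(y[mask])
    return h

def gain_ratio(y, x, n_bins=4):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    x_bins = np.digitize(x, edges)
    return (entropy(y) - conditional_entropy(y, x_bins)) / entropy(y)

rng = np.random.default_rng(0)
temperature = rng.uniform(15, 35, 300)                 # invented measurements
performance = (temperature < 26).astype(int)           # toy "good/poor" label
performance[rng.random(300) < 0.1] ^= 1                # add some label noise
print("IGR of mental performance vs. temperature:",
      round(gain_ratio(performance, temperature), 3))
```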

Keywords: Clustering, Correlation analysis, Environmental Parameters, Information Gain Ratio, Mental Performance.

526 Comparative Analysis of Chemical Composition and Biological Activities of Ajuga genevensis L. in in vitro Culture and Intact Plants

Authors: Naira Sahakyan, Margarit Petrosyan, Armen Trchounian

Abstract:

One of the tasks in contemporary biotechnology, pharmacology and other fields of human activity is to obtain biologically active substances from plants. They are essential in the treatment of many diseases due to their high therapeutic value without visible side effects. However, sometimes the possibility of obtaining these metabolites is limited by the decline of wild-growing plants. That is why plant cell cultures are of great interest as alternative sources of biologically active substances. Besides, during monitored cultivation it is possible to obtain substances that are not synthesized by plants in nature. An isolated culture of Ajuga genevensis with high growth activity and regeneration ability was obtained using MS nutrient medium. The agar-diffusion method showed that aqueous extracts of the callus culture revealed high antimicrobial activity towards various gram-positive (Bacillus subtilis A1WT; B. mesentericus WDCM 1873; Staphylococcus aureus WDCM 5233; Staph. citreus WT) and gram-negative (Escherichia coli WKPM M-17; Salmonella typhimurium TA 100) microorganisms. The broth dilution method revealed that the minimal and half-maximal inhibitory concentration values against E. coli corresponded to extract concentrations of 70 μg/mL and 140 μg/mL, respectively. According to the photochemiluminescent analysis, callus tissue extracts of leaf and root origin showed higher antioxidant activity than the same quantity of A. genevensis intact plant extract. The A. genevensis intact plant and callus culture extracts showed no cytotoxic effect on the K-562 suspension cell line of human chronic myeloid leukemia. The GC-MS analysis showed deep differences between the qualitative and quantitative composition of the callus culture and intact plant extracts. Hexacosane (11.17%), n-hexadecanoic acid (9.33%) and 2-methoxy-4-vinylphenol (4.28%) were the main components of the intact plant extracts. 10-Methylnonadecane (57.0%), methoxyacetic acid 2-tetradecyl ester (17.75%) and 1-bromopentadecane (14.55%) were the main components of the A. genevensis callus culture extracts. The obtained data indicate that the callus culture of A. genevensis can be used as an alternative source of biologically active substances.

Keywords: Ajuga genevensis, antibacterial activity, antioxidant activity, callus cultures.

525 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes

Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq

Abstract:

In the context of channel coding, the Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, which is called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called a Tanner graph, is used by most decoding algorithms such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields relevant GBP performance. Moreover, we investigate the behavior of the GBP considered as a dynamical system in order to understand how it evolves in terms of time and in terms of the noise power of the channel. To this end, we make use of classical measures and introduce a new measure, called the hypersphere method, that makes it possible to know the size of the attractors.

Keywords: iterative decoder, LDPC, region-graph, chaos.

524 Application of Fuzzy Logic Approach for an Aircraft Model with and without Winglet

Authors: Altab Hossain, Ataur Rahman, Jakir Hossen, A.K.M. P. Iqbal, SK. Hasan

Abstract:

The measurement of the aerodynamic forces and moments acting on an aircraft model is important for the development of wind tunnel measurement technology to predict the performance of the full-scale vehicle. The potential of an aircraft model with and without a winglet and its aerodynamic characteristics with NACA wing No. 65-3-218 have been studied using the subsonic wind tunnel, with a 1 m × 1 m rectangular test section 2.5 m long, of the Aerodynamics Laboratory, Faculty of Engineering, University Putra Malaysia. Focusing on analyzing the aerodynamic characteristics of the aircraft model, two main issues are studied in this paper. First, a six-component wind tunnel external balance is used for measuring lift, drag and pitching moment. Second, tests are conducted on the aircraft model with and without the winglet in two configurations at Reynolds numbers of 1.7×10⁵, 2.1×10⁵ and 2.5×10⁵ for different angles of attack. The fuzzy logic approach is found to be efficient for the representation, manipulation and utilization of aerodynamic characteristics. Therefore, the primary purpose of this work was to investigate the relationship between the lift and drag coefficients, free-stream velocities and angles of attack, and to illustrate how fuzzy logic might play an important role in the study of the lift aerodynamic characteristics of an aircraft model with the addition of certain winglet configurations. The results of the developed fuzzy logic system were compared with the experimental results. For the lift coefficient analysis, the means of the actual and predicted values were 0.62 and 0.60, respectively. The correlation between the actual and predicted values (from the FLS model) of the lift coefficient at different angles of attack was found to be 0.99. The mean relative error between the actual and predicted values was found to be 5.18% for the velocity of 26.36 m/s, which is less than the acceptable limit (10%). The goodness of fit of the prediction was 0.95, which is close to 1.0.

Keywords: Wind tunnel; Winglet; Lift coefficient; Fuzzy logic.

523 Absence of Leave and Job Morality in the ICU

Authors: Li-Ping Hsiao, Feng-Chuan Pan

Abstract:

Leave of absence is important in maintaining the quality of human resources. Allowing employees to be temporarily free from routine assignments can vitalize the workers' morale and productivity. This is particularly critical to securing a satisfactory service quality for healthcare professionals, whose work is typically labor intensive and complicated. As one of the veteran hospitals founded and operated by the Veterans Department of Taiwan, the case hospital had its nursing staff squeezed to an extreme minimum level under the pressure of a tight budget. Taking leave on schedule became extremely difficult, especially in the intensive care units (ICU), which require close monitoring of the patients under care, and this more easily made the ICU nurses nervous. Even worse, the deferred leave exceeded 10 days at any time in the ICU because of fluctuating occupancy. As a result, this caused a serious setback for this particular nursing team and consequently hurt job performance and service quality. To solve this problem and accordingly strengthen morale, a project team was organized across different departments. Sufficient information regarding job and position requirements, labor resources, and actual working hours was collected in detail and analyzed in the team meetings. Several alternatives were finalized, including job rotation, job combination, impromptu leave and cross-departmental redeployment. Consequently, the deferred leave days were sharply reduced by 70%, to a level of 3 or fewer days. This improvement not only provided relief for the ICU nurses, improving their job performance and patient safety, but also encouraged the nurses to participate actively in a project and learn the skills of solving problems with colleagues.

Keywords: Information, job rotating, human resource, intensive care unit.
