Search results for: Gaussian kernel
211 HPPDFIM-HD: Transaction Distortion and Connected Perturbation Approach for Hierarchical Privacy Preserving Distributed Frequent Itemset Mining over Horizontally-Partitioned Dataset
Authors: Fuad Ali Mohammed Al-Yarimi
Abstract:
Many algorithms have been proposed to provide privacy preservation in data mining. These protocols are based on two main approaches: the perturbation approach and the cryptographic approach. The first is based on perturbation of the valuable information, while the second uses cryptographic techniques. The perturbation approach is much more efficient but less accurate, whereas the cryptographic approach can provide solutions with perfect accuracy. However, the cryptographic approach is much slower and requires considerable computation and communication overhead. In this paper, a new scalable protocol is proposed which combines the advantages of perturbation and distortion with the cryptographic approach to perform privacy-preserving distributed frequent itemset mining on horizontally distributed data. Both the privacy and performance characteristics of the proposed protocol are studied empirically.
Keywords: anonymity data, data mining, distributed frequent itemset mining, Gaussian perturbation, perturbation approach, privacy preserving data mining
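The Gaussian perturbation step the abstract refers to can be sketched as additive noise on locally computed support counts; the counts and the noise scale `sigma` below are illustrative assumptions, not values from the protocol.

```python
import numpy as np

def perturb_supports(support_counts, sigma, seed=0):
    """Distort item support counts with additive Gaussian noise.

    Each site would publish only the perturbed counts; the miner
    works on the noisy aggregate, trading accuracy for privacy.
    """
    rng = np.random.default_rng(seed)
    noisy = support_counts + rng.normal(0.0, sigma, size=len(support_counts))
    # Supports are non-negative counts, so clip after distortion.
    return np.clip(noisy, 0, None)

# Hypothetical supports of 5 itemsets at one site
counts = np.array([120.0, 75.0, 40.0, 12.0, 3.0])
noisy = perturb_supports(counts, sigma=5.0)
```

Larger `sigma` gives stronger distortion (more privacy) at the cost of less accurate frequent-itemset estimates.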
Procedia PDF Downloads 494
210 An Ensemble Deep Learning Architecture for Imbalanced Classification of Thoracic Surgery Patients
Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi
Abstract:
Selecting appropriate patients for surgery is one of the main issues in thoracic surgery (TS). Both short-term and long-term risks and benefits of surgery must be considered in the patient selection criteria. There are some limitations in the existing datasets of TS patients because of missing attribute values and the imbalanced distribution of survival classes. In this study, a novel ensemble architecture of deep learning networks is proposed, based on stacking different linear and non-linear layers, to deal with imbalanced datasets. The categorical and numerical features are split across different layers with the ability to shrink unnecessary features. Then, after extracting insight from the raw features, a novel biased-kernel layer is applied to reinforce the gradient of the minority class and train the network better than current methods do. Finally, the performance and advantages of our proposed model over the existing models are examined for predicting patient survival after thoracic surgery, using real-life clinical data for lung cancer patients.
Keywords: deep learning, ensemble models, imbalanced classification, lung cancer, TS patient selection
Procedia PDF Downloads 127
209 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models
Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah
Abstract:
In the present work, we develop a technique for estimating the volatility of financial time series by using stochastic differential equations. Taking the daily closing prices from developed and emergent stock markets as the basis, we argue that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via Maximum Likelihood Estimation. Using the technique, we observe the long-memory behavior of the data sets and obtain one-step-ahead predictions of log-volatility with ±2 standard errors, even though the observed noise follows a Normal mixture distribution, because the financial data studied are not fully Gaussian. The Ornstein-Uhlenbeck process followed in this work also simulates the financial time series well, and its good convergence properties make our estimation algorithm suitable for large data sets.
Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model
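An Ornstein-Uhlenbeck path of the kind used for log-volatility can be sketched with a simple Euler-Maruyama scheme; the parameter values below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dw
    return x

# Mean-reverting log-volatility path (illustrative parameters)
path = simulate_ou(theta=2.0, mu=-1.0, sigma=0.3, x0=0.0, dt=1 / 252, n_steps=1000)
```

With `sigma = 0` the scheme reduces to the deterministic mean-reversion ODE and the path converges to `mu`, a quick sanity check on the discretization.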
Procedia PDF Downloads 227
208 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates
Authors: Serge B. Provost
Abstract:
Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. In practice, it is efficient to make use of a limited number of initial moments, as they contain most of the relevant distributional information. Two types of density estimation techniques that rely on such moments will be discussed. The first expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second assumes that the derivative of the logarithm of a density function can be represented as a rational function; this gives rise to a system of linear equations involving sample moments, and the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling ‘big data’, as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
Keywords: density estimation, log-density, polynomial adjustments, sample moments
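The first technique can be sketched for a standard-normal base density, whose raw moments are known in closed form; the polynomial degree and target moments below are illustrative assumptions.

```python
import numpy as np

def normal_moment(j):
    """j-th raw moment of the standard normal: 0 if j is odd, (j-1)!! if even."""
    if j % 2 == 1:
        return 0.0
    m = 1.0
    for k in range(1, j, 2):
        m *= k
    return m

def adjustment_coeffs(sample_moments):
    """Coefficients c of the polynomial p(x) = sum_j c_j x^j such that the
    raw moments of phi(x) * p(x) match the given moments m_0..m_d,
    where phi is the standard normal base density."""
    d = len(sample_moments) - 1
    # A[k, j] = integral of x^(k+j) phi(x) dx = normal_moment(k + j)
    A = np.array([[normal_moment(k + j) for j in range(d + 1)]
                  for k in range(d + 1)])
    return np.linalg.solve(A, np.array(sample_moments, dtype=float))

# If the target moments are exactly those of N(0,1), p(x) should collapse to 1.
coeffs = adjustment_coeffs([1.0, 0.0, 1.0, 0.0, 3.0])
```

The same linear system, fed with actual sample moments, yields the polynomial adjustment of the base density described in the abstract.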
Procedia PDF Downloads 153
207 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis
Authors: Tawfik Thelaidjia, Salah Chenikher
Abstract:
Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification for the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal's kurtosis with features obtained by preprocessing the vibration signal samples using the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional vibration-signal feature vector is obtained. After feature extraction, the support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement
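The PSO step can be sketched generically; in the paper the objective would be cross-validated SVM error over the kernel and penalty parameters, which is replaced here by an assumed smooth surrogate for illustration.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=40, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO: each particle is pulled toward its own best
    position and the swarm's best position found so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Assumed surrogate for cross-validated SVM error over (log10 C, log10 gamma)
surrogate = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
best, best_val = pso_minimize(surrogate, bounds=[(-3, 3), (-5, 1)])
```

Swapping the surrogate for an actual cross-validation routine turns this into the joint (C, gamma) search used in the paper.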
Procedia PDF Downloads 425
206 Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang
Abstract:
Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS). Two important classes of moving objects in ADAS are pedestrians and scooters. In real-world systems, there exist two important challenges for MOD: computational complexity and detection accuracy. The histogram of oriented gradient (HOG) features can easily detect object edges with invariance to changes in illumination and shadowing. However, to reduce the execution time for real-time systems, the image must be down-sampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is employed. Experimental results show the correctness and effectiveness of the proposed method. With SVM classifiers, real testing results show that the proposed HUG features achieve better classification performance than the HOG ones.
Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine
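A single-cell orientation histogram of the kind underlying HOG/HUG features can be sketched as follows; the binning and test image are illustrative assumptions.

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Unsigned gradient-orientation histogram (HOG-style) for one cell:
    gradients by central differences, magnitude-weighted votes into
    orientation bins over [0, 180) degrees."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(n_bins)
    bin_width = 180.0 / n_bins
    idx = np.minimum((ang / bin_width).astype(int), n_bins - 1)
    np.add.at(hist, idx.ravel(), mag.ravel())
    return hist

# A vertical step edge: all gradient energy is horizontal (0 degrees)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)
```

The descriptor fed to the linear SVM is a concatenation of such per-cell histograms after block normalization; HUG changes how the orientations are quantized.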
Procedia PDF Downloads 582
205 Modelling the Dynamics of Corporate Bond Spreads with Asymmetric GARCH Models
Authors: Sélima Baccar, Ephraim Clark
Abstract:
This paper offers a new perspective on the analysis of credit spreads. A comprehensive empirical analysis of the conditional variance of credit spread indices is performed using various GARCH models. Based on a comparison between traditional and asymmetric GARCH models with alternative functional forms of the conditional density, we identify the macroeconomic and financial factors that drove daily changes in US Dollar credit spreads in the period from January 2011 through January 2013. The results show a strong interdependence between credit spreads and explanatory factors related to interest rate conditions, the state of the stock market, bond market liquidity, and exchange risk. The empirical findings support the use of asymmetric GARCH models: the AGARCH and GJR models outperform the traditional GARCH in credit spread modelling. We also show that the leptokurtic Student-t assumption is better than the Gaussian distribution and improves the quality of the estimates, whatever the rating or maturity.
Keywords: corporate bonds, default risk, credit spreads, asymmetric GARCH models, Student-t distribution
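The asymmetry captured by the GJR model can be sketched through its conditional-variance recursion; the parameter values below are assumed for illustration, not estimates from the paper.

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta, h0):
    """Conditional variance recursion of GJR-GARCH(1,1):
        h_t = omega + (alpha + gamma * 1[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}
    The leverage term gamma lets negative shocks raise variance more than
    positive ones, the asymmetry the abstract finds in credit spreads."""
    h = np.empty(len(returns) + 1)
    h[0] = h0
    for t, r in enumerate(returns):
        leverage = gamma if r < 0 else 0.0
        h[t + 1] = omega + (alpha + leverage) * r * r + beta * h[t]
    return h

# Illustrative spread-change series and parameters (assumed, not fitted)
r = np.array([0.01, -0.02, 0.005, -0.015])
h = gjr_garch_variance(r, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90, h0=1e-4)
```

Setting `gamma = 0` recovers the symmetric GARCH(1,1); in estimation the innovations would be given a Student-t density, per the paper's findings.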
Procedia PDF Downloads 462
204 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider the situation under a life test in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of failure time and a marker value is useful for modeling the step stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process, with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process whose degradation values are recorded at times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
Procedia PDF Downloads 311
203 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications
Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha
Abstract:
Clusters of Microcalcifications (MCCs) are the most frequent symptoms of Ductal Carcinoma in Situ (DCIS) recognized by mammography. Least-Square Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant, based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance with different kernel functions is made. For this evaluation, a confusion matrix and ROC analysis are used. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas, containing 235 malignant and 145 benign samples, are collected from mammogram images of the DDSM database. A set of 50 features is calculated for each suspicious area, after which an optimal subset of the 23 most suitable features is selected by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization
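The defining property of LS-SVM, replacing the SVM quadratic program with a single linear system, can be sketched as follows; the toy data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM solves one linear system instead of a QP:
        [ 0   1^T          ] [b]   [0]
        [ 1   K + I/gamma  ] [a] = [y]
    where K is the kernel matrix and gamma the regularization weight."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

# Toy separable data standing in for benign (-1) / malignant (+1) features
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

The comparative evaluation in the paper amounts to swapping `rbf_kernel` for other kernel functions while keeping this training system fixed.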
Procedia PDF Downloads 346
202 The Spectroscopic, Molecular Structure and Electrostatic Potential, Polarizability, Hyperpolarizability, and HOMO–LUMO Analysis of Monomeric and Dimeric Structures of 2-Chloro-N-(2-Methylphenyl) Benzamide
Authors: N. Khelloul, N. Benhalima, A. Chouaih, F. Hamzaoui
Abstract:
The monomer and dimer structures of the title molecule have been obtained from density functional theory (DFT) calculations with the B3LYP method and the 6-31G(d,p) basis set. The optimized geometrical parameters obtained at the B3LYP/6-31G(d,p) level show good agreement with experimental X-ray data. The polarizability and first-order hyperpolarizability of the title molecule were calculated and interpreted. The intermolecular N–H•••O hydrogen bonds are discussed for the dimer structure of the molecule. The vibrational wave numbers and their assignments were examined theoretically using the Gaussian 09 suite of quantum chemistry codes. The frontier molecular orbital energies predicted at the B3LYP/6-31G(d,p) level show that charge transfer occurs within the molecule. The frontier molecular orbital calculations clearly show the inverse relationship of the HOMO–LUMO gap with the total static hyperpolarizability. The results also show that the 2-Chloro-N-(2-methylphenyl)benzamide molecule may exhibit nonlinear optical (NLO) behavior, given its non-zero values.
Keywords: DFT, HOMO, LUMO, NLO
Procedia PDF Downloads 326
201 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
Keywords: calibration, data modeling, industrial processes, machine learning
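KRLS builds its kernel solution recursively, sample by sample; as a non-recursive baseline under the same kernel framework, a batch kernel ridge regression sketch is shown below, with an assumed toy process-response curve (this is a stand-in, not the paper's KRLS implementation).

```python
import numpy as np

def krr_fit(X, y, lam=1e-3, sigma=0.2):
    """Batch kernel ridge solution alpha = (K + lam*I)^(-1) y; KRLS reaches
    this kind of solution with a per-sample recursive update plus a
    sparsification test on the kernel dictionary."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, x_new, sigma=0.2):
    k = np.exp(-((x_new - X_train) ** 2) / (2 * sigma ** 2))
    return k @ alpha

# Hypothetical process-response curve: diameter as a function of one setting
X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
alpha = krr_fit(X, y)
fit = np.array([krr_predict(X, alpha, x) for x in X])
```

An ensemble, as in the paper, would train several such models on different operating regimes and combine their predictions.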
Procedia PDF Downloads 279
200 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets
Authors: Akshat Kumar, Vidushi
Abstract:
This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were implemented on the cumulative Kepler dataset sourced from the NASA Exoplanet Archive. The best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian NB, with a test accuracy of 54%; to assess model performance, the F1 score, cross-validation score, and ROC curve were calculated. Based on the learned models, the significant characteristics of confirmed exoplanets were identified, with emphasis on the object's transit and stellar properties; these characteristics were koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later used to filter out the potential KOIs. The paper also calculates the Earth Similarity Index, based on the planetary radius and equilibrium temperature, for each KOI identified, to aid in their classification.
Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry
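The Earth Similarity Index computation can be sketched as follows; the reference values and weight exponents follow the commonly quoted ESI formulation and are treated as assumptions rather than the exact constants used in the paper.

```python
def esi(radius, temp_eq, w_radius=0.57, w_temp=5.58):
    """Earth Similarity Index from planetary radius (Earth radii) and
    equilibrium temperature (K):
        ESI = prod_i (1 - |x_i - x_ref_i| / (x_i + x_ref_i))^(w_i / n)
    Earth references: radius 1.0 R_earth, equilibrium temperature 255 K."""
    refs = [(radius, 1.0, w_radius), (temp_eq, 255.0, w_temp)]
    n = len(refs)
    out = 1.0
    for x, x_ref, w in refs:
        out *= (1.0 - abs(x - x_ref) / (x + x_ref)) ** (w / n)
    return out

earth_like = esi(1.0, 255.0)      # identical to Earth -> exactly 1.0
hot_jupiter = esi(11.2, 1400.0)   # far from Earth -> close to 0
```

Each factor lies in [0, 1], so the index itself is bounded by [0, 1], with 1 meaning Earth-identical in the chosen properties.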
Procedia PDF Downloads 49
199 Ship Detection Requirements Analysis for Different Sea States: Validation on Real SAR Data
Authors: Jaime Martín-de-Nicolás, David Mata-Moya, Nerea del-Rey-Maestre, Pedro Gómez-del-Hoyo, María-Pilar Jarabo-Amores
Abstract:
Ship detection is nowadays quite an important issue in tasks related to sea traffic control, fishery management, and ship search and rescue. Although it has traditionally been carried out by patrol ships or aircraft, coverage limitations, weather conditions, and sea state can become a problem. Synthetic aperture radars can surpass these coverage limitations and work under any climatological condition. A fast CFAR ship detector, based on a robust statistical modeling of sea clutter with respect to sea states in SAR images, is used. In this paper, the minimum SNR required to obtain a given detection probability with a given false alarm rate for any sea state is determined. A Gaussian target model using real SAR data is considered. Results show that the required SNR does not depend heavily on the class considered. Provided there is some variation in the backscattering of targets in SAR imagery, the detection probability is limited, and a post-processing stage based on morphology would be suitable.
Keywords: SAR, generalized gamma distribution, detection curves, radar detection
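A minimal cell-averaging CFAR detector over a 1-D power profile can be sketched as follows; the window sizes, scale factor, and synthetic profile are illustrative assumptions.

```python
import numpy as np

def ca_cfar(power, n_train=8, n_guard=2, scale=4.0):
    """Cell-averaging CFAR: a cell is declared a detection when it exceeds
    `scale` times the mean of the training cells on both sides, skipping
    the guard cells around the cell under test. The scale factor fixes the
    false alarm rate for a given clutter model."""
    detections = []
    half = n_train // 2 + n_guard
    for i in range(half, len(power) - half):
        left = power[i - n_guard - n_train // 2: i - n_guard]
        right = power[i + n_guard + 1: i + n_guard + 1 + n_train // 2]
        noise = np.concatenate([left, right]).mean()
        if power[i] > scale * noise:
            detections.append(i)
    return detections

# Flat clutter with one strong scatterer standing in for a ship return
profile = np.ones(64)
profile[30] = 25.0
hits = ca_cfar(profile)
```

The robust sea-clutter modeling in the paper replaces this flat-noise assumption with a sea-state-dependent clutter distribution when setting the threshold.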
Procedia PDF Downloads 442
198 Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model
Authors: Khenfer Koummich Fatma, Hendel Fatiha, Mesbahi Larbi
Abstract:
This paper presents an approach based on Hidden Markov Models (HMMs) using HTK tools. The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks such as rotate, raise, and close. The system should take into account different levels of environmental noise. This approach has been applied to isolated words representing the robot commands, pronounced in two languages: French and Arabic. The obtained recognition rate is the same for both languages, Arabic and French, on the noise-free words. However, the rates differ when Gaussian white noise is added at a Signal to Noise Ratio (SNR) of 30 dB: in this case, the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added.
Keywords: Arabic speech recognition, Hidden Markov Model (HMM), HTK, noise, TIMIT, voice command
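Adding Gaussian white noise at a prescribed SNR, as in the 30 dB condition, can be sketched as follows; the tone standing in for a spoken command is an illustrative assumption.

```python
import numpy as np

def add_awgn(signal, snr_db, seed=0):
    """Add white Gaussian noise so that the result has the requested SNR:
    noise power = signal power / 10^(SNR_dB / 10)."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

# A dummy 1 kHz tone standing in for a spoken command waveform at 16 kHz
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 1000.0 * t)
noisy = add_awgn(clean, snr_db=30.0)
```

Running the trained HMM recognizer on such corrupted waveforms at different SNR levels reproduces the kind of noise-robustness comparison reported in the abstract.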
Procedia PDF Downloads 360
197 Coordination Behavior, Theoretical Studies, and Biological Activity of Some Transition Metal Complexes with Oxime Ligands
Authors: Noura Kichou, Manel Tafergguenit, Nabila Ghechtouli, Zakia Hank
Abstract:
The aim of this work is to synthesize, characterize, and evaluate the biological activity of two ligands, glyoxime and dimethylglyoxime, and their Ni(II) metal chelates. The new chelates were characterized by elemental analysis, IR, EPR, nuclear magnetic resonance (1H and 13C), and biological activity. The antibacterial and antifungal activities of the ligands and their metal complexes were screened against bacterial species (Staphylococcus aureus, Bacillus subtilis, and Escherichia coli) and fungi (Candida albicans). Ampicillin and amphotericin were used as references for the antibacterial and antifungal studies. The activity data show that the metal complexes have a promising biological activity against bacterial and fungal species, comparable with the parent free ligand. A structural, energetic, and electronic theoretical study was carried out using the DFT method, with the B3LYP functional and the Gaussian 09 program. A complete optimization of geometries was performed, followed by a calculation of the frequencies of the normal modes of vibration. The UV spectrum was also interpreted. The theoretical results were compared with the experimental data.
Keywords: glyoxime, dimethylglyoxime, nickel, antibacterial activity
Procedia PDF Downloads 92
195 Alterations of Gut Microbiota and Its Metabolomics in Child with 6PPDQ, PBDE, PCB, and Metal(loid) Exposure
Authors: Xia Huo
Abstract:
The composition and metabolites of the gut microbiota can be altered by environmental pollutants. However, the effect of co-exposure to multiple pollutants on the human gut microbiota has not been sufficiently studied. In this study, gut microorganisms and their metabolites were compared between 33 children from Guiyu and 34 children from Haojiang. The exposure level was assessed by the estimated daily intake (EDI) of polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), 6PPD-quinone (6PPDQ), and metal(loid)s in dust. Significant correlations were found between the EDIs of 6PPDQ, BDE28, PCB52, Ni, and Cu and both the alpha diversity index and specific metabolites in single-element models. The Bayesian kernel machine regression (BKMR) model showed a negative correlation between the EDIs of the five pollutants (6PPDQ, BDE28, PCB52, Ni, and Cu) and the Chao 1 index, particularly beyond the 55th percentile. Furthermore, the EDIs of these five pollutants were positively correlated with levels of the metabolite 2,4-diaminobutyric acid and negatively correlated with levels of d-erythro-sphingosine and d-threitol. Our research suggests that exposure to 6PPDQ, BDE28, PCB52, Ni, and Cu in kindergarten dust is associated with alterations in the gut microbiota and its metabolites, and these alterations may be associated with neurodevelopmental abnormalities in children.
Keywords: gut microbiota, 6PPDQ, PBDEs, PCBs, metal(loid)s, BKMR
Procedia PDF Downloads 42
194 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum
Authors: Abdulrahman Sumayli, Saad M. AlShahrani
Abstract:
For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian process regression (GPR), and Bayesian ridge regression (BRR). To achieve the best performance, the hyperparameters of these models were optimized using the whale optimization algorithm (WOA). We evaluated the models on a dataset of solubility measurements in various feedstocks and compared their performance on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
Keywords: temperature, pressure variations, machine learning, oil treatment
Procedia PDF Downloads 57
193 Artificial Intelligent Methodology for Liquid Propellant Engine Design Optimization
Authors: Hassan Naseh, Javad Roozgard
Abstract:
This paper presents a methodology based on Artificial Intelligence (AI) applied to Liquid Propellant Engine (LPE) optimization. The AI methodology uses an Adaptive Neuro-Fuzzy Inference System (ANFIS). In this methodology, the objective function to be optimized is engine performance (specific impulse). The independent design variables in the ANFIS modeling are combustion chamber pressure and temperature and the oxidizer-to-fuel ratio; the output of the modeling is the specific impulse, which can be combined with other objective functions in LPE design optimization. To this end, the LPE parameters have been modeled with the ANFIS methodology, generating the fuzzy inference system structure by grid partitioning, subtractive clustering, and Fuzzy C-Means (FCM) clustering, for both inference types (Mamdani and Sugeno) and various types of membership functions. The final comparison of optimization results shows the accuracy and processing run time of the Gaussian ANFIS methodology against all other methods.
Keywords: ANFIS methodology, artificial intelligence, liquid propellant engine, optimization
Procedia PDF Downloads 564
192 Support Vector Regression Combined with Different Optimization Algorithms to Predict Global Solar Radiation on Horizontal Surfaces in Algeria
Authors: Laidi Maamar, Achwak Madani, Abdellah El Ahdj Abdellah
Abstract:
The aim of this work is to use Support Vector Regression (SVR) combined with the dragonfly, firefly, bee colony, and particle swarm optimization algorithms to predict global solar radiation on horizontal surfaces in some cities in Algeria. Combining these optimization algorithms with SVR aims principally to enhance accuracy by fine-tuning the parameters, speeding up the convergence of the SVR model, and exploring a larger search space efficiently; these parameters are the regularization parameter (C), the kernel parameters, and the epsilon parameter. By doing so, the aim is to improve the generalization and predictive accuracy of the SVR model. Overall, the aim is to leverage the strengths of both SVR and optimization algorithms to create a more powerful and effective regression model for various cities and under different climate conditions. Results demonstrate close agreement between predicted and measured data in terms of different metrics. In summary, SVR has proven to be a valuable tool in modeling global solar radiation, offering accurate predictions and demonstrating versatility when combined with other algorithms or used in hybrid forecasting models.
Keywords: support vector regression (SVR), optimization algorithms, global solar radiation prediction, hybrid forecasting models
Procedia PDF Downloads 18
191 Evaluation of Spatial Correlation Length and Karhunen-Loeve Expansion Terms for Predicting Reliability Level of Long-Term Settlement in Soft Soils
Authors: Mehrnaz Alibeikloo, Hadi Khabbaz, Behzad Fatahi
Abstract:
The spectral random field method is one of the widely used methods for obtaining more reliable and accurate results in geotechnical problems involving material variability. The Karhunen-Loeve (K-L) expansion method was applied to perform random field discretization of cross-correlated creep parameters. The K-L expansion method is based on the eigenfunctions and eigenvalues of the covariance function, obtained by solving a kernel integral equation. In this paper, the accuracy of the K-L expansion is investigated for predicting the long-term settlement of soft soils, adopting an elastic visco-plastic creep model. For this purpose, a parametric study was carried out to evaluate the effect of the number of K-L expansion terms and the spatial correlation length on the reliability of the results. The results indicate that small values of the spatial correlation length require more K-L expansion terms. Moreover, as the spatial correlation length increases, the coefficient of variation (COV) of the creep settlement increases, leading to a more conservative and safer prediction.
Keywords: Karhunen-Loeve expansion, long-term settlement, reliability analysis, spatial correlation length
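The effect of the spatial correlation length on the number of K-L terms needed can be sketched with a discrete eigendecomposition of an exponential covariance; the covariance model and grid are illustrative assumptions.

```python
import numpy as np

def kl_energy_fraction(corr_length, n_terms, n_grid=200, sigma=1.0):
    """Fraction of the field variance captured by the first n_terms K-L
    modes of an exponential covariance C(s, t) = sigma^2 exp(-|s - t| / l)
    on [0, 1], using a discrete eigendecomposition of the covariance matrix
    as a stand-in for the kernel integral eigenproblem."""
    s = np.linspace(0.0, 1.0, n_grid)
    C = sigma ** 2 * np.exp(-np.abs(s[:, None] - s[None, :]) / corr_length)
    eigvals = np.linalg.eigvalsh(C)[::-1]   # descending order
    eigvals = np.clip(eigvals, 0.0, None)   # guard tiny negative round-off
    return eigvals[:n_terms].sum() / eigvals.sum()

# Shorter correlation length -> slower eigenvalue decay -> more terms
# needed for the same accuracy, the trend reported in the parametric study.
frac_short = kl_energy_fraction(corr_length=0.1, n_terms=10)
frac_long = kl_energy_fraction(corr_length=1.0, n_terms=10)
```

The retained eigenpairs, scaled by independent standard normal variables, generate realizations of the creep-parameter random field for the reliability analysis.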
Procedia PDF Downloads 148
190 Alternate Furrow Irrigation and Potassium Fertilizer on Seed Yield, Water Use Efficiency and Fatty Acids of Rapeseed
Authors: A. Bahrani
Abstract:
In order to study the effect of restricted irrigation systems and different potassium fertilizer rates on the water use efficiency and yield of rapeseed (Brassica napus L.), an experiment was conducted in an arid area in Khuzestan, Iran in 2013. The main plots consisted of three irrigation methods: full irrigation (FI), alternate furrow irrigation (AFI), and fixed furrow irrigation (FFI). Each subplot received one of three rates of K fertilizer: 0, 150, or 300 kg ha-1. The results showed that the plots receiving full irrigation had significantly higher grain yields, 1000-kernel weight, and grain number per pod than both alternate treatments. However, the highest WUE was obtained with alternate furrow irrigation and 300 kg K ha-1, and the lowest was found in the FI treatment with 0 kg K ha-1. Potassium application increased RWC in the alternate furrow and fixed furrow irrigation treatments relative to the FI treatment. Maximum oil content was observed in the treatments where full irrigation was applied, while minimum oil content was produced in the FFI-irrigated treatments. Potassium fertilizer also increased grain oil by 15% relative to the control. Deficit irrigation reduced oleic acid and erucic acid, whereas oleic acid and linoleic acid increased with increasing potassium.
Keywords: erucic acid, irrigation methods, linoleic acid, oil percent, oleic acid
Procedia PDF Downloads 269
189 An Improved Tracking Approach Using Particle Filter and Background Subtraction
Authors: Amir Mukhtar, Dr. Likun Xia
Abstract:
An improved, robust, and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful for non-Gaussian and non-linear estimation problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale- and rotation-invariant, robust to partial occlusion, and computationally efficient. The performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing the chrominance histograms of the target and candidate positions (particles). Color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream. Furthermore, a background subtraction technique is used for size estimation of the target. A qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion, and moving backgrounds.
Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination
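The predict-weight-resample cycle of a bootstrap particle filter can be sketched in one dimension; the random-walk motion model and Gaussian likelihood stand in for the paper's histogram comparison and are illustrative assumptions.

```python
import numpy as np

def particle_filter_1d(observations, n_particles=500,
                       proc_std=0.5, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state: predict each
    particle with process noise, weight it by the Gaussian likelihood of
    the observation (the role played by the chrominance-histogram
    comparison in the tracker), then resample by the weights."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)             # weight
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)            # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

# Noisy measurements of a target sitting near position 10
obs = 10.0 + np.random.default_rng(1).normal(0.0, 1.0, 50)
est = particle_filter_1d(obs)
```

In the actual tracker the state is the target's image position and scale, and the likelihood compares YIQ chrominance histograms instead of a scalar residual.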
Procedia PDF Downloads 364188 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy
Authors: Kemal Polat
Abstract:
In this paper, three feature weighting methods have been used to improve the classification performance for diabetic retinopathy (DR). To classify DR, features extracted from the output of several retinal image processing algorithms, covering image-level, lesion-specific and anatomical components, have been fed into the classifier algorithms. The dataset used in this study was taken from the University of California, Irvine (UCI) machine learning repository. Three feature weighting methods, namely fuzzy c-means clustering based, subtractive clustering based, and Gaussian mixture clustering based feature weighting, have been used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms, comprising the multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes, have been applied. The hybrid method combining subtractive clustering based feature weighting with a decision tree classifier achieved a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising for medical dataset classification.Keywords: machine learning, data weighting, classification, data mining
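The clustering-based weighting idea can be illustrated with a simplified stand-in: weight each feature by the ratio of its grand mean to the mean of its cluster-center values, then rescale the data. This sketch uses plain k-means rather than the paper's fuzzy c-means, subtractive, or Gaussian mixture clustering, and the specific weighting rule is an assumption for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain k-means; returns the final cluster centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return centers

def cluster_weight(X, k=2):
    """Weight each feature by the ratio of its grand mean to the mean of
    its cluster-center values, then rescale the data (illustrative rule)."""
    centers = kmeans(X, k)
    weights = X.mean(0) / centers.mean(0)
    return X * weights, weights

# Two-feature synthetic data: feature 0 separates two groups, feature 1 does not.
X = np.vstack([rng.normal([0.5, 5.0], 0.5, size=(50, 2)),
               rng.normal([6.0, 5.0], 0.5, size=(50, 2))])
Xw, w = cluster_weight(X)
print(w)
```

The weighted data `Xw` would then be passed to any of the downstream classifiers the abstract lists.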
Procedia PDF Downloads 316187 Estimation of Cholesterol Level in Different Brands of Vegetable Oils in Iraq
Authors: Mohammed Idaan Hassan Al-Majidi
Abstract:
An analysis of twenty-one assorted brands of vegetable oils in Babylon, Iraq, reveals varying levels of cholesterol content. Cholesterol was found to be present in most of the oil brands sampled using three standard methods. Cholesterol was detected in seventeen of the vegetable oil brands at concentrations of less than 1 mg/ml, while seven of the oil brands had cholesterol concentrations ranging between 1-4 mg/ml. Low iodine values were obtained in four of the vegetable oil brands, and three of them had high acid values. High performance liquid chromatography (HPLC) confirmed the presence of cholesterol at varying concentrations in all the oil brands and gave the lowest detectable cholesterol values of the three methods. The Laser brand, made from rapeseed, had the highest cholesterol concentration of 3.2 mg/ml, while the Grand brand, made from groundnuts, had the lowest concentration (0.12 mg/ml) in the HPLC analysis. The Liebermann-Burchard method showed that the Gino brand, from palm kernel, had the lowest concentration of cholesterol (3.86 mg/ml ±0.032), while the highest concentration of 3.996 mg/ml ±0.0404 was obtained in the sesame seed oil brand. This report is important in view of the health implications of cholesterol in our diets. Consequently, we have been able to show that there is no cholesterol-free oil on the market, contrary to what is shown on vegetable oil brand labels. Therefore, companies producing and marketing vegetable oils are enjoined to desist from misleading the public by labeling their products as “cholesterol free”. They should indicate the amount of cholesterol present in the vegetable oil, no matter how small the quantity may be.Keywords: vegetable oils, heart diseases, liebermann-burchard, cholesterol
Procedia PDF Downloads 243186 Biotransformation of Glycerine Pitch as Renewable Carbon Resource into P(3HB-co-4HB) Biopolymer
Authors: Amirul Al-Ashraf Abdullah, Hema Ramachandran, Iszatty Ismail
Abstract:
The oleochemical industry in Malaysia has diversified significantly owing to the abundant supply of both palm and kernel oils as raw materials, as well as the high demand for downstream products such as fatty acids, fatty alcohols and glycerine. However, environmental awareness is growing rapidly in Malaysia, because the oleochemical industry is one of the palm-oil based industries that poses a risk to the environment. Glycerine pitch is one of the scheduled wastes generated by the fatty acid plants in Malaysia, and its discharge may cause serious environmental problems. It is therefore imperative to find alternative applications for this waste glycerine. Consequently, the aim of this research is to explore the application of glycerine pitch as a direct fermentation substrate in the biosynthesis of the poly(3-hydroxybutyrate-co-4-hydroxybutyrate) [P(3HB-co-4HB)] copolymer, aiming to contribute toward the sustainable production of biopolymers. Utilization of glycerine pitch (10 g/l) together with 1,4-butanediol (5 g/l) yielded a copolymer containing 40 mol% 4HB monomer at the highest PHA concentration of 2.91 g/l. A yellow pigment exhibiting antimicrobial properties was synthesized simultaneously with the production of P(3HB-co-4HB) from glycerine pitch as a renewable carbon resource. Utilization of glycerine pitch in the biosynthesis of P(3HB-co-4HB) will not only reduce society’s dependence on non-renewable resources but also promote the development of cost-efficient microbial fermentation towards biosustainability and green technology.Keywords: biopolymer, glycerine pitch, natural pigment, P(3HB-co-4HB)
Procedia PDF Downloads 450185 Predictive Analytics of Student Performance Determinants
Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi
Abstract:
Every institute of learning is usually interested in the performance of its enrolled students, since the level of this performance determines the approach the institute may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated several supervised machine learning classification algorithms, namely Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict study performance. The accuracy, precision, recall, and F1 score obtained from 5-fold cross-validation were used to determine the best classification algorithm for predicting students’ performance. The SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Also, using the LR model, this study identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the student's academic history and work. Demographic factors such as age, gender, high school graduation, etc., had no significant effect on a student's performance.Keywords: student performance, supervised machine learning, classification, cross-validation, prediction
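The evaluation protocol, k-fold cross-validation over a candidate classifier, can be sketched as follows. A nearest-centroid classifier on synthetic data stands in for the paper's LR/SVM/LDA models, and all names, features and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def nearest_centroid_fit(X, y):
    """Per-class mean vectors: a deliberately simple stand-in classifier."""
    return {c: X[y == c].mean(0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = sorted(model)
    centroids = np.array([model[c] for c in classes])
    d = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return np.array(classes)[d.argmin(1)]

def cross_val_accuracy(X, y, k=5):
    """k-fold cross-validation: hold out each fold once, train on the rest."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = nearest_centroid_fit(X[train], y[train])
        pred = nearest_centroid_predict(model, X[test])
        scores.append((pred == y[test]).mean())
    return float(np.mean(scores))

# Synthetic "below/above average" data: two separable clusters of features.
X = np.vstack([rng.normal(0, 1, size=(60, 3)), rng.normal(4, 1, size=(60, 3))])
y = np.array([0] * 60 + [1] * 60)
acc = cross_val_accuracy(X, y)
print(acc)
```

In the study this loop would be repeated per algorithm, with precision, recall and F1 collected alongside accuracy to pick the best model.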
Procedia PDF Downloads 112184 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
Authors: Jorge Gonzalez Camus, Carlos Lizama
Abstract:
In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type, involving the fractional derivative in the Caputo sense. The equation involves an almost sectorial operator on a Banach space X and a kernel belonging to a large class, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. The equation is initially posed as a Cauchy problem involving a fractional derivative in the Caputo sense. Then the equivalent integral version is formulated; defining a convenient functional, using the analytic integral resolvent operator, and verifying the hypotheses of the Darbo-type fixed point theorem yields the existence of a mild solution to the initial problem. Furthermore, each mild solution is globally attractive, a property that is desirable for the asymptotic behavior of solutions.Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations
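For reference, the Caputo derivative of order 0 < α < 1 used in such problems, together with a schematic neutral-type Cauchy problem, can be written as follows; the notation here is standard but assumed, not taken from the paper, and the exact kernel and hypotheses are those of the paper.

```latex
% Caputo fractional derivative of order 0 < alpha < 1:
{}^{C}\!D_t^{\alpha} u(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} (t-s)^{-\alpha}\, u'(s)\, ds .

% Schematic neutral-type Cauchy problem
% (A almost sectorial, u_t the history segment):
{}^{C}\!D_t^{\alpha}\bigl[\, u(t) - g(t, u_t) \,\bigr]
  = A\, u(t) + f(t, u_t), \qquad u(0) = u_0 .
```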
Procedia PDF Downloads 140183 Robust Medical Image Watermarking based on Contourlet and Extraction Using ICA
Authors: S. Saju, G. Thirugnanam
Abstract:
In this paper, a medical image watermarking algorithm based on the contourlet transform is proposed. Medical image watermarking is a special subcategory of image watermarking in the sense that the images have special requirements: watermarked medical images must not differ perceptually from their original counterparts, because the clinical reading of the images must not be affected. Watermarking techniques based on the wavelet transform are reported in many publications, but contourlet-based schemes offer better robustness and security. The main challenge in exploiting geometry in images comes from the discrete nature of the data. In this paper, the original image is decomposed to two levels using the contourlet transform, and the watermark is embedded in the resulting sub-bands. Sub-band selection is based on the Peak Signal to Noise Ratio (PSNR) calculated between the watermarked and original images. To extract the watermark, Kernel ICA is used; a novel characteristic of this approach is that it does not require the transformation process to extract the watermark. Simulation results show that the proposed scheme is robust against attacks such as salt-and-pepper noise, median filtering and rotation. Performance measures such as PSNR and the similarity measure are evaluated and compared with the Discrete Wavelet Transform (DWT) to prove the robustness of the scheme. Simulations are carried out using Matlab software.Keywords: digital watermarking, independent component analysis, wavelet transform, contourlet
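The embed-in-a-sub-band idea can be illustrated with a one-level Haar wavelet transform standing in for the paper's two-level contourlet decomposition. This is a deliberate simplification: extraction here is non-blind sub-band comparison rather than Kernel ICA, and the strength, sub-band choice and sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar2d(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH sub-bands."""
    a = (img[0::2] + img[1::2]) / 2                      # row averages
    d = (img[0::2] - img[1::2]) / 2                      # row differences
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.repeat(ll, 2, axis=1); a[:, 0::2] += hl; a[:, 1::2] -= hl
    d = np.repeat(lh, 2, axis=1); d[:, 0::2] += hh; d[:, 1::2] -= hh
    out = np.repeat(a, 2, axis=0)
    out[0::2] += d; out[1::2] -= d
    return out

# Embed a +/-1 watermark additively in the LH sub-band, then measure PSNR.
alpha = 5.0                                              # embedding strength
img = rng.uniform(0, 255, (64, 64))
wm = rng.choice([-1.0, 1.0], size=(32, 32))
ll, lh, hl, hh = haar2d(img)
marked = ihaar2d(ll, lh + alpha * wm, hl, hh)
psnr = 10 * np.log10(255 ** 2 / ((img - marked) ** 2).mean())

# Non-blind extraction: compare the marked sub-band against the original.
recovered = np.sign(haar2d(marked)[1] - lh)
print(round(psnr, 2), (recovered == wm).mean())
```

Raising `alpha` makes extraction more robust to attacks but lowers the PSNR of the watermarked image, which is exactly the trade-off the sub-band selection in the paper manages.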
Procedia PDF Downloads 516182 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network
Authors: Abdolreza Memari
Abstract:
In this article, a numerical method is used to estimate the viscosity of crude oil. The method measures the crude oil's viscosity in three states: the viscosity of saturated oil, the viscosity above the bubble point, and the viscosity below the saturation pressure. The crude oil's viscosity is then estimated using the KHAN model and the roller ball method. After that, using these data, which include the conditions affecting the viscosity measurement, a radial basis function (RBF) neural network is trained on the viscosities estimated by the presented method. This network is a kind of two-layer artificial neural network whose hidden-layer activation function is a Gaussian function. After training the radial basis network, the results of the experimental method and the artificial intelligence method are compared. Once trained, the network can estimate the crude oil's viscosity with acceptable accuracy under conditions other than those measured, without recourse to the KHAN model or experiments. The results show that the radial basis network is highly capable of estimating crude oil viscosity, and saving time and cost is another advantage of this investigation.Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model
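A minimal sketch of such a two-layer RBF network: Gaussian hidden units centered on training samples, with the output-layer weights solved by linear least squares. The viscosity data are replaced by a smooth synthetic surface, and every name and parameter here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_design(X, centers, gamma):
    """Gaussian hidden-layer activations exp(-gamma * ||x - c||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rbf_fit(X, y, n_centers=30, gamma=10.0):
    """Pick centers from the training data; solve the output-layer
    weights by linear least squares."""
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    H = rbf_design(X, centers, gamma)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, gamma, w

def rbf_predict(model, X):
    centers, gamma, w = model
    return rbf_design(X, centers, gamma) @ w

# Toy stand-in for the viscosity surface: a smooth function of two inputs.
X = rng.uniform(0, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = rbf_fit(X, y)
err = np.abs(rbf_predict(model, X) - y).mean()
print(err)
```

Because the output layer is linear, training reduces to one least-squares solve, which is why RBF networks of this kind are fast to fit compared with backpropagation-trained networks.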
Procedia PDF Downloads 486