Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9329

7799 Efficient Schemes of Classifiers for Remote Sensing Satellite Imageries of Land Use Pattern Classifications

Authors: S. S. Patil, Sachidanand Kini

Abstract:

Classification of land use patterns is challenging owing to the complexity and variability of remote sensing imagery. This research exploits remote sensing to extract significant, spatially variable factors, such as land cover and land use, from satellite images of remote arid areas in Karnataka State, India. Diverse classification techniques, unsupervised and supervised (maximum likelihood, Mahalanobis distance, and minimum distance), are applied to raw satellite images of Bellary District, Karnataka, India. The accuracy of the results is evaluated by visual comparison with standard ground-truthed maps. The maximum likelihood technique gave the finest results, whereas both the minimum distance and Mahalanobis distance methods overvalued agricultural land areas. Despite the loss of a few minor features due to the low resolution of the satellite images, good agreement was found between the parameters extracted automatically from the derived maps and field observations.
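
For illustration, a minimal NumPy sketch of the two distance-based supervised classifiers named above is given below. The class means, covariances and pixel values are toy stand-ins, not the study's training data.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class with the nearest mean (Euclidean)."""
    # pixels: (n, bands); class_means: (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def mahalanobis_classify(pixels, class_means, class_covs):
    """Assign each pixel to the class with the smallest Mahalanobis distance."""
    n, k = len(pixels), len(class_means)
    d2 = np.empty((n, k))
    for j in range(k):
        diff = pixels - class_means[j]           # (n, bands)
        inv_cov = np.linalg.inv(class_covs[j])   # (bands, bands)
        d2[:, j] = np.einsum('ni,ij,nj->n', diff, inv_cov, diff)
    return np.argmin(d2, axis=1)

# Toy example with two spectral bands and two classes
rng = np.random.default_rng(0)
means = np.array([[0.2, 0.4], [0.6, 0.1]])
covs = np.array([np.eye(2) * 0.01, np.eye(2) * 0.02])
pixels = rng.uniform(0, 1, size=(5, 2))
print(minimum_distance_classify(pixels, means))
print(mahalanobis_classify(pixels, means, covs))
```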

Keywords: Mahalanobis distance, minimum distance, supervised, unsupervised, user classification accuracy, producer's classification accuracy, maximum likelihood, kappa coefficient

Procedia PDF Downloads 183
7798 Modified CUSUM Algorithm for Gradual Change Detection in Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean of a time series, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in mean. The algorithm is then applied and tested on a randomly generated time series with a gradual linear trend in mean to demonstrate its usefulness.
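
As a point of reference, the standard two-sided CUSUM chart that the MCUSUM is benchmarked against can be sketched as follows; the reference value k and decision threshold h are illustrative, and the paper's MCUSUM modification is not reproduced here.

```python
import numpy as np

def cusum(x, target, k, h):
    """Two-sided CUSUM chart: return the first index at which either
    cumulative sum exceeds the decision threshold h, or -1 if none does."""
    s_hi = s_lo = 0.0
    for t, xt in enumerate(x):
        s_hi = max(0.0, s_hi + (xt - target) - k)   # guards against upward shifts
        s_lo = max(0.0, s_lo - (xt - target) - k)   # guards against downward shifts
        if s_hi > h or s_lo > h:
            return t
    return -1

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 200),
                         rng.normal(0, 1, 200) + np.linspace(0, 2, 200)])  # gradual drift
print(cusum(series, target=0.0, k=0.5, h=5.0))
```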

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 299
7797 Enhanced Imperialist Competitive Algorithm for the Cell Formation Problem Using Sequence Data

Authors: S. H. Borghei, E. Teymourian, M. Mobin, G. M. Komaki, S. Sheikh

Abstract:

The imperialist competitive algorithm (ICA) is a recent meta-heuristic method, inspired by social evolution, for solving NP-hard problems. The ICA is a population-based algorithm that has achieved great performance in comparison to other meta-heuristics. This study develops an enhanced ICA approach to solve the cell formation problem (CFP) using sequence data. In addition to the conventional ICA, an enhanced version, namely EICA, applies local search techniques to add more intensification aptitude and to embed the features of exploration and intensification more successfully. Suitable performance measures are used to compare the proposed algorithms with other powerful solution approaches in the literature. Likewise, to check the proficiency of the algorithms, forty test problems are presented: five benchmark problems have sequence data, and the others are based on 0-1 matrices modified into sequence-based problems. Computational results elucidate the efficiency of the EICA in solving CFP instances.
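
A minimal continuous-domain sketch of the ICA's core loop (assimilation plus imperialist exchange) is shown below on a toy objective; the full algorithm also includes revolution and inter-empire competition, and the CFP itself would need a discrete solution encoding.

```python
import numpy as np

def ica_minimize(f, dim, bounds, n_countries=50, n_imperialists=5,
                 beta=2.0, n_iter=200, seed=0):
    """Simplified ICA loop: assimilation and imperialist exchange only.
    Revolution and inter-empire competition are omitted for brevity."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_countries, dim))
    cost = np.apply_along_axis(f, 1, pos)
    for _ in range(n_iter):
        order = np.argsort(cost)                  # strongest countries first
        pos, cost = pos[order], cost[order]
        imp = pos[:n_imperialists].copy()
        col = pos[n_imperialists:].copy()
        imp_cost = cost[:n_imperialists].copy()
        owners = rng.integers(0, n_imperialists, size=len(col))
        # Assimilation: each colony moves toward its imperialist
        step = beta * rng.random((len(col), 1)) * (imp[owners] - col)
        col = np.clip(col + step, lo, hi)
        col_cost = np.apply_along_axis(f, 1, col)
        # Exchange: a colony that beats its imperialist takes its place
        for i, o in enumerate(owners):
            if col_cost[i] < imp_cost[o]:
                imp[o], col[i] = col[i].copy(), imp[o].copy()
                imp_cost[o], col_cost[i] = col_cost[i], imp_cost[o]
        pos = np.vstack([imp, col])
        cost = np.concatenate([imp_cost, col_cost])
    best = np.argmin(cost)
    return pos[best], cost[best]

best_x, best_f = ica_minimize(lambda v: float(np.sum(v**2)), dim=5, bounds=(-10, 10))
print(best_x, best_f)
```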

Keywords: cell formation problem, group technology, imperialist competitive algorithm, sequence data

Procedia PDF Downloads 455
7796 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering

Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli

Abstract:

Clustering is a versatile instrument in the analysis of data collections, providing insights into the underlying structures of the dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by involving the concept of partial memberships (a value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised, like FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, PCM, etc. Each of these algorithms has its own advantages and drawbacks, so none of them performs superiorly on all datasets. In this paper we experimentally compare the FCM, GK and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model, which exploits the advantages and diminishes the drawbacks of each algorithm. Secondly, we experimentally compare the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
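
For reference, the baseline FCM algorithm that the hybrid model starts from alternates membership and centroid updates; a compact NumPy sketch follows (the toy data and fuzzifier m = 2 are illustrative).

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = d_ij^(-2/(m-1)) / sum_k d_ik^(-2/(m-1))
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(centers)
```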

Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model

Procedia PDF Downloads 514
7795 Equity Risk Premiums and Risk Free Rates in Modelling and Prediction of Financial Markets

Authors: Mohammad Ghavami, Reza S. Dilmaghani

Abstract:

This paper presents an adaptive framework for modelling financial markets using equity risk premiums, risk-free rates and volatilities. The recorded economic factors are initially used to train four adaptive filters for a certain limited period of time in the past. Once the systems are trained, the adjusted coefficients are used for modelling and prediction of an important financial market index. Two different approaches based on the least mean squares (LMS) and recursive least squares (RLS) algorithms are investigated. A performance analysis of each method in terms of the mean squared error (MSE) is presented and the results are discussed. Computer simulations carried out using recorded data show MSEs of 4% and 3.4% for next-month prediction using the LMS and RLS adaptive algorithms, respectively. In terms of twelve-month prediction, the RLS method shows a better trend estimation than the LMS algorithm.
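
A minimal sketch of an LMS one-step-ahead predictor of the kind described (the filter order and step size are illustrative); RLS differs mainly in replacing the gradient step with a recursive inverse-correlation update.

```python
import numpy as np

def lms_predict(x, order=4, mu=0.01):
    """One-step-ahead LMS prediction: y_hat[t] = w . x[t-order:t]."""
    w = np.zeros(order)
    y_hat = np.zeros(len(x))
    for t in range(order, len(x)):
        u = x[t - order:t][::-1]        # most recent sample first
        y_hat[t] = w @ u
        e = x[t] - y_hat[t]             # prediction error
        w += 2 * mu * e * u             # LMS weight update
    return y_hat, w

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))     # toy non-stationary series
y_hat, w = lms_predict(x)
print('MSE:', np.mean((x[4:] - y_hat[4:]) ** 2))
```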

Keywords: adaptive methods, LSE, MSE, prediction of financial markets

Procedia PDF Downloads 336
7794 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)

Authors: Davood Rajabi, Mojgan Yazdani

Abstract:

The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency is influenced by its three parameters. Therefore, this study assesses the efficiency of the Imperialist Competition Algorithm (ICA) in evaluating the optimum parameters of the non-linear Muskingum model. In addition to ICA, a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were also used, to provide criteria against which to judge ICA. ICA was first applied to Wilson flood routing; then, the routing of two flood events of the Doab Samsami River was investigated. For the Wilson flood, the target function was the sum of squared deviations (SSQ) between observed and calculated discharges. For routing the two other floods, in addition to SSQ, the sum of absolute deviations (SAD) between observed and calculated discharges was also considered as a target function. For the first floodwater, GA showed the best performance based on SSQ, whereas ICA ranked first based on SAD. For the second floodwater, ICA performed better on both target functions. According to the obtained results, ICA can be regarded as an appropriate method for evaluating the parameters of the non-linear Muskingum model.
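
A sketch of the SSQ objective for the non-linear Muskingum storage model, with a synthetic hydrograph and SciPy's differential evolution standing in for ICA/GA/PSO; the storage form, bounds and data are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution

def route(params, inflow, o0, dt=1.0):
    """Route an inflow hydrograph through the nonlinear Muskingum storage
    S = K*(x*I + (1-x)*O)**m using a simple Euler storage update."""
    K, x, m = params
    O = np.empty_like(inflow)
    O[0] = o0
    S = K * (x * inflow[0] + (1 - x) * o0) ** m
    for t in range(1, len(inflow)):
        S = max(S, 1e-9)   # guard against extreme trial parameters
        O[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1 - x)
        S += dt * (inflow[t] - O[t])
    return O

def ssq(params, inflow, outflow_obs):
    """Sum of squared deviations between observed and routed discharges."""
    return np.sum((outflow_obs - route(params, inflow, outflow_obs[0])) ** 2)

t = np.arange(20.0)
inflow = 20 + 80 * np.exp(-0.5 * ((t - 6) / 2.5) ** 2)    # synthetic hydrograph
outflow_obs = route((0.9, 0.25, 1.1), inflow, inflow[0])  # synthetic "observations"

res = differential_evolution(ssq, bounds=[(0.1, 5), (0.01, 0.49), (1.0, 2.0)],
                             args=(inflow, outflow_obs), seed=0)
print(res.x, res.fun)   # recovered (K, x, m) and the SSQ at the optimum
```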

Keywords: Doab Samsami River, genetic algorithm, imperialist competition algorithm, meta-heuristic algorithms, particle swarm optimization, Wilson flood

Procedia PDF Downloads 504
7793 Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type

Authors: Hassan J. Al Salman, Ahmed A. Al Ghafli

Abstract:

In this study, we consider a nonlinear-in-time finite element approximation of a reaction diffusion system of lambda-omega type. We use a fixed-point theorem to prove existence of the approximations at each time level. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. In addition, we employ the Nochetto mathematical framework to prove an optimal error bound in time for d = 1, 2 and 3 space dimensions. Finally, we present some numerical experiments to verify the obtained theoretical results.
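
For context, a reaction diffusion system of lambda-omega type is commonly written as below; the abstract does not state the particular λ and ω used, so the classical choice is quoted only as an example.

```latex
u_t = \Delta u + \lambda(r)\,u - \omega(r)\,v, \qquad
v_t = \Delta v + \omega(r)\,u + \lambda(r)\,v, \qquad
r^2 = u^2 + v^2,
% classical example: \lambda(r) = 1 - r^2, \quad \omega(r) = -\beta r^2
```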

Keywords: reaction diffusion system, finite element approximation, stability estimates, error bound

Procedia PDF Downloads 430
7792 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. It has not been studied after kidney transplantation, even though nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for the period between September 2015 and December 2018. The information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set on five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and recipient and donor BMI values as the most predictive risk factors for 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
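
A minimal sketch of the modeling step described above, assuming the xgboost and scikit-learn libraries; the synthetic features, hyperparameters and single train/test split are stand-ins for the study's protocol of five bootstrapped partitions.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: rows are admissions, columns mimic features such as
# length of stay, min/max glucose and recipient/donor BMI (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1036, 8))
y = rng.binomial(1, 0.22, size=1036)   # ~22% readmitted within 30 days

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                          learning_rate=0.05, eval_metric='auc')
model.fit(X_tr, y_tr)
print('AUC:', roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print('feature importances:', model.feature_importances_)
```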

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 90
7791 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video

Authors: Nidhal Azawi

Abstract:

Colonoscopy is widely used around the world to detect abnormalities, and early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos so as to provide better information for physicians by removing some artifacts. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered so that their information can be integrated into a new image that reveals more information than the original one. Colon images were first classified into informative and non-informative images using a deep neural network. Two different strategies were then used to treat the two classes: informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas non-informative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) with an adaptive temporal median filter. Comparison results showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The proposed algorithm reduced the alignment error by about a factor of 0.3, with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting non-informative images, which have few or no details because of blurriness, defocus, or dominant specular highlights, into informative images.
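
A minimal OpenCV sketch of the motion compensation step (sparse Lucas-Kanade flow plus an affine fit); it illustrates the registration idea only, not the paper's full LK/LKDOG pipeline.

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, mov_gray):
    """Register a frame to a reference: track sparse features with
    Lucas-Kanade optical flow, fit an affine motion model, then warp."""
    pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=7)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, mov_gray, pts, None)
    good = status.ravel() == 1
    M, _ = cv2.estimateAffinePartial2D(moved[good], pts[good])
    return cv2.warpAffine(mov_gray, M, mov_gray.shape[::-1])

# A temporal median over several aligned frames then suppresses artifacts:
# denoised = np.median(np.stack(aligned_frames), axis=0)
```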

Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter

Procedia PDF Downloads 113
7790 A Survey on Lossless Compression of Bayer Color Filter Array Images

Authors: Alina Trifan, António J. R. Neves

Abstract:

Although most digital cameras acquire images in a raw format, based on a Color Filter Array (CFA) that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in prediction-based methods.
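
A sketch of the channel-splitting pre-processing stage, assuming an RGGB mosaic layout (the actual CFA phase depends on the camera); each returned plane would then be fed to the lossless coder.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB mosaic into per-channel images before lossless coding;
    the two green phases are stacked side by side into one green image."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g = np.hstack([g1, g2])
    return r, g, b

raw = np.random.default_rng(0).integers(0, 1024, (8, 8), dtype=np.uint16)
for plane in split_bayer_rggb(raw):
    print(plane.shape)
```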

Keywords: Bayer image, CFA, lossless compression, image coding standards

Procedia PDF Downloads 321
7789 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design

Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad

Abstract:

In this work, we present a new form of the method of characteristics, a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of the method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The method is developed here under real gas theory because, when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters; the gas no longer behaves as perfect, and its equation of state changes to that of a real gas. The presented characteristic equations remain valid whatever the area or field of study. The developed Prandtl-Meyer function is inserted into the mathematical system to find a new model in which the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the equation of state, the thermodynamic parameters and the value of the Prandtl-Meyer function. With the assumption that Berthelot's equation of state accounts for molecular size and intermolecular force effects, expressions are developed for analyzing supersonic flow of a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution is carried out by the finite difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum length nozzles under the effect of the stagnation parameters of the fluid flow. Nozzle shapes and characteristics obtained for air with our real gas theory are compared with those of the perfect gas (PG) and high temperature models.
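
For reference, the perfect-gas Prandtl-Meyer function that the paper generalizes can be sketched as follows; the real-gas (Berthelot) correction developed in the paper is not reproduced here.

```python
import numpy as np

def prandtl_meyer(M, gamma=1.4):
    """Perfect-gas Prandtl-Meyer function nu(M), in radians, for M >= 1."""
    a = (gamma + 1) / (gamma - 1)
    return (np.sqrt(a) * np.arctan(np.sqrt((M**2 - 1) / a))
            - np.arctan(np.sqrt(M**2 - 1)))

print(np.degrees(prandtl_meyer(2.0)))   # ~26.38 deg for gamma = 1.4
```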

Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method

Procedia PDF Downloads 243
7788 New Segmentation of Piecewise Moving-Average Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

This paper addresses the problem of signal segmentation within a Bayesian framework using a reversible jump MCMC algorithm. The signal is modelled by a piecewise constant Moving-Average (MA) model in which the number of segments, the positions of the change-points, and the order and coefficients of the MA model for each segment are unknown. The reversible jump MCMC algorithm is then used to generate samples distributed according to the joint posterior distribution of the unknown parameters. These samples allow the calculation of some interesting features of the posterior distribution. The performance of the methodology is illustrated via several simulation results.
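
One way to write the piecewise MA model the sampler targets is shown below (a sketch of the setup; the segment boundaries τ and the per-segment orders q_k are among the unknowns explored by the reversible jump moves):

```latex
x_t = \varepsilon_t + \sum_{j=1}^{q_k} \theta_{k,j}\,\varepsilon_{t-j},
\qquad \tau_{k-1} < t \le \tau_k, \qquad
\varepsilon_t \sim \mathcal{N}(0, \sigma_k^2).
```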

Keywords: piecewise, moving-average model, reversible jump MCMC, signal segmentation

Procedia PDF Downloads 227
7787 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for the determination of petroleum properties. This curve provides information about the performance of petroleum in terms of its cuts. The experiment takes several days to perform, so techniques that determine the properties faster, using software that calculates the distillation curve from limited information about the crude oil, are used instead. In order to evaluate the accuracy of distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (software vs. ASTM) and 0.2%-5.1% (software vs. Spaltrohr).

Keywords: distillation curve, petroleum distillation, simulation, true boiling point curve

Procedia PDF Downloads 442
7786 Algorithmic Approach to Management of Complications of Permanent Facial Filler: A Saudi Experience

Authors: Luay Alsalmi

Abstract:

Background: Facial filler is the most common type of cosmetic procedure next to botox. Permanent filler is preferred nowadays due to the low cost brought about by non-recurring injection appointments. However, such fillers pose a higher risk of complications, with even greater adverse effects when the procedure is done using unknown dermal filler injections. Aim: This study aimed to establish an algorithm to categorize and manage patients who receive permanent fillers. Materials and Methods: Twelve participants presented to the service through the emergency department or as outpatients from November 2015 to May 2021. Demographics such as age, sex, date of injection, time of onset, and types of complications were collected. After examination, all cases were managed based on the established algorithm, and FACE-Q was used to measure overall satisfaction and psychological well-being. Results: An algorithm to diagnose and manage these patients effectively, with a high satisfaction rate, was established in this study. All participants were non-smoker females with no known medical comorbidities. The algorithm determined the treatment plan when complications were faced. High appearance-related psychosocial distress was observed prior to surgery, and it dropped significantly after surgery; FACE-Q established evidence of satisfactory ratings among patients before and after surgery. Conclusion: This treatment algorithm can guide the surgeon in formulating a suitable plan with fewer complications and a high satisfaction rate.

Keywords: facial filler, FACE-Q, psycho-social stress, botox, treatment algorithm

Procedia PDF Downloads 84
7785 Commissioning of a Flattening Filter Free (FFF) Beam Using an Anisotropic Analytical Algorithm (AAA)

Authors: Safiqul Islam, Anamul Haque, Mohammad Amran Hossain

Abstract:

Aim: To compare the dosimetric parameters of flattened and flattening filter free (FFF) beams and to validate the beam data using the anisotropic analytical algorithm (AAA). Materials and Methods: All the dosimetric data (i.e., depth dose profiles, profile curves, output factors, penumbra, etc.) required for the beam modeling of AAA were acquired using the Blue Phantom RFA for 6 MV, 6 FFF, 10 MV and 10 FFF. The Progressive Resolution Optimizer and Dose Volume Optimizer algorithms for VMAT and IMRT were also configured in the beam model. The AAA beam models were compared with the measured data sets. Results: Due to the larger low-energy component in the 6 FFF and 10 FFF beams, the surface doses are 10 to 15% higher than those of the flattened 6 MV and 10 MV beams. An FFF beam has a lower mean energy than the corresponding flattened beam, and the beam quality indices were 0.667 for 6 MV, 0.629 for 6 FFF, 0.74 for 10 MV and 0.695 for 10 FFF. Gamma evaluation with 2% dose and 2 mm distance criteria for the open beam, IMRT and VMAT plans was also performed, and good agreement was found between the modeled and measured data. Conclusion: We have successfully modeled the AAA algorithm for the flattened and FFF beams and achieved good agreement between the calculated and measured values.
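
A simplified 1-D sketch of the gamma evaluation used above (2% global dose difference, 2 mm distance to agreement); clinical gamma tools work on 2-D/3-D dose grids with interpolation, so this is illustrative only.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dd=0.02, dta=2.0):
    """1-D global gamma index: for each reference point, minimize the
    combined dose-difference / distance-to-agreement metric."""
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / (dd * d_max)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

x = np.linspace(0, 100, 201)               # position in mm
ref = np.exp(-((x - 50) / 20) ** 2)        # toy dose profiles
ev = np.exp(-((x - 50.5) / 20) ** 2)
print('pass rate:', np.mean(gamma_1d(ref, ev, x) <= 1.0))
```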

Keywords: commissioning, flattening filter free (FFF), anisotropic analytical algorithm (AAA), flattened beam, parameters

Procedia PDF Downloads 301
7784 Comparison between the Conventional Methods and PSO Based MPPT Algorithm for Photovoltaic Systems

Authors: Ramdan B. A. Koad, Ahmed F. Zobaa

Abstract:

Since the output characteristics of a photovoltaic (PV) system depend on the ambient temperature, the solar radiation and the load impedance, its maximum power point (MPP) is not constant. Under each condition, a PV module has a point at which it can produce its MPP. Therefore, a maximum power point tracking (MPPT) method is needed to keep the PV panel operating at its MPP. This paper presents a comparative study of the conventional MPPT methods used in PV systems, Perturb and Observe (P&O) and Incremental Conductance (IncCond), against a Particle Swarm Optimization (PSO) algorithm for MPPT. To evaluate the study, the proposed PSO MPPT is implemented on a DC-DC converter and compared with the P&O and IncCond methods in terms of tracking speed, accuracy and performance, using the MATLAB tool Simulink. The simulation results show that the proposed algorithm is simple and superior to the P&O and IncCond methods.
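
A minimal sketch of one P&O iteration against a toy power-voltage curve; the step size, toy curve and setpoints are illustrative, and a real controller would typically act on the converter duty cycle rather than on voltage directly.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: keep perturbing in the same direction while power
    rises, reverse when it falls. Returns the next voltage setpoint."""
    if p - p_prev >= 0:
        direction = 1 if v - v_prev >= 0 else -1   # keep going
    else:
        direction = -1 if v - v_prev >= 0 else 1   # reverse
    return v + direction * step

def pv_power(v):
    return max(0.0, 100 - (v - 17.0) ** 2)   # toy P-V curve peaking at 17 V

v_prev, p_prev = 15.0, pv_power(15.0)
v = 15.5
for _ in range(30):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(v)   # oscillates around the MPP near 17 V
```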

Keywords: photovoltaic systems, maximum power point tracking, perturb and observe method, incremental conductance method, particle swarm optimization algorithm

Procedia PDF Downloads 358
7783 Survival Analysis Based Delivery Time Estimates for Display FAB

Authors: Paul Han, Jun-Geol Baek

Abstract:

In the flat panel display industry, the scheduler and dispatching system used to meet production target quantities and production deadlines are the major production management systems, controlling each facility's production order and the distribution of WIP (work in process). In a dispatching system, delivery time is a key factor, as it determines when a lot can be supplied to a facility. In this paper, we use survival analysis methods to identify the main factors in, and build a forecasting model of, delivery time. Of the survival analysis techniques available for selecting important explanatory variables, the Cox proportional hazards model is used. To build the prediction model, the Accelerated Failure Time (AFT) model is used. Performance comparisons were conducted against two other models: a technical statistics model based on transfer history and a linear regression model using the same explanatory variables as the AFT model. As a result, under the mean squared error (MSE) criterion, the AFT model reduced error by 33.8% compared to the existing prediction model and by 5.3% compared to the linear regression model. This survival analysis approach is applicable to implementing a delivery time estimator in display manufacturing, and it can contribute to improving the productivity and reliability of the production management system.
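
A minimal sketch of the two survival models named above using the lifelines library; the column names and toy data are placeholders, and a Weibull AFT is chosen here only as one common AFT specification.

```python
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

# df: one row per lot, with 'duration' (delivery time), 'event' (1 = delivered)
# and candidate covariates such as WIP level (all names illustrative).
df = pd.DataFrame({'duration': [5, 8, 3, 9, 6, 4],
                   'event':    [1, 1, 1, 0, 1, 1],
                   'wip':      [10, 25, 7, 30, 15, 9]})

cph = CoxPHFitter().fit(df, duration_col='duration', event_col='event')
cph.print_summary()                     # screen covariates by hazard ratio

aft = WeibullAFTFitter().fit(df, duration_col='duration', event_col='event')
print(aft.predict_median(df))           # delivery-time forecasts
```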

Keywords: delivery time, survival analysis, Cox PH model, accelerated failure time model

Procedia PDF Downloads 543
7782 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing

Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor

Abstract:

This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicles are followed using Kalman filters. The last step of the proposed algorithm registers each vehicle passing and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cabling and thus facilitating deployment and relocation to any site where the system could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
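
A compact OpenCV sketch of the background subtraction stage (GMM-based MOG2 with shadow suppression and morphology); the video file name and area threshold are illustrative, and the edge detection and Kalman tracking steps are omitted.

```python
import cv2

cap = cv2.VideoCapture('traffic.mp4')            # placeholder file name
bgs = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)
    mask[mask == 127] = 0                        # MOG2 marks shadows as 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > 500]
    # contour areas would drive the size-based classification step
cap.release()
```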

Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing

Procedia PDF Downloads 322
7781 Mathematical and Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type

Authors: Hassan Al Salman, Ahmed Al Ghafli

Abstract:

In this study we consider a nonlinear-in-time finite element approximation of a reaction diffusion system of lambda-omega type. We use a fixed-point theorem to prove existence of the approximations. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. Also, we prove an optimal error bound in time for d = 1, 2 and 3 space dimensions. Finally, we present some numerical experiments to verify the theoretical results.

Keywords: reaction diffusion system, finite element approximation, fixed point theorem, an optimal error bound

Procedia PDF Downloads 533
7780 A Comparative Analysis of Global Minimum Variance and Naïve Portfolios: Performance across Stock Market Indices and Selected Economic Regimes Using Various Risk-Return Metrics

Authors: Lynmar M. Didal, Ramises G. Manzano Jr., Jacque Bon-Isaac C. Aboy

Abstract:

This study analyzes the performance of global minimum variance (GMV) and naive portfolios across different economic periods, using monthly stock returns from the Philippine Stock Exchange Index (PSEI), S&P 500, and Dow Jones Industrial Average (DOW). Performance is evaluated through the Sharpe ratio, Sortino ratio, Jensen's Alpha, Treynor ratio, and Information ratio. Additionally, the study investigates the impact of short selling on portfolio performance. Six time periods are defined for analysis, encompassing events such as the global financial crisis and the COVID-19 pandemic. Findings indicate that the naive portfolio generally outperforms the GMV portfolio in the S&P 500, signifying higher returns with increased volatility. Conversely, in the PSEI and DOW, the GMV portfolio shows more efficient risk-adjusted returns. Short selling significantly impacts the GMV portfolio during the mid-GFC and mid-COVID periods. The study offers insights for investors, suggesting the naive portfolio for higher risk tolerance and the GMV portfolio as a conservative alternative.
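
For reference, the unconstrained GMV weights have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1); a NumPy sketch follows. The no-short-sale variant really requires a quadratic programming solver, so the clipping shown is only a crude stand-in.

```python
import numpy as np

def gmv_weights(returns, allow_short=True):
    """Global minimum variance weights from a matrix of asset returns."""
    S = np.cov(returns, rowvar=False)             # sample covariance
    w = np.linalg.solve(S, np.ones(S.shape[0]))   # Sigma^-1 * 1
    w /= w.sum()                                  # normalize to sum to 1
    if not allow_short:
        w = np.clip(w, 0, None)                   # crude stand-in for a QP
        w /= w.sum()
    return w

rets = np.random.default_rng(0).normal(0.01, 0.05, size=(120, 5))  # toy monthly returns
print(gmv_weights(rets))
print(np.full(5, 1 / 5))   # the naive 1/N benchmark for comparison
```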

Keywords: portfolio performance, global minimum variance, naïve portfolio, risk-adjusted metrics, short-selling

Procedia PDF Downloads 96
7779 Effect of Abiotic Factors on Population of Red Cotton Bug Dysdercus Koenigii F. (Heteroptera: Pyrrhocoridae) and Its Impact on Cotton Boll Disease

Authors: Haider Karar, Saghir Ahmad, Amjad Ali, Ibrar Ul Haq

Abstract:

The experiment was conducted at the Cotton Research Station, Multan, to study the impact of weather factors and the red cotton bug (RCB) on the cotton boll disease that yielded yellowish lint during 2012. The population of RCB, along with abiotic factors, was recorded during three consecutive years, i.e., 2012, 2013 and 2014. Along with the RCB population and abiotic factors, the numbers of unopened/opened cotton bolls (UOB) and the percentages of yellowish lint (YL) and whitish lint (WL) were also recorded. The data revealed that the population of RCB per plant remained at 0.50 and 0.34 during 2012 and 2013 but increased to 3.21 per plant during 2014. The number of UOB was higher, i.e., 13.43%, in 2012, with YL at 76.30% and WL at 23.70%, when the average maximum temperature was 34.73°C, the minimum temperature 22.83°C, the RH 77.43% and the rainfall 11.08 mm. Similarly, in 2013 the number of UOB was lower, i.e., 0.34 per plant, with YL at 1.48% and WL at 99.53%, when the average maximum temperature was 34.60°C, the minimum temperature 23.37°C, the RH 73.01% and the rainfall 9.95 mm. During 2014, the RCB population per plant was 3.22 with no UOB, YL at 0.00% and WL at 100%, when the average maximum temperature was 23.70°C, the minimum temperature 23.18°C, the RH 71.67% and the rainfall 4.55 mm. It is therefore concluded that cotton boll disease was more prevalent during 2012 due to higher rainfall and higher RH, and that the RCB may be the carrier of the boll rot disease pathogen during periods of higher rainfall.

Keywords: red cotton bug, cotton, weather factors, years

Procedia PDF Downloads 345
7778 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach for predicting compressor performance with inlet distortions. In the PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that can correct the error. The revised method employs the energy, momentum and continuity equations to acquire the needed parameters and replace the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Predictions of their performance under non-uniform inlet conditions are obtained through the revised calculation method and used to evaluate its effectiveness. Validating the results against experimental data, it is found that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This proves that the revised PCM calculation method has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 331
7777 The Parallelization of Algorithm Based on Partition Principle for Association Rules Discovery

Authors: Khadidja Belbachir, Hafida Belbachir

Abstract:

Following the expansion of physical storage media and the ceaseless need to accumulate ever more data, sequential algorithms for association rule mining have proved ineffective. The introduction of new parallel versions is therefore imperative. We propose in this paper a parallel version of the sequential algorithm Partition. The latter is fundamentally different from the other sequential algorithms in that it scans the database only twice to generate the significant association rules. As a consequence, the parallel approach does not require much communication between the sites. The proposed approach was implemented for an experimental study. The obtained results show a great reduction in execution time compared to the sequential version and to the Count Distribution algorithm.
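
A serial sketch of the two-scan Partition idea (locally frequent itemsets per partition, then one global counting scan); the parallel version would assign partitions to sites and exchange only the candidate union.

```python
def frequent_itemsets_partition(db, minsup, n_parts=2):
    """Two-scan Partition sketch: find locally frequent itemsets in each
    partition (scan 1), union them as candidates, count globally (scan 2)."""
    def local_frequent(part, min_count):
        items = {frozenset([i]) for t in part for i in t}
        freq, k_sets = set(), items
        while k_sets:
            counts = {c: sum(c <= t for t in part) for c in k_sets}
            k_freq = {c for c, n in counts.items() if n >= min_count}
            freq |= k_freq
            k_sets = {a | b for a in k_freq for b in k_freq
                      if len(a | b) == len(a) + 1}
        return freq

    size = len(db) // n_parts
    candidates = set()
    for p in range(n_parts):                       # scan 1: per-partition
        part = db[p * size:(p + 1) * size if p < n_parts - 1 else len(db)]
        candidates |= local_frequent([frozenset(t) for t in part],
                                     max(1, int(minsup * len(part))))
    db_sets = [frozenset(t) for t in db]           # scan 2: global counts
    return {c for c in candidates
            if sum(c <= t for t in db_sets) >= minsup * len(db)}

db = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b', 'c'}]
print(frequent_itemsets_partition(db, minsup=0.5))
```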

Keywords: association rules, distributed data mining, partition, parallel algorithms

Procedia PDF Downloads 416
7776 The Weights of Distinguished sl2-Subalgebras in Dn

Authors: Yassir I. Dinar

Abstract:

We compute the weights of the adjoint action of distinguished sl2-triples in the Lie algebra of type Dn, using mathematical induction.
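
For context, an sl2-triple (e, h, f) satisfies the standard bracket relations below, and the weights in question are the eigenvalues of ad h on the ambient Lie algebra (a sketch of the general setup, not the paper's computation):

```latex
[h, e] = 2e, \qquad [h, f] = -2f, \qquad [e, f] = h,
\qquad
\mathfrak{g} = \bigoplus_{k \in \mathbb{Z}} \mathfrak{g}_k, \quad
\mathfrak{g}_k = \{\, z \in \mathfrak{g} : [h, z] = k z \,\}.
```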

Keywords: Lie algebra, root systems, representation theory, nilpotent orbits

Procedia PDF Downloads 294
7775 Biocontrol Effectiveness of Indigenous Trichoderma Species against Meloidogyne javanica and Fusarium oxysporum f. sp. radicis lycopersici on Tomato

Authors: Hajji Lobna, Chattaoui Mayssa, Regaieg Hajer, M'Hamdi-Boughalleb Naima, Rhouma Ali, Horrigue-Raouani Najet

Abstract:

In this study, three local isolates of Trichoderma (Tr1: T. viride, Tr2: T. harzianum and Tr3: T. asperellum) were isolated and evaluated for their biocontrol effectiveness under in vitro conditions and in the greenhouse. The in vitro bioassay revealed biocontrol potential against Fusarium oxysporum f. sp. radicis lycopersici (FORL) and Meloidogyne javanica (root-knot nematode, RKN) separately. All Trichoderma species exhibited biocontrol performance, and Tr1 (T. viride) was the most efficient: the growth rate inhibition of FORL reached 75.5% with Tr1, and its parasitism rate on the root-knot nematode was 60% for juveniles and 75% for eggs. The pot experiment results showed that Tr1 and Tr2, compared to the chemical treatment, enhanced plant growth and exhibited better antagonism against the root-knot nematode and the root-rot fungus, whether separate or combined. All Trichoderma isolates revealed bioprotection potential against FORL. When the pathogenic fungus was inoculated alone, the Fusarium wilt index and the vascular browning rate were reduced significantly with Tr1 (0.91, 2.38%) and Tr2 (1.5, 5.5%), respectively. In the case of combined infection with Fusarium and the nematode, the same isolates Tr1 and Tr2 decreased the Fusarium wilt index to 1.1 and 0.83 and reduced the vascular browning rate to 6.5% and 6%, respectively. Similarly, isolates Tr1 and Tr2 caused the maximum inhibition of nematode multiplication: the multiplication rate declined to 4% with both isolates, whether tomato was infected by the nematode separately or concomitantly with Fusarium. The chemical treatment was moderately active against Meloidogyne javanica and Fusarium oxysporum f. sp. radicis lycopersici, alone and combined.

Keywords: Trichoderma spp., Meloidogyne javanica, Fusarium oxysporum f. sp. radicis lycopersici, biocontrol

Procedia PDF Downloads 278
7774 Wireless Sensor Networks Optimization by Using 2-Stage Algorithm Based on Imperialist Competitive Algorithm

Authors: Hamid R. Lashgarian Azad, Seyed N. Shetab Boushehri

Abstract:

Wireless sensor networks (WSNs) have become progressively popular due to their wide range of applications. A wireless sensor network is made of numerous tiny, battery-powered sensor nodes, and maximizing the lifetime of such a network is a very significant problem. In this paper, we propose a two-stage protocol based on an imperialist competitive algorithm (2S-ICA) to solve a sensor network optimization problem. Long communication distances between the sensors and the sink greatly deplete the energy of the sensors and shorten the lifetime of the network. By connecting sensors into a series of independent clusters using 2S-ICA, we can minimize the overall communication distance considerably, thereby extending the network lifetime. Comparison of the proposed protocol with the LEACH protocol, which is commonly used to solve WSN problems, shows that our protocol performs better in terms of improving network lifetime and increasing the amount of transmitted data.
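
A small sketch of the clustering stage using k-means (per the paper's keywords), with each cluster head chosen as the member nearest its centroid; the coordinates, sink position and cluster count are illustrative, and 2S-ICA itself would optimize this assignment further.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, size=(60, 2))       # sensor coordinates (m)
sink = np.array([50.0, 120.0])
n_clusters = 6

km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(sensors)
total = 0.0
for k in range(n_clusters):
    members = sensors[km.labels_ == k]
    head = members[np.argmin(np.linalg.norm(members - km.cluster_centers_[k],
                                            axis=1))]
    total += np.linalg.norm(members - head, axis=1).sum()   # members -> head
    total += np.linalg.norm(head - sink)                    # head -> sink
print('total communication distance:', total)
```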

Keywords: wireless sensor network, imperialist competitive algorithm, LEACH protocol, k-means clustering

Procedia PDF Downloads 103
7773 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia

Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar

Abstract:

The purpose of this work is to specify the parameter values and the baseflow index (BFI) and to rank the methods that should be used for baseflow separation. Three different digital graphical approaches are chosen and compared in this study. Daily time series discharge data were collected from the site for a period of 30 years (1986 to 2015) and were used to evaluate the algorithms. In order to separate the baseflow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. Additionally, the performance of the model was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC) and baseflow indexes. The findings indicate that, in general, each strategy can be used worldwide to separate baseflow; however, the sliding interval method (SIM) performs significantly better than the other two techniques in this basin. The average baseflow index was calculated to be 0.72 using the local minimum method, 0.76 using the fixed interval method and 0.78 using the sliding interval method.
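
A minimal sketch of the sliding interval idea (HYSEP-style): baseflow on each day is taken as the minimum discharge within a window centred on that day, and the BFI is the ratio of total baseflow to total streamflow. The window width shown is illustrative; in HYSEP it is derived from the drainage area.

```python
import numpy as np

def sliding_interval_baseflow(q, width=5):
    """Baseflow on each day = minimum discharge in a window centred there."""
    half = width // 2
    qp = np.pad(q, half, mode='edge')
    return np.array([qp[i:i + width].min() for i in range(len(q))])

q = np.array([5, 6, 9, 20, 14, 8, 6, 5, 5, 7, 12, 9, 6], dtype=float)
base = sliding_interval_baseflow(q)
print('BFI =', base.sum() / q.sum())   # baseflow index
```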

Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed

Procedia PDF Downloads 79
7772 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Also, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are estimated in the EM step, improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identifications. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
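
A compact sketch of the GMM enrollment/identification core, assuming the librosa and scikit-learn libraries; the file paths are placeholders, and the paper's VAD, VQ stage and LBG initialization are omitted.

```python
import librosa
from sklearn.mixture import GaussianMixture

def train_speaker_model(wav_path, n_components=16):
    """Fit a GMM to a speaker's MFCC frames (VAD and LBG/VQ init omitted)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13
    return GaussianMixture(n_components=n_components,
                           covariance_type='diag').fit(mfcc)

def identify(wav_path, models):
    """Return the enrolled speaker whose GMM scores the test MFCCs highest."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    return max(models, key=lambda name: models[name].score(mfcc))

# models = {'spk1': train_speaker_model('spk1.wav')}   # paths are placeholders
# print(identify('unknown.wav', models))
```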

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 309
7771 Effects of Surface Roughness on a Unimorph Piezoelectric Micro-Electro-Mechanical Systems Vibrational Energy Harvester Using Finite Element Method Modeling

Authors: Jean Marriz M. Manzano, Marc D. Rosales, Magdaleno R. Vasquez Jr., Maria Theresa G. De Leon

Abstract:

This paper discusses the effects of surface roughness on a cantilever beam vibrational energy harvester. A silicon sample was fabricated using MEMS fabrication processes. When etching silicon using deep reactive ion etching (DRIE) at large etch depths, rougher surfaces are observed, associated with increased process pressure, coil power and helium backside cooling readings. To account for the effects of surface roughness on the characteristics of the cantilever beam, finite element method (FEM) modeling was performed using actual roughness data from the fabricated samples. It was found that when etching about 550 µm of silicon, the root mean square roughness parameter, Sq, varies by 1 to 3 µm (at 100 µm thickness) across a 6-inch wafer. Given this Sq variation, FEM simulations predict an 8 to 148 Hz shift in the resonant frequency while showing no significant effect on the output power. The significant shift in resonant frequency implies that surface roughness arising from fabrication processes must be carefully considered when designing energy harvesters.
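
For reference, the root mean square (areal) roughness parameter quoted above is defined, per ISO 25178, as the RMS of the surface height deviations over the evaluation area A:

```latex
S_q = \sqrt{\frac{1}{A}\iint_{A} z^{2}(x, y)\,\mathrm{d}x\,\mathrm{d}y}.
```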

Keywords: deep reactive ion etching, finite element method, microelectromechanical systems, multiphysics analysis, surface roughness, vibrational energy harvester

Procedia PDF Downloads 121
7770 Aggregate Supply Response of Some Livestock Commodities in Algeria: Cointegration- Vector Error Correction Model Approach

Authors: Amine M. Benmehaia, Amine Oulmane

Abstract:

The supply response of agricultural commodities to changes in price incentives is an important issue for the success of any policy reform in the agricultural sector. This study aims to quantify the responsiveness of producers of some livestock commodities to price incentives in the Algerian context. Time series analysis is applied to annual data over a period of 52 years (1966-2018). Both cointegration and a vector error correction model (VECM) are used through the Nerlove partial adjustment model. The study attempts to determine the long-run and short-run relationships, along with the magnitudes of disequilibria, in the selected commodities. Results show that the short-run price elasticities are low in the cow and sheep meat sectors (8.7% and 8%, respectively), with respective long-run elasticities of 16.5% and 10.5%, whereas eggs and milk have very high short-run price elasticities (82% and 90%, respectively) with long-run elasticities of 40% and 46%, respectively. The error correction coefficients, reflecting the speed of adjustment towards the long-run equilibrium, are statistically significant and have the expected negative sign; their estimates are 12.7% for cow meat, 33.5% for sheep meat, 46.7% for eggs and 8.4% for milk. It seems that cow meat and milk producers correct only a weak share, about 12.7% and 8.4% respectively, of the previous year's disequilibrium from the long-run relationship, whereas sheep meat and egg producers adjust to correct the long-run disequilibrium at a high speed (33.5% and 46.7%, respectively). The implication is that much more in-depth research is needed to identify the factors that affect agricultural supply and to describe the effect of factors that shift supply in response to price incentives. This could provide valuable information for the government in applying appropriate policy measures.
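
A minimal statsmodels sketch of the cointegration test plus VECM fit described above; the toy series stand in for the commodity output and producer price data, and the lag order and deterministic terms are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Toy stand-ins for log output and log producer price of one commodity
rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0.02, 0.1, 52))      # I(1) log price
output = 0.4 * price + rng.normal(0, 0.05, 52)    # cointegrated log output
df = pd.DataFrame({'ln_q': output, 'ln_p': price})

print(coint_johansen(df, det_order=0, k_ar_diff=1).lr1)   # Johansen trace stats

res = VECM(df, k_ar_diff=1, coint_rank=1, deterministic='co').fit()
print(res.alpha)   # speed-of-adjustment (error correction) coefficients
print(res.beta)    # long-run cointegrating vector
```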

Keywords: Algeria, cointegration, livestock, supply response, vector error correction model

Procedia PDF Downloads 141