Search results for: likelihood ratio least square lattice adaptive filters.
1073 Intelligent Temperature Controller for Water-Bath System
Authors: Om Prakash Verma, Rajesh Singla, Rajesh Kumar
Abstract:
Conventional controllers usually require prior knowledge of a mathematical model of the process. Inaccuracy in the mathematical model degrades the performance of the process, especially for non-linear and complex control problems. The process used here is the water-bath system, which is very widely used and non-linear to some extent. For the water-bath system it is necessary to attain the desired temperature within a specified period of time, avoiding overshoot and absolute error and providing good temperature tracking capability; otherwise the process is disturbed.
To overcome the above difficulties, two intelligent controllers, Fuzzy Logic (FL) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are proposed in this paper. The fuzzy controller is designed to work with knowledge in the form of linguistic control rules. However, the translation of these linguistic rules into the framework of fuzzy set theory depends on the choice of certain parameters, for which no formal method is known. To design the ANFIS, a fuzzy inference system is combined with the learning capability of a neural network.
The analysis shows that ANFIS is best suited for adaptive temperature control of the above system. Compared with PID and FLC, ANFIS produces a stable control signal and has much better temperature tracking capability, with almost zero overshoot and minimum absolute error.
Keywords: PID Controller, FLC, ANFIS, Non-Linear Control System, Water-Bath System, MATLAB-7.
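As a concrete illustration of the Sugeno-type rule structure that an ANFIS of the kind described above tunes, the following Python sketch evaluates a zero-order Sugeno fuzzy controller over the temperature error and its rate of change. The Gaussian membership centres, widths and rule consequents are illustrative assumptions, not the controller parameters reported in the paper.

import numpy as np

def gauss(x, c, s):
    # Gaussian membership function with centre c and width s
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def fuzzy_control(error, d_error):
    # Three assumed sets per input: negative, zero, positive
    e_sets = [gauss(error, c, 5.0) for c in (-10.0, 0.0, 10.0)]
    de_sets = [gauss(d_error, c, 1.0) for c in (-2.0, 0.0, 2.0)]
    # Assumed constant rule consequents (e.g. heater duty in %)
    consequents = np.array([[0, 10, 30],
                            [20, 50, 80],
                            [70, 90, 100]], dtype=float)
    # Firing strength of each rule = product of the two memberships
    w = np.array([[e * de for de in de_sets] for e in e_sets])
    # Weighted-average defuzzification, as in a zero-order Sugeno system
    return float((w * consequents).sum() / w.sum())

# Example: bath is 8 degC below the set-point and still cooling slightly
print(fuzzy_control(error=8.0, d_error=-0.5))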
1072 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas
Authors: Ahmet Kayabasi, Ali Akdagli
Abstract:
In this study, three robust predicting methods, namely artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™ based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built from the simulation data. 124 simulated ACMAs were utilized for training and the remaining 20 ACMAs were used for testing the ANN, ANFIS and SVM models. The performance of the ANN, ANFIS and SVM models is compared for the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training the ANN, ANFIS and SVM were obtained as 0.457%, 0.399% and 0.600%, respectively. The constructed models were then tested, and APE values of 0.601% for ANN, 0.744% for ANFIS and 0.623% for SVM were achieved. The results obtained here show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
Keywords: A-shaped compact microstrip antenna, Artificial Neural Network (ANN), Adaptive Neuro-Fuzzy Inference System (ANFIS), Support Vector Machine (SVM).
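A minimal Python sketch of the regression-plus-APE evaluation workflow is given below. Synthetic antenna-like data and a toy resonance formula stand in for the IE3D simulation set, and the SVR hyper-parameters are assumptions, so this only mirrors the train/test split and the APE metric, not the paper's models.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Assumed ranges for three inputs (e.g. two dimensions in mm and a permittivity)
X = rng.uniform([10.0, 5.0, 2.2], [40.0, 20.0, 4.4], size=(144, 3))
f_res = 3e2 / (2 * X[:, 0] * np.sqrt(X[:, 2]))      # toy resonance model, not MoM output

X_train, y_train = X[:124], f_res[:124]             # 124 for training, 20 for testing
X_test, y_test = X[124:], f_res[124:]

model = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, y_train)

def ape(y_true, y_pred):
    # Average percentage error, the comparison metric used in the abstract
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

print("train APE %.3f%%" % ape(y_train, model.predict(X_train)))
print("test  APE %.3f%%" % ape(y_test, model.predict(X_test)))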
1071 Phosphine Mortality Estimation for Simulation of Controlling Pest of Stored Grain: Lesser Grain Borer (Rhyzopertha dominica)
Authors: Mingren Shi, Michael Renton
Abstract:
There is a world-wide need for the development of sustainable management strategies to control pest infestation and the development of phosphine (PH3) resistance in the lesser grain borer (Rhyzopertha dominica). Computer simulation models can provide a relatively fast, safe and inexpensive way to weigh the merits of various management options. However, the usefulness of simulation models relies on the accurate estimation of important model parameters, such as mortality. Concentration and time of exposure are both important in determining mortality in response to a toxic agent. Recent research indicated the existence of two resistance phenotypes in R. dominica in Australia, weak and strong, and revealed that the presence of resistance alleles at two loci confers strong resistance, thus motivating the construction of a two-locus model of resistance. Experimental data sets on purified pest strains, each corresponding to a single genotype of our two-locus model, were also available. Hence it became possible to explicitly include the mortalities of the different genotypes in the model. In this paper we describe how we used two generalized linear models (GLM), probit and logistic, to fit the available experimental data sets. We used a direct algebraic approach, the generalized inverse matrix technique, rather than the traditional maximum likelihood estimation, to estimate the model parameters. The results show that both the probit and logistic models fit the data sets well, but the former is much better in terms of small least squares (numerical) errors. Meanwhile, the generalized inverse matrix technique achieved accuracy similar to that of maximum likelihood estimation, but is less time consuming and less computationally demanding.
Keywords: mortality estimation, probit models, logistic model, generalized inverse matrix approach, pest control simulation
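The following Python sketch illustrates the kind of direct algebraic (generalized inverse) fit described above: observed mortality proportions are probit-transformed and the regression coefficients are obtained with a Moore-Penrose pseudo-inverse instead of iterative maximum likelihood. The concentration, exposure-time and mortality values are invented for illustration, not the R. dominica data.

import numpy as np
from scipy.stats import norm

conc = np.array([0.01, 0.02, 0.05, 0.10, 0.20])      # mg/L phosphine (assumed)
time = np.array([144.0, 120.0, 96.0, 72.0, 48.0])    # hours of exposure (assumed)
p = np.array([0.08, 0.22, 0.55, 0.81, 0.96])         # observed mortality proportions (assumed)

y = norm.ppf(p)                                      # probit transform of mortality
X = np.column_stack([np.ones_like(conc), np.log10(conc), np.log10(time)])

beta = np.linalg.pinv(X) @ y                         # generalized inverse, no iterative MLE
print("coefficients:", np.round(beta, 3))
print("fitted mortality:", np.round(norm.cdf(X @ beta), 3))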
1070 A Dynamic Composition of an Adaptive Course
Authors: S. Chiali, Z. Eberrichi, M. Malki
Abstract:
The number of frameworks conceived for e-learning is constantly increasing. Unfortunately, the creators of learning materials and the educational institutions engaged in e-learning adopt a "proprietary" approach, where the developed products (courses, activities, exercises, etc.) can be exploited only in the framework where they were conceived; their use in other learning environments requires an adaptation that is greedy in terms of time and effort. Each one proposes courses whose organization, contents, modes of interaction and presentation are the same for all learners; unfortunately, learners are heterogeneous and are not interested in the same information, but only in services or documents adapted to their needs. The current tendency for e-learning frameworks is the interoperability of learning materials. Several standards exist (DCMI (Dublin Core Metadata Initiative) [2], LOM (Learning Object Metadata) [1], SCORM (Shareable Content Object Reference Model) [6][7][8], ARIADNE (Alliance of Remote Instructional Authoring and Distribution Networks for Europe) [9], CanCore (Canadian Core Learning Resource Metadata Application Profiles) [3]); they all converge on the idea of learning objects and are also interested in the adaptation of learning materials according to the learner's profile. This article proposes an approach for the composition of courses adapted to the various profiles (knowledge, preferences, objectives) of learners, based on two ontologies (the domain to teach and an educational ontology) and on learning objects.
Keywords: Adaptive educational hypermedia systems (AEHS), E-learning, Learner's model, Learning objects, Metadata, Ontology.
1069 Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations
Authors: Nur Zakiah Mohd Saat, Abdul Aziz Jemain
Abstract:
The saddlepoint approximation is one of the tools used to obtain expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation, and we demonstrate the density of a maximum likelihood estimator in exponential families.
Keywords: Exponential, maximum likelihood estimators, observed gap, saddlepoint approximations.
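For reference, the generic saddlepoint density approximation underlying this approach can be written in LaTeX as follows, where K is the cumulant generating function of the quantity of interest. This is only the standard generic form; the paper applies the Huzurbazar variant for MLEs in exponential families.

\hat{f}(x) \;=\; \bigl(2\pi\, K''(\hat{s})\bigr)^{-1/2} \exp\!\bigl\{ K(\hat{s}) - \hat{s}\,x \bigr\},
\qquad \text{where } \hat{s} \text{ solves } K'(\hat{s}) = x .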
1068 Construction of Space-Filling Designs for Three Input Variables Computer Experiments
Authors: Kazeem A. Osuolale, Waheed B. Yahya, Babatunde L. Adeleke
Abstract:
Latin hypercube designs (LHDs) have been applied in many computer experiments among the space-filling designs found in the literature. An LHD can be randomly generated, but a randomly chosen LHD may have bad properties and thus perform poorly in estimation and prediction. There is a connection between Latin squares and orthogonal arrays (OAs). A Latin square of order s involves an arrangement of s symbols in s rows and s columns, such that every symbol occurs once in each row and once in each column, and such a square exists for every non-negative integer s. In this paper, a computer program was written to construct orthogonal array-based Latin hypercube designs (OA-LHDs). Orthogonal arrays (OAs) were constructed from a Latin square of order s, and the OAs constructed were afterwards used to construct the desired Latin hypercube designs for three input variables for use in computer experiments. The LHDs constructed have better space-filling properties and can be used in computer experiments that involve only three input factors. The MATLAB 2012a package (www.mathworks.com/) was used for the development of the program that constructs the designs.
Keywords: Computer Experiments, Latin Squares, Latin Hypercube Designs, Orthogonal Array, Space-filling Designs.
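For contrast with the OA-based construction, the sketch below generates the plain randomly chosen LHD that the abstract warns may have poor space-filling properties; it does not implement the orthogonal-array-based method of the paper, and the run count is an arbitrary example.

import numpy as np

def random_lhd(n_runs, n_vars, seed=None):
    # Plain random Latin hypercube design on [0, 1)^n_vars:
    # one independent permutation of the n_runs strata per input variable,
    # then a uniform jitter inside each stratum.
    rng = np.random.default_rng(seed)
    perms = np.column_stack([rng.permutation(n_runs) for _ in range(n_vars)])
    return (perms + rng.uniform(size=(n_runs, n_vars))) / n_runs

# Example: a 9-run design in three input variables
print(np.round(random_lhd(9, 3, seed=0), 3))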
1067 MARTI and MRSD: Newly Developed Isolation-Damping Devices with Adaptive Hardening for Seismic Protection of Structures
Authors: Murat Dicleli, Ali Salem Milani
Abstract:
In this paper, a summary of analytical and experimental studies into the behavior of a new hysteretic damper designed for seismic protection of structures is presented. The Multidirectional Torsional Hysteretic Damper (MRSD) is a patented invention in which a symmetrical arrangement of identical cylindrical steel cores is configured so as to yield in torsion while the structure experiences planar movements due to earthquake shaking. The new device has certain desirable properties. Notably, it is characterized by a variable and controllable-via-design post-elastic stiffness. This property is a result of the MRSD's kinematic configuration, which produces geometric hardening rather than a secondary large-displacement effect. Additionally, the new system is capable of reaching high force and displacement capacities, shows high levels of damping and a very stable cyclic response. The device has gone through many stages of design refinement, multiple prototype verification tests and the development of design guidelines and computer codes to facilitate its implementation in practice. The practicality of the new device, as an offspring of academia, is assured through extensive collaboration with industry in its final design stages, prototyping and verification test programs.
Keywords: Seismic, isolation, damper, adaptive stiffness.
1066 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Share Channel Receiver under Fading Channel
Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis
Abstract:
Cellular Vehicle to Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step to achieve reliable and stable communication links, this paper proposes a complete framework of a link-level simulator based on the 3GPP specifications for the Physical Sidelink Share Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
Keywords: C-V2X, 5G New Radio, Physical Sidelink Share Channel, MMSE equalization, link-level simulator.
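The core equalization step named in the abstract can be illustrated with the per-subcarrier MMSE rule below. The Rayleigh channel, noise level and QPSK payload are synthetic stand-ins, not the 3GPP PSSCH chain implemented in the paper's simulator.

import numpy as np

rng = np.random.default_rng(1)
n_sc = 12                                              # subcarriers in this toy example
# QPSK symbols with unit average power
x = (2 * rng.integers(0, 2, n_sc) - 1 + 1j * (2 * rng.integers(0, 2, n_sc) - 1)) / np.sqrt(2)
h = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)   # flat Rayleigh fading
sigma2 = 0.05                                          # assumed noise variance
noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))
y = h * x + noise                                      # received symbols

# MMSE equalization per subcarrier, assuming unit signal power:
# x_hat = conj(h) * y / (|h|^2 + sigma^2)
x_hat = np.conj(h) * y / (np.abs(h) ** 2 + sigma2)
print(np.round(x_hat[:4], 3), np.round(x[:4], 3))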
1065 Climate Adaptive Building Shells for Plus-Energy-Buildings, Designed on Bionic Principles
Authors: Andreas Hammer
Abstract:
Six distinctive architectural designs from Frankfurt University are discussed in this paper, showing the future potential of adaptable facades fitted with solar thin-film sheets that act and react to the climatic and solar changes of their specific sites. The different aspects, as well as the limitations arising from technical and functional restrictions, are named. The design process for a "multi-purpose building", a "high-rise building refurbishment" and a "biker's lodge" in the Rhine valley has been critically outlined and developed step by step, from an international student project towards an overall energy strategy that firstly had to push the design to a plus-energy building and secondly had to incorporate bionic aspects into the design of the building skins. Both main parameters needed to be reviewed and refined during the whole design process. Various basic bionic approaches were given (e.g. Solar Ivy™, Flectofin™ or HygroSkin™) to experiment with, regarding the use of bendable photovoltaic thin-film elements as parts of a hybrid, kinetic façade system.
Keywords: Energy strategy, photovoltaics in building skins, bionic and bioclimatic design, plus-energy buildings, solar gain, the harvesting façade, sustainable building concept, high-efficiency building skin, Climate Adaptive Building Shells (CABS).
1064 Flexible Wormhole-Switched Network-on-chip with Two-Level Priority Data Delivery Service
Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner
Abstract:
A synchronous network-on-chip using wormhole packet switching and supporting a guaranteed-completion best-effort service with low-priority (LP) and high-priority (HP) wormhole packet delivery is presented in this paper. Both the proposed LP and HP message services deliver a good quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. The HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum injection rate. Hence, the HP service is suitable for small messages (less than a hundred bytes); otherwise the other HP and LP messages, which also require the links, will experience relatively high latency depending on the size of the HP message. The LP packets are routed using a minimal adaptive routing algorithm, while the HP packets are routed using a non-minimal adaptive routing algorithm. Therefore, an additional 3-bit field identifying the packet type is introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
Keywords: Network-on-Chip, Parallel Pipeline Router Architecture, Wormhole Switching, Two-Level Priority Service.
1063 Assessment of Time-Lapse in Visible and Thermal Face Recognition
Authors: Sajad Farokhi, Siti Mariyam Shamsuddin, Jan Flusser, Usman Ullah Sheikh
Abstract:
Although face recognition seems an easy task for humans, automatic face recognition is a much more challenging task due to variations in time, illumination and pose. In this paper, the influence of time-lapse on visible and thermal images is examined. Orthogonal moment invariants are used as a feature extractor to analyze the effect of time-lapse on thermal and visible images, and the results are compared with conventional Principal Component Analysis (PCA). A new triangle square ratio criterion is employed instead of the Euclidean distance to enhance the performance of the nearest neighbor classifier. The results of this study indicate that the ideal feature vectors can be represented with high discrimination power due to the global characteristic of orthogonal moment invariants. Moreover, the proposed approach reduces the effect of time-lapse and enhances the accuracy of face recognition considerably in comparison with PCA. Furthermore, our experimental results based on moment invariants and the triangle square ratio criterion show that the proposed approach achieves on average a 13.6% higher recognition rate than PCA.
Keywords: Infrared face recognition, time-lapse, Zernike moment invariants.
1062 Inheritance of Primary Yield Component Traits of Common Beans (Phaseolus vulgaris L.): Number of Seeds per Pod and 1000 Seed Weight in an 8X8 Diallel Cross Population
Authors: Atnaf Tiruneh Mulugeta, Mohammed Ali Hussein, Zelleke Habtamu
Abstract:
Thirty-six genotypes (8 parents and 28 F1 diallel crosses) were grown in a randomized complete block design during 2006 at Mandura, north-western Ethiopia. The experiment was executed to study the inheritance of two primary yield component traits: number of seeds per pod and 1000 seed weight. Statistically significant differences were observed between genotypes, parents and crosses for these traits. The mean square due to GCA was significant for the two traits. However, the SCA mean square was significant only for number of seeds per pod. Thus both additive and non-additive types of gene action were important in the inheritance of number of seeds per pod. A significant b1 component was obtained for this trait. The b2 and b3 components, however, were not significant, suggesting the absence of gene asymmetry. From the Wr/Vr graph, the inheritance of seeds per pod was governed by partial dominance with additive gene action.
Keywords: Diallel crosses, General combining ability, Phaseolus vulgaris L., Specific combining ability
1061 The Impact of Upgrades on ERP System Reliability
Authors: F. Urem, K. Fertalj, I. Livaja
Abstract:
Constant upgrading of Enterprise Resource Planning (ERP) systems is necessary, but it can cause new defects. This paper attempts to model the likelihood of defects after completed upgrades with a Weibull defect probability density function (PDF). A case study is presented analyzing data on recorded defects obtained for one ERP subsystem. The trends in the values of the parameters of the proposed Weibull distribution are observed over a given one-year period. As a result, the ability to predict the appearance of defects after the next upgrade is described.
Keywords: ERP, upgrade, reliability, Weibull model.
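A short Python sketch of the kind of Weibull fit the abstract describes is given below. The inter-defect times are invented placeholders, not the ERP subsystem records analysed in the paper, and the fixed-location assumption is ours.

import numpy as np
from scipy.stats import weibull_min

# Assumed days between recorded defects following upgrades (placeholder data)
days_between_defects = np.array([3, 7, 2, 15, 9, 4, 21, 6, 11, 8, 5, 13], dtype=float)

# Fit a two-parameter Weibull by fixing the location at zero
shape, loc, scale = weibull_min.fit(days_between_defects, floc=0)
print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.2f}")

# Probability of observing a defect within 7 days under this fitted model
print("P(defect within 7 days) =", round(weibull_min.cdf(7, shape, loc=0, scale=scale), 3))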
1060 Stability Enhancement of a Large-Scale Power System Using Power System Stabilizer Based on Adaptive Neuro Fuzzy Inference System
Authors: Agung Budi Muljono, I Made Ginarsa, I Made Ari Nrartha
Abstract:
A large-scale power system (LSPS) consists of two or more sub-systems connected by interconnecting transmission lines. The loading pattern of an LSPS always changes from time to time and varies depending on consumer needs. A serious instability problem appears in an LSPS due to load fluctuations at all of the buses. An adaptive neuro-fuzzy inference system (ANFIS)-based power system stabilizer (PSS) is presented to address this problem and to enhance the stability of an LSPS. The ANFIS control is presented because it is more effective than Mamdani fuzzy control in terms of computation. Simulation results show that the presented PSS is able to maintain stability by decreasing the peak overshoot to −2.56 × 10^−5 pu for the rotor speed deviation Δω2−3. The presented PSS also achieves a settling time of 3.78 s for the local mode of oscillation. Furthermore, the presented PSS improves the peak overshoot and settling time of Δω3−9 to −0.868 × 10^−5 pu and 3.50 s for the inter-area oscillation.
Keywords: ANFIS, large-scale, power system, PSS, stability enhancement.
1059 Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution
Authors: P. Joyce Beryl Princess, Y. Harold Robinson
Abstract:
Image processing is a form of signal processing for which the input is an image and the output is either an image or parameters of the image. Resolution has frequently been referred to as an important aspect of an image, and in image resolution enhancement, images are processed in order to obtain an enhanced resolution. The goal is to generate a high-resolution image with a high PSNR value from a low-resolution input image. The Stationary Wavelet Transform (SWT) is used for edge detection and to minimize the loss that occurs during downsampling, since downsampling in each of the DWT sub-bands causes information loss in the respective sub-bands. The Inverse Discrete Wavelet Transform (IDWT) is used to convert the object that was downsampled using the DWT into a high-resolution object, so that a high-quality, high-resolution output is generated from the low-resolution input. A noisy input would generate an output with a low PSNR value, so a noise-robust resolution enhancement technique based on adaptive sub-band thresholding is used. The combined image denoising and resolution enhancement techniques generate images with high PSNR values. The proposed method improves image resolution and reaches the optimized threshold.
Keywords: Image Processing, Inverse Discrete Wavelet Transform, PSNR.
1058 Triple-input Single-output Voltage-mode Multifunction Filter Using Only Two Current Conveyors
Authors: Mehmet Sagbas, Kemal Fidanboylu, M. Can Bayram
Abstract:
A new voltage-mode triple-input single-output multifunction filter using only two current conveyors is presented. The proposed filter, which possesses three inputs and a single output, can generate all biquadratic filtering functions at the output terminal by selecting different input signal combinations. The validity of the proposed filter is verified through PSPICE simulations.
Keywords: Active filters, voltage mode, current conveyor.
1057 Using Reuse Water for Irrigation Green space of Naein City
Authors: Nasri M., Soleimani A.
Abstract:
Since the water resources of the desert city of Naein are very limited, an approach which saves water resources and meanwhile meets the water needs of the green space is to use the city's sewage wastewater. Proper treatment of Naein's sewage up to the standards required for green space use may solve some of the problems of green space development in the city. The present paper closely examines the available statistics and information associated with the city's sewage system, and determines the complementary stages of the city's sewage treatment facilities. The population, per capita water use, and required discharge for various green space areas including different plants are calculated. Moreover, in order to facilitate the application of water resources, a crude-water distribution network separate from the drinking water distribution network is designed, and a plan for mixing the municipal wells' water with sewage wastewater in proposed mixing tanks is suggested. Hence, following the green space irrigation reform and the complementary plan, the per capita green space of the city will increase from the current 13.2 square meters to 32 square meters.
Keywords: Sewage Treatment Facility, Wastewater, Greenspace, Distribution Network, Naein City.
1056 Determine of Constant Coefficients to Relate Total Dissolved Solids to Electrical Conductivity
Authors: M. Siosemarde, F. Kave, E. Pazira, H. Sedghi, S. J. Ghaderi
Abstract:
Salinity is a measure of the amount of salts in water. Total Dissolved Solids (TDS), as a salinity parameter, is often determined using laborious and time-consuming laboratory tests, but it may be more appropriate and economical to develop a method which uses a simpler soil salinity index. Because dissolved ions increase salinity as well as conductivity, the two measures are related. The aim of this research was to determine constant coefficients for predicting Total Dissolved Solids (TDS) from Electrical Conductivity (EC), evaluated with the statistics correlation coefficient, root mean square error, maximum error, mean bias error, mean absolute error, relative error and coefficient of residual mass. For this purpose, two experimental areas (S1, S2) of Khuzestan province, Iran, were selected, and four treatments with three replications were applied using series of double rings. The treatments were 25 cm, 50 cm, 75 cm and 100 cm of water application. The results showed that the values 16.3 and 12.4 were the best constant coefficients for predicting Total Dissolved Solids (TDS) from EC in pilots S1 and S2, with correlation coefficients of 0.977 and 0.997 and root mean square errors (RMSE) of 191.1 and 106.1, respectively.
Keywords: constant coefficients, electrical conductivity, Khuzestan plain, total dissolved solids.
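The estimation of a single constant k in TDS = k · EC, together with the RMSE and correlation statistics quoted above, can be sketched as follows. The EC/TDS pairs are fabricated examples, not the Khuzestan field measurements, and the units are assumptions.

import numpy as np

ec = np.array([1.2, 2.5, 4.0, 6.3, 8.1, 10.4])           # EC values (assumed units)
tds = np.array([19.0, 41.0, 66.0, 101.0, 133.0, 170.0])  # TDS values (assumed units)

k = float(ec @ tds / (ec @ ec))                          # least-squares slope through the origin
pred = k * ec
rmse = float(np.sqrt(np.mean((tds - pred) ** 2)))        # root mean square error
r = float(np.corrcoef(tds, pred)[0, 1])                  # correlation coefficient
print(f"k = {k:.2f}, RMSE = {rmse:.2f}, r = {r:.3f}")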
1055 Self-Adaptive Differential Evolution Based Power Economic Dispatch of Generators with Valve-Point Effects and Multiple Fuel Options
Authors: R.Balamurugan, S.Subramanian
Abstract:
This paper presents the solution of the power economic dispatch (PED) problem for generating units with valve-point effects and multiple fuel options using the Self-Adaptive Differential Evolution (SDE) algorithm. Obtaining the global optimal solution by mathematical approaches becomes difficult for the realistic PED problem in power systems. The Differential Evolution (DE) algorithm is found to be a powerful evolutionary algorithm for global optimization in many real problems. In this paper, the key control parameters of the DE algorithm, the crossover constant CR and the weight F applied to the random differential, are self-adapted. The PED problem formulation takes into consideration the non-smooth fuel cost function due to valve-point effects and the multiple fuel options of the generators. The proposed approach has been examined and tested on PED problems with thirteen generating units including valve-point effects, ten generating units with multiple fuel options neglecting valve-point effects, and ten generating units including both valve-point effects and multiple fuel options. The test results are promising and show the effectiveness of the proposed approach for solving PED problems.
Keywords: Multiple fuels, power economic dispatch, self-adaptive differential evolution, valve-point effects.
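A minimal Python sketch of differential evolution with self-adapted F and CR (a jDE-style resampling rule) is shown below on a toy non-smooth cost that mimics a valve-point ripple. The cost function, generator limits, adaptation probabilities and the omission of the power-balance constraint are all simplifying assumptions, not the SDE formulation of the paper.

import numpy as np

rng = np.random.default_rng(0)

def cost(p):
    # Toy "valve-point"-like cost: quadratic term plus a rectified sinusoidal ripple
    return np.sum(0.1 * p**2 + 2.0 * p + 5.0 * np.abs(np.sin(0.5 * p)))

dim, npop, gens = 6, 30, 200
lo, hi = 10.0, 100.0                                  # assumed generator limits (MW)
pop = rng.uniform(lo, hi, (npop, dim))
F = np.full(npop, 0.5)
CR = np.full(npop, 0.9)
fit = np.array([cost(x) for x in pop])

for _ in range(gens):
    for i in range(npop):
        # Self-adaptation: occasionally resample this individual's F and CR
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        a, b, c = rng.choice([j for j in range(npop) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + Fi * (pop[b] - pop[c]), lo, hi)   # DE/rand/1 mutation
        cross = rng.random(dim) < CRi                               # binomial crossover
        cross[rng.integers(dim)] = True                             # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = cost(trial)
        if f_trial <= fit[i]:                                       # greedy selection
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi

print("best cost:", round(float(fit.min()), 3))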
1054 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior predictions of runoff using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, and therefore the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.
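The ABC rejection idea can be sketched in a few lines of Python. A toy linear-reservoir runoff model stands in for the time-area model, the priors, synthetic rainfall and "observed" flows are invented, and keeping the closest 1% of draws is a common ABC variant rather than the paper's exact implementation (which was written in R).

import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(0.4, 2.0, 60)                           # synthetic rainfall series

def runoff(rain, initial_loss, reduction, k):
    eff = np.clip(rain - initial_loss, 0, None) * reduction
    q, out = 0.0, []
    for r in eff:                                        # simple linear-reservoir routing
        q = q + (r - q) / k
        out.append(q)
    return np.array(out)

obs = runoff(rain, 1.0, 0.6, 4.0) + rng.normal(0, 0.02, 60)   # pseudo-observed flows

draws, dists = [], []
for _ in range(5000):
    theta = (rng.uniform(0, 3), rng.uniform(0.1, 1.0), rng.uniform(1, 10))  # assumed priors
    sim = runoff(rain, *theta)
    draws.append(theta)
    dists.append(np.sqrt(np.mean((sim - obs) ** 2)))     # distance between simulation and data

keep = np.argsort(dists)[:50]                            # retain the 1% closest simulations
post = np.array(draws)[keep]                             # approximate posterior sample
print("posterior means (initial loss, reduction, k):", post.mean(axis=0).round(2))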
1053 Security Risk Analysis Based on the Policy Formalization and the Modeling of Big Systems
Authors: Luc Cessieux, French Navy, Adrien Derock, DCNS/IMATH
Abstract:
Security risk models have been successful in estimating the likelihood of attack for simple security threats. However, modeling complex systems and their security risk remains a challenge. Many methods have been proposed to face this problem, but, often difficult to manipulate and not comprehensive enough, they are not as popular as they should be with administrators and decision makers. We propose in this paper a new tool designed to model big systems. The software takes into account attack threats and security strength.
Keywords: Security, risk management, threat, modelization.
1052 Modelling and Control of Milk Fermentation Process in Biochemical Reactor
Authors: Jožef Ritonja
Abstract:
The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and the knowledge of how to control it are essential for the effective use of bioreactors to produce products of high quality and in sufficient quantity. The development of the control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for the development of the control system. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure an optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An improved adaptive control system was therefore proposed to improve the dynamics of the fermentation; the use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested in the production of dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities that are significant for the quality of the fermentation process and gives important information about the process. The obtained results showed that the designed control system provides a minimum error between the reference and actual values of the carbon dioxide concentration during the transient response and in steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
Keywords: Bioprocess engineering, biochemical reactor, fermentation process, modeling, adaptive control.
1051 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later developed into the more complex concept of structural risk minimization (SRM). SVM is playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of and insight into the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected SAR images have a very low MSE and are of good quality, and that the quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: Least Square-Support Vector Machine, Synthetic Aperture Radar, Partially Developed Speckle, Multi-Look Model.
1050 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effects of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators, such as the loess, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.
1049 Distribution Sampling of Vector Variance without Duplications
Authors: Erna T. Herdiani, Maman A. Djauhari
Abstract:
In recent years, the use of vector variance as a measure of multivariate variability has received much attention in a wide range of statistics. This paper deals with a more economical measure of multivariate variability, defined as the vector variance minus all duplication elements. For high-dimensional data, this increases the computational efficiency by almost 50% compared to the original vector variance. Its sampling distribution is investigated to make its application possible.
Keywords: Asymptotic distribution, covariance matrix, likelihood ratio test, vector variance.
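A small Python sketch of the computation implied above is given below, assuming that "duplication elements" refers to the repeated off-diagonal entries of the symmetric covariance matrix; the random data are placeholders and this interpretation is ours, not a statement of the paper's exact definition.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # n observations, p variables (placeholder data)
S = np.cov(X, rowvar=False)                   # sample covariance matrix

vector_variance = np.sum(S ** 2)              # equals Tr(S @ S) for a symmetric S
no_duplications = np.sum(np.triu(S) ** 2)     # count each off-diagonal entry only once

print(round(vector_variance, 4), round(no_duplications, 4))
print("entries used:", S.size, "vs", S.shape[0] * (S.shape[0] + 1) // 2)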
1048 Optimal Simultaneous Sizing and Siting of DGs and Smart Meters Considering Voltage Profile Improvement in Active Distribution Networks
Authors: T. Sattarpour, D. Nazarpour
Abstract:
This paper investigates the effect of the simultaneous placement of DGs and smart meters (SMs) on voltage profile improvement in active distribution networks (ADNs). Considerable attention has recently been paid to responsive loads in power system studies, alongside distributed generations (DGs). The existence of responsive loads in active distribution networks (ADNs) has an undeniable effect on the sizing and siting of DGs. For this reason, an optimal framework is proposed for the sizing and siting of DGs and SMs in ADNs. SMs are taken into consideration for the sake of successfully implementing demand response programs (DRPs), such as direct load control (DLC), with end-side consumers. Aiming at voltage profile improvement, the optimization procedure is solved by a genetic algorithm (GA) and tested on the IEEE 33-bus distribution test system. Different scenarios with variations in the number of DG units, individual or simultaneous placement of DGs and SMs, and an adaptive power factor (APF) mode for DGs to support reactive power have been established. The obtained results confirm the significant effect of DRPs and the APF mode in determining the optimal size and site of DGs to be connected in the ADN, resulting in the improvement of the voltage profile as well.
Keywords: Active distribution network (ADN), distributed generations (DGs), smart meters (SMs), demand response programs (DRPs), adaptive power factor (APF).
1047 Investigation of Adaptable Winglets for Improved UAV Control and Performance
Abstract:
An investigation of adaptable winglets for morphing aircraft control and performance is described in this paper. The concepts investigated consist of various winglet configurations fundamentally centred on a baseline swept wing. The impetus for the work was to identify and optimize winglets to enhance the controllability and the aerodynamic efficiency of a small unmanned aerial vehicle. All computations were performed with Athena Vortex Lattice modelling, with varying degrees of twist, sweep, and dihedral angle considered. The results from this work indicate that if adaptable winglets were employed on small-scale UAVs, improvements in both aircraft control and performance could be achieved.
Keywords: Aircraft, rolling, wing, winglet.
1046 Least-Squares Support Vector Machine for Characterization of Clusters of Microcalcifications
Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha
Abstract:
Clusters of Microcalcifications (MCCs) are the most frequent symptoms of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Squares Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made, using the confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database. A total of 380 suspicious areas, containing 235 malignant and 145 benign samples, are collected from mammogram images of the DDSM database. A set of 50 features is calculated for each suspicious area, after which an optimal subset of the 23 most suitable features is selected from the 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
Keywords: Clusters of Microcalcifications, Ductal Carcinoma in Situ, Least-Square Support Vector Machine, Particle Swarm Optimization.
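The training step of a binary LS-SVM classifier reduces to solving a single linear system (the Suykens formulation), which the Python sketch below illustrates with an RBF kernel. The 2-D toy data and the gamma and sigma hyper-parameters are assumptions for illustration, not the 23 mammographic features or settings of the paper.

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])  # toy two-class data
y = np.hstack([-np.ones(40), np.ones(40)])
gamma, sigma = 10.0, 1.0                                  # assumed regularization and kernel width

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

n = len(y)
Omega = np.outer(y, y) * rbf(X, X)
A = np.zeros((n + 1, n + 1))                              # KKT system [[0, y^T], [y, Omega + I/gamma]]
A[0, 1:], A[1:, 0] = y, y
A[1:, 1:] = Omega + np.eye(n) / gamma
rhs = np.hstack([0.0, np.ones(n)])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(Xnew):
    # Decision function: sign(sum_k alpha_k y_k K(x, x_k) + b)
    return np.sign(rbf(Xnew, X) @ (alpha * y) + b)

print("training accuracy:", np.mean(predict(X) == y))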
1045 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; then, segmentation on the hue, saturation and value is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon resulted in the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: Agricultural engineering, computer vision, image processing, flower detection.
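The HSV segmentation step can be sketched in Python with OpenCV as below. OpenCV stores hue on a 0-179 scale, so the 0.12-0.18 hue range quoted above maps to roughly 21-32; the saturation/value bounds, the minimum blob area and the synthetic test frame are assumptions standing in for the paper's pipeline and greenhouse images.

import cv2
import numpy as np

# Synthetic stand-in frame: dark background with one yellow patch acting as a "flower"
img = np.zeros((120, 160, 3), dtype=np.uint8)
img[40:60, 70:90] = (0, 220, 230)                      # BGR yellow patch

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([21, 80, 80], dtype=np.uint8)         # hue ~0.12 on a 0-1 scale
upper = np.array([32, 255, 255], dtype=np.uint8)       # hue ~0.18 on a 0-1 scale
mask = cv2.inRange(hsv, lower, upper)

# Morphological opening removes small speckles before counting candidate blobs
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# [-2] keeps this compatible with both OpenCV 3.x and 4.x return signatures
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
flowers = [c for c in contours if cv2.contourArea(c) > 50]
print("candidate flowers detected:", len(flowers))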
1044 Wear and Mechanical Properties of Nodular Iron Modified with Copper
Authors: J. Ramos, V. Gil, A. F. Torres
Abstract:
In this research, nodular iron with three different percentages of copper (residual, 0.5% and 1.2%) was obtained using an induction furnace process. Chemical analysis was performed by mass spectrometry, and the microstructures were characterized by optical microscopy (ASTM E3) and scanning electron microscopy (SEM). The study of the mechanical behavior was carried out in a mechanical testing machine (ASTM E8), and a pin-on-disk tribometer (ASTM G99) was used to assess wear resistance. It is observed that the dissolution of copper in the crystal lattice increases the pearlite content, improving the wear and hardness behavior, but producing a contrary effect on energy absorption.
Keywords: Ferritic and pearlitic structure, mechanical properties, nodular iron, wear.