Search results for: robust estimation.
1070 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study
Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa
Abstract:
The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil more quickly than is possible from sampling and laboratory tests. Therefore, it has the potential to realize both cost savings and assessment of soil properties rapidly and continuously. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and recommend which parameters should be included as input parameters to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN) is one of the powerful neural network architectures which is utilized in this study. A large amount of field and experimental data including CPT results, plate load tests, direct shear box, grain size distribution and calculated data of overburden pressure was obtained from a large project in the United Arab Emirates. This data was used for the training and the validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual results of the collected data. The results show that the ANN is a very powerful tool. Very good agreement was obtained between results estimated from the ANN and actual measured results, in comparison with other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of friction ratio in the estimation of Φ and the use of fines content in the estimation of E considerably improve the prediction models.
Keywords: Angle of internal friction, Cone penetrating test, General regression neural network, Soil modulus of elasticity.
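As an illustration of the GRNN mapping described above, the sketch below implements the standard General Regression Neural Network prediction (a Gaussian-kernel weighted average of training targets) in Python; the CPT feature set, bandwidth and data values are hypothetical placeholders, not the study's actual dataset or model.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General Regression Neural Network (Nadaraya-Watson kernel regression).

    Each training sample acts as a pattern unit; the prediction is a
    distance-weighted average of the training targets.
    """
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to pattern units
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Hypothetical CPT features: [cone resistance qc (MPa), friction ratio (%), overburden pressure (kPa)]
X_train = np.array([[12.0, 0.8, 100.0], [18.0, 0.5, 150.0], [7.0, 1.5, 80.0]])
phi_train = np.array([34.0, 38.0, 30.0])             # measured friction angle (deg)
print(grnn_predict(X_train, phi_train, np.array([[15.0, 0.7, 120.0]])))
```

The single smoothing parameter sigma plays the role of the GRNN spread and would normally be tuned on a validation split of the field data.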
1069 FACTS Based Stabilization for Smart Grid Applications
Authors: Adel M. Sharaf, Foad H. Gandoman
Abstract:
Nowadays, Photovoltaic (PV) Farms/Parks and large PV-Smart Grid Interface Schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on power quality and stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages as well as limiting switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems as an elegant alternative for renewable energy utilization with backup battery storage, for electric utility energy and demand side management, to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion Battery Storage Interface Scheme for Distribution/Utilization Low Voltage Interface using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done using the MATLAB/Simulink software environment for a Low Voltage Distribution/Utilization system feeding hybrid linear, motorized-inrush and nonlinear type loads from a DC-AC interface VSC 6-pulse inverter fed from the PV Park/Farm with a back-up Li-Ion storage battery.
Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).
1068 FPGA Implementation of Generalized Maximal Ratio Combining Receiver Diversity
Authors: Rafic Ayoubi, Jean-Pierre Dubois, Rania Minkara
Abstract:
In this paper, we study the FPGA implementation of a novel supra-optimal receiver diversity combining technique, generalized maximal ratio combining (GMRC), for wireless transmission over fading channels in SIMO systems. Prior published results using an ML-detected GMRC diversity signal driven by BPSK showed superior bit error rate performance to the widely used MRC combining scheme in an imperfect channel estimation (ICE) environment. Under perfect channel estimation conditions, the performance of GMRC and MRC was identical. The main drawback of the GMRC study was that it was theoretical; a successful FPGA implementation using pipelining techniques is therefore needed as a wireless communication test-bed for practical real-life situations. Simulation results showed that the hardware implementation was efficient both in terms of speed and area. Since diversity combining is especially effective in small femto- and picocells, internet-associated wireless peripheral systems are to benefit most from GMRC. As a result, many spinoff applications can be made to the hardware of IP-based 4th generation networks.
Keywords: Femto-internet cells, field-programmable gate array, generalized maximal-ratio combining, Lyapunov fractal dimension, pipelining technique, wireless SIMO channels.
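For reference, the sketch below shows the classical maximal ratio combining baseline that GMRC is compared against, not the GMRC scheme itself (whose exact weighting is not given in the abstract); the channel model, noise level and frame length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 1000                                    # diversity branches, BPSK symbols
bits = rng.integers(0, 2, N)
s = 2 * bits - 1                                  # BPSK symbols {-1, +1}

# Rayleigh-fading SIMO channel with perfect channel knowledge at the combiner
h = (rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N))) / np.sqrt(2)
noise = 0.3 * (rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N)))
r = h * s + noise                                 # received signal on each branch

# Maximal ratio combining: weight each branch by the conjugate channel estimate
y = np.sum(np.conj(h) * r, axis=0)
bits_hat = (y.real > 0).astype(int)
print("BER:", np.mean(bits_hat != bits))
```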
1067 Comparison of Different Techniques to Estimate Surface Soil Moisture
Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini
Abstract:
Land subsidence is a gradual settling or sudden sinking of the land surface from changes that take place underground. There are different causes of land subsidence; most notably, ground-water overdraft and severe weather conditions. Subsidence of the land surface due to ground-water overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of solid particles in the zone dewatered by the falling water table and accordingly compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of soil layers above the water table, increasing the effective stress in these layers, and considerable soil settlements. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for the estimation of moisture content at the soil surface, as an important term to solve Richards' equation and estimate the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco 1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment. Due to the heterogeneity of soil type, land use, and topography, surface soil moisture may change considerably in space and time.
Keywords: Artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil.
1066 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part, because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates to the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: Image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization.
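One common way to realize the thresholding step described above is Otsu's method, sketched below on a synthetic counterstained image; the image, the specific threshold variant and the absence of any post-processing are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global Otsu threshold for a grayscale image: pick the intensity that
    maximizes the between-class variance of foreground vs. background."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                               # background class weight
    w1 = 1.0 - w0                                      # foreground class weight
    cum_mean = np.cumsum(hist * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (mu_total - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Hypothetical counterstained FISH image: bright biofilm region on a dark substratum
img = np.random.rand(128, 128)
img[40:90, 30:100] += 1.5                              # simulated biofilm patch
mask = img > otsu_threshold(img)                       # binary biofilm mask
print("biofilm pixel fraction:", mask.mean())
```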
1065 A Multi-layer Artificial Neural Network Architecture Design for Load Forecasting in Power Systems
Authors: Axay J Mehta, Hema A Mehta, T.C.Manjunath, C. Ardil
Abstract:
In this paper, the modelling and design of an artificial neural network architecture for load forecasting purposes is investigated. The primary pre-requisite for power system planning is to arrive at realistic estimates of future demand of power, which is known as Load Forecasting. Short Term Load Forecasting (STLF) helps in determining the economic, reliable and secure operating strategies for the power system. The dependence of load on several factors makes load forecasting a very challenging job. An overestimation of the load may cause premature investment and unnecessary blocking of capital, whereas an underestimation of the load may result in a shortage of equipment and circuits. It is always better to plan the system for a load slightly higher than the expected one so that no exigency may arise. In this paper, a load-forecasting model is proposed using a multilayer neural network with an appropriately modified back propagation learning algorithm. Once the neural network model is designed and trained, it can forecast the load of the power system 24 hours ahead on a daily basis and can also forecast the cumulative load on a daily basis. The real load data used for the Artificial Neural Network training were taken from the LDC, Gujarat Electricity Board, Jambuva, Gujarat, India. The results show that the load forecasting of the ANN model follows the actual load pattern more accurately throughout the forecasted period.
Keywords: Power system, Load forecasting, Neural Network, Neuron, Stabilization, Network structure, Load.
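A minimal sketch of the kind of multilayer network with back-propagation described above, trained here on a synthetic hourly load series to produce a 24-hour-ahead forecast; the network size, learning rate and data are illustrative assumptions, not the LDC dataset or the authors' modified algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hourly load series (MW) with a daily pattern, standing in for the LDC data
t = np.arange(24 * 60)
load = 500 + 80 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, t.size)

# Supervised pairs: previous 24 hourly loads -> load 24 hours ahead
X = np.array([load[i:i + 24] for i in range(len(load) - 48)])
y = load[48:]
X = (X - X.mean()) / X.std()
y_s = (y - y.mean()) / y.std()

# One-hidden-layer MLP trained with plain back-propagation (full-batch gradient descent)
n_in, n_hid = 24, 10
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, n_hid); b2 = 0.0
lr = 0.01
for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # linear output layer
    err = pred - y_s
    gW2 = h.T @ err / len(y_s); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)    # back-propagated hidden error
    gW1 = X.T @ gh / len(y_s); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("training RMSE (MW):", np.sqrt(np.mean(err ** 2)) * y.std())
```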
1064 Evaluation of Efficient CSI Based Channel Feedback Techniques for Adaptive MIMO-OFDM Systems
Authors: Muhammad Rehan Khalid, Muhammad Haroon Siddiqui, Danish Ilyas
Abstract:
This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally-efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted relative to the channel characteristics. Two types of codebook based channel feedback techniques are used in this work. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate variable-power MQAM performance for the MIMO-OFDM feedback system. Hence, the proposed arrangement becomes an attractive approach to achieve enhanced spectral efficiency and improved error rate performance for next generation high speed wireless communication systems.
Keywords: Adaptive Coded Modulation, MQAM, MIMO, OFDM, Codebooks, Feedback.
1063 A Model for Estimation of Efforts in Development of Software Systems
Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht
Abstract:
Software effort estimation is the process of predicting the most realistic use of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. There are various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA based models, which have already been used to estimate the software effort for projects. In this study, Statistical Models, Fuzzy-GA and Neuro-Fuzzy (NF) Inference Systems are experimented with to estimate the software effort for projects. The performances of the developed models were tested on NASA software project datasets and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models mentioned in the literature. The results show that the NF model has the lowest MMRE and RMSE values, and thus performs best compared with the Fuzzy-GA based hybrid Inference System and the other existing models used for effort prediction.
Keywords: Neuro-Fuzzy Model, Halstead Model, Walston-Felix Model, Bailey-Basili Model, Doty Model, GA Based Model, Genetic Algorithm.
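For context, the sketch below evaluates the commonly quoted forms of the classical effort models named above against a toy dataset using the MMRE and RMSE criteria; the coefficient values are the ones usually cited in the effort-estimation literature, and the KLOC/effort pairs are hypothetical, not the NASA data used in the study.

```python
import numpy as np

# Commonly quoted empirical effort models (effort in person-months, size in KLOC)
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # form quoted for KLOC > 9

def mmre(actual, pred):
    """Mean Magnitude of Relative Error."""
    return np.mean(np.abs(actual - pred) / actual)

def rmse(actual, pred):
    return np.sqrt(np.mean((actual - pred) ** 2))

# Hypothetical (KLOC, actual person-month) pairs standing in for a NASA-style dataset
kloc = np.array([10.0, 46.5, 90.2])
actual_effort = np.array([24.0, 96.0, 240.0])

for name, model in [("Halstead", halstead), ("Walston-Felix", walston_felix),
                    ("Bailey-Basili", bailey_basili), ("Doty", doty)]:
    pred = model(kloc)
    print(f"{name:14s} MMRE={mmre(actual_effort, pred):.2f}  RMSE={rmse(actual_effort, pred):.1f}")
```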
1062 A Hybrid Fuzzy AGC in a Competitive Electricity Environment
Authors: H. Shayeghi, A. Jalili
Abstract:
This paper presents a new Hybrid Fuzzy (HF) PID type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. In order for a fuzzy rule based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theories to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill climbing method. The motivation for using the modified GA is to reduce the fuzzy system effort and take large parametric uncertainties into account. The global optimum value is guaranteed using the proposed method, and the speed of the algorithm's convergence is greatly improved, too. This newly developed control strategy combines the advantages of GAs and fuzzy system control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi Stage Fuzzy (MSF) controller, robust mixed H2/H∞ and classical PID controllers through some performance indices to illustrate its robust performance for a wide range of system parameters and load changes.
Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.
1061 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems
Authors: Vijaya K. Srivastava, Davide Spinello
Abstract:
This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and a tunneling approach, to seek the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, in the first phase, a local minimum of the function is found using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first optimization procedure examines the 3^n integer combinations on the boundary of and within the hypercube volume encompassing the result from the first phase, while the second optimization procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum commences in both optimization procedures again using this new-found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient as well as along the vector normal to the violated constraints until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those provided by the popular MS Excel Solver included in the MS Office suite and with other published results.
Keywords: Constrained integer problems, enumerative search algorithm, Heuristic algorithm, tunneling algorithm.
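The second-phase 3^n neighbourhood examination can be sketched as below; the toy objective, constraint and starting point are hypothetical, and the first-phase gradient/hemstitching search is assumed to have already produced the continuous local optimum.

```python
import itertools
import numpy as np

def enumerate_3n_neighbourhood(x_local, objective, constraints):
    """Second-phase 3^n enumeration: examine every integer combination obtained
    by perturbing each coordinate of the rounded continuous local optimum by
    -1, 0 or +1, and keep the best feasible point."""
    base = np.round(x_local).astype(int)
    best_x, best_f = None, np.inf
    for offsets in itertools.product((-1, 0, 1), repeat=len(base)):
        cand = base + np.array(offsets)
        if all(g(cand) <= 0 for g in constraints):   # feasibility check g(x) <= 0
            f = objective(cand)
            if f < best_f:
                best_x, best_f = cand, f
    return best_x, best_f

# Toy constrained integer problem (hypothetical, for illustration only)
objective = lambda x: (x[0] - 2.3) ** 2 + (x[1] - 4.7) ** 2
constraints = [lambda x: x[0] + x[1] - 8]            # x0 + x1 <= 8
print(enumerate_3n_neighbourhood(np.array([2.3, 4.7]), objective, constraints))
```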
1060 Design of an Intelligent Location Identification Scheme Based On LANDMARC and BPNs
Authors: S. Chaisit, H.Y. Kung, N.T. Phuong
Abstract:
Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. The advantage of using received signal strength indicator (RSSI) values as an indoor location measurement method is a cost-effective approach without installing extra hardware. Because the accuracy of many positioning schemes using RSSI values is limited by interference factors and the environment, it is challenging to use RFID location techniques based on integrating positioning algorithm design. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines different factors that affect location accuracy by integrating the backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. First, the training phase computes coordinates obtained from the LANDMARC algorithm, which uses RSSI values and the real coordinates of reference tags as training data for constructing an appropriate BPN architecture and training length. Second, in the online phase, the LANDMARC algorithm calculates the coordinates of tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme can estimate locations more accurately compared to LANDMARC without extra devices.
Keywords: BPNs, indoor location, location estimation, intelligent location identification.
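The LANDMARC step that feeds the BPN can be sketched as a k-nearest-reference-tag weighted average in RSSI space, as below; the reader layout, RSSI readings and choice of k are illustrative assumptions.

```python
import numpy as np

def landmarc_estimate(rssi_track, rssi_ref, ref_coords, k=4):
    """LANDMARC location estimate: the tracking tag position is a weighted
    average of the k nearest reference tags in RSSI space, with weights
    proportional to 1/E_i^2 (E_i = RSSI-space Euclidean distance)."""
    E = np.sqrt(np.sum((rssi_ref - rssi_track) ** 2, axis=1))   # one distance per reference tag
    nearest = np.argsort(E)[:k]
    w = 1.0 / (E[nearest] ** 2 + 1e-9)
    w /= w.sum()
    return w @ ref_coords[nearest]

# Hypothetical readings: rows = reference tags, columns = RSSI seen by each reader
rssi_ref = np.array([[-50, -62, -71], [-55, -58, -69], [-60, -54, -65], [-66, -52, -60]], dtype=float)
ref_coords = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
rssi_track = np.array([-57.0, -57.0, -67.0])
xy = landmarc_estimate(rssi_track, rssi_ref, ref_coords, k=3)
print("estimated (x, y):", xy)   # this LANDMARC estimate would then be refined by the BPN stage
```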
1059 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status and predictions about the future status to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through the Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS for creating the DT. The paper shows the performance of the developed tools inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
Keywords: Digital Twin, Distribution System Operator, Electrical Distribution System, Smart Grid Controller, Supervisory Control and Data Acquisition System, Smart Recursive Load Flow.
1058 Emergency Generator Sizing and Motor Starting Analysis
Authors: Mukesh Kumar Kirar, Ganga Agnihotri
Abstract:
This paper investigates the preliminary sizing of a generator set to design the electrical system at the early phase of a project, and the dynamic behavior of the generator unit, as well as induction motors, during start-up of the induction motor drives fed from the emergency generator unit. The information in this paper simplifies generator set selection and eliminates common errors in selection. It covers load estimation, the step loading capacity test and transient analysis for the emergency generator set. The dynamic behavior of the generator unit, power, power factor and voltage during direct-on-line start-up of the induction motor drives fed from a stand-alone gen-set is also discussed. Since it is important to ensure that plant generators operate safely and consistently, power system studies are required at the planning and conceptual design stage of the project. The most widely recognized and studied effect of motor starting is the voltage dip that is experienced throughout an industrial power system as the direct result of starting large motors on-line. Generator step loading capability and the transient voltage dip during starting of the largest motor are verified with the help of the Electrical Transient Analyzer Program (ETAP).
Keywords: Sizing, induction motor starting, load estimation, Transient Analyzer Program (ETAP).
1057 Measurement of Real Time Drive Cycle for Indian Roads and Estimation of Component Sizing for HEV using LABVIEW
Authors: Varsha Shah, Patel Pritesh, Patel Sagar, Prasanta Kundu, Ranjan Maheshwari
Abstract:
The performance of a vehicle depends on driving patterns and the vehicle drive train configuration. Driving patterns depend on traffic conditions, road conditions and driver behavior. HEV design is carried out under certain constraints like vehicle operating range, acceleration, deceleration, maximum speed and road grades, which are directly related to the driving patterns. Therefore, a detailed study of HEV performance over different drive cycles is required for the selection and sizing of HEV components. Simple hardware is designed to measure the velocity vs. time profile of the vehicle by operating the vehicle on Indian roads under real traffic conditions. To size the HEV components, a detailed dynamic model of the vehicle is developed considering the effect of inertia of rotating components like wheels, drive chain, engine and electric motor. Using the vehicle model and data from different Indian drive cycles, the total tractive power demanded by the vehicle and the power supplied by individual components have been calculated. Using the above information, selection and sizing of the HEV components are carried out so that the HEV performs efficiently under hostile driving conditions. The complete analysis is carried out in LABVIEW.
Keywords: BLDC motor, Driving cycle, LABVIEW, Ultracapacitors, Vehicle Dynamics.
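The tractive power calculation from a measured velocity profile can be sketched with the standard road-load equation, as below; the vehicle parameters and velocity samples are hypothetical, and the rotating-inertia effect is lumped into a single mass factor rather than modelled per component as in the paper.

```python
import numpy as np

def tractive_power(v, a, mass=1500.0, Cr=0.012, Cd=0.32, A=2.2, rho=1.2,
                   grade=0.0, delta=1.05, g=9.81):
    """Instantaneous tractive power demand (W) from a measured drive cycle.

    delta is a mass factor accounting for the inertia of rotating components
    (wheels, drive chain, motor); all vehicle parameters are illustrative.
    """
    F_inertia = delta * mass * a                  # acceleration + rotating inertia
    F_roll = mass * g * Cr * np.cos(grade)        # rolling resistance
    F_aero = 0.5 * rho * Cd * A * v ** 2          # aerodynamic drag
    F_grade = mass * g * np.sin(grade)            # road grade load
    return (F_inertia + F_roll + F_aero + F_grade) * v

# Hypothetical measured velocity (m/s) samples at 1 s intervals from an Indian drive cycle
v = np.array([0.0, 2.0, 5.0, 8.0, 10.0, 10.0, 7.0, 3.0, 0.0])
a = np.gradient(v, 1.0)                           # numerical acceleration
P = tractive_power(v, a)
print("peak tractive power (kW):", P.max() / 1e3) # peak demand drives motor/battery sizing
```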
1056 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively by using some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed where BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data on the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step ahead forecasts.
Keywords: Non-stationary, BINARMA(1,1) model, Poisson innovations, CML.
1055 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would enable cooling needs to be lowered, which is of great interest when the demand is initially high (hot climate). But radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would enable humidity to be removed from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This task aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort issues and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study on the specification requirements, performances and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system so that the room temperature is always maintained between 21 °C and 25 °C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W·m−2 and a desiccant rate of 45 g H2O·h−1. In a second step, a parametric study of comfort issues and system performances has been carried out on a more realistic system (that includes a chilled ceiling) under different operating conditions. It enables an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: Dehumidification, nodal calculation, radiant cooling panel, system sizing.
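The condensation-risk check that drives this sizing can be sketched with a standard Magnus dew-point approximation, as below; the panel surface temperature and the comfort set-point used here are illustrative, not values from the paper's TRNSYS model.

```python
import numpy as np

def dew_point(T_air, RH):
    """Dew point temperature (°C) from air temperature (°C) and relative
    humidity (%), using the Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = np.log(RH / 100.0) + a * T_air / (b + T_air)
    return b * gamma / (a - gamma)

# Condensation check for a chilled ceiling panel (illustrative operating point)
T_room, RH_room = 25.0, 60.0            # room kept within the stated comfort band
T_panel = 17.0                          # hypothetical panel surface temperature
Td = dew_point(T_room, RH_room)
print(f"dew point = {Td:.1f} °C -> condensation risk: {T_panel <= Td}")
```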
1054 Estimation of the Bit Side Force by Using Artificial Neural Network
Authors: Mohammad Heidari
Abstract:
Horizontal wells are proven to be better producers because they can be extended for a long distance in the pay zone. Engineers have the technical means to forecast the well productivity for a given horizontal length. However, experience has shown that the actual production rate is often significantly less than forecasted. It is a difficult task, if not impossible, to identify the real reason why a horizontal well is not producing what was forecasted. Often the source of the problem lies in the drilling of the horizontal section, such as permeability reduction in the pay zone due to mud invasion or snaky well patterns created during drilling. Although drillers aim to drill a constant inclination hole in the pay zone, the more frequent outcome is a sinusoidal wellbore trajectory. The two factors which play an important role in wellbore tortuosity are the inclination and the side force at the bit. A constant inclination horizontal well can only be drilled if the bit face is maintained perpendicular to the longitudinal axis of the bottom hole assembly (BHA) while keeping the side force nil at the bit. This approach assumes that there exists no formation force at the bit. Hence, an appropriate BHA can be designed if the bit side force and bit tilt are determined accurately. The Artificial Neural Network (ANN) is superior to existing analytical techniques. In this study, neural networks have been employed as a general approximation tool for the estimation of the bit side forces. A number of samples are analyzed with the ANN for the bit side force parameter and the results are compared with the exact analysis. A Back Propagation Neural network (BPN) is used for the approximation of the bit side forces. The resulting low relative error of the test indicates the usability of the BPN in this area.
Keywords: Artificial Neural Network, BHA, Horizontal Well, Stabilizer.
1053 A Rule-based Approach for Anomaly Detection in Subscriber Usage Pattern
Authors: Rupesh K. Gopal, Saroj K. Meher
Abstract:
In this report we present a rule-based approach to detect anomalous telephone calls. The method described here uses subscriber usage CDR (call detail record) data sampled over two observation periods: a study period and a test period. The study period contains call records of customers' non-anomalous behaviour. Customers are first grouped according to their similar usage behaviour (e.g., average number of local calls per week). For customers in each group, we develop a probabilistic model to describe their usage. Next, we use maximum likelihood estimation (MLE) to estimate the parameters of the calling behaviour. Then we determine thresholds by calculating the acceptable change within a group. MLE is used on the data in the test period to estimate the parameters of the calling behaviour. These parameters are compared against the thresholds. Any deviation beyond the threshold is used to raise an alarm. This method has the advantage of identifying local anomalies as compared to techniques which identify global anomalies. The method is tested on 90 days of study data and 10 days of test data of telecom customers. For medium to large deviations in the data in the test window, the method is able to identify 90% of anomalous usage with less than a 1% false alarm rate.
Keywords: Subscription fraud, fraud detection, anomaly detection, maximum likelihood estimation, rule based systems.
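A minimal sketch of the MLE-plus-threshold rule described above, assuming a Poisson model for weekly call counts (the abstract does not name the distribution) and a 3-sigma definition of acceptable change; the counts and group parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly local-call counts for one usage group (study period, ~13 weeks)
study_counts = rng.poisson(lam=20, size=13)

# MLE of a Poisson rate is simply the sample mean
lam_hat = study_counts.mean()

# Threshold = acceptable change within the group (here: mean + 3 standard deviations)
threshold = lam_hat + 3.0 * np.sqrt(lam_hat)

# Test-period estimate for one subscriber; deviation beyond the threshold raises an alarm
test_counts = np.array([48, 52, 61])            # simulates clearly anomalous usage
lam_test = test_counts.mean()
print("alarm:", lam_test > threshold, f"(test rate {lam_test:.1f}, threshold {threshold:.1f})")
```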
1052 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes
Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani
Abstract:
Development of a method to estimate gene functions is an important task in bioinformatics. One of the approaches for the annotation is the identification of the metabolic pathway that genes are involved in. Since gene expression data reflect various intracellular phenomena, those data are considered to be related to genes' functions. However, it has been difficult to estimate the gene function with high accuracy. It is considered that the low accuracy of the estimation is caused by the difficulty of accurately measuring a gene expression. Even though they are measured under the same condition, the gene expressions will usually vary. In this study, we proposed a feature extraction method focusing on the variability of gene expressions to estimate the genes' metabolic pathway accurately. First, we estimated the distribution of each gene expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, which is a method for calculating the similarity between distributions. Finally, we utilized the similarity vectors as feature vectors and trained the multiclass SVM for identifying the genes' metabolic pathway. To evaluate our developed method, we applied the method to budding yeast and trained the multiclass SVM for identifying the seven metabolic pathways. As a result, the accuracy calculated by our developed method was higher than the one calculated from the raw gene expression data. Thus, our developed method combined with KL divergence is useful for identifying the genes' metabolic pathway.
Keywords: Metabolic pathways, gene expression data, microarray, Kullback–Leibler divergence, KL divergence, support vector machines, SVM, machine learning.
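The per-gene feature construction can be sketched as below, assuming each gene's replicate expressions are summarised by a Gaussian and pairwise similarity is a symmetrised KL divergence; the distributional form and the replicate values are assumptions, since the abstract does not state them.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(p || q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def expression_similarity(reps_a, reps_b):
    """Symmetrised KL divergence between two genes' expression distributions,
    each distribution estimated (as a Gaussian) from replicate measurements."""
    mu_a, var_a = reps_a.mean(), reps_a.var(ddof=1)
    mu_b, var_b = reps_b.mean(), reps_b.var(ddof=1)
    return kl_gaussian(mu_a, var_a, mu_b, var_b) + kl_gaussian(mu_b, var_b, mu_a, var_a)

# Hypothetical replicate expression values for three genes
gene1 = np.array([2.1, 2.3, 1.9, 2.2])
gene2 = np.array([2.0, 2.4, 2.1, 2.3])
gene3 = np.array([5.0, 5.6, 4.8, 5.4])
# The similarity vector of gene1 against all genes would serve as its SVM feature vector
print([expression_similarity(gene1, g) for g in (gene1, gene2, gene3)])
```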
1051 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need to be improved in terms of precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors, which is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), which consists of n Gabor jets and the average eye coordinates obtained from n model face images, and then tries to localize eyes in an incoming face image by utilizing the fact that the true eye coordinates are most likely to be very close to the position where the Gabor jet has the best similarity match with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, such as in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known to be not robust with respect to initial values and may need an extensive search range for achieving the required performance, but extensive search ranges cause a much larger computational burden. In this paper, we propose a multi-scale approach with a slightly increased computational burden, where one first tries to localize eyes based on Gabor feature vectors in a coarse face image obtained from down-sampling of the original face image, and then localizes eyes based on Gabor feature vectors in the original resolution face image by using the eye coordinates localized in the coarse-scaled image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
1050 Rapid Determination of Biochemical Oxygen Demand
Authors: Mayur Milan Kale, Indu Mehrotra
Abstract:
Biochemical Oxygen Demand (BOD) is a measure of the oxygen used in the bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically an infinite time is required for complete biochemical oxidation of organic matter, but the measurement is made over a 5-day test period at 20 °C or a 3-day test period at 27 °C, with or without dilution. Researchers have worked to further reduce the time of measurement. The objective of this paper is to review advancements made in BOD measurement, primarily to minimize the time and negate the measurement difficulties. The literature on four such techniques is surveyed, namely the BOD-BART™ method, biosensors, the ferricyanide-mediated approach and the luminous bacteria immobilized chip method. The basic principle, method of determination, data validation and the advantages and disadvantages of each of the methods have been incorporated. In the BOD-BART™ method the time lag is calculated for the system to change from an oxidative to a reductive state. BIOSENSORS are biological sensing elements with a transducer which produces a signal proportional to the analyte concentration. Each microbial species has its metabolic deficiencies; co-immobilization of bacteria using a sol-gel biosensor increases the range of substrates. In the ferricyanide-mediated approach, ferricyanide has been used as the electron acceptor instead of oxygen. In the luminous bacterial cells-immobilized chip method, bacterial bioluminescence, which is caused by lux genes, is observed; the physiological response is measured and correlated to BOD through the reduction or emission. There is scope to probe further into the rapid estimation of BOD.
Keywords: BOD, Four methods, Rapid estimation.
1049 Video Matting based on Background Estimation
Authors: J.-H. Moon, D.-O Kim, R.-H. Park
Abstract:
This paper presents a video matting method, which extracts the foreground and alpha matte from a video sequence. The objective of video matting is finding the foreground and compositing it with the background that is different from the one in the original image. By finding the motion vectors (MVs) using a sliced block matching algorithm (SBMA), we can extract moving regions from the video sequence under the assumption that the foreground is moving and the background is stationary. In practice, foreground areas are not moving through all frames in an image sequence, thus we accumulate moving regions through the image sequence. The boundaries of moving regions are found by Canny edge detector and the foreground region is separated in each frame of the sequence. Remaining regions are defined as background regions. Extracted backgrounds in each frame are combined and reframed as an integrated single background. Based on the estimated background, we compute the frame difference (FD) of each frame. Regions with the FD larger than the threshold are defined as foreground regions, boundaries of foreground regions are defined as unknown regions and the rest of regions are defined as backgrounds. Segmentation information that classifies an image into foreground, background, and unknown regions is called a trimap. Matting process can extract an alpha matte in the unknown region using pixel information in foreground and background regions, and estimate the values of foreground and background pixels in unknown regions. The proposed video matting approach is adaptive and convenient to extract a foreground automatically and to composite a foreground with a background that is different from the original background.
Keywords: Background estimation, Object segmentation, Block matching algorithm, Video matting.
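The trimap construction step described above (frame difference against the integrated background, followed by foreground/background/unknown labelling) can be sketched as below; the thresholds, the width of the unknown band and the synthetic frames are illustrative assumptions.

```python
import numpy as np

def build_trimap(frame, background, fg_thresh=30.0, band=10.0):
    """Classify pixels into foreground (1), background (0) and unknown (0.5)
    from the absolute frame difference against the estimated background."""
    fd = np.abs(frame.astype(float) - background.astype(float))
    trimap = np.full(frame.shape, 0.5)
    trimap[fd > fg_thresh + band] = 1.0          # clearly moving: foreground
    trimap[fd < fg_thresh - band] = 0.0          # clearly static: background
    return trimap                                # remaining band: unknown, passed to matting

# Hypothetical grayscale frame and integrated background estimate
background = np.full((120, 160), 100.0)
frame = background.copy()
frame[40:80, 60:110] = 180.0                     # simulated moving object
frame[40:80, 55:60] = 130.0                      # soft edge falling in the unknown band
trimap = build_trimap(frame, background)
print("fg / unknown / bg pixels:",
      (trimap == 1).sum(), (trimap == 0.5).sum(), (trimap == 0).sum())
```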
1048 Progressive AAM Based Robust Face Alignment
Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim
Abstract:
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In case the initial values are far from the global optimum values, there exists a pretty good possibility that AAM-based face alignment may converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face and later localizes the feature points of the whole face using this information. The proposed progressive AAM-based face alignment algorithm utilizes the fact that the feature points of the inner part of the face are less variant and less affected by the background surrounding the face than those of the outer part (like the chin contour). The proposed algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage first constructs two AAM models, the inner face AAM model and the whole face AAM model, and then derives the relation matrix between the inner face AAM parameter vector and the whole face AAM model parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the proposed algorithm finds the feature parameter vector fitting the inner facial AAM model into a new input face image, and then in the second phase it localizes the whole facial feature points of the new input face image based on the whole face AAM model, using the initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
Keywords: Face Alignment, AAM, facial feature detection, model matching.
1047 Controller Design for Euler-Bernoulli Smart Structures Using Robust Decentralized FOS via Reduced Order Modeling
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
This paper features the modeling and design of a Robust Decentralized Fast Output Sampling (RDFOS) feedback control technique for the active vibration control of smart flexible multimodel Euler-Bernoulli cantilever beams for a multivariable (MIMO) case by retaining the first 6 vibratory modes. The beam structure is modeled in state space form using the concept of piezoelectric theory, the Euler-Bernoulli beam theory and the Finite Element Method (FEM) technique by dividing the beam into 4 finite elements and placing the piezoelectric sensor/actuator at two finite element locations (positions 2 and 4) as collocated pairs, i.e., as surface mounted sensor/actuator, thus giving rise to a multivariable model of the smart structure plant with two inputs and two outputs. Five such multivariable models are obtained by varying the dimensions (aspect ratios) of the aluminium beam. Using a model order reduction technique, the reduced order model of the higher order system is obtained based on dominant eigenvalue retention and the Davison technique. RDFOS feedback controllers are designed for the above 5 multivariable-multimodel plants. The closed loop responses with the RDFOS feedback gain and the magnitudes of the control input are obtained, and the performance of the proposed multimodel smart structure system is evaluated for vibration control.
Keywords: Smart structure, Euler-Bernoulli beam theory, Fast output sampling feedback control, Finite Element Method, State space model, Vibration control, LMI, Model order reduction.
1046 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical, efficient approach is suggested to estimate the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars) it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires taking some time for observation data collection by means of the C-OTDR system, and only when the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems we need to obtain data on the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of C-OTDR signals for obtaining the most reliable solution in real time. Parameters of this type we can call «signaling parameters» (SP). There are several SPs which carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources, but are non-stable. On the other hand, in case the SP is very stable it becomes insensitive as a rule. This report describes a method for co-processing of the SPs which is designed to obtain the most effective dynamic object localization estimates in the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems.
1045 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, a possibility of determining a financial institution's PD according to credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at the application of the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. So, the values of particular indicators are sampled randomly and the PDs' distribution is estimated, while it is assumed that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, particularly). Although the obtained results show that all banks are relatively healthy, there is still a high chance that "a financial crisis" will occur, at least in terms of probability. This is indicated by the estimation of various quantiles in the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the used data) is limited to the recessionary phase of the financial market.
Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.
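A minimal sketch of the logit variant of the credit-scoring models mentioned above, fitted by gradient descent on a synthetic bank-indicator dataset; the indicator names, coefficients and data are hypothetical and not the paper's estimated model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bank indicators: [capital adequacy, ROA, NPL ratio]; label 1 = defaulted
X = rng.normal(size=(300, 3))
true_w = np.array([-1.5, -1.0, 2.0])
y = (rng.random(300) < 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0)))).astype(float)

# Logit credit-scoring model fitted by gradient descent on the negative log-likelihood
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted PD for each bank
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Scoring a new institution (illustrative indicator values)
x_new = np.array([0.5, 0.2, -0.3])
pd_new = 1.0 / (1.0 + np.exp(-(x_new @ w + b)))
print(f"estimated probability of default: {pd_new:.3f}")
```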
1044 Is Curcumine Effect Comparable to 5-Aminosalicylic Acid or Budesonide on a Rat Model of Ulcerative Colitis Induced by Trinitrobenzene Sulfonic Acid?
Authors: Inas E. Darwish, Alia M. Arab, Tarek A. Azeim, Teshreen M. Zeitoun, Wafaa A. Hewedy, Moemen A. Heiba, Iman S. Emara
Abstract:
Inflammatory bowel disease (IBD) is a chronic relapsing-remitting condition that afflicts millions of people throughout the world and impairs their daily functions and quality of life. Treatment of IBD depends largely on 5-aminosalicylic acid (5-ASA) and corticosteroids. The present study aimed to clarify the effects of 5-aminosalicylic acid, budesonide and curcumin in 90 male albino rats against trinitrobenzene sulfonic acid (TNBS) induced colitis. TNBS was injected intrarectally into 50 rats. The other 40 rats served as control groups. Both 5-ASA (in a dose of 120 mg/kg) and budesonide (in a dose of 0.1 mg/kg) were administered daily for one week, whereas curcumin was injected intraperitoneally (in a dose of 30 mg/kg daily) for 14 days after injection of either TNBS in the colitis rats (group B) or saline in the control groups (group A). The study included estimation of the macroscopic score index, histological examination of H&E stained sections of the colonic tissue, and biochemical estimation of myeloperoxidase (MPO), nitric oxide (NO) and caspase-3 levels, in addition to studying the effect of the tested drugs on colonic motility. It was found that budesonide and curcumin improved mucosal healing and reduced both NO production and caspase-3 level. They had the best impact on the disturbed colonic motility in the TNBS model of colitis.
Keywords: Colitis, curcumin, nitric oxide.
1043 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of a Digital-Noiseless, Ultra-High-Speed Image Sensor
Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi
Abstract:
Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate up to 100 Mfps and above. In order to suppress electro-magnetic noise at such high frequencies, a digital-noiseless imaging transfer scheme has been developed utilizing solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation as well as the phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By application of dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate the phase delay through fundamental RC networks is also obtained. The estimation error of both expressions is much less than in previous works, less than 2% for most of the cases. The framework of this analysis can be extended to address similar issues of other VLSI structures.
Keywords: Dimensional Analysis, ISIS, Digital-noiseless, RC network, Attenuation, Phase Delay, Elmore model
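The first-order Elmore approximation used as the starting point above can be sketched for a simple RC chain as below; the segment resistance and capacitance values are illustrative, not the sensor's actual electrode parameters, and the dimensional-analysis correction from the paper is not included.

```python
import numpy as np

def elmore_delay_chain(R, C, node):
    """First-order Elmore delay (seconds) at a given node of an RC chain.

    For a chain, the shared-path resistance between the target node i and
    any node k is the sum of segment resistances up to min(i, k).
    """
    R = np.asarray(R, dtype=float)
    C = np.asarray(C, dtype=float)
    cum_R = np.cumsum(R)
    delay = 0.0
    for k in range(len(C)):
        delay += C[k] * cum_R[min(k, node)]   # each capacitor weighted by shared-path R
    return delay

# Hypothetical driving-electrode RC ladder: 10 segments of 50 ohm and 20 fF each
R = [50.0] * 10
C = [20e-15] * 10
print("Elmore delay at far end: %.2f ps" % (elmore_delay_chain(R, C, 9) * 1e12))
```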
1042 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from the given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, the adaptive control Ensemble Kalman Filter is implemented to combine the optimality of observation data in order to obtain an accurate estimation of the topography. The main features of this method are, on one hand, the ability to solve for different complex geometries with no need for any rearrangement of the original model to rewrite it in an explicit form, and, on the other hand, the achievement of strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
Keywords: Optimal control, ensemble Kalman filter, topography reconstruction, data assimilation, shallow water equations.
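A minimal sketch of the Ensemble Kalman Filter analysis step at the heart of the second stage; here the forward shallow-water solver is replaced by a trivial stand-in observation operator, and the ensemble size, noise levels and bed profile are synthetic assumptions.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_err_std, rng):
    """One Ensemble Kalman Filter analysis step: correct an ensemble of bed
    topography samples using perturbed free-surface observations."""
    N = ensemble.shape[1]                               # ensemble size
    HX = obs_operator(ensemble)                         # predicted observations
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = HX - HX.mean(axis=1, keepdims=True)
    P_xy = A @ HA.T / (N - 1)                           # state-observation cross covariance
    P_yy = HA @ HA.T / (N - 1) + obs_err_std**2 * np.eye(HX.shape[0])
    K = P_xy @ np.linalg.inv(P_yy)                      # Kalman gain
    # Perturb observations so the analysis ensemble keeps the right spread
    Y = observations[:, None] + obs_err_std * rng.normal(size=HX.shape)
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(4)
n_cells, N = 20, 50
true_bed = 0.2 * np.sin(np.linspace(0, np.pi, n_cells))
# Stand-in "forward model": free surface observed at every other cell (the real
# operator would be a shallow-water solver, not this identity-like map)
H = lambda X: X[::2, :]
ensemble = rng.normal(0.1, 0.05, size=(n_cells, N))
obs = true_bed[::2] + rng.normal(0, 0.01, n_cells // 2)
for _ in range(5):                                      # iterative assimilation loops
    ensemble = enkf_update(ensemble, obs, H, 0.01, rng)
print("bed RMSE:", np.sqrt(np.mean((ensemble.mean(axis=1) - true_bed) ** 2)))
```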
1041 Estimation of Crustal Thickness within the Sokoto Basin North-Western Nigeria Using Bouguer Gravity Anomaly Data
Authors: T. T. Olugbenga, A. I. Augie
Abstract:
This research proposes an interpretation of the Bouguer gravity anomaly data of some parts of the Sokoto basin for the estimation of crustal thickness. The study area is bounded between latitudes 11°00′0″N and 13°00′0″N, and longitudes 4°00′0″E and 6°00′0″E, covering Koko, Jega, B/Kebbi, Argungu, Lema, Bodinga, Tamgaza, Gunmi, Daki Takwas, Dange, Sokoto, Ilella, T/Mafara, Anka, Maru, Gusau, K/Namoda, and Sabon Birni within Sokoto, Kebbi and Zamfara states, respectively. The established map of the study area was digitized in X, Y and Z format using the Excel software package, and the digitized data were processed using Surfer version 13 software. The Moho and Conrad depths, based on a relationship between the Bouguer gravity anomaly and crustal thickness, were estimated as 35 to 37 km and 19 to 21 km, respectively. The crustal region has been categorized into a crustal thinning zone, that is, the region with high gravity anomaly values due to its greater geothermal energy, and a crustal thickening zone, the region with low anomaly values due to its lower geothermal energy. Birnin Kebbi, Jega and Sokoto were identified as regions of hydrocarbon potential, with an estimated thickness of 35 km within the crustal region, referred to as crustal thickening, as a result of low but sufficient geothermal energy to decompose organic matter within the region to form hydrocarbons.
Keywords: Bouguer gravity anomaly, crustal thickness, geothermal energy, hydrocarbons, Moho and Conrad Depths.