Search results for: Similarity Estimate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1130

920 Bayesian Networks for Earthquake Magnitude Classification in an Early Warning System

Authors: G. Zazzaro, F.M. Pisano, G. Romano

Abstract:

During the last decades, researchers worldwide have dedicated efforts to developing machine-based seismic Early Warning systems, aiming to reduce human losses and economic damage. The processing time of seismic waveforms has to be reduced in order to increase the time interval available for activating safety measures. This paper proposes a Data Mining model able to estimate the dangerousness of the running seismic event correctly and quickly. Several thousand seismic recordings of Japanese and Italian earthquakes were analyzed, and a model was obtained by means of a Bayesian Network (BN). To shorten the decision time, the model was tested on just the first recordings of each seismic event, and the test results were very satisfactory. The model was integrated into an Early Warning System prototype able to collect and process data from a seismic sensor network, estimate the dangerousness of the running earthquake, and promptly decide whether to activate the warning.
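
No implementation accompanies the abstract; purely as a hedged illustration, the sketch below shows how a discrete Bayesian network classifier of this general kind could be assembled with the pgmpy library (recent versions). The network structure, the feature names (peak displacement and dominant period of the first seconds of waveform), and their discretization are assumptions, not the authors' model.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical training set: discretized features extracted from the first
# seconds of each waveform, labelled with a magnitude class.
data = pd.DataFrame({
    "peak_disp":       ["low", "low", "high", "high", "mid"],
    "dominant_period": ["short", "short", "long", "long", "short"],
    "magnitude_class": ["minor", "minor", "strong", "strong", "minor"],
})

# Naive-Bayes-like structure: the class node is the parent of each feature.
model = BayesianNetwork([("magnitude_class", "peak_disp"),
                         ("magnitude_class", "dominant_period")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Classify a running event from the features of its first recordings only.
posterior = VariableElimination(model).query(
    variables=["magnitude_class"],
    evidence={"peak_disp": "high", "dominant_period": "long"})
print(posterior)
```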

Keywords: Bayesian Networks, Decision Support System, Magnitude Classification, Seismic Early Warning System

919 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model

Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li

Abstract:

Compared with terrestrial networks, the traffic of a spatial information network exhibits both self-similarity and short-range correlation. Studying its traffic prediction methods can improve the resource utilization of a spatial information network and provide an important basis for its traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long-range correlation and detail components with short-range correlation, and a time-series hybrid prediction model based on wavelet decomposition is proposed to predict the traffic. First, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing-off and cutting-off characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component can be modeled with an ARIMA model after smoothing. Finally, the predictions of the individual models are combined to obtain the prediction of the original data. The method considers not only the self-similarity of a spatial information network but also the short-range correlation caused by bursty network traffic, and it is verified using measured data from a certain backbone network released by the MAWI working group in 2018. Compared with typical time-series models, the hybrid model's predictions are closer to the real traffic data and have a smaller relative root mean square error, which makes it more suitable for a spatial information network.
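
As a sketch of the decompose-model-recombine idea only (the "db4" wavelet, the decomposition level, and the ARMA/ARIMA orders below are illustrative assumptions, not the paper's settings):

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(x, wavelet="db4", level=2):
    """Split x into one approximation and `level` detail series, all of len(x)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps  # comps[0] carries the long-range (approximate) behaviour

def hybrid_forecast(x, steps=10):
    total = np.zeros(steps)
    for i, comp in enumerate(wavelet_components(x)):
        # Approximation: ARIMA with differencing; details: low-order ARMA.
        order = (1, 1, 1) if i == 0 else (1, 0, 1)
        total += ARIMA(comp, order=order).fit().forecast(steps)
    return total  # the component forecasts sum back to the traffic forecast

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=512))  # synthetic stand-in for traffic
print(hybrid_forecast(series)[:5])
```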

Keywords: Spatial Information Network, Traffic prediction, Wavelet decomposition, Time series model.

918 Could a Thermal Oceanic Hotspot Increase Climate Change Activity in the North Tropical Atlantic: Example of the 2005 Caribbean Coral Bleaching Hotspot and Hurricane Katrina Interaction

Authors: J.-L. Siméon

Abstract:

This paper reviews recent studies, and particularly the effects of climate change in the North Tropical Atlantic, by studying the atmospheric conditions that prevailed in 2005: the Coral Bleaching HotSpot and Hurricane Katrina. With the aim of better understanding and estimating the impact of the physical phenomenon, i.e. the Thermal Oceanic HotSpot (TOHS), isotopic studies of δ18O and δ13C on marine animals from Guadeloupe (French Caribbean island) were carried out. The recorded measurements show a Sea Surface Temperature (SST) of up to 35°C in August, much higher than the 32°C recorded by NOAA satellites. After reviewing the process that led to the formation of Hurricane Katrina, which hit New Orleans on August 29, 2005, it is shown that the climatic conditions in the Caribbean from August to October 2005 influenced Katrina's evolution. This TOHS is a combined effect of various phenomena and represents an additional factor for estimating future climate changes.

Keywords: Climate Change, Thermal Ocean HotSpot, Isotope, Hurricane, Connection, Uncertainty, Sea, Science.

917 Neural Networks and Particle Swarm Optimization Based MPPT for Small Wind Power Generator

Authors: Chun-Yao Lee, Yi-Xing Shen, Jung-Cheng Cheng, Yi-Yin Li, Chih-Wen Chang

Abstract:

This paper proposes a method combining an artificial neural network (ANN) with particle swarm optimization (PSO) to implement maximum power point tracking (MPPT) by controlling the rotor speed of a wind power generator. First, measurements of the wind speed, the rotor speed of the generator, and the generator output power are used to train the ANN and to estimate the wind speed. Second, the trained network is used to estimate and control the optimal rotor speed of the wind turbine so that it outputs the maximum power. Finally, the results reveal that the control system discussed in this paper extracts the maximum output power of the wind generator within a short duration, even under varying wind speed and load impedance.
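
As an illustration of the PSO stage only: the power-versus-rotor-speed function below is a toy stand-in for the trained ANN, and all tuning constants are assumptions rather than the paper's values.

```python
import numpy as np

def predicted_power(rotor_speed):
    """Toy stand-in for the trained ANN: power peaks at some optimal speed."""
    return -0.5 * (rotor_speed - 42.0) ** 2 + 900.0

def pso_max(f, lo, hi, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)          # particle positions (rotor speeds)
    v = np.zeros(n)                     # particle velocities
    pbest, pbest_f = x.copy(), f(x)     # personal bests
    gbest = pbest[np.argmax(pbest_f)]   # global best
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx > pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmax(pbest_f)]
    return gbest

print(pso_max(predicted_power, lo=0.0, hi=100.0))  # converges near 42.0
```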

Keywords: Maximum power point tracking, artificial neural network, particle swarm optimization.

916 An Approach to Measure Snow Depth of Winter Accumulation at Basin Scale Using Satellite Data

Authors: M. Geetha Priya, D. Krishnaveni

Abstract:

Snow depth estimation and monitoring studies have been carried out for decades using empirical relationships or extrapolation of point measurements made in the field. With the development of advanced satellite-based remote sensing techniques, a modified approach is proposed in the present study to estimate the winter accumulated snow depth at basin scale. The snow depth is assessed by differencing Digital Elevation Models (DEMs) generated at the beginning and end of the winter season for the region of interest (Himalayan and polar regions), accounting for winter accumulation (solid precipitation). The proposed approach is based on the existing geodetic method that is used for glacier mass balance estimation. Provided the satellite datasets are acquired strictly at the beginning and end of the winter season, it is possible to estimate the change in depth or thickness of the snow accumulated during the winter, since it takes one year for snow to be transformed into firn (snow that has survived one summer, or one-year-old snow).
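
The differencing step itself is simple; the sketch below shows it with rasterio, where the file names are placeholders and the negative-value masking rule is an assumed simplification, not the paper's procedure.

```python
import numpy as np
import rasterio

# Hypothetical co-registered DEMs from the start and end of winter.
with rasterio.open("dem_onset_winter.tif") as src:
    dem_start = src.read(1).astype(float)
with rasterio.open("dem_end_winter.tif") as src:
    dem_end = src.read(1).astype(float)

# Geodetic differencing: elevation gain over winter ~ accumulated snow depth.
snow_depth = dem_end - dem_start
snow_depth[snow_depth < 0] = np.nan  # mask ablation/noise; threshold is a choice

print("mean winter snow depth (m):", np.nanmean(snow_depth))
```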

Keywords: Digital elevation model, snow depth, geodetic method, snow cover.

915 Image Haze Removal Using Scene Depth Based Spatially Varying Atmospheric Light in Haar Lifting Wavelet Domain

Authors: Prabh Preet Singh, Harpreet Kaur

Abstract:

This paper presents a method for single-image dehazing based on the dark channel prior (DCP). The property that the intensity of the dark channel gives an approximate thickness of the haze is used to estimate the transmission and the atmospheric light. Instead of a constant atmospheric light, the proposed method employs scene depth to estimate a spatially varying atmospheric light, as truly occurs in nature. The haze imaging model, together with the soft matting method, is used in this work to produce a high-quality haze-free image. Experimental results demonstrate that the proposed approach produces better results than the classic DCP approach, as the color fidelity and contrast of the haze-free image are improved and no over-saturation is observed in the sky region. Further, the lifting Haar wavelet transform is employed to reduce the overall execution time by a factor of two to three compared with the conventional approach.
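
For orientation, the sketch below computes the dark channel and the classic DCP transmission estimate that the method starts from; the patch size, omega, the input file name, and the constant airlight are assumptions, and the paper's depth-based atmospheric light and soft matting steps are not reproduced.

```python
import numpy as np
import cv2

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)  # erosion acts as a min-filter

def transmission(img, atmos, omega=0.95, patch=15):
    """DCP transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    normalized = img.astype(float) / atmos  # atmos: per-channel airlight
    return 1.0 - omega * dark_channel(normalized, patch)

img = cv2.imread("hazy.jpg")                  # hypothetical input image
atmos = np.array([200.0, 210.0, 220.0])       # constant airlight, illustration only
t = transmission(img, atmos)                  # per-pixel transmission map
```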

Keywords: Depth based atmospheric light, dark channel prior, lifting wavelet.

914 Conjugate Heat Transfer Analysis of a Combustion Chamber using ANSYS Computational Fluid Dynamics to Estimate the Thermocouple Positioning in a Chamber Wall

Authors: Muzna Tariq, Ihtzaz Qamar

Abstract:

In most engineering cases, the working temperatures inside a combustion chamber are high enough to lie beyond the operational range of thermocouples. Furthermore, design and manufacturing limitations restrict the use of internal thermocouples in many applications. Heat transfer inside a combustion chamber is caused by the interaction of the post-combustion hot fluid with the chamber wall. Heat transfer that involves an interaction between a fluid and a solid is categorized as Conjugate Heat Transfer (CHT). Therefore, a CHT analysis is performed using the ANSYS CFD tool to estimate theoretically precise thermocouple positions in the combustion chamber wall where excessive temperatures (beyond the thermocouple range) can be avoided. In accordance with these Computational Fluid Dynamics (CFD) results, a combustion chamber is designed, and a prototype is manufactured with multiple thermocouple ports positioned at the specified distances, so that the temperature of the hot gases can be measured at the chamber wall where the temperatures do not exceed the thermocouple working range.
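
The paper's positioning comes from a full CFD solution; as a back-of-envelope companion only, the sketch below uses steady one-dimensional conduction through the wall to find the depth at which the temperature falls below a thermocouple's limit. All material and boundary values are invented for illustration.

```python
# Steady 1-D conduction through a flat wall: T(x) = T_hot - (T_hot - T_cold) * x / L.
# Solve for the smallest depth x where T(x) drops below the thermocouple limit.
T_hot, T_cold = 2600.0, 400.0   # gas-side / outer-wall temperatures, K (assumed)
L = 0.012                       # wall thickness, m (assumed)
T_limit = 1500.0                # max service temperature of the thermocouple, K

x_min = L * (T_hot - T_limit) / (T_hot - T_cold)
print(f"place thermocouple deeper than {x_min * 1e3:.2f} mm from the hot face")
```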

Keywords: Computational Fluid Dynamics, CFD, conduction, conjugate heat transfer, CHT, convection, fluid flow, thermocouples.

913 Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft

Authors: A. Maddi, A. Guessoum, D. Berkani

Abstract:

The purpose of this paper is to provide a practical example of the Linear Quadratic Gaussian (LQG) controller. The method includes a description and some discussion of the discrete Kalman state estimator. One aspect of its optimality is that the estimator incorporates all information that can be provided to it. It processes all available measurements, regardless of their precision, to estimate the current value of the variables of interest, using knowledge of the system and measurement device dynamics, the statistical description of the system noises, measurement errors, and uncertainty in the dynamics models. Since the time of its introduction, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. For example, to determine the velocity of an aircraft or its sideslip angle, one could use a Doppler radar, the velocity indications of an inertial navigation system, or the relative wind information in the air data system. Rather than ignore any of these outputs, a Kalman filter can be built to combine all of this data with knowledge of the various systems' dynamics to generate an overall best estimate of velocity and sideslip angle.
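
A minimal discrete Kalman predict/update cycle fusing two velocity sensors is sketched below; the model matrices and noise levels are assumptions for illustration, not the paper's aircraft model.

```python
import numpy as np

# State: [velocity]; two sensors (e.g. Doppler radar and INS) both observe it.
F = np.array([[1.0]])                  # state transition (constant velocity)
H = np.array([[1.0], [1.0]])           # both measurements see the velocity
Q = np.array([[0.01]])                 # process noise covariance (assumed)
R = np.diag([4.0, 1.0])                # radar noisier than INS (assumed)

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: each sensor is weighted by its precision via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(1) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([[0.0]])                  # initial velocity estimate
P = np.array([[100.0]])                # initial estimate covariance
z = np.array([[101.3], [100.1]])       # radar and INS velocity readings, m/s
x, P = kalman_step(x, P, z)
print(float(x[0, 0]))                  # fused estimate, pulled toward the INS
```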

Keywords: Aircraft motion, Kalman filter, LQG control, Lateral stability, State estimator.

912 Text Mining Technique for Data Mining Application

Authors: M. Govindarajan

Abstract:

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful for classification problems: with this technique, a tree is constructed to model the classification process, and there are two basic steps, building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that employs rulesets, cross-validation, and boosting over the original C5.0 in order to reduce the error ratio. The feasibility and the benefits of the proposed approach are demonstrated by means of a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of accuracy; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation, in which the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
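
C5.0/See5 is proprietary; as a hedged stand-in, the sketch below reproduces the same evaluation pattern (k-fold cross-validation plus 10-classifier boosting of a decision tree) with recent scikit-learn. The built-in dataset is a placeholder, not the paper's hypothyroid files.

```python
from sklearn.datasets import load_breast_cancer  # stand-in for hypothyroid data
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Single decision tree vs. 10-classifier boosting, each scored by 10-fold CV,
# so every case is evaluated by a model that never saw it during training.
tree = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(estimator=tree, n_estimators=10, random_state=0)

for name, clf in [("tree", tree), ("boosted x10", boosted)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: error rate = {1 - scores.mean():.3f}")
```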

Keywords: C5.0, Error Ratio, text mining, training data, test data.

911 Development of a Computerized Cooling Demand Estimation Program

Authors: Bobby Anak John, Zamri Noranai, Md. Norrizam Mohmad Jaat, Hamidon Salleh, Mohammad Zainal Md Yusof

Abstract:

Air conditioning is mainly used as a human comfort cooling medium, and it is used most in high-temperature countries such as Malaysia. Proper estimation of the cooling load will achieve the ideal temperature, while improper estimation leads to over-estimation or under-estimation. This study develops a program to calculate an ideal cooling load demand that matches the heat gain, so that the cooling load can be estimated easily. The objective is to develop a user-friendly, easily accessible cooling load program, to ensure that the cooling load can be estimated by any individual rather than by rule of thumb. The software was developed using the MATLAB GUI, and the development is valid only for common buildings in Malaysia. An office building was selected as a case study to verify the applicability and accuracy of the developed software. In conclusion, the main objective was achieved: the developed software is user-friendly and easily estimates the cooling load demand.
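
A toy heat-gain balance of the kind such a program computes is sketched below; every coefficient and area is an assumed illustration, not data from the paper.

```python
# Simplified cooling load: sum of envelope conduction gains plus internal gains.
# Q = U * A * dT per surface, plus per-person and equipment allowances.
surfaces = [                      # (name, U-value W/m^2K, area m^2) - assumed
    ("roof",    0.6, 120.0),
    ("walls",   1.8, 210.0),
    ("glazing", 5.7, 40.0),
]
dT = 33.0 - 24.0                  # outdoor minus indoor design temperature, K
occupants, q_person = 25, 120.0   # heat gain per person, W (assumed)
equipment = 3500.0                # lighting and office equipment, W (assumed)

envelope = sum(u * a * dT for _, u, a in surfaces)
total = envelope + occupants * q_person + equipment
print(f"estimated cooling load: {total / 1000:.1f} kW")
```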

Keywords: Cooling Load, Heat Gain, Building and GUI.

910 Data Oriented Model of Image as a Framework for Image Processing

Authors: A. Habibizad Navin, A. Sadighi, M. Naghian Fesharaki, M. Mirnia, M. Teshnelab, R. Keshmiri

Abstract:

This paper presents a new data-oriented model of image, and a representation of it, ADBT, is introduced. ADBT supports clustering, segmentation, measuring the similarity of images, and other tasks, with the desired precision and corresponding speed.

Keywords: Data oriented modelling, image, clustering, segmentation, classification, ADBT and image processing.

909 Examination of Flood Runoff Reproducibility for Different Rainfall Sources in Central Vietnam

Authors: Do Hoai Nam, Keiko Udo, Akira Mano

Abstract:

This paper presents the combination of different precipitation data sets with a distributed hydrological model in order to examine the flood runoff reproducibility for catchments with scattered observations. The precipitation data sets were obtained from rain-gage observations, satellite-based estimates (TRMM), and a numerical weather prediction model (NWP), and were then coupled with the super tank model. The case study was conducted in three basins (small, medium, and large) located in Central Vietnam. Calculated hydrographs based on ground-observed rainfall showed the best fit to the measured stream flow, while those obtained from TRMM and NWP showed high uncertainty in the peak discharges. However, hydrographs calculated using the adjusted rain field depict a promising alternative for applying TRMM and NWP to flood modeling in such catchments, especially for extending the forecast lead time.

Keywords: Flood forecast, rainfall-runoff model, satellite rainfall estimate, numerical weather prediction, quantitative precipitation forecasting.

908 Effects of the Second Entrant in GSM Telecommunication Market in MENA Region

Authors: A.R. Yari, M.R. Sadri

Abstract:

For the first incumbent operator, it is very important to understand how to react when a second operator enters the market. In this paper, prepared as a preliminary study of the GSM market in Iran, we have studied five MENA markets selected from the point of view of their similarity. The paper aims at analyzing the impact of second entrants in the selected markets on certain marketing key performance indicators (KPIs), such as market share (by operator), prepaid share, minutes of use (MoU), price, and average revenue per user (ARPU), each for the total market.

Keywords: GSM Market, Second entrant, MENA.

907 Estimating the Mean Parameter of the Normal Distribution by Maximum Likelihood, Bayes, and Markov Chain Monte Carlo Methods

Authors: Autcha Araveeporn

Abstract:

This paper compares the estimation of the mean parameter of the normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data, the Bayes estimator is derived from the prior distribution, and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After the methods estimate the parameter, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variance of 4, 9, and 16, with sample sizes of 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 with variance 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean with variance 9 at sample sizes 10 and 20.
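
A sketch of the three estimators under the conjugate normal model follows; the prior mirrors the abstract (mean 1, variance 12), while the inverse-gamma prior on the variance and all other settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_mu, true_var = 20, 2.0, 16.0
x = rng.normal(true_mu, np.sqrt(true_var), size=n)

# 1) Maximum likelihood: the sample mean.
mu_ml = x.mean()

# 2) Bayes (conjugate, known variance): prior mu ~ N(1, 12) as in the abstract.
mu0, tau0_sq, sigma_sq = 1.0, 12.0, true_var
prec = 1.0 / tau0_sq + n / sigma_sq
mu_bayes = (mu0 / tau0_sq + x.sum() / sigma_sq) / prec

# 3) MCMC: Gibbs sampling with both mean and variance treated as unknown.
a0, b0 = 2.0, 2.0                 # inverse-gamma prior on the variance (assumed)
mu, s2, draws = 0.0, 1.0, []
for it in range(5000):
    p = 1.0 / tau0_sq + n / s2    # mu | s2, data ~ Normal
    m = (mu0 / tau0_sq + x.sum() / s2) / p
    mu = rng.normal(m, np.sqrt(1.0 / p))
    a = a0 + n / 2.0              # s2 | mu, data ~ Inverse-Gamma
    b = b0 + 0.5 * np.sum((x - mu) ** 2)
    s2 = 1.0 / rng.gamma(a, 1.0 / b)
    if it >= 1000:                # discard burn-in
        draws.append(mu)
mu_mcmc = np.mean(draws)

print(f"ML {mu_ml:.3f}  Bayes {mu_bayes:.3f}  MCMC {mu_mcmc:.3f}")
```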

Keywords: Bayes method, Markov Chain Monte Carlo method, Maximum Likelihood method, normal distribution.

906 A Local Invariant Generalized Hough Transform Method for Integrated Circuit Visual Positioning

Authors: Fei Long Wei, Hua Yang, Hai Tao Zhang, Zhou Ping Yin

Abstract:

In this study, a local invariant generalized Hough transform (LI-GHT) method is proposed for integrated circuit (IC) visual positioning. The original generalized Hough transform (GHT) is robust to external noise; however, it is not suitable for visual positioning of IC chips, because the four-dimensional (4D) parameter space leads to substantial storage requirements and high computational complexity. The proposed LI-GHT method can reduce the dimensionality of the parameter space to 2D thanks to the rotational invariance of the local invariant geometric feature, and it can estimate the position and rotation angle of IC chips accurately and in real time under noise and blur. The experimental results show that the proposed LI-GHT estimates the position and rotation angle of IC chips with high accuracy and speed. The proposed LI-GHT algorithm was implemented in the IC visual positioning system of radio frequency identification (RFID) packaging equipment.

Keywords: Integrated Circuit Visual Positioning, Generalized Hough Transform, Local Invariant Generalized Hough Transform, IC packing equipment.

905 Different Views and Evaluations of IT Artifacts

Authors: Sameh Al-Natour, Izak Benbasat

Abstract:

The introduction of a multitude of new and interactive e-commerce information technology (IT) artifacts has impacted adoption research. Rather than solely functioning as productivity tools, new IT artifacts assume the roles of interaction mediators and social actors. This paper describes the varying roles assumed by IT artifacts, and proposes and distinguishes between four distinct foci of how the artifacts are evaluated. It further proposes a theoretical model that maps the different views of IT artifacts to four distinct types of evaluations.

Keywords: IT adoption, IT artifacts, similarity, social actor.

904 Developing a Regulator for Improving the Operation Modes of the Electrical Drive Motor

Authors: Baghdasaryan Marinka

Abstract:

The operation modes of the synchronous motors used in production processes are greatly conditioned by randomly changing technological and power indices. As a result, the electrical drive synchronous motor may end up in irregular operation regimes. Although there are numerous works devoted to the development of regulators for synchronous motor operation modes, their application to motors working in irregular modes is not expedient. In this work, to assess the issues concerning the stability of the synchronous electrical drive system, the transfer functions of electrical drive synchronous motors operating in the synchronous and induction modes have been obtained. For that purpose, a model for investigating the frequency characteristics has been developed in the LabVIEW environment. Frequency characteristics for assessing the transient process of the electrical drive system operating in the synchronous and induction modes have been obtained, and based on their assessment, a regulator for improving the operation modes of the motor has been proposed. The proposed regulator can be successfully used to prevent the irregular modes of the electrical drive synchronous motor, as well as to estimate the operation state of the drive motor of a mechanism with a changing load.

Keywords: Electrical drive system, synchronous motor, regulator, stability, transition process.

903 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them, and highlights their significance. The paper uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance, and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape, focusing on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in townscape by calculating their variation using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool, currently under development, which can alter the current understanding of architectural form and its application in urban space. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
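
A toy version of the variation measurements described above is sketched below; the vertex counts, the RGB values, and the use of plain Euclidean colour distance (rather than a perceptual colour-difference formula) are all assumptions for illustration.

```python
import numpy as np

# Invented scores for adjacent facades along a street.
vertex_counts = np.array([120, 130, 125, 400, 128])   # one outlier facade
colours = np.array([[200, 180, 150],                  # RGB per facade (assumed)
                    [195, 178, 148],
                    [60,  90, 200],
                    [198, 182, 152]])

# Geometry consistency: lower spread in vertex counts = more consistent forms.
geometry_spread = np.std(vertex_counts)

# Colour consistency: Euclidean distance between adjacent facades' colours.
adjacent_colour_dist = np.linalg.norm(np.diff(colours, axis=0), axis=1)

print(f"vertex-count spread: {geometry_spread:.1f}")
print("adjacent colour distances:", np.round(adjacent_colour_dist, 1))
```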

Keywords: Townscape, Urban Design, Visual Assessment, Visual Consistency.

902 Comparison of ANFIS and ANN for Estimation of Biochemical Oxygen Demand Parameter in Surface Water

Authors: S. Areerachakul

Abstract:

Nowadays, several techniques, such as the Fuzzy Inference System (FIS) and the Neural Network (NN), are employed for developing predictive models to estimate parameters of water quality. The main objective of this study is to compare the predictive ability of the Adaptive Neuro-Fuzzy Inference System (ANFIS) model and the Artificial Neural Network (ANN) model to estimate the Biochemical Oxygen Demand (BOD) on data from 11 sampling sites of the Saen Saep canal in Bangkok, Thailand. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2004-2011. Five water quality parameters, namely Dissolved Oxygen (DO), Chemical Oxygen Demand (COD), Ammonia Nitrogen (NH3N), Nitrate Nitrogen (NO3N), and Total Coliform bacteria (T-coliform), are used as the inputs of the models; these water quality indices affect the biochemical oxygen demand. The experimental results indicate that the ANN model provides a higher correlation coefficient (R = 0.73) and a lower root mean square error (RMSE = 4.53) than the corresponding ANFIS model.
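
A sketch of the ANN half of the comparison only: scikit-learn's MLPRegressor serves as a generic stand-in (ANFIS has no standard scikit-learn implementation), with R and RMSE computed as in the abstract. The synthetic data stands in for the five water-quality inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))   # stand-ins for DO, COD, NH3N, NO3N, T-coliform
y = X @ np.array([1.5, 0.8, 0.5, 0.3, 0.2]) + rng.normal(0.5, 1.0, 400)  # "BOD"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
pred = ann.fit(X_tr, y_tr).predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]              # correlation coefficient R
rmse = np.sqrt(np.mean((y_te - pred) ** 2))    # root mean square error
print(f"R = {r:.2f}, RMSE = {rmse:.2f}")
```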

Keywords: adaptive neuro-fuzzy inference system, artificial neural network, biochemical oxygen demand, surface water.

901 Probabilistic Modeling of Network-induced Delays in Networked Control Systems

Authors: Manoj Kumar, A.K. Verma, A. Srividya

Abstract:

Time-varying network-induced delays in networked control systems (NCS) are known to degrade the control system's quality of performance (QoP) and to cause stability problems. In the literature, a control method employing modeling of communication delays as a probability distribution proves to be a better method. This paper focuses on modeling network-induced delays as probability distributions. CAN and MIL-STD-1553B are extensively used to carry periodic control and monitoring data in networked control systems, but the literature offers methods to estimate only the worst-case delays for these networks. In this paper, probabilistic network delay models for CAN and MIL-STD-1553B networks are given, along with a systematic method to estimate the model parameters from network parameters. A method to predict the network delay in the next cycle, based on the present network delay, is presented. The effects of active network redundancy, and of redundancy at the node level, on network delay and system response time are also analyzed.
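
The paper's CAN/MIL-STD-1553B delay models are not reproduced here; as a generic sketch of the modeling step only, the code below fits a gamma distribution to observed delays and reads off a high quantile. The distribution family and the synthetic data are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
delays_ms = rng.gamma(shape=3.0, scale=0.4, size=2000)  # synthetic measured delays

# Fit a gamma distribution to the observed delays (location pinned at zero).
shape, loc, scale = stats.gamma.fit(delays_ms, floc=0.0)
model = stats.gamma(shape, loc=loc, scale=scale)

# A probabilistic model yields more than the worst case: e.g. a delay bound
# that holds with 99.9% probability, useful for QoP and stability analysis.
print(f"99.9th-percentile delay: {model.ppf(0.999):.2f} ms")
print(f"observed maximum:        {delays_ms.max():.2f} ms")
```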

Keywords: NCS (networked control system), delay analysis, response-time distribution, worst-case delay, CAN, MIL-STD-1553B, redundancy

900 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model

Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed

Abstract:

Recently, Electric Vehicles (EVs) have received extensive consideration, since they offer a more sustainable and greener transportation alternative compared to fossil-fuel propelled vehicles. Lithium-Ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and the diffusion coefficient, change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate this change. In this paper, an extensive battery aging study has been conducted over a 12-month period on 5.4 Ah, 3.7 V lithium polymer cells. Instead of using fixed charging/discharging aging cycles at a fixed C-rate, a set of real-world driving scenarios has been used to age the cells. The test has been interrupted at every 5% of capacity degradation by a set of reference performance tests to assess the battery degradation and track the model parameters. As the battery ages, the combined model parameters are optimized and tracked in an offline mode over the batteries' entire lifespan. Based on the optimized model, a state and parameter estimation strategy based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) has been applied to estimate the SOC at various states of life.
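
A minimal sketch of the estimation stage follows: a one-state extended Kalman filter tracking SOC by coulomb counting, with a linearized open-circuit-voltage measurement. The paper's combined model and SVSF are richer than this, and apart from the 5.4 Ah capacity, every number below is an assumption.

```python
import numpy as np

Q_cell = 5.4 * 3600.0            # capacity in coulombs (5.4 Ah, as in the paper)
R0, dt = 0.05, 1.0               # ohmic resistance (assumed), time step (s)

def ocv(s):                      # assumed open-circuit-voltage curve
    return 3.2 + 0.7 * s + 0.3 * s * s

def docv(s):                     # its derivative w.r.t. SOC (for linearization)
    return 0.7 + 0.6 * s

soc, P = 0.5, 0.1                # initial SOC guess and variance
q, r = 1e-7, 1e-3                # process / measurement noise (assumed)

rng = np.random.default_rng(0)
true_soc = 0.8
for k in range(600):
    i = 2.0                      # discharge current, A (assumed constant)
    true_soc -= i * dt / Q_cell
    v_meas = ocv(true_soc) - R0 * i + rng.normal(0, 0.02)

    # EKF predict: coulomb counting
    soc -= i * dt / Q_cell
    P += q
    # EKF update: linearize the OCV curve at the predicted SOC
    H = docv(soc)
    K = P * H / (H * P * H + r)
    soc += K * (v_meas - (ocv(soc) - R0 * i))
    P *= (1 - K * H)

print(f"estimated SOC: {soc:.3f}, true SOC: {true_soc:.3f}")
```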

Keywords: Lithium-Ion batteries, genetic algorithm optimization, battery aging test, and parameter identification.

899 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed, but they are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM), or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available, and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM, or computational mechanics are employed.
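
A toy two-point (Rosenblueth-style) PEM on an invented limit state g = R - S is sketched below, with the resulting moments fitted to a normal PDF to obtain a FORM-style reliability index, plus an MCS check. All numeric values are illustrative, not bridge data from the paper.

```python
import numpy as np
from itertools import product
from scipy.stats import norm

# Invented limit state: g = R - S (capacity minus demand); failure when g < 0.
mu = np.array([10.0, 6.0])       # means of R and S (assumed)
sd = np.array([1.5, 1.0])        # standard deviations (assumed)
g = lambda r, s: r - s

# Rosenblueth point estimate method: evaluate g at mu +/- sd for each variable
# (2^n points, equal weights for uncorrelated inputs) to get its moments.
vals = [g(mu[0] + a * sd[0], mu[1] + b * sd[1])
        for a, b in product((-1, 1), repeat=2)]
mu_g = np.mean(vals)
sd_g = np.sqrt(np.mean(np.square(vals)) - mu_g ** 2)

# Fit the moments to a normal PDF (the "well-known distribution" step),
# then read off the reliability index and failure probability.
beta = mu_g / sd_g
print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")

# Monte Carlo check.
rng = np.random.default_rng(0)
r, s = rng.normal(mu[0], sd[0], 10**6), rng.normal(mu[1], sd[1], 10**6)
print(f"MCS Pf = {(g(r, s) < 0).mean():.2e}")
```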

Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.

898 Fast Algorithm of Shot Cut Detection

Authors: Lenka Krulikovská, Jaroslav Polec, Tomáš Hirner

Abstract:

In this paper, we present a novel method which reduces the computational complexity of abrupt cut detection. We propose a fast algorithm in which the similarity of frames within a defined step is evaluated instead of comparing successive frames. Based on the results of simulation on a large video collection, the proposed fast algorithm is able to achieve an 80% reduction in the number of frame comparisons compared with currently used methods, without degradation of the shot cut detection accuracy.
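
A sketch of the step-skipping idea with the Pearson correlation coefficient as the frame-similarity measure (as in the keywords) follows; the step size, threshold, and grayscale-frame representation are assumptions.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two grayscale frames (flattened)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def detect_cuts(frames, step=5, threshold=0.7):
    """Compare frames `step` apart; only on a miss, scan inside the window."""
    cuts = []
    i = 0
    while i + step < len(frames):
        if pearson(frames[i], frames[i + step]) < threshold:
            # A cut lies somewhere in (i, i+step]; localize with successive pairs.
            for j in range(i + 1, i + step + 1):
                if pearson(frames[j - 1], frames[j]) < threshold:
                    cuts.append(j)
                    break
        i += step
    return cuts

# Synthetic clip: two static scenes plus small noise, abrupt cut at frame 40.
rng = np.random.default_rng(0)
scene1, scene2 = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))
clip = [scene1 + rng.normal(0, 0.1, (32, 32)) for _ in range(40)] + \
       [scene2 + rng.normal(0, 0.1, (32, 32)) for _ in range(40)]
print(detect_cuts(clip))  # expected: [40]
```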

Keywords: Abrupt cut, fast algorithm, shot cut detection, Pearson correlation coefficient.

897 A Robust and Adaptive Unscented Kalman Filter for the Air Fine Alignment of the Strapdown Inertial Navigation System/GPS

Authors: Jian Shi, Baoguo Yu, Haonan Jia, Meng Liu, Ping Huang

Abstract:

Adapting to the flexibility of modern warfare, a large number of guided weapons are launched from aircraft, so the inertial navigation system loaded in the weapon needs to undergo an alignment process in the air. This article proposes the following methods for the problems of inaccurate system modeling under large misalignment angles, reduced filtering accuracy caused by outliers, and noise changes in GPS signals: first, considering the large misalignment errors of the Strapdown Inertial Navigation System (SINS)/GPS, a more accurate model is built rather than making a small-angle approximation, and the Unscented Kalman Filter (UKF) algorithm is used to estimate the state; then, taking into account the impact of GPS noise changes on the fine alignment algorithm, an innovation-based adaptive filtering algorithm is introduced to estimate the GPS noise in real time; at the same time, in order to improve the anti-interference ability of the air fine alignment algorithm, a robust filtering algorithm based on outlier detection is combined with the air fine alignment algorithm to improve its robustness. The algorithm improves the alignment accuracy and robustness under interference conditions, which is verified by simulation.

Keywords: Air alignment, fine alignment, inertial navigation system, integrated navigation system, UKF.

896 The Highest Art Tasks of the World and Humans Transforming

Authors: K. Khalykov, G. Begalinova

Abstract:

In the given article, creative art in the modern era is investigated from the aspect of the artistic interrelationship created by the character of the author's personality and by the viewer. A study of identity formation, defining its uniqueness, unity, and similarity as a global issue of the XXI century, has been conducted by analyzing the definitions which characterize human nature in the arts. Spiritual universality and human existence have been considered in the art system through the human as creator, as hero, and as the recipient character, alongside worldwide cultural and historical processes.

Keywords: author, being, creative function of art, recipient and cultural contexts.

895 Optimization of the Characteristic Straight Line Method by a "Best Estimate" of Observed, Normal Orthometric Elevation Differences

Authors: Mahmoud M. S. Albattah

Abstract:

In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the "Characteristic Straight Line Method", whose characteristic components have been constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available, and observational procedures have been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small magnitude, even when the displacement is of an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line", and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported, with monitoring under different options and qualitative comparison of results based on a sufficient number of check points.
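
The inclination-and-bearing rule stated above can be made concrete with a small sketch; the line coefficients, the reference point, and the axis convention (x east, y north) are invented assumptions, not the paper's data.

```python
import numpy as np

# Characteristic straight line a*x + b*y + c = 0 and reference point O (assumed).
a, b, c = 0.6, -0.8, 2.0
O = np.array([0.0, 0.0])

d = abs(a * O[0] + b * O[1] + c) / np.hypot(a, b)   # distance from O to the line
inclination = 1.0 / d                               # as defined in the abstract
normal = np.array([a, b]) / np.hypot(a, b)          # direction of the normal
bearing = np.degrees(np.arctan2(normal[0], normal[1])) % 360  # from north
print(f"inclination {inclination:.3f}, bearing {bearing:.1f} deg")
```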

Keywords: Characteristic straight line method, dynamic height, landslides, orthometric height, systematic errors.

894 Heat Transfer of an Impinging Jet on a Plane Surface

Authors: Jian-Jun Shu

Abstract:

A cold, thin film of liquid impinging on an isothermal hot, horizontal surface has been investigated. An approximate solution for the velocity and temperature distributions in the flow along the horizontal surface is developed, which exploits the hydrodynamic similarity solution for thin film flow. The approximate solution may provide a valuable basis for assessing flow and heat transfer in more complex settings.

Keywords: Flux, free impinging jet, solid-surface, uniform wall temperature.

893 Deoiling Hydrocyclones Flow Field-A Comparison between k-Epsilon and LES

Authors: Maysam Saidi, Reza Maddahian, Bijan Farhanieh

Abstract:

In this research, a comparison between the k-epsilon and LES models for a deoiling hydrocyclone is conducted. The flow field of the hydrocyclone is obtained by three-dimensional simulations with the OpenFOAM code, and the predictive potential of both methods for this complex swirl flow is discussed. The large eddy simulation results agree more closely with experiment; its results are presented in figures for different hydrocyclone cross sections.

Keywords: Deoiling hydrocyclones, k-epsilon model, Large eddy simulation, OpenFOAM

892 A Renovated Cook's Distance Based on the Buckley-James Estimate in Censored Regression

Authors: Nazrina Aziz, Dong Q. Wang

Abstract:

Various methods based on regression ideas have been created to resolve the problem of data sets containing censored observations, e.g. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show that the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for it thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*i, and has been developed based on Cook's idea. The renovated Cook's distance has advantages (depending on the analyst's demand) over (i) the change in the fitted value for a single case, DFIT*i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*i does), and (ii) the change in the estimate of the coefficients when the ith case is deleted, DBETA*i, since DBETA*i corresponds to the number of variables p, so it is usually easier to look at a diagnostic measure such as RD*i, where information from the p variables can be considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
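
For context, the sketch below computes the classic (uncensored) Cook's distance on which RD*i builds; the censored Buckley-James version from the paper is not reproduced, and the data are synthetic with one planted outlier.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)
y[7] += 8.0                                   # plant one influential outlier

# Classic Cook's distance: D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)
e = y - H @ y                                 # residuals
s2 = e @ e / (n - p)                          # residual variance estimate
D = e**2 * h / (p * s2 * (1 - h) ** 2)

print("most influential case:", int(np.argmax(D)), "D =", round(D.max(), 2))
```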

Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.

891 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a Likelihood Ratio (LR) quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set will be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented, so as not to obtain an incorrect estimate that would be used to deliver a wrong judgment in a court of law. The estimate of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE), for this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
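
A sketch of score-based LR computation with the two estimator families named in the paper follows: kernel density estimation of the same-source and different-source score distributions, and logistic regression on the scores. The synthetic scores, bandwidth, and balanced-class assumption are placeholders, not the paper's handwriting data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Synthetic similarity scores: same-writer pairs score higher on average.
same = rng.normal(0.8, 0.10, 300)    # prosecution hypothesis (same source)
diff = rng.normal(0.4, 0.15, 300)    # defense hypothesis (different source)

score = 0.7                          # score of the questioned comparison

# KDE estimator: LR = f_same(score) / f_diff(score)
kde_s = KernelDensity(bandwidth=0.05).fit(same.reshape(-1, 1))
kde_d = KernelDensity(bandwidth=0.05).fit(diff.reshape(-1, 1))
lr_kde = np.exp(kde_s.score_samples([[score]])
                - kde_d.score_samples([[score]]))[0]

# Logistic regression estimator: log-odds of "same source" given the score,
# which equals the log LR when the training classes are balanced (as here).
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.concatenate([np.ones_like(same), np.zeros_like(diff)])
lor = LogisticRegression().fit(X, y)
lr_lor = np.exp(lor.decision_function([[score]])[0])

print(f"LR(KDE) = {lr_kde:.2f}, LR(LoR) = {lr_lor:.2f}")
```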

Keywords: Logistic Regression LoR, Kernel Density Estimator KDE, Handwriting, Confidence Interval, Repeatability, Reproducibility.
