Search results for: a posteriori error estimate.
421 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques
Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa
Abstract:
In this paper, a combined feature selection method is proposed that takes advantage of sample-domain filtering, resampling, and feature-subset evaluation to reduce the dimensionality of huge datasets and select reliable features. The method exploits both the feature space and the sample domain to improve the feature selection process, and it uses a combination of Chi-squared and Consistency attribute evaluation to seek reliable features. It consists of two phases. The first phase filters and resamples the sample domain; the second phase adopts a hybrid procedure to find the optimal feature space by applying Chi-squared and Consistency subset evaluation together with genetic search. Experiments on datasets of various sizes from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree and JRip) improves simultaneously and that their classification error decreases considerably. The experiments also show that this method outperforms other feature selection methods.
Keywords: feature selection, resampling, reliable features, Consistency Subset Evaluation.
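As a rough illustration of the two-phase idea, the following sketch resamples the sample domain and then scores features with the Chi-squared statistic. It assumes scikit-learn; the bootstrap resampling, the generic dataset and the fixed k are illustrative stand-ins for the paper's exact filtering, Consistency evaluation and genetic search.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)

# Phase 1: filter/resample the sample domain (here: a simple bootstrap).
X_res, y_res = resample(X, y, n_samples=len(y) // 2, random_state=0)

# Phase 2: score features with Chi-squared (requires non-negative inputs,
# so shift each feature defensively before scoring).
X_pos = X_res - X_res.min(axis=0)
selector = SelectKBest(chi2, k=10).fit(X_pos, y_res)
selected = np.flatnonzero(selector.get_support())
print("Chi-squared selected features:", selected)
```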
420 A Review and Comparative Analysis on Cluster Ensemble Methods
Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha
Abstract:
Clustering is an unsupervised learning technique for aggregating data objects into meaningful classes so that intra-cluster similarity is maximized and inter-cluster similarity is minimized. However, no single clustering algorithm proves the most effective at producing the best result, and the cluster ensemble approach has emerged as a challenging new technique to address this problem. The main goal of a cluster ensemble is to combine similar clustering solutions in a way that achieves high precision while also improving on the quality of the individual data clusterings. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. The community of clustering practitioners will benefit from this exploratory research, which aids in determining the most appropriate solution to the problem at hand.
Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.
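One widely used consensus function is the co-association (evidence accumulation) matrix. The sketch below is a minimal illustration, assuming scikit-learn (>= 1.2 for the metric argument); the base clusterings and the final average-linkage step are generic choices, not a specific method from this survey.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
n = len(X)

# Build a co-association matrix from several base clusterings:
# entry (i, j) counts how often samples i and j share a cluster.
coassoc = np.zeros((n, n))
for seed in range(10):
    labels = KMeans(n_clusters=3, n_init=5, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= 10.0

# Consensus clustering: average-linkage on the co-association "distance".
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - coassoc)
print(np.bincount(consensus))  # consensus cluster sizes
```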
419 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery
Authors: Seema Biday, Udhav Bhosle
Abstract:
Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time; that is, the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction.
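The core of such empirical normalization is a per-band linear fit on the no-change pixels. A minimal sketch, assuming the invariant pixel subset has already been identified (here it is simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(50, 200, size=5000)              # reference-date band
subject = 0.9 * reference + 12 + rng.normal(0, 2, 5000)  # subject-date band

# Fit gain/offset on the (assumed) invariant pixel subset ...
gain, offset = np.polyfit(subject, reference, deg=1)

# ... then map the whole subject band onto the reference radiometry.
normalized = gain * subject + offset
mse = np.mean((normalized - reference) ** 2)
print(f"gain={gain:.3f} offset={offset:.2f} MSE={mse:.2f}")
```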
418 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA
Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem
Abstract:
A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with less system complexity and a shorter Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences. We treat the scrambling code group information as data bits and use simple time-diversity BCH coding for further reliability, which avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. The new algorithm reduces system complexity, shortens the average cell-search time, and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelined correlation processing, it also lends itself to multiple-antenna systems.
Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.
417 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division
Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi
Abstract:
In this paper, we propose an improved fast search algorithm that combines histogram features with a temporal division method to retrieve short MPEG video clips from a large video database. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which has previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The second is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms at an Equal Error Rate (EER) of 1%, which is more accurate and robust than the conventional fast video search algorithm.
Keywords: Fast search, Adjacent pixel intensity difference quantization (APIDQ), DC image, Histogram feature.
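A minimal sketch of a histogram feature built from quantized adjacent-pixel intensity differences follows; the horizontal-difference scheme and bin count are illustrative assumptions, not the exact APIDQ design.

```python
import numpy as np

def apidq_like_histogram(frame: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Normalized histogram of horizontally adjacent pixel intensity differences."""
    diff = frame[:, 1:].astype(int) - frame[:, :-1].astype(int)
    hist, _ = np.histogram(diff, bins=n_bins, range=(-255, 255))
    return hist / hist.sum()  # normalize so frames of any size compare

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, (64, 64), dtype=np.uint8)
frame_b = rng.integers(0, 256, (64, 64), dtype=np.uint8)

# L1 distance between histograms as a cheap frame-similarity measure.
print(np.abs(apidq_like_histogram(frame_a) - apidq_like_histogram(frame_b)).sum())
```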
416 Adaptive Block State Update Method for Separating Background
Authors: Youngsuck Ji, Youngjoon Han, Hernsoo Hahn
Abstract:
In this paper, we propose a robust moving-object detection method for night street images subject to artificial lighting, based on a block-wise updated reference background model and block-state analysis. The experimental images are color video sequences acquired from a stationary camera. When artificial illumination such as a street light or sign light appears suddenly, the reference background model is updated with this information. Natural illumination changes gradually over time, whereas artificial illumination appears abruptly; therefore, to detect artificial illumination precisely, a two-stage process is used. The first stage compares the current image and the reference background block by block to identify the changed blocks. The second stage compares the edge map of the current image with the edge map of the reference background, making it possible to estimate the illumination in any block. This information enables accurate detection of objects and artificial illumination, and it produces a cleaner reference background. Each block is classified by block-state analysis into one of four states (transient, stationary, background, artificial illumination), whose characteristics are described in [1]. Experimental results show that the presented approach works well in the presence of illumination variance.
Keywords: Block-state, Edge component, Reference background, Artificial illumination.
415 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology
Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna
Abstract:
The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimizing controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure and plunger velocity. Moreover, selecting optimum process parameters also minimizes pressure die casting defects such as porosity, insufficient spread of molten material, flash, etc. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameter levels, have been accomplished by Taguchi's parameter design approach. The experiments have been performed according to the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed for the mean and the signal-to-noise ratio to estimate the percentage contribution of the different process parameters. A confidence interval has also been estimated at the 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. Overall, a 2.352% reduction in defects has been observed with the suggested optimum process parameters.
Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments
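For a defect-minimization response, Taguchi analysis uses the smaller-the-better signal-to-noise ratio, S/N = -10 log10(mean(y^2)). A minimal sketch with assumed replicate data, not the paper's L18 measurements:

```python
import numpy as np

def sn_smaller_is_better(y: np.ndarray) -> float:
    """S/N = -10 * log10(mean(y^2)) for a smaller-the-better response."""
    return -10.0 * np.log10(np.mean(np.square(y)))

trials = {
    "trial 1": np.array([2.1, 2.4, 2.0]),  # % defects, 3 replicates (assumed)
    "trial 2": np.array([1.2, 1.5, 1.1]),
}
for name, y in trials.items():
    print(name, round(sn_smaller_is_better(y), 2), "dB")  # higher is better
```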
414 Computer Aided Design of Reshaping Process of Circular Pipes into Square Pipes
Authors: Parviz Alinezhad, Ali Sanati, Koorosh Naser Momtahen
Abstract:
Square pipes (pipes with square cross sections) are used for various industrial purposes, such as machine structure components and housing/building elements. Their utilization is expanding rapidly and widely; hence, the output of these pipes is increasing and new application fields are continually developing. Due to various recent demands, the products have to satisfy difficult specifications with high dimensional accuracy. The design of the reshaping process for pipes with square cross sections, however, is performed by trial and error based on expert experience. In this paper, a computer-aided simulation is developed based on the 2-D elastic-plastic method with consideration of shear deformation to analyze the reshaping process. The effects of various parameters, such as the diameter of the circular pipe and the mechanical properties of the metal, on product dimensions and quality can be evaluated using this simulation. Moreover, the design of the reshaping process, including determination of the cross-section shrinkage, the necessary number of stands, the roll radii and the pipe height at each stand, is investigated. Further, it is shown that the results of the design method agree well with experimental results.
Keywords: Circular Pipes, Square Pipes, Shear Deformation, Reshaping Process, Numerical Simulation.
413 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse, uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple-grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple-grid system is demonstrated by illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface contain no random error, the proposed multigrid approach can yield more accurate inverse solutions; when the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: Conduction, inverse problems, conjugated gradient method, laser.
412 Simulation and Design of Single Fed Circularly Polarized Triangular Microstrip Antenna with Wide Band Tuning Stub
Authors: R. Irani, A. Ghavidel, F. Hodjat Kashani
Abstract:
Recently, several designs of single-fed circularly polarized microstrip antennas have been studied, but relatively few designs using a triangular microstrip antenna are available. Typical existing designs of single-fed circularly polarized triangular microstrip antennas use an equilateral triangular patch with a slit or a horizontal slot on the patch, or add a narrow-band stub on an edge or a vertex of the triangular patch. Using a narrow-band tuning stub at the middle of one edge of the triangle makes it easier to compensate for possible fabrication errors and substrate material variations by simply adjusting the stub length; the disadvantage of this method, however, is the very long stub (approximately 1/3 of the triangle edge length). In this paper, a wide-band stub is applied instead of the narrow-band stub. The stub length is thereby reduced to around 1/10 of the triangle edge, and in addition, changing the aperture angle of the stub provides more freedom in designing and producing the circularly polarized wave.
Keywords: Circular polarization, Microstrip antenna, single feed, wide band stub.
411 Model of High-Speed Train Energy Consumption
Authors: Romain Bosquet, Pierre-Olivier Vandanjon, Alex Coiret, Tristan Lorino
Abstract:
In a tightening energy context, the transport sector, which accounts for a large share of worldwide energy demand, has to improve in order to decrease energy consumption and global-warming impacts. In a context of increasing demand for long-distance and high-speed travel, high-speed trains offer many advantages, consuming significantly less energy than road or air transport. At the project phase of new rail infrastructure, it is now important to characterize accurately the energy that will be induced by the operation phase, in addition to more classical criteria such as construction costs and travel time. The consumption models in the current literature used to estimate the railway operation phase are obsolete or not accurate enough to take the newest train and railway technologies into account. In this paper, an updated consumption model for high-speed trains is proposed, based on experimental data obtained from full-scale tests performed on a new high-speed line. The model is assessed by identifying train parameters and measured power consumption for more than one hundred train routes. Perspectives are then discussed for using this updated model to accurately assess the energy impact of future railway infrastructure.
Keywords: High-speed train, energy, model, track profile, infrastructure.
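The mechanical core of such consumption models is a running-resistance law integrated along the route. A minimal sketch assuming a Davis-type resistance R(v) = a + b·v + c·v² with illustrative coefficients; the identified parameters and full track-profile treatment of the paper are not reproduced.

```python
import numpy as np

a, b, c = 2500.0, 30.0, 6.0      # N, N/(m/s), N/(m/s)^2 -- assumed values
mass, g = 4.0e5, 9.81            # train mass (kg), gravity (m/s^2)

def traction_power(v, grade):
    """Mechanical power (W) needed to hold speed v (m/s) on a given grade."""
    resistance = a + b * v + c * v**2 + mass * g * grade
    return max(resistance, 0.0) * v

# Integrate over a simple route: (speed m/s, grade, segment length m)
route = [(70.0, 0.000, 5.0e4), (75.0, 0.005, 2.0e4), (80.0, -0.003, 3.0e4)]
energy_J = sum(traction_power(v, s) * (d / v) for v, s, d in route)
print(f"Mechanical energy: {energy_J / 3.6e9:.2f} MWh")
```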
410 Reliability Evaluation of Composite Electric Power System Based On Latin Hypercube Sampling
Authors: R. Ashok Bakkiyaraj, N. Kumarappan
Abstract:
This paper investigates the suitability of Latin Hypercube sampling (LHS) for composite electric power system reliability analysis. Each sample generated in LHS is mapped into an equivalent system state and used for evaluating the annualized system and load point indices. A DC load-flow based state evaluation model is solved for each sampled contingency state. The indices evaluated are loss of load probability, loss of load expectation, expected demand not served, and expected energy not supplied. The application of LHS is illustrated through case studies carried out using the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and the state enumeration analytical approach. An error analysis is also carried out to check the LHS method's ability to capture the distributions of the reliability indices. It is found that the LHS approach estimates indices nearer to their actual values and gives tighter index bounds than non-sequential Monte Carlo simulation.
Keywords: Composite power system, Latin Hypercube sampling, Monte Carlo simulation, Reliability evaluation, Variance analysis.
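A minimal sketch of the sampling step, assuming SciPy >= 1.7 and a toy two-component system; the DC load-flow evaluation of each sampled state is replaced by a trivial loss-of-load check.

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=2000)            # stratified uniforms in [0, 1)^2

outage_prob = np.array([0.05, 0.08])  # forced outage rates (assumed)
unit_down = u < outage_prob           # map each sample to a system state
both_down = unit_down.all(axis=1)     # loss of load only if both units fail
print("Estimated LOLP:", both_down.mean(), "vs analytic", 0.05 * 0.08)
```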
409 A Comparison of Marginal and Joint Generalized Quasi-likelihood Estimating Equations Based On the Com-Poisson GLM: Application to Car Breakdowns Data
Authors: N. Mamode Khan, V. Jowaheer
Abstract:
In this paper, we apply and compare two generalized estimating equation approaches to the analysis of car breakdowns data in Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. Here, we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over a year. We observe that the number of passenger car breakdowns is highly under-dispersed. These data are therefore modelled and analyzed using a Com-Poisson regression model, and the model parameters are estimated with two types of quasi-likelihood approaches: marginal and joint generalized quasi-likelihood estimating equations. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.
Keywords: Breakdowns, under-dispersion, com-poisson, generalized linear model, marginal quasi-likelihood estimation, joint quasi-likelihood estimation.
408 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool initially introduced by Vapnik in 1979 and later developed into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance was quantified in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.
407 Evaluation of the Hepatitis C Virus and Classical and Modern Immunoassays Used Nowadays to Diagnose It in Tirana
Authors: Stela Papa, Klementina Puto, Migena Pllaha
Abstract:
HCV is a hepatotropic RNA virus, transmitted primarily via the blood route, which causes progressive disease such as chronic hepatitis, liver cirrhosis, or hepatocellular carcinoma; it is nowadays a global healthcare problem. A variety of immunoassays spanning old and new technologies are applied to detect HCV in our country. These methods include immunochromatography assays (ICA), fluorescence immunoassay (FIA), enzyme-linked fluorescent assay (ELFA), and enzyme-linked immunosorbent assay (ELISA) to detect HCV antibodies in blood serum; the latter is slowly being replaced by more sensitive methods such as the rapid automated chemiluminescence immunoassay (CLIA). The aim of this study is to estimate HCV infection in carriers and in chronic and acute patients and to evaluate the use of the newer diagnostic methods. The study ran from September 2016 to May 2018, during which 2913 patients were tested for the presence of HCV in blood serum samples using the ICA, FIA, ELFA, ELISA, and CLIA assays. In conclusion, 82% of the patients included in this study were found to be infected with HCV. Diagnostic methods in clinical laboratories are crucial in the early stages of infection, in the management of chronic hepatitis, and in the treatment of patients over the course of their disease.
Keywords: CLIA, ELISA, hepatitis C virus, immunoassay.
406 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a statistical learning tool initially introduced by Vapnik in 1979 and later developed into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, an SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance was quantified in terms of the mean square error (MSE) metric. We show that the SVM-detected SAR images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: Least Square-Support Vector Machine, Synthetic Aperture Radar, Partially Developed Speckle, Multi-Look Model.
405 Simulation Study on the Indoor Thermal Comfort with Insulation on Interior Structural Components of Super High-Rise Residences
Authors: Y. Wang, H. Fukuda, A. Ozaki, H. Sato
Abstract:
In this study, we discuss how the thermal comfort of super high-rise residences is affected by structural components with high thermal capacity, considering different building orientations, structures, and insulation methods. We used the dynamic simulation software THERB (simulation of the thermal environment of residential buildings), which can estimate the temperature, humidity, sensible temperature, and heating/cooling load for multiple buildings. In past studies, we examined the impact of the interior structural parts and the air-conditioning usage patterns on the air-conditioning loads (hereinafter referred to as AC loads) of super high-rise residences. Super high-rise residences have more structural components, such as pillars and beams, than ordinary apartment buildings, and their skeleton is generally made of concrete and steel, which have high thermal-storage capacities. The thermal-storage capacity of super high-rise residences is therefore considered to have a larger impact on the AC load and thermal comfort than that of ordinary residences. We show that the AC load of super high-rise units can be reduced by installing insulation on the surfaces of interior walls that are not usually insulated in Japan.
Keywords: High-rise Residences, AC Load, Thermal Comfort, Thermal Storage, Insulation Patterns.
404 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming
Authors: Phruksaphanrat B.
Abstract:
This research proposes a preemptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem, where the objectives are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of the makespan; however, time and cost should be considered at the same time, with different priority levels. Moreover, not all cost elements of a project are included in the conventional cost objective function, and an incomplete total project cost causes errors in determining the project schedule time. In this research, preemptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution of the problem, and it is also flexible enough, through adjustment, to find a variety of alternative solutions.
Keywords: Multi-mode resource constrained project scheduling problem, Fuzzy set, Goal programming, Preemptive fuzzy goal programming.
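A minimal sketch of the preemptive (lexicographic) idea in crisp form, assuming the PuLP package: the time-goal deviation is minimized first, then the cost-goal deviation with the first goal held at its best value. The two-activity model, goal targets and coefficients are invented for illustration, and the paper's fuzzy membership functions are omitted.

```python
import pulp

x1 = pulp.LpVariable("mode_fast", lowBound=0, upBound=1)
x2 = pulp.LpVariable("mode_cheap", lowBound=0, upBound=1)
d_time = pulp.LpVariable("over_time", lowBound=0)   # time-goal overshoot
d_cost = pulp.LpVariable("over_cost", lowBound=0)   # cost-goal overshoot

def base_model():
    m = pulp.LpProblem("mmrcpsp_goals", pulp.LpMinimize)
    m += x1 + x2 == 1                          # choose a mode mix (toy model)
    m += 10 * x1 + 18 * x2 - d_time <= 12      # time goal: 12 days
    m += 900 * x1 + 500 * x2 - d_cost <= 600   # cost goal: 600 units
    return m

# Priority 1: minimize the time-goal deviation.
m1 = base_model(); m1 += d_time
m1.solve(pulp.PULP_CBC_CMD(msg=0))
best_time = pulp.value(d_time)

# Priority 2: minimize the cost-goal deviation, holding goal 1 at its best.
m2 = base_model(); m2 += d_cost
m2 += d_time <= best_time
m2.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(x1), pulp.value(x2), best_time, pulp.value(d_cost))
```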
403 A New Brazilian Friction-Resistant Low Alloy High Strength Steel – A Life Testing Approach
Authors: D. I. De Souza, G. P. Azevedo, R. Rocha
Abstract:
In this paper we develop a sequential life test approach applied to a modified low alloy-high strength steel part used in highway overpasses in Brazil. We consider two possible underlying sampling distributions, the Normal and the Inverse Weibull models, with the minimum life taken equal to zero, and use both to analyze a fatigue life test situation, comparing the results obtained from each. Since a major chemical component of this low alloy-high strength steel part has been changed, there is little information available about the possible values of the parameters of the corresponding Normal and Inverse Weibull underlying sampling distributions. To estimate the shape and scale parameters of these two sampling models, we use a maximum likelihood approach for censored failure data. We also develop a truncation mechanism for the Inverse Weibull and Normal models and provide rules for truncating a sequential life testing situation, making one of the two possible decisions at the moment of truncation: accept or reject the null hypothesis H0. An example develops the proposed truncated sequential life testing approach for the Inverse Weibull and Normal models.
Keywords: Sequential life testing, normal and inverse Weibull models, maximum likelihood approach, truncation mechanism.
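As a stand-in for the paper's truncated sequential test, the following sketch implements Wald's classic sequential probability ratio test for a normal mean with known standard deviation; the H0/H1 means, sigma and error rates are assumed, and the truncation rule is omitted.

```python
import numpy as np

mu0, mu1, sigma = 100.0, 90.0, 8.0       # H0 and H1 mean lives, std dev
alpha, beta = 0.05, 0.10                 # type I and type II error rates
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

rng = np.random.default_rng(1)
llr, n = 0.0, 0
while B < llr < A:
    x = rng.normal(mu1, sigma)           # observe one more failure time
    n += 1
    # log-likelihood ratio increment log f1(x)/f0(x) for normal densities
    llr += (x - mu0) ** 2 / (2 * sigma**2) - (x - mu1) ** 2 / (2 * sigma**2)

print("decision:", "reject H0" if llr >= A else "accept H0", "after", n, "obs")
```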
402 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test
Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany
Abstract:
Lithium-ion batteries are currently used in many applications, including satellites, electric vehicles and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications, so evaluating the life and reliability of these batteries is crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of lifetime; however, another factor that can be considered critical in applications such as electric vehicles is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of applied load on the battery cycle time. Reliability was evaluated using an accelerated life test. Least-squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed for the reliability function estimation. The probability density function, failure rate and reliability function under each applied load were evaluated and compared, and an inverse power model is introduced that can predict cycle time at any given stress level.
Keywords: Accelerated life test, inverse power law, lithium ion battery, reliability evaluation, Weibull distribution.
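Median-rank regression, as named in the abstract, linearizes the Weibull CDF and fits it by least squares. A minimal sketch with simulated failure data, not the TKV2V measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.weibull(2.0, 20) * 500.0)     # simulated cycle lives

n = len(t)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)                     # Bernard's median rank

# Linearized CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
x, y = np.log(t), np.log(-np.log(1.0 - F))
beta, intercept = np.polyfit(x, y, 1)
eta = np.exp(-intercept / beta)
print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.0f} cycles")
```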
401 Evaluation of Hazelnut Hulls as an Alternative Forage Resource for Ruminant Animals
Authors: N. Cetinkaya, Y. S. Kuleyin
Abstract:
The aim of this study was to estimate the digestibility of the fruit internal skin of different hazelnut varieties in order to propose hazelnut fruit skin as an alternative roughage source in ruminant nutrition. In 2015, the fruit internal skins of three hazelnut varieties, round hazelnuts (RH), pointed hazelnuts (PH) and almond hazelnuts (AH), were obtained from a hazelnut processing factory and their crude nutrient analyses were carried out. Organic matter digestibility (OMD) and metabolisable energy (ME) values of the hazelnut fruit skins were estimated from the gas measured by the in vitro gas production method, and their antioxidant activities were determined by a spectrophotometric method. The crude nutrient values of the three varieties were: organic matter (OM) 87.83, 87.81 and 87.78%; crude protein (CP) 5.97, 5.93 and 5.89%; neutral detergent fiber (NDF) 30.30, 30.29 and 30.29%; acid detergent fiber (ADF) 48.68, 48.67 and 48.66%; and acid detergent lignin (ADL) 25.43, 25.43 and 25.39%, respectively. The OMD values from the 24 h incubation time of RH, PH and AH were 22.04, 22.46 and 22.74%; the ME values were 3.69, 3.75 and 3.79 MJ/kg DM; and the antioxidant activity values were 94.60, 94.54 and 94.52 IC50 mg/mL, respectively. The fruit internal skin of different hazelnut varieties may be considered an alternative roughage for ruminant nutrition with regard to its crude and digestible nutritive values. Moreover, hazelnut fruit skin has a rich antioxidant content, so it may also be used as a feed additive for both ruminant and non-ruminant animals.
Keywords: Antioxidant activity, hazelnut fruit skin, metabolizable energy, organic matter digestibility.
400 GIS-based Non-point Sources of Pollution Simulation in Cameron Highlands, Malaysia
Authors: M. Eisakhani, A. Pauzi, O. Karim, A. Malakahmad, S.R. Mohamed Kutty, M. H. Isa
Abstract:
Cameron Highlands is a mountainous area subjected to torrential tropical showers. It extracts 5.8 million liters of water per day for drinking supply from its rivers at several intake points. The water quality of the rivers in Cameron Highlands, however, has deteriorated significantly due to land clearing for agriculture, excessive use of pesticides and fertilizers, and construction activities in rapidly developing urban areas. These pollution sources, known as non-point pollution sources, are diverse and hard to identify, and they are therefore difficult to estimate. Hence, Geographical Information Systems (GIS) were used to provide an extensive approach for evaluating land use and other mapping characteristics in order to explain the spatial distribution of non-point sources of contamination in Cameron Highlands. The assessment method was developed using the Cameron Highlands Master Plan (2006-2010), integrating GIS, databases, and pollution loads for the study area. The results show that the highest annual runoff is created by forest, 3.56 × 10⁸ m³/yr, followed by urban development, 1.46 × 10⁸ m³/yr. Furthermore, urban development causes the highest BOD load (1.31 × 10⁶ kg BOD/yr), while agricultural activities and forest contribute the highest annual loads of phosphorus (6.91 × 10⁴ kg P/yr) and nitrogen (2.50 × 10⁵ kg N/yr), respectively. Therefore, best management practices (BMPs) are suggested to reduce the pollution level in the area.
Keywords: Cameron Highlands, Land use, Non-point Sources of Pollution.
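GIS load estimates of this kind typically reduce to a land-use export-coefficient sum, load = Σ areaᵢ × coefficientᵢ. A minimal sketch with assumed areas and coefficients, not the Cameron Highlands values:

```python
# Annual nitrogen load from land-use areas and export coefficients (assumed).
land_use_km2 = {"forest": 500.0, "agriculture": 120.0, "urban": 40.0}
export_kgN_per_km2_yr = {"forest": 450.0, "agriculture": 1800.0, "urban": 900.0}

annual_N_load = sum(
    land_use_km2[lu] * export_kgN_per_km2_yr[lu] for lu in land_use_km2
)
print(f"Total N load: {annual_N_load:.2e} kg N/yr")
```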
399 Assessment of Energy Demand Considering Different Model Simulations in a Low Energy Demand House
Authors: M. Cañada-Soriano, C. Aparicio-Fernández, P. Sebastián Ferrer Gisbert, M. Val Field, J.-L. Vivancos-Bono
Abstract:
The lack of insulation, along with the existence of air leakages, has a meaningful impact on the energy performance of buildings: both lead to increases in energy demand through additional heating and/or cooling loads, and they also cause thermal discomfort. The Blower Door test can be used to quantify these uncontrolled air currents; it is a standardized procedure that determines the airtightness of a space by characterizing the rate of air leakage through the envelope surface. Low-energy buildings complying with the Passive House design criteria are required to achieve high levels of airtightness. Because air leakages are invisible, additional tools such as infrared thermography are often used to identify where the infiltrations take place. The aim of this study is to assess the airtightness of a typical Mediterranean dwelling, refurbished to the Passive House standard, using the Blower Door test. Moreover, the building energy performance modelling tools TRNSYS (TRaNsient System Simulation program) and TRNFlow (TRaNsient Flow) have been used to estimate the energy demand in different scenarios. In this sense, a sequential implementation of three energy improvement measures (insulation thickness, glazing type and infiltrations) has been analyzed.
Keywords: Airtightness, blower door, TRNSYS, infrared thermography, energy demand.
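A minimal sketch of the Blower Door data reduction: fit the power law Q = C·ΔP^n to the measured points and report the n50 air-change rate. The pressure/flow readings and house volume below are assumed.

```python
import numpy as np

dP = np.array([20.0, 30.0, 40.0, 50.0, 60.0])      # fan pressures (Pa)
Q = np.array([310.0, 400.0, 470.0, 540.0, 600.0])  # measured flows (m^3/h)

n, logC = np.polyfit(np.log(dP), np.log(Q), 1)     # log-log linear fit
Q50 = np.exp(logC) * 50.0 ** n                     # leakage flow at 50 Pa
volume = 350.0                                     # house volume (m^3), assumed
print(f"flow exponent n={n:.2f}, n50={Q50 / volume:.2f} 1/h")
```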
398 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), CGFR/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back propagation algorithm by introducing a gain variation term in the activation function, (2) calculating the gradient descent of the error with respect to the weight and gain values, and (3) determining the new search direction by exploiting the information calculated by gradient descent in step (2) together with the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. Its performance is demonstrated by comparison with the conjugate gradient algorithm from a neural network toolbox on a chosen benchmark; the results show that the number of iterations required by the proposed method to converge is less than 20% of what the standard conjugate gradient and neural network toolbox algorithms require.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
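A minimal sketch of step (3), the Fletcher-Reeves conjugate-gradient direction update, on a generic differentiable loss; the small quadratic stands in for the MLP error surface, and the gain-variation modification of the activation function is not reproduced here.

```python
import numpy as np

def loss(w):   # illustrative ill-conditioned quadratic
    return 0.5 * (w[0] ** 2 + 25.0 * w[1] ** 2)

def grad(w):
    return np.array([w[0], 25.0 * w[1]])

w = np.array([4.0, 2.0])
g = grad(w)
d = -g                                   # initial search direction
for _ in range(20):
    step = 0.02
    while loss(w + step * d) > loss(w):  # crude backtracking line search
        step *= 0.5
    w = w + step * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
    d = -g_new + beta * d                # combine new gradient + old direction
    g = g_new
print("final loss:", loss(w))
```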
397 Statically Fused Unbiased Converted Measurements Kalman Filter
Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou
Abstract:
Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing, and a tracker can perform better by making full use of them. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper focuses on handling the Doppler as well as the position measurements in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously; however, the exact compensation for the conversion bias is multiplicative and depends on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large angle-error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversions for two-dimensional (polar-to-Cartesian) tracking are derived, and the SF-CMKF is improved by using those conversions. Monte Carlo simulations demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified filter (SF-UCMKF).
Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.
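The key fact behind the multiplicative unbiased conversion is that E[cos(θ_err)] = exp(-σθ²/2) for Gaussian angle noise, so dividing the converted position by this factor removes the bias. A minimal numerical check with assumed values (the Doppler conversion and covariances are omitted):

```python
import numpy as np

r_true, th_true = 1000.0, np.deg2rad(30.0)
sigma_r, sigma_th = 10.0, np.deg2rad(3.0)

rng = np.random.default_rng(0)
r_m = r_true + sigma_r * rng.normal(size=100000)
th_m = th_true + sigma_th * rng.normal(size=100000)

lam = np.exp(-sigma_th**2 / 2.0)          # bias compensation factor
x_naive = (r_m * np.cos(th_m)).mean()     # plain conversion, biased low
x_unbiased = (r_m * np.cos(th_m) / lam).mean()
print(f"true x={r_true * np.cos(th_true):.1f}  naive={x_naive:.1f}  "
      f"unbiased={x_unbiased:.1f}")
```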
396 The Mitigation Strategy Analysis of Kuosheng Nuclear Power Plant Spent Fuel Pool Using MELCOR2.1/SNAP
Authors: Y. Chiang, J. R. Wang, J. H. Yang, Y. S. Tseng, C. Shih, S. W. Chen
Abstract:
Kuosheng nuclear power plant (NPP) is a BWR/6 plant in Taiwan, where concern for the safety of Spent Fuel Pools (SFPs) has grown since the Fukushima event. In order to estimate the safety of the Kuosheng NPP SFP, a safety analysis was performed with MELCOR2.1 and SNAP, combined with the mitigation strategy of the NEI 06-12 report. The research proceeded in several steps. First, the Kuosheng NPP SFP models were established with MELCOR2.1/SNAP. Second, a Station Blackout (SBO) analysis of the Kuosheng SFP was performed with TRACE and MELCOR under the cooling-system-failure condition; the MELCOR and TRACE calculations were very similar in this case. Third, the mitigation strategy analysis was carried out with the MELCOR model following the NEI 06-12 report, and the results showed the effectiveness of the NEI 06-12 strategy for the Kuosheng NPP SFP. Finally, a sensitivity study of SFP quenching was performed to examine the effects of different water injection times and the phenomena during quenching. The results showed that if the cladding temperature exceeds 1600 K, water injection may make the accident more severe, with more hydrogen generation, because of the oxidation heat and the "breakaway" effect of the zirconium-water reaction. An animation model built with SNAP is also shown in this study.
Keywords: MELCOR, SNAP, spent fuel pool, quenching.
395 Feasibility Investigation of Near Infrared Spectrometry for Particle Size Estimation of Nano Structures
Authors: A. Bagheri Garmarudi, M. Khanmohammadi, N. Khoddami, K. Shabani
Abstract:
Determination of nanoparticle size is important, since particle size exerts a significant effect on various properties of nanomaterials; accordingly, non-destructive, accurate and rapid techniques for this purpose are of high interest. Conventional techniques for investigating the morphology and grain size of nanoparticles include scanning electron microscopy (SEM), atomic force microscopy (AFM) and X-ray diffractometry (XRD). Vibrational spectroscopy is utilized to characterize different compounds and has been applied to evaluate average particle size based on the relationship between particle size and near-infrared spectra [1,4], but it has never been applied to quantitative morphological analysis of nanomaterials. So far, the potential of near-infrared (NIR) spectroscopy, with its ability to analyze powdered materials rapidly and with minimal sample preparation, has been suggested for particle size determination of powdered pharmaceuticals. Here, the relationship between particle size and diffuse reflectance (DR) spectra in the near-infrared region is applied to estimate particle size. A back-propagation artificial neural network (BP-ANN), as a nonlinear model, was applied to estimate average particle size from the near-infrared diffuse reflectance spectra. Thirty-five nano-TiO2 samples with different particle sizes were analyzed by DR-FTNIR spectrometry and the obtained data were processed by the BP-ANN.
Keywords: near infrared, particle size, chemometrics, neural network, nano structure.
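A minimal sketch of a back-propagation network regressing particle size from spectra, assuming scikit-learn; the spectra are synthetic, not the thirty-five nano-TiO2 measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
size_nm = rng.uniform(10, 100, 120)                  # "true" particle sizes
wavenumbers = np.linspace(0, 1, 200)

# Toy spectra whose baseline slope depends on particle size, plus noise.
spectra = size_nm[:, None] * wavenumbers[None, :] / 100.0
spectra += rng.normal(0, 0.02, spectra.shape)

X = StandardScaler().fit_transform(spectra)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X[:100], size_nm[:100])
print("held-out R^2:", round(model.score(X[100:], size_nm[100:]), 3))
```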
394 Management of Air Pollutants from Point Sources
Authors: N. Lokeshwari, G. Srinikethan, V. S. Hegde
Abstract:
Monitoring is essential to assessing the effectiveness of air pollution control actions. The goal of an air quality information system is, through monitoring, to keep authorities, major polluters and the public informed of the short- and long-term changes in air quality, thereby helping to raise awareness. Mathematical models are the best tools available for predictive air quality management. The main objective of this work was to apply a model that predicts the concentration levels of different pollutants at any instant of time. In this study, the distributions of air pollutant concentrations from industry, namely nitrogen dioxide (NO2), sulphur dioxide (SO2) and total suspended particulates (TSP), are determined using a Gaussian model. Besides that, the effect of wind speed and direction on the pollutant concentrations within the affected area was evaluated. In order to determine the efficiency and percentage error of the modeling, a validation process was carried out: air quality sampling was conducted to obtain the existing air quality around a factory, and the concentrations of pollutants in the plume were found to be inversely proportional to wind velocity. The resulting ground-level concentrations were then compared to the quality standards to determine whether there could be a negative impact on health. This study concludes that pollutant concentrations can be significantly predicted using the Gaussian model. A database management system was developed for the air data of the Hubli-Dharwad region.
Keywords: DBMS, NO2, SO2, Wind rose plots.
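A minimal sketch of the Gaussian plume equation for an elevated point source with ground reflection; the dispersion parameters σy and σz are fixed here, whereas in practice they grow with downwind distance and atmospheric stability, and all source values below are assumed.

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sig_y, sig_z):
    """Gaussian plume concentration (g/m^3) for an elevated point source."""
    lateral = np.exp(-y**2 / (2 * sig_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sig_z**2))
                + np.exp(-(z + H)**2 / (2 * sig_z**2)))  # image (ground) term
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# 100 g/s SO2 from a 50 m stack, 4 m/s wind, receptor on the plume centerline.
c = plume_concentration(Q=100.0, u=4.0, y=0.0, z=1.5, H=50.0,
                        sig_y=80.0, sig_z=40.0)
print(f"Near-ground concentration: {c * 1e6:.1f} ug/m^3")
```

Note how the concentration scales inversely with wind speed u, consistent with the abstract's observation that plume concentrations were inversely proportional to wind velocity.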
393 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform
Authors: Ashagrie Getnet Flattie
Abstract:
Line-of-sight (LOS) availability, data rates, quality, and flexible network service are limited by the fact that, for the duration of any given connection, wireless links experience severe variation in signal strength due to fading and path loss. Wireless systems face major challenges in achieving wide coverage and capacity without degrading system performance while providing data access everywhere, all the time. In this paper, the cell coverage and edge rate of different multiple-input multiple-output (MIMO) schemes in a 20 MHz Long Term Evolution (LTE) system on an Unmanned Air Vehicle (UAV) platform are investigated. After some background on the enormous potential of UAV, MIMO, and LTE in wireless links, the paper presents a system model that attempts to realize the various benefits of MIMO incorporated into a UAV platform. The performance of three MIMO LTE schemes is compared with that of a 4x4 MIMO LTE UAV scheme to evaluate the improvement in cell radius, BER, and data throughput of the system in different morphologies. The results show that significant performance gains in bit error rate (BER), data rate, and coverage can be achieved with the presented scenario.
Keywords: BER, LTE, MIMO, path loss, UAV.
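The capacity gain that motivates such MIMO comparisons can be illustrated with the ergodic capacity C = log2 det(I + (SNR/Nt)·HHᴴ) averaged over Rayleigh channels. A minimal sketch with assumed SNR and antenna counts:

```python
import numpy as np

def avg_capacity(n_tx, n_rx, snr_db, trials=2000, seed=0):
    """Ergodic capacity (bit/s/Hz) over i.i.d. Rayleigh MIMO channels."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    caps = []
    for _ in range(trials):
        H = (rng.normal(size=(n_rx, n_tx))
             + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2.0)
        M = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
        caps.append(np.log2(np.linalg.det(M).real))
    return np.mean(caps)

for nt, nr in [(1, 1), (2, 2), (4, 4)]:
    print(f"{nt}x{nr}: {avg_capacity(nt, nr, snr_db=10.0):.2f} bit/s/Hz")
```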
392 Evaluation of Exerting Force on the Heating Surface Due to Bubble Ebullition in Subcooled Flow Boiling
Authors: M. R. Nematollahi
Abstract:
The vibration characteristics of subcooled flow boiling on thin, long structures such as a heating rod were recently investigated by the author. The results show that the intensity of the subcooled boiling-induced vibration (SBIV) was strongly influenced by the subcooling temperature, the linear power density and the flow velocity. Implosive bubble formation and collapse are the essential nature of subcooled boiling, and their behavior is the only source of SBIV. Therefore, in order to explain the phenomenon of SBIV, it is essential to obtain reliable information about bubble behavior under subcooled boiling conditions. This was investigated at coolant subcooling temperatures of 25 to 75°C, coolant flow velocities of 0.16 to 0.53 m/s, and linear power densities of 100 to 600 W/cm, with high-speed photography performed at 13,500 frames per second. The results show that even at the highest subcooling condition, the absolute majority of bubbles collapse very close to the surface after detaching from it. Based on these observations, a simple model of surface tension and momentum change is introduced to offer a rough quantitative estimate of the force exerted on the heating surface during bubble ebullition. The formation of a typical bubble in subcooled boiling is predicted to exert an excitation force on the order of 10⁻⁴ N.
Keywords: Subcooled boiling, vibration mechanism, bubble behavior.
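A back-of-the-envelope check of the quoted order of magnitude, using a surface-tension force scale F ≈ π·d·σ; the millimeter bubble diameter is an assumption, and σ is that of water near saturation.

```python
import numpy as np

d = 1.0e-3        # bubble diameter (m), assumed typical size
sigma = 0.059     # surface tension of water at ~100 C (N/m)
F = np.pi * d * sigma
print(f"Surface-tension force scale: {F:.1e} N")   # ~1.9e-4 N, i.e. O(1e-4 N)
```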