Search results for: the closed form method of variance estimation

7555 Joint Optimization of Pricing and Advertisement for Seasonal Branded Products

Authors: Mohammad Modarres, Shirin Aslani

Abstract:

The goal of this paper is to develop a model that integrates pricing and advertisement for short life-cycle products, such as branded fashion clothing. To achieve this goal, we apply the concept of dynamic pricing. There are two classes of advertisement: for the brand (regardless of product) and for a particular product. Advertising the brand affects the demand and price of all the products, so the model considers all these products in relation to each other. We develop two different methods to integrate both types of advertisement with pricing. The first model is developed within the framework of dynamic programming. However, due to the complexity of the model, this method is not applicable to large problems. Therefore, we develop another method, called the hierarchical approach, which is capable of handling real-world problems. Finally, we show the accuracy of this method both theoretically and by simulation.

Keywords: Advertising, Dynamic programming, Dynamic pricing, Promotion.
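
A compact way to see the dynamic programming component is a textbook single-product, finite-horizon dynamic pricing recursion. The Python sketch below is only a minimal illustration with an assumed Bernoulli demand model and hypothetical numbers; it is far simpler than the paper's joint pricing-advertising formulation.

    import numpy as np

    def seasonal_pricing_dp(T, capacity, prices, sale_prob):
        # V[t][n]: maximum expected revenue with n units left and t periods to go.
        V = np.zeros((T + 1, capacity + 1))
        policy = np.zeros((T, capacity + 1))
        for t in range(1, T + 1):
            for n in range(1, capacity + 1):
                best_value, best_price = -np.inf, prices[0]
                for p in prices:
                    q = sale_prob(p)  # probability of selling one unit this period at price p
                    value = q * (p + V[t - 1][n - 1]) + (1 - q) * V[t - 1][n]
                    if value > best_value:
                        best_value, best_price = value, p
                V[t][n] = best_value
                policy[t - 1][n] = best_price
        return V, policy

    # Hypothetical setting: 12 selling periods, 20 units, three admissible price levels.
    V, policy = seasonal_pricing_dp(T=12, capacity=20, prices=[40, 60, 80],
                                    sale_prob=lambda p: max(0.0, 1.0 - p / 100.0))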

7554 A PSO-Based Optimum Design of PID Controller for a Linear Brushless DC Motor

Authors: Mehdi Nasri, Hossein Nezamabadi-pour, Malihe Maghfoori

Abstract:

This paper presents a particle swarm optimization (PSO) method for determining the optimal proportional-integral-derivative (PID) controller parameters for speed control of a linear brushless DC motor. The proposed approach has attractive features, including easy implementation, a stable convergence characteristic and good computational efficiency. The brushless DC motor is modelled in Simulink and the PSO algorithm is implemented in MATLAB. Compared with the genetic algorithm (GA) and the linear quadratic regulator (LQR) method, the proposed method is more efficient at improving step-response characteristics such as steady-state error, rise time, settling time and maximum overshoot in speed control of a linear brushless DC motor.

Keywords: Brushless DC motor, Particle swarm optimization, PID Controller, Optimal control.
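
For readers unfamiliar with PSO-based controller tuning, the Python sketch below shows the general idea: each particle encodes (Kp, Ki, Kd) gains and the swarm minimizes a step-response cost. The plant is an assumed toy motor-like model and the ITAE cost is an illustrative choice, not the paper's Simulink model or objective.

    import numpy as np

    def step_cost(gains, dt=0.001, T=1.0):
        # Simulate a simple motor-like plant under PID control and return the ITAE cost.
        Kp, Ki, Kd = gains
        x = v = integ = prev_e = 0.0
        cost = 0.0
        for k in range(int(T / dt)):
            e = 1.0 - x                     # unit step reference minus output
            integ += e * dt
            de = (e - prev_e) / dt
            u = Kp * e + Ki * integ + Kd * de
            prev_e = e
            # assumed plant: 0.01*dv/dt = -v + u ; dx/dt = v
            v += dt * (-v + u) / 0.01
            x += dt * v
            cost += (k * dt) * abs(e) * dt  # integral of time-weighted absolute error
        return cost

    def pso(f, bounds, n=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        X = rng.uniform(lo, hi, size=(n, len(bounds)))
        V = np.zeros_like(X)
        P, Pf = X.copy(), np.array([f(x) for x in X])   # personal bests
        g = P[np.argmin(Pf)]                            # global best
        for _ in range(iters):
            V = w * V + c1 * rng.random(X.shape) * (P - X) + c2 * rng.random(X.shape) * (g - X)
            X = np.clip(X + V, lo, hi)
            F = np.array([f(x) for x in X])
            better = F < Pf
            P[better], Pf[better] = X[better], F[better]
            g = P[np.argmin(Pf)]
        return g, Pf.min()

    best_gains, best_cost = pso(step_cost, bounds=[(0, 50), (0, 50), (0, 1)])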

7553 Principal Component Analysis-Ranking as a Variable Selection Method for the Simultaneous Spectrophotometric Determination of Phenol, Resorcinol and Catechol in Real Samples

Authors: Nahid Ghasemi, Mohammad Goodarzi, Morteza Khosravi

Abstract:

Simultaneous determination of the multicomponents phenol, resorcinol and catechol with a chemometric technique, a PC-ranking artificial neural network (PCranking-ANN) algorithm, is reported in this study. Based on the data correlation coefficient method, 3 representative PCs are selected from the scores of the original UV spectral data (35 PCs) as the input patterns for the ANN to build a neural network model. The results were obtained after 8000 iterations. The RMSEP values for phenol, resorcinol and catechol with PCranking-ANN were 0.6680, 0.0766 and 0.1033, respectively. The calibration ranges were 0.50-21.0, 0.50-15.1 and 0.50-20.0 μg ml-1 for phenol, resorcinol and catechol, respectively. The proposed method was successfully applied to the determination of phenol, resorcinol and catechol in synthetic and water samples.

Keywords: Phenol, Resorcinol, Catechol, Principal component ranking, Artificial neural network, Chemometrics.
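
The PC-ranking idea (select the principal components whose scores correlate best with the analyte concentrations, then feed them to a neural network) can be sketched as follows. The spectra and pure-component profiles below are synthetic stand-ins, not the paper's UV data, and the ranking is done against a single analyte for brevity.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in: 40 mixtures x 35 wavelengths, 3 analyte concentrations each.
    rng = np.random.default_rng(0)
    conc = rng.uniform(0.5, 20.0, size=(40, 3))
    pure = rng.random((3, 35))                       # assumed pure-component spectra
    spectra = conc @ pure + 0.01 * rng.standard_normal((40, 35))

    scores = PCA(n_components=10).fit_transform(spectra)
    # "PC ranking": keep the PCs whose scores correlate best with the property of interest.
    corr = np.abs([np.corrcoef(scores[:, i], conc[:, 0])[0, 1] for i in range(scores.shape[1])])
    top3 = scores[:, np.argsort(corr)[-3:]]

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=8000, random_state=0)
    model.fit(top3, conc)                            # multi-output regression for the 3 analytes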

7552 A Graph Theoretic Approach for Quantitative Evaluation of NAAC Accreditation Criteria for the Indian University

Authors: Nameesh Miglani, Rajeev Saha, R. S. Parihar

Abstract:

Estimating the quality of higher education within a university is a long-drawn process and is difficult to measure, primarily due to the lack of a standard scale. The National Assessment and Accreditation Council (NAAC) evolved an assessment methodology that involves self-appraisal by each university/college and an assessment of performance by an expert committee. The attributes involved in assessing a university may not be totally independent of each other, which necessitates considering their interdependencies. The present study focuses on the evaluation of the assessment criteria using a graph theoretic approach and fuzzy treatment of data collected from students. The technique provides a suitable platform for the university management team to cross-check the assessment of education quality while accounting for the interdependencies of the attributes using graph theory.

Keywords: Graph theory, NAAC accreditation criteria, Indian University accreditation process.

7551 Efficiency of Post-Tensioning Method for Seismic Retrofitting of Pre-Cast Cylindrical Concrete Reservoirs

Authors: M. E. Karbaschi, R. Goudarzizadeh, N. Hedayat

Abstract:

Cylindrical concrete reservoirs are an appropriate choice for storing liquids such as water and oil. Using pre-cast concrete reservoirs instead of in-situ constructed reservoirs considerably increases the speed and precision of construction. In this construction method, wall and roof panels are made in a factory with high-quality materials and precise quality control, and are then transported to the construction site for assembly. The method has a few weak points, such as the connections between wall panels and between the wall panels and the foundation, which have to resist applied loads such as seismic loads. One innovative method, successfully applied for the seismic retrofitting of numerous pre-cast cylindrical water reservoirs in New Zealand, is to wrap high-tensile cables around the reservoirs and post-tension them. In this paper, analytical modeling of the wall and roof panels and the post-tensioned cables is carried out with the finite element method, and the effects of the height-to-diameter ratio, the post-tensioning force, the liquid level in the reservoir and the tendon installation position on the seismic response of the reservoirs are investigated.

Keywords: Seismic Retrofit, Pre-Cast, Concrete Reservoir, Post-Tensioning.

7550 ASLT Method for Beer Accelerated Shelf-Life Determination

Authors: Tatjana Rakcejeva, Valentina Skorina, Daina Karklina, Liga Skudra

Abstract:

The aim of the current research was to investigate the suitability of the ASLT method for accelerated beer shelf-life determination. The research was carried out on popular Latvian beers: light filtered and unfiltered pasteurized beer with an alcohol content of 5.2%, and dark filtered pasteurized beer with an alcohol content of 4.2%, each with a shelf life of five months. Beer samples bottled in dark glass were stored for 20 weeks at several temperature regimes: +10±1 °C, +20±1 °C, +30±1 °C and +40±1 °C. Physical-chemical and microbiological quality parameters of the samples were tested every two weeks using standard methods. Storage at +30±1 °C shortened the shelf-life determination time by a factor of 2.5 for filtered pasteurized light beer, 1.4 for unfiltered pasteurized light beer and 1.7 for filtered pasteurized dark beer. The experiments showed that beer shelf life can be determined rapidly using the ASLT method when the storage temperature is increased by 10±1 °C.

Keywords: Beer, shelf-life, ASLT method.
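
ASLT results like these are usually summarized through an acceleration factor. A common (though not necessarily the authors') way to compute it is the Q10 model, AF = Q10^(ΔT/10); the Q10 value below is an assumed illustration, not a measured one.

    # Q10 acceleration-factor arithmetic for ASLT (illustrative Q10 value, not from the paper).
    Q10 = 2.0                          # assumed: degradation rate doubles per +10 C
    T_normal, T_accel = 20.0, 30.0     # normal vs. accelerated storage temperature, C
    shelf_life_months = 5.0            # declared shelf life at the normal temperature

    AF = Q10 ** ((T_accel - T_normal) / 10.0)       # acceleration factor
    accelerated_test = shelf_life_months / AF       # months of accelerated storage needed
    print(f"AF = {AF:.1f}, accelerated test ~ {accelerated_test:.1f} months")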

7549 Optimal Capacitor Allocation for Loss Reduction in Distribution Systems Using Fuzzy and Plant Growth Simulation Algorithm

Authors: R. Srinivasa Rao

Abstract:

This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select the candidate locations for capacitor placement; in part two, a new algorithm that employs the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal size of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. Another advantage is that it handles the objective function and the constraints separately, avoiding the need to determine barrier factors. The proposed method is applied to 9-bus and 34-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperforms the other methods in terms of solution quality.

Keywords: Distribution systems, Capacitor allocation, Loss reduction, Fuzzy, PGSA.
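
The loss sensitivity factors used in part one are commonly computed branch by branch as dPloss/dQeff = 2·Qeff·R/V². The sketch below uses made-up feeder data rather than the paper's 9-bus and 34-bus test systems.

    import numpy as np

    def loss_sensitivity_factors(R, Q_eff, V):
        # Branch k feeding receiving bus q: dPloss/dQeff = 2 * Q_eff[q] * R[k] / V[q]**2
        return 2.0 * np.asarray(Q_eff) * np.asarray(R) / np.asarray(V) ** 2

    # Hypothetical 4-branch feeder data in per unit (not the paper's test systems).
    R = [0.01, 0.02, 0.015, 0.03]        # branch resistances
    Q_eff = [0.8, 0.5, 0.4, 0.2]         # reactive power flowing beyond each receiving bus
    V = [0.99, 0.97, 0.96, 0.95]         # receiving-bus voltage magnitudes
    lsf = loss_sensitivity_factors(R, Q_eff, V)
    candidates = np.argsort(lsf)[::-1]   # buses ranked by sensitivity, highest first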

7548 Introduction of the Harmfulness of the Seismic Signal in the Assessment of the Performance of Reinforced Concrete Frame Structures

Authors: Kahil Amar, Boukais Said, Kezmane Ali, Hamizi Mohand, Hannachi Naceur Eddine

Abstract:

The principle of seismic performance evaluation methods is to provide a measure of how likely a building or set of buildings is to be damaged by an earthquake. The common objective of many of these methods is to supply classification criteria. The purpose of this study is to present a method for assessing the seismic performance of structures based on the pushover method; we are particularly interested in reinforced concrete frame structures, which represent a significant percentage of the structures damaged after a seismic event. The work is based on the characterization of the seismic motion of the various earthquake zones in terms of PGA and PGD, obtained by means of the SIMQK_GR and PRISM software, and a correlation between the performance points and the scalars characterizing the earthquakes is developed.

Keywords: Seismic performance, Pushover method, characterization of seismic motion, harmfulness of the seismic signal

7547 Speech Enhancement Using Kalman Filter in Communication

Authors: Eng. Alaa K. Satti Salih

Abstract:

Applications such as telecommunications, hands-free communication and recording need at least one microphone, and the captured signal is usually contaminated by noise and echo. An important application is speech enhancement, which removes the unwanted noise and echo picked up by a microphone along with the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted or stored. Engineers have tried different approaches to improving speech by recovering the desired speech signal from the noisy observations, especially in mobile communication. In this paper, the speech signal observed in additive background noise is reconstructed using the Kalman filter technique to estimate the parameters of the autoregressive (AR) process in the state-space model, and the output speech signal is obtained in MATLAB. The accurate Kalman filter estimates enhance the speech and reduce the noise; the actual and estimated values that produce the reconstructed signals are then compared and discussed.

Keywords: Autoregressive Process, Kalman filter, Matlab and Noise speech.
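
The core of this approach is a Kalman filter running on an AR(p) speech model written in companion (state-space) form. The Python sketch below assumes the AR coefficients and noise variances are already known and only estimates the clean speech; the paper also estimates the AR parameters, and its MATLAB implementation is not reproduced here.

    import numpy as np

    def kalman_ar_denoise(y, a, sigma_w2, sigma_v2):
        # State: [x_t, x_{t-1}, ..., x_{t-p+1}];  x_t = a1*x_{t-1} + ... + ap*x_{t-p} + w_t
        p = len(a)
        F = np.zeros((p, p)); F[0, :] = a; F[1:, :-1] = np.eye(p - 1)
        H = np.zeros((1, p)); H[0, 0] = 1.0
        Q = np.zeros((p, p)); Q[0, 0] = sigma_w2
        x, P = np.zeros((p, 1)), np.eye(p)
        out = np.zeros(len(y))
        for t, yt in enumerate(y):
            x = F @ x                                # predict
            P = F @ P @ F.T + Q
            S = (H @ P @ H.T)[0, 0] + sigma_v2       # innovation variance
            K = P @ H.T / S                          # Kalman gain (p x 1)
            innov = yt - (H @ x)[0, 0]
            x = x + K * innov                        # update
            P = (np.eye(p) - K @ H) @ P
            out[t] = x[0, 0]
        return out

    # Synthetic demo: AR(2) "speech" in white noise (assumed coefficients, not estimated).
    rng = np.random.default_rng(0)
    a = [1.6, -0.8]
    x = np.zeros(2000)
    for t in range(2, 2000):
        x[t] = a[0] * x[t - 1] + a[1] * x[t - 2] + 0.1 * rng.standard_normal()
    y = x + 0.2 * rng.standard_normal(2000)
    x_hat = kalman_ar_denoise(y, a, sigma_w2=0.01, sigma_v2=0.04)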

7546 An Automation of Check Focusing on CRUD for Requirements Analysis Model in UML

Authors: Shinpei Ogata, Yoshitaka Aoki, Hirotaka Okuda, Saeko Matsuura

Abstract:

A key to successful high-quality software development is to define a valid and feasible requirements specification. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface mock-up from the UML requirements analysis model, so that we can confirm the validity of the input/output data for each page and of the page transitions by directly operating the mock-up. This paper proposes a support method to check the validity of a data life cycle by using the model checking tool UPPAAL, focusing on CRUD (Create, Read, Update and Delete). Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling for two small projects: a library management system and a textbook sales support system for a university.

Keywords: CRUD, Model Checking, Model Driven Development, Requirements Analysis, Unified Modeling Language, UPPAAL.

7545 Electromagnetic Assessment of Submarine Power Cable Degradation Using Finite Element Method and Sensitivity Analysis

Authors: N. Boutra, N. Ravot, J. Benoit, O. Picon

Abstract:

Submarine power cables used for electric energy distribution and transmission in offshore wind farms are subject to numerous threats. Some of the risks are associated with transport, installation and operation in a harsh marine environment. This paper describes the feasibility of a low-frequency electromagnetic sensing technique for submarine power cable failure prediction. The impact of the shape of a structural damage and of material variability on the induced electric field is evaluated. The analysis is performed by modeling the cable using the finite element method, and sensitivity analysis is used to identify the main damage characteristics affecting the electric field variation. Lastly, the results obtained are discussed.

Keywords: Electromagnetism, defect, finite element method, sensitivity analysis, submarine power cables.

7544 Evaluation of Fluoride Contents of Kirkuk City's Drinking Water and Its Source: Lesser Zab River and Its Effect on Human Health

Authors: Abbas R. Ali, Safa H. Abdulrahman

Abstract:

In this study, forty samples were collected from the water of the Lesser Zab River and from drinking water to determine the fluoride concentration and show the impact of fluoride on the general health of the population of Kirkuk city. The fluoride concentration in the water samples was estimated using a fluoride ion-selective electrode. The fluoride concentrations in the Lesser Zab River samples were between 0.0265 ppm and 0.0863 ppm with an average of 0.0451 ppm, whereas the average fluoride concentration in the drinking water samples was 0.102 ppm and ranged from 0.010 to 0.289 ppm. A comparison of the results with World Health Organization (WHO) guidelines shows a low fluoride concentration in the samples of the study. Thus, for health reasons, the concentration of this ion in the water of Kirkuk city should be increased to about 1.0 ppm, which would be achieved through a fluoridation process.

Keywords: Fluoride concentration, Lesser Zab River, drinking water, health society, Kirkuk city.

7543 From Forbidden States to Linear Constraints

Authors: M. Zareiee, A. Dideban, P. Nazemzadeh

Abstract:

This paper deals with the problem of constructing constraints in non-safe Petri nets and then reducing the number of constructed constraints. In a system, it is possible to assign linear constraints to forbidden states; enforcing these constraints on the system prevents it from entering those states. However, there is no systematic method for assigning constraints to forbidden states in non-safe Petri nets. In this paper, a useful method is proposed for constructing constraints in non-safe Petri nets. When the number of these constraints is large, enforcing them on the system may complicate the Petri net model, so another method is proposed for reducing the number of constructed constraints.

Keywords: discrete event system, Supervisory control, Petri Net, Constraint
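
Once a linear constraint L·m ≤ b has been associated with the forbidden states, a standard way to enforce it is the place-invariant construction, which adds one control (monitor) place per constraint. This is shown for illustration and is not necessarily the authors' exact construction.

    import numpy as np

    def control_places(W, m0, L, b):
        # W: incidence matrix (|P| x |T|), m0: initial marking.
        # Constraints L @ m <= b, one row of L (and one entry of b) per constraint.
        Wc = -L @ W            # incidence rows of the added control places
        mc0 = b - L @ m0       # their initial markings (must be nonnegative at m0)
        return Wc, mc0

    # Tiny example: transitions t1, t2 each add a token to p1, p2; enforce m(p1) + m(p2) <= 3.
    W = np.array([[1, 0],
                  [0, 1]])
    m0 = np.array([0, 0])
    Wc, mc0 = control_places(W, m0, L=np.array([[1, 1]]), b=np.array([3]))
    # Wc == [[-1, -1]] and mc0 == [3]: the monitor place starts with 3 tokens and loses
    # one whenever t1 or t2 fires, blocking markings that would violate the constraint.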

7542 Simultaneous Determination of Reference Free-Stream Temperature and Convective Heat Transfer Coefficient

Authors: Giho Jeong, Sooin Jeong, Kuisoon Kim

Abstract:

It is very important to determine the reference temperature when measuring convective heat transfer, because it is used to calculate the temperature potential. This paper deals with the development of a new method that can determine the heat transfer coefficient and the reference free-stream temperature simultaneously, based on transient heat transfer experiments using two narrow-band thermo-tropic liquid crystals (TLCs). The method is validated through an error analysis in terms of the random uncertainties in the measured temperatures, and it is shown how the uncertainties in the heat transfer coefficient and free-stream temperature can be reduced. The general method described in this paper is applicable to many heat transfer models with an unknown free-stream temperature.

Keywords: Heat transfer coefficient, Thermo-tropic liquid crystal (TLC), Free stream temperature.
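
One common transient-TLC data reduction (stated here as an assumption for illustration, not the paper's exact formulation) models the wall as a semi-infinite solid, where (Ts - Ti)/(Tinf - Ti) = 1 - exp(beta^2)*erfc(beta) with beta = h*sqrt(alpha*t)/k. Two TLC colour-change events then give two equations in the two unknowns h and Tinf; the wall properties and event data below are hypothetical.

    import numpy as np
    from scipy.optimize import fsolve
    from scipy.special import erfcx   # erfcx(x) = exp(x**2) * erfc(x), overflow-safe

    k, alpha, Ti = 0.19, 1.2e-7, 20.0          # assumed wall conductivity, diffusivity, initial temp
    events = [(5.0, 41.9), (12.0, 47.9)]       # hypothetical (time s, surface temp C) from two TLCs

    def theta(h, t):
        beta = h * np.sqrt(alpha * t) / k
        return 1.0 - erfcx(beta)               # nondimensional surface temperature rise

    def residuals(x):
        h, Tinf = x
        return [Ti + theta(h, t) * (Tinf - Ti) - Ts for t, Ts in events]

    h, Tinf = fsolve(residuals, x0=[100.0, 60.0])
    print(f"h = {h:.1f} W/m^2K, T_inf = {Tinf:.1f} C")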

7541 Study on Scheduling of the Planning Method Using the Web-based Visualization System in a Shipbuilding Block Assembly Shop

Authors: Eui Koog Ahn, Gi-Nam Wang, Sang C. Park

Abstract:

Higher productivity and lower cost in the ship manufacturing process are required to maintain the international competitiveness of modern manufacturing industries. Shipbuilding, however, follows an engineer-to-order (ETO) production method, and the production process is very complex; designs change frequently, and the plan must be adjusted as the production situation changes, so fixed production planning is very difficult. A scheduler must therefore first make rough plans and then revise them based on work progress and modifications, which makes data sharing in a shipbuilding block assembly shop very important. In this paper, we propose a scheduling method applicable to the shipbuilding industry and a decision-making support system based on a web-based visualization system.

Keywords: Shipbuilding, Monitoring, Block assembly shop, Visualization

7540 Traffic Density Measurement by Automatic Detection of Vehicles Using Gradient Vectors from Aerial Images

Authors: Saman Ghaffarian, Ilgın Gökasar

Abstract:

This paper presents a new automatic vehicle detection method for measuring traffic density from very high resolution aerial images. The proposed method starts by extracting road regions from the image using road vector data. Then, the road image is divided into equal sections, taking the resolution of the images into account. Gradient vectors of the road image are computed from the edge map of the corresponding image. The gradient vectors on each boundary of the sections are divided where the gradient vectors significantly change their directions. Finally, the number of vehicles in each section is determined by calculating the standard deviation of the gradient vectors in each group and accepting as a vehicle any group whose standard deviation is above a predefined threshold value. The proposed method was tested on four very high resolution aerial images acquired over Istanbul, Turkey, which contain roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, producing an overall F1 score of 86%.

Keywords: Aerial images, intelligent transportation systems, traffic density measurement, vehicle detection.

7539 Application of CFD for Air Flow Analysis underneath Natural Ventilation with Forced Convection in Roof Attic

Authors: C. Nutphuang, S. Chirarattananon, V.D. Hien

Abstract:

In research on natural ventilation and passive cooling with forced convection, it is essential to know how heat flows in a solid object and the pattern of temperature distribution on its surfaces, and eventually how air flows through and convects heat from the surfaces of the steel under the roof. This paper presents results from a computational fluid dynamics (CFD) program, comparing natural ventilation and forced convection within a roof attic that receives direct solar radiation. The CFD model of the air flow inside the roof attic covers two cases: the first, under natural ventilation, treats the attic as a closed area, and the second, under forced convection, treats it as an open area. Both cases provide predictions of the temperature, pressure and mass flow rate distributions within the roof attic. The comparison shows that this CFD program is an effective model for predicting the air flow, temperature and heat transfer coefficient distributions within the roof attic. The results show that forced convection helps to reduce heat transfer through the roof attic, and the inner zone around the steel core has a lower temperature than with natural ventilation; the temperature difference on the steel core between the two cases was 10-15 K.

Keywords: CFD program, natural ventilation, forced convection, heat transfer, air flow.

7538 Exploring Inter-Relationships between Events to Identify Strategic Technological Competencies: A Combined Approach

Authors: Cláudio Santos, Madalena Araújo, Nuno Correia

Abstract:

The inherent complexity of today's business environments is forcing organizations to be attentive to dynamics on several fronts. Therefore, the management of technological innovation is continually faced with uncertainty about the future. These issues lead to a need for a systemic perspective, able to analyze the consequences of interactions between different factors. The field of technology foresight has proposed methods and tools to deal with this broader perspective. In an attempt to provide a method to analyze the complex interactions between events in several areas, departing from the identification of the most strategic competencies, this paper presents a methodology based on the Delphi method and Quality Function Deployment. The methodology is applied to a sheet metal processing equipment manufacturer as a case study.

Keywords: Competencies, Delphi Method, Quality Function Deployment, Technology Foresight.

7537 Statistical Reliability Based Modeling of Series and Parallel Operating Systems using Extreme Value Theory

Authors: Mohamad Mahdavi, Mojtaba Mahdavi

Abstract:

This paper presents a new method for computing the reliability of systems arranged in series or parallel. In this method, the life distribution function of the whole structure is estimated using the asymptotic Type I extreme value (EV), or Gumbel, distribution: the EV distribution of minima is used to estimate the life distribution function of a series structure, and the EV distribution of maxima is used for a parallel system. All parameters are estimated by the method of moments. The reliability function, the failure (hazard) rate and the p-th percentile point of each distribution are determined. Other important indices, such as the mean time to failure (MTTF) and the mean time to repair (MTTR), are also computed for non-repairable and renewal systems in both series and parallel structures.

Keywords: Reliability, extreme value, parallel, series, life distribution.
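
A minimal Python sketch of the two building blocks (method-of-moments fitting of the Type I distribution and the resulting reliability functions), using the usual Gumbel parameterization; the MTTF expressions follow from the Gumbel mean.

    import numpy as np

    EULER_GAMMA = 0.5772156649

    def gumbel_fit_moments(lifetimes, mode="min"):
        # Method-of-moments estimates of the Type I (Gumbel) location mu and scale sigma.
        xbar, s = np.mean(lifetimes), np.std(lifetimes, ddof=1)
        sigma = s * np.sqrt(6.0) / np.pi
        mu = xbar + EULER_GAMMA * sigma if mode == "min" else xbar - EULER_GAMMA * sigma
        return mu, sigma

    def reliability(t, mu, sigma, mode="min"):
        z = (np.asarray(t, float) - mu) / sigma
        if mode == "min":                      # series system: life = minimum of component lives
            return np.exp(-np.exp(z))
        return 1.0 - np.exp(-np.exp(-z))       # parallel system: life = maximum

    def mttf(mu, sigma, mode="min"):
        # Mean of the fitted Gumbel distribution (minimum or maximum form).
        return mu - EULER_GAMMA * sigma if mode == "min" else mu + EULER_GAMMA * sigma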

7536 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings

Authors: A. Ince

Abstract:

In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subjected to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root through the material constitutive relationship. The validity of the method is demonstrated by comparing the computed results of the proposed model against finite element data for a notched shaft. The comparison shows that the model estimates the notch-root elasto-plastic stresses and strains with good accuracy using linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental non-linear FE analysis. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.

Keywords: Elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue.
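
As background, the monotonic uniaxial form of Neuber's rule equates the notch-root stress-strain product to its elastic estimate, sigma*eps = (Kt*S)^2/E, and is solved together with a Ramberg-Osgood curve. The sketch below uses assumed material constants and is far simpler than the paper's incremental multiaxial formulation.

    from scipy.optimize import brentq

    E, Kp, n = 200e3, 1200.0, 0.2     # MPa; assumed Ramberg-Osgood parameters (E, K', n')
    Kt, S = 3.0, 150.0                # stress concentration factor and nominal stress (MPa)

    def ramberg_osgood(sigma):
        # Total strain = elastic + plastic part.
        return sigma / E + (sigma / Kp) ** (1.0 / n)

    def neuber(sigma):
        # Neuber's rule: sigma * eps = (Kt * S)**2 / E
        return sigma * ramberg_osgood(sigma) - (Kt * S) ** 2 / E

    sigma_notch = brentq(neuber, 1e-3, Kt * S)   # local stress, bounded by the elastic estimate
    eps_notch = ramberg_osgood(sigma_notch)
    print(sigma_notch, eps_notch)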

7535 Determination of Penicillins Residues in Livestock and Marine Products by LC/MS/MS

Authors: Ji Young Song, Soo Jung Hu, Hyunjin Joo, Joung Boon Hwang, Mi Ok Kim, Shin Jung Kang, Dae Hyun Cho

Abstract:

A multi-residue analysis method for penicillins was developed and validated for bovine muscle, chicken, milk and flatfish. Detection was based on liquid chromatography tandem mass spectrometry (LC/MS/MS). The developed method was validated for specificity, precision, recovery and linearity. The analytes were extracted with 80% acetonitrile and cleaned up by a single reversed-phase solid-phase extraction step. Six penicillins presented recoveries higher than 76%, with the exception of amoxicillin (59.7%). Relative standard deviations (RSDs) were not more than 10%. LOQ values ranged from 0.1 to 4.5 µg/kg. The method was applied to 128 real samples. Benzylpenicillin was detected in 15 samples, cloxacillin in 7 samples and oxacillin in 2 samples, but the detected levels were all below the MRLs for penicillins.

Keywords: Penicillins, livestock product, Multi-residue analysis, LC/MS/MS

7534 Optimal DG Placement in Distribution systems Using Cost/Worth Analysis

Authors: M. Ahmadigorji, A. Abbaspour, A. Rajabi-Ghahnavieh, M. Fotuhi-Firuzabad

Abstract:

DG application has received increasing attention in recent years. The impact of DG on various aspects of distribution system operation, such as reliability and energy loss, depends highly on the DG location in the distribution feeder. Optimal DG placement is an important subject that has not yet been fully discussed. This paper presents an optimization method to determine optimal DG placement based on a cost/worth analysis approach. The method considers technical and economic factors such as energy loss, load point reliability indices and DG costs, and particularly the portability of DG. The proposed method is applied to a test system, and the impacts of different parameters, such as the load growth rate and load forecast uncertainty (LFU), on the optimum DG location are studied.

Keywords: Distributed generation, optimal placement, cost/worth analysis, customer interruption cost, Dynamic programming.

7533 Study of the Effect of the Number of Datasets on the Precision of Estimated Saturated Hydraulic Conductivity

Authors: M. Siosemarde, M. Byzedi

Abstract:

The saturated hydraulic conductivity of soil is an important property in processes involving water and solute flow in soils. It is difficult to measure and can be highly variable, requiring a large number of replicate samples. In this study, 60 sets of soil samples were collected in the Saqhez region of Kurdistan province, Iran. Statistics such as the correlation coefficient (R), root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) were used to evaluate how the multiple linear regression models varied with the number of datasets. The multiple linear regression models were evaluated when only the percentages of sand, silt and clay content (SSC) were used as inputs, and when SSC and bulk density, Bd, (SSC+Bd) were used as inputs. For the 50-dataset case, the R, RMSE, MBE and MAE values were 0.925, 15.29, -1.03 and 12.51 for the SSC method and 0.927, 15.28, -1.11 and 12.92 for the SSC+Bd method, respectively, for the relationships obtained from multiple linear regressions on the data. For the 10-dataset case, the R, RMSE, MBE and MAE values were 0.725, 19.62, -9.87 and 18.91 for the SSC method and 0.618, 24.69, -17.37 and 22.16 for the SSC+Bd method, respectively, which shows that as the number of datasets increases, the precision of the estimated saturated hydraulic conductivity increases.

Keywords: dataset, precision, saturated hydraulic conductivity, soil and statistics.
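
The four evaluation statistics quoted above are straightforward to compute; a minimal, generic sketch (not tied to the paper's data) is:

    import numpy as np

    def regression_metrics(observed, predicted):
        observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
        err = predicted - observed
        r = np.corrcoef(observed, predicted)[0, 1]   # correlation coefficient R
        rmse = np.sqrt(np.mean(err ** 2))            # root mean square error
        mbe = np.mean(err)                           # mean bias error
        mae = np.mean(np.abs(err))                   # mean absolute error
        return r, rmse, mbe, mae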

7532 Mapping of C* Elements in Finite Element Method using Transformation Matrix

Authors: G. H. Majzoob, B. Sharifi Hamadani

Abstract:

Mapping between local and global coordinates is an important issue in the finite element method, as all calculations are performed in local coordinates. The concern arises when sub-parametric elements are used, in which the shape functions of the field variable and of the element geometry are not the same. This is particularly the case for C* elements, in which the extra degrees of freedom added to the nodes make the elements sub-parametric. In the present work, the transformation matrix for C1* (an 8-noded hexahedron element with 12 degrees of freedom at each node) is obtained using equivalent C0 elements (with the same number of degrees of freedom). The convergence rate of the 8-noded C1* element is nearly equal to that of its equivalent C0 element, while it consumes less CPU time than the C0 element. The existence of derivative degrees of freedom at the nodes of the C1* element, along with its excellent convergence, makes it superior to its equivalent C0 element.

Keywords: Mapping, Finite element method, C* elements, Convergence, C0 elements.

7531 Optimal Capacitor Placement in a Radial Distribution System using Plant Growth Simulation Algorithm

Authors: R. Srinivasa Rao, S. V. L. Narasimham

Abstract:

This paper presents a new and efficient approach for capacitor placement in radial distribution systems that determines the optimal locations and sizes of capacitors with the objective of improving the voltage profile and reducing power loss. The solution methodology has two parts: in part one, loss sensitivity factors are used to select the candidate locations for capacitor placement; in part two, a new algorithm that employs the Plant Growth Simulation Algorithm (PGSA) is used to estimate the optimal size of the capacitors at the optimal buses determined in part one. The main advantage of the proposed method is that it does not require any external control parameters. Another advantage is that it handles the objective function and the constraints separately, avoiding the need to determine barrier factors. The proposed method is applied to 9-, 34- and 85-bus radial distribution systems, and the solutions obtained are compared with those of other methods. The proposed method outperforms the other methods in terms of solution quality.

Keywords: Distribution systems, Capacitor placement, loss reduction, Loss sensitivity factors, PGSA.

7530 Noise Removal from Surface Respiratory EMG Signal

Authors: Slim Yacoub, Kosai Raoof

Abstract:

The aim of this study was to remove the two principal noise sources that disturb the surface electromyography signal of the diaphragm: the electrocardiogram (ECG) artefact and the power line interference artefact. The proposed algorithm is based on a Widrow least mean squares (LMS) adaptive structure. Such structures require a reference signal that is correlated with the noise contaminating the signal. The noise references are extracted as follows: for the power line interference, a reference is constructed mathematically from two cosine functions, one at 50 Hz (the fundamental) and one at its 150 Hz harmonic; for the ECG artefact, a matching pursuit technique combined with an LMS structure is used for the estimation. Both removal procedures are achieved without the use of supplementary electrodes. The filtering techniques are validated on real recordings of the surface diaphragm electromyography signal, and the performance of the proposed methods is compared with previously reported results.

Keywords: Surface EMG, Adaptive, Matching Pursuit, Powerline interference.
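
The power line part of this scheme is a classic adaptive noise canceller: sinusoidal references at 50 Hz and 150 Hz are weighted by LMS so that the filter output tracks the interference and the error output is the cleaned EMG. The Python sketch below uses synthetic signals and does not cover the matching pursuit ECG removal.

    import numpy as np

    def lms_cancel(d, refs, mu=0.005):
        # d: noisy EMG (primary input), refs: (n_refs, N) reference signals, mu: step size.
        n_refs, N = refs.shape
        w = np.zeros(n_refs)
        e = np.zeros(N)                  # error output = cleaned signal
        for k in range(N):
            x = refs[:, k]
            y = w @ x                    # estimated interference
            e[k] = d[k] - y
            w += 2 * mu * e[k] * x       # LMS weight update
        return e

    fs, N = 1000, 5000
    t = np.arange(N) / fs
    emg = 0.5 * np.random.randn(N)                       # stand-in for the EMG signal
    noisy = emg + 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
    refs = np.vstack([np.cos(2 * np.pi * 50 * t), np.sin(2 * np.pi * 50 * t),
                      np.cos(2 * np.pi * 150 * t), np.sin(2 * np.pi * 150 * t)])
    clean = lms_cancel(noisy, refs)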

7529 A Monte Carlo Method to Data Stream Analysis

Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham

Abstract:

Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element, and data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges; they can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and its characteristics are computationally approximated. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments, and we apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.

Keywords: Data stream, Monte Carlo, Sampling, Density estimation.
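
The paper's EMR sampling is not specified in the abstract; as a generic illustration of Monte Carlo sampling over a stream (one pass, bounded memory), reservoir sampling keeps a uniform random subset of the elements seen so far.

    import random

    def reservoir_sample(stream, k, seed=0):
        # One-pass uniform sampling of k items from a stream of unknown length.
        rng = random.Random(seed)
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)
            else:
                j = rng.randint(0, i)      # item replaces a slot with probability k/(i+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    sample = reservoir_sample(range(10**6), k=100)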

7528 Estimation of Tensile Strength for Granitic Rocks by Using Discrete Element Approach

Authors: Aliakbar Golshani, Armin Ramezanzad

Abstract:

Tensile strength, which is an important rock parameter for engineering applications, is difficult to measure directly through physical experiments (i.e., the uniaxial tensile test). Therefore, indirect experimental methods such as the Brazilian test have been considered, and some relations have been proposed in order to obtain the tensile strength of rocks indirectly. In this research, the Particle Flow Code in three dimensions (PFC3D) software was used to calculate the tensile strength of granitic rocks numerically. First, uniaxial compression tests were simulated and the tensile strength was determined for Inada granite (from a quarry in Kasama, Ibaraki, Japan). Then, by simulating the Brazilian test conditions for Inada granite, the tensile strength was indirectly calculated again. The results show that the numerically calculated tensile strength agrees well with the experimental results obtained from uniaxial tensile tests on Inada granite samples.

Keywords: Numerical Simulation, PFC, Tensile Strength, Brazilian Test.
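
For reference, the Brazilian (indirect) tensile strength is conventionally obtained from the failure load of a diametrically loaded disc as sigma_t = 2P/(pi*D*L), with D the disc diameter and L its thickness; the numbers below are purely illustrative, not the Inada granite results.

    import math

    def brazilian_tensile_strength(P, D, L):
        # P: failure load (N), D: disc diameter (m), L: disc thickness (m)
        return 2.0 * P / (math.pi * D * L)   # tensile strength in Pa

    sigma_t = brazilian_tensile_strength(P=25e3, D=0.05, L=0.025)   # ~12.7 MPa, illustrative only
    print(sigma_t / 1e6, "MPa")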

7527 On Identity Disclosure Risk Measurement for Shared Microdata

Authors: M. N. Huda, S. Yamada, N. Sonehara

Abstract:

Probability-based identity disclosure risk measurement may give the same overall risk for different anonymization strategies applied to the same dataset. Some entities in the anonymous dataset may have higher identification risks than others. Individuals are more concerned about risks higher than the average and are interested in knowing whether they may be exposed to such higher risk. A single overall risk figure in the above measurement method does not indicate whether some of the involved entities have a higher identity disclosure risk than the others. In this paper, we introduce an identity disclosure risk measurement method that not only conveys the overall risk, but also indicates whether some of the members have a higher risk than the others. The proposed method quantifies the overall risk based on the individual risk values, the percentage of records that have a risk value higher than the average, and how much larger the higher risk values are compared to the average. We analyze the disclosure risks for different disclosure control techniques applied to original microdata and present the results.

Keywords: Anonymization, microdata, disclosure risk, privacy.
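
The three ingredients named above (individual risks, the share of records above the average, and how much larger those risks are) can be illustrated with a small sketch. Taking each record's risk as the reciprocal of its equivalence-class size is a common proxy and an assumption here, not necessarily the paper's definition.

    import numpy as np
    from collections import Counter

    def disclosure_risk_summary(quasi_identifiers):
        # Individual risk = 1 / size of the record's equivalence class on its quasi-identifiers.
        counts = Counter(map(tuple, quasi_identifiers))
        risks = np.array([1.0 / counts[tuple(r)] for r in quasi_identifiers])
        avg = risks.mean()
        above = risks > avg
        return {
            "overall_risk": avg,
            "share_above_average": above.mean(),              # fraction of records riskier than average
            "excess_ratio": risks[above].mean() / avg if above.any() else 1.0,
        }

    records = [("30-39", "F", "12345"), ("30-39", "F", "12345"), ("40-49", "M", "54321")]
    print(disclosure_risk_summary(records))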

7526 Length Dimension Correlates of Longitudinal Physical Conditioning on Indian Male Youth

Authors: Seema Sharma Kaushik, Dhananjoy Shaw

Abstract:

Various length dimensions of the body have been variables of interest in kinanthropometric research. However, the inclusion of length measurements in most studies remains restricted to reflecting the characteristics of a particular game/sport at a particular time. Hence, the present investigation was conducted to study the length dimension correlates of a longitudinal physical conditioning program in Indian male youth. The study was conducted on 90 Indian male youth. The sample was divided equally into three groups, namely progressive load training (PLT), constant load training (CLT) and no load training (NL). The variables included sitting height, leg length, arm length and foot length. The study adopted a multi-group repeated-measures design: the three groups were measured four times, following the completion of each of the three meso-cycles of six weeks' duration. The measurements were taken using standard landmarks and procedures. Means, standard deviations and analysis of covariance were computed to analyze the data statistically, and post-hoc analysis was conducted for the significant F-ratios at the 0.05 level. The study concluded that the longitudinal physical conditioning program had a significant effect on various length dimensions of Indian male youth.

Keywords: Indian male youth, longitudinal, length dimensions, physical conditioning.
