Search results for: cumulative variance.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 480

390 Comparing Data Analysis, Communication and Information Technologies Expertise Levels in Undergraduate Psychology Students

Authors: Ana Cázares

Abstract:

The aims of this study were, first, to compare expertise levels in data analysis, communication and information technologies among undergraduate psychology students and, second, to verify the factor structure of the E-ETICA (Escala de Experticia en Tecnologias de la Informacion, la Comunicacion y el Analisis, or Data Analysis, Communication and Information Technologies Expertise Scale), which had shown excellent internal consistency (α = 0.92) as well as a simple factor structure. Three factors (Complex Information and Communications Technologies, Basic Information and Communications Technologies, and E-Searching and Download Abilities) explain 63% of the variance. In the present study, 260 students (119 juniors and 141 seniors) were asked to respond to the scale (16 items on a five-point Likert scale, from 1: null domain to 5: total domain). The results show that junior and senior students report very similar expertise levels; however, the E-ETICA presents a different factor structure for juniors, in which four factors (Information E-Searching, Download and Process; Data Analysis; Organization; and Communication Technologies) also explain 63% of the variance.
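
To make the explained-variance criterion concrete: the sketch below (a minimal Python illustration with hypothetical Likert responses, not the study's data) eigen-decomposes an item correlation matrix and reports the cumulative proportion of variance explained, the quantity behind statements such as "three factors explain 63% of variance."

```python
import numpy as np

def cumulative_explained_variance(X):
    """Eigen-decompose the correlation matrix of item responses X
    (rows: respondents, columns: items) and return the cumulative
    proportion of variance explained by successive components."""
    R = np.corrcoef(X, rowvar=False)        # item correlation matrix
    eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, descending
    return np.cumsum(eigvals) / eigvals.sum()

# Hypothetical example: 260 respondents, 16 five-point Likert items
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(260, 16)).astype(float)
cum = cumulative_explained_variance(X)
print(cum[:4])                              # variance explained by first components
print(int(np.searchsorted(cum, 0.63)) + 1)  # components needed for ~63%
```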

Keywords: Data analysis, Information, Communications Technologies, Expertise Levels.

389 A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA

Authors: Sellappan Narayanagounder, Karuppusami Gurusami

Abstract:

The traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that various sets of these three indexes may produce an identical value of RPN. This paper seeks to address these drawbacks and proposes a new approach to overcome them. The Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN, and a new method is proposed to prioritize failure modes when there is disagreement in the ranking scales for severity, occurrence and detection. An analysis of variance (ANOVA) is used to compare means of RPN values, and the SPSS (Statistical Package for the Social Sciences) statistical analysis package is used to analyze the data. The results presented are based on two case studies. It is found that the proposed methodology resolves the limitations of the traditional FMEA approach.
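
A hedged illustration of the tie problem and the ANOVA step (Python with SciPy; the failure-mode scores are hypothetical): enumerating all severity/occurrence/detection triples on a 1-10 scale shows how few distinct RPN values they produce, and a one-way ANOVA compares mean RPNs across failure modes.

```python
import itertools
from scipy.stats import f_oneway

# Severity, occurrence and detection each scored 1..10; many distinct
# (S, O, D) triples collapse to the same RPN = S * O * D.
rpn = {}
for s, o, d in itertools.product(range(1, 11), repeat=3):
    rpn.setdefault(s * o * d, []).append((s, o, d))
print(len(rpn))        # only 120 distinct RPN values from 1000 triples
print(rpn[60][:4])     # several different triples all give RPN = 60

# One-way ANOVA comparing mean RPNs of three hypothetical failure
# modes, each scored independently by three team members.
fm1, fm2, fm3 = [60, 72, 64], [60, 48, 54], [90, 96, 84]
print(f_oneway(fm1, fm2, fm3))   # F statistic and p-value
```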

Keywords: Failure mode and effects analysis, Risk priority code, Critical failure mode, Analysis of variance.

388 Statistical Analysis and Predictive Learning of Mechanical Parameters for TiO2 Filled GFRP Composite

Authors: S. Srinivasa Moorthy, K. Manonmani

Abstract:

New polymer composites consisting of E-glass fiber reinforcement with titanium oxide filler in a double-bonded unsaturated polyester resin matrix were made. The composites were made with three different fiber lengths (3 cm, 5 cm, and 7 cm), filler contents (2 wt%, 4 wt%, and 6 wt%) and fiber contents (20 wt%, 40 wt%, and 60 wt%). Twenty-seven different compositions were fabricated, and a sequence of experiments was carried out to determine tensile strength and impact strength. The vital influencing factors (fiber length, fiber content and filler content) were chosen as three factors at three levels in Taguchi's L9 orthogonal array. The influences of the parameters on tensile strength and impact strength were determined by analysis of variance (ANOVA) and the S/N ratio. Using an artificial neural network (ANN), an expert system was devised to predict the properties of the hybrid-reinforcement GFRP composites. The predictive models were validated experimentally and showed close agreement with the measurements.
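
For reference, the larger-the-better S/N ratio used for strength responses in a Taguchi analysis can be computed as below; a minimal sketch with hypothetical replicate values.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio (dB), used for responses
    such as tensile or impact strength: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical tensile-strength replicates (MPa) for one L9 run
print(sn_larger_is_better([228.0, 235.5, 231.2]))
```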

Keywords: Analysis of variance (ANOVA), Artificial neural network (ANN), Polymer composites, Taguchi’s orthogonal array.

387 Shape Optimization of Impeller Blades for a Bidirectional Axial Flow Pump using Polynomial Surrogate Model

Authors: I. S. Jung, W. H. Jung, S. H. Baek, S. Kang

Abstract:

This paper describes the shape optimization of impeller blades for an anti-heeling bidirectional axial flow pump used in ships. In general, a bidirectional axial pump has an efficiency much lower than that of the classical unidirectional pump because of the symmetry of the blade type. In this paper, focusing on the pump impeller, the shape of the blades is redesigned to reach higher efficiency in a bidirectional axial pump. The commercial code employed in this simulation is CFX v.13. CFD results for pump torque, head, and hydraulic efficiency were compared. The orthogonal array (OA) and analysis of variance (ANOVA) techniques, together with surrogate-model-based optimization using orthogonal polynomials, are employed to determine the main effects and the optimal design variables. From the optimal design, we identify the effective design variables of the impeller blades and show that the optimal solution satisfies the constraints on pump torque and head.
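
As an illustration of the surrogate step, the sketch below fits a second-order polynomial response surface to a small OA-style sample by least squares and scans it for the best design point. The sample values are hypothetical, and plain monomials stand in for the orthogonal polynomials used in the paper.

```python
import numpy as np

def quad_features(X):
    """Second-order polynomial basis in two design variables."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

# OA-style 3x3 design in coded variables and hypothetical efficiencies
X = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
y = np.array([0.71, 0.74, 0.73, 0.75, 0.79, 0.78, 0.74, 0.78, 0.76])
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Scan the fitted surrogate on a fine grid for the best design point
g = np.array([[u, v] for u in np.linspace(-1, 1, 41)
                     for v in np.linspace(-1, 1, 41)])
pred = quad_features(g) @ beta
print(g[np.argmax(pred)], pred.max())
```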

Keywords: Bidirectional axial flow pump, Impeller blade, CFD, Analysis of variance, Polynomial surrogate model

386 Numerical Approximation to the Performance of CUSUM Charts for EMA(1) Process

Authors: K. Petcharat, Y. Areepong, S. Sukparungsri, G. Mititelu

Abstract:

In this paper, we approximate the average run length (ARL) of a CUSUM chart when the observations form an exponential first-order moving average sequence (EMA(1)). We use a Gauss-Legendre numerical scheme for the integral equation (IE) method to approximate ARL0 and ARL1, the in-control and out-of-control ARLs, respectively. We compare the results from the IE method with the exact solution and find that the two are in good agreement.
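
The sketch below is not the paper's integral-equation scheme: it is a brute-force Monte Carlo estimate of the ARL of an upper CUSUM on i.i.d. exponential data (it ignores the EMA(1) dependence), with hypothetical design parameters, shown only to make the quantities ARL0 and ARL1 concrete.

```python
import numpy as np

def cusum_arl(lam, k, h, n_paths=2000, seed=1):
    """Monte Carlo ARL of the upper CUSUM C_t = max(0, C_{t-1} + X_t - k),
    signalling when C_t > h, for i.i.d. Exp(rate=lam) observations."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(n_paths):
        c, t = 0.0, 0
        while c <= h:
            c = max(0.0, c + rng.exponential(1.0 / lam) - k)
            t += 1
        total += t
    return total / n_paths

print(cusum_arl(lam=1.0, k=1.5, h=4.0))   # in-control ARL0
print(cusum_arl(lam=0.5, k=1.5, h=4.0))   # ARL1 after an upward shift in the mean
```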

Keywords: Cumulative Sum Chart, Moving Average Observation, Average Run Length, Numerical Approximations.

385 Modelling an Investment Portfolio with Mandatory and Voluntary Contributions under M-CEV Model

Authors: Amadi Ugwulo Chinyere, Lewis D. Gbarayorks, Emem N. H. Inamete

Abstract:

In this paper, the mandatory contribution, additional voluntary contribution (AVC) and administrative charges are merged together to determine the optimal investment strategy (OIS) for a pension plan member (PPM) in a defined contribution (DC) pension scheme under the modified constant elasticity of variance (M-CEV) model. We assume that the voluntary contribution is a stochastic process, and we consider a portfolio consisting of one risk-free asset and one risky asset modeled by the M-CEV model. A stochastic differential equation consisting of the PPM's monthly contributions, voluntary contributions and administrative charges is obtained. Moreover, an optimization problem in the form of a Hamilton-Jacobi-Bellman equation, which is a nonlinear partial differential equation, is obtained. Using a power transformation and a change of variables, explicit solutions for the OIS and the value function are obtained under constant absolute risk aversion (CARA). Furthermore, numerical simulations of the impact of some sensitive parameters on the OIS are discussed extensively. Finally, our result generalizes some existing results in the literature.
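
For orientation, the generic building blocks behind models of this type are the CEV price process and the CARA utility. The sketch below states only these standard forms; the paper's M-CEV modification and its contribution and charge terms are not reproduced here.

```latex
% Standard CEV dynamics for the risky asset and CARA utility.
\begin{align*}
  dS_t &= \mu S_t\,dt + \sigma S_t^{\beta+1}\,dW_t
        && \text{(CEV process; } \beta \text{ is the elasticity parameter)} \\
  U(x) &= -\tfrac{1}{q}\,e^{-qx}, \quad q > 0
        && \text{(CARA utility; } q \text{ is the absolute risk aversion)}
\end{align*}
```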

Keywords: DC pension fund, modified constant elasticity of variance, optimal investment strategies, voluntary contribution, administrative charges.

384 Adaptive Weighted Averaging Filter Using the Appropriate Number of Consecutive Frames

Authors: Mahmoud Saeidi, Ali Nazemipour

Abstract:

In this paper, we propose a novel adaptive spatiotemporal filter that utilizes image sequences in order to remove noise. The consecutive frames comprise the current, previous and next noisy frames. The proposed filter is based upon weighted averaging of pixel intensities and the noise variance in the image sequence. It utilizes the Appropriate Number of Consecutive Frames (ANCF), chosen according to the intensities of the noisy pixels among the frames. The number of consecutive frames is calculated adaptively for each region in the image, and its value may change from one region to another depending on the pixel intensities within the region. The weights are determined by a well-defined mathematical criterion, which adapts to the spatiotemporal features of the pixels in the consecutive frames. It is experimentally shown that the proposed filter can preserve image structures and edges under motion while suppressing noise, and thus can be used effectively for filtering image sequences. In addition, the AWA filter using ANCF is particularly well suited for filtering sequences that contain segments with abruptly changing scene content due to, for example, rapid zooming and changes in the camera's view.
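
A minimal sketch of the general idea, assuming a simple Gaussian penalty on intensity differences (this is not the paper's exact weighting criterion, and the ANCF selection is not reproduced): three consecutive frames are averaged with weights that shrink wherever a neighboring frame disagrees with the current pixel relative to the noise variance.

```python
import numpy as np

def weighted_temporal_average(prev, cur, nxt, sigma2, tau=20.0):
    """Average three consecutive noisy frames with per-pixel weights
    that fall off as a neighbor's intensity departs from the current
    frame, scaled by the noise variance sigma2 (motion-adaptive idea)."""
    stack = np.stack([prev, cur, nxt]).astype(float)
    diff2 = (stack - cur.astype(float)) ** 2
    w = np.exp(-diff2 / (tau * sigma2))
    return (w * stack).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))        # synthetic scene
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(3)]
out = weighted_temporal_average(*frames, sigma2=100.0)
print(np.std(frames[1] - clean), np.std(out - clean))        # noise is reduced
```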

Keywords: Appropriate Number of Consecutive Frames, Adaptive Weighted Averaging, Motion Estimation, Noise Variance, Motion Compensation

383 Mitigation of Radiation Levels for Base Transceiver Stations based on ITU-T Recommendation K.70

Authors: Reyes C., Ramos B.

Abstract:

This paper presents practical methods to reduce human exposure levels in the area around base transceiver stations in an environment with multiple sources, based on ITU-T Recommendation K.70. An example is presented to explain the mitigation techniques and their results, and to show how they can be applied, especially in developing countries where there is little research on non-ionizing radiation.
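
A sketch of the cumulative exposure ratio for multiple sources follows, assuming the squared field-ratio summation used for limits of this kind, with compliance while the sum stays at or below 1. The measured field values are hypothetical; the limits follow the ICNIRP general-public levels for the listed bands.

```python
# (measured E-field in V/m, reference limit in V/m at that frequency)
sources = [
    (12.0, 41.25),   # e.g. GSM-900 sector (limit 1.375*sqrt(900))
    (9.0, 58.34),    # e.g. GSM-1800 sector (limit 1.375*sqrt(1800))
    (6.0, 61.00),    # e.g. UMTS-2100 carrier (limit above 2 GHz)
]

# Each source contributes the square of its field relative to the limit.
cumulative_ratio = sum((e / e_lim) ** 2 for e, e_lim in sources)
print(cumulative_ratio,
      "compliant" if cumulative_ratio <= 1.0 else "mitigation needed")
```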

Keywords: Electromagnetic fields (EMF), human exposure limits, intentional radiator, cumulative exposure ratio, base transceiver station (BTS), radiation levels.

382 Machining Parameters Optimization of Developed Yttria Stabilized Zirconia Toughened Alumina Ceramic Inserts While Machining AISI 4340 Steel

Authors: Nilrudra Mandal, B. Doloi, B. Mondal

Abstract:

An attempt has been made to investigate the machinability of zirconia toughened alumina (ZTA) inserts while turning AISI 4340 steel. The inserts were prepared by a powder metallurgy process route, and the machining experiments were performed based on a Response Surface Methodology (RSM) design called Central Composite Design (CCD). Mathematical models of flank wear, cutting force and surface roughness have been developed using second-order regression analysis. The adequacy of the models has been checked using analysis of variance (ANOVA). It can be concluded that cutting speed and feed rate are the two most influential factors for flank wear and cutting force prediction. For surface roughness, both cutting speed and depth of cut make significant contributions. The effect of the key parameters on each response is also presented as graphical contours for choosing the operating parameters precisely. An 83% desirability level was achieved at the optimized condition.

Keywords: Analysis of variance (ANOVA), Central Composite Design (CCD), Response Surface Methodology (RSM), Zirconia Toughened Alumina (ZTA).

381 The Robust Clustering with Reduction Dimension

Authors: Dyah E. Herwindiati

Abstract:

Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; objects in a group share similar characteristics. This paper discusses a robust clustering process for image data with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to the high dimensionality of image data is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information; one of the most common forms of dimensionality reduction is PCA. The 2DPCA is often called a variant of PCA: the image matrices are treated directly as 2D matrices and do not need to be transformed into vectors, so the covariance matrix of the images can be constructed directly from the original image matrices. The classical covariance matrix used in this decomposition is, however, very sensitive to outlying observations. The objective of the paper is to compare the performance of the robust minimum vector variance (MVV) estimator under the two-dimensional projection (2DPCA) and under PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulation of the robustness aspects and an illustration of clustering images are discussed at the end of the paper.
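
The dispersion measure being minimized can be sketched as follows. This assumes the usual definition of vector variance as the trace of the squared covariance matrix (the sum of squared eigenvalues); that reading of the MVV criterion is ours, not a detail stated in the abstract.

```python
import numpy as np

def vector_variance(X):
    """Vector variance of data matrix X (rows are observations):
    VV = trace(S @ S) for the sample covariance matrix S."""
    S = np.cov(X, rowvar=False)
    return np.trace(S @ S)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X_out = np.vstack([X, rng.normal(8.0, 1.0, size=(5, 5))])   # 5 hidden outliers
print(vector_variance(X), vector_variance(X_out))           # outliers inflate VV
```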

Keywords: Breakdown point, Consistency, 2DPCA, PCA, Outlier, Vector Variance

380 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiments based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels each, with the aim of minimizing the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effects of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test is performed to check the optimal setting obtained from the Taguchi method, and the result is satisfactory.
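
Since roughness is a smaller-the-better response, the S/N ratio takes a different form from the larger-the-better version used for strength responses; a minimal sketch with hypothetical Ra replicates:

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better S/N ratio (dB), appropriate for
    surface roughness responses such as Ra and Rz: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical Ra replicates (micrometres) for one L9 trial
print(sn_smaller_is_better([1.82, 1.76, 1.91]))
```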

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

379 Texture Feature-Based Language Identification Using Wavelet-Domain BDIP and BVLC Features and FFT Feature

Authors: Ick Hoon Jang, Hoon Jae Lee, Dae Hoon Kwon, Ui Young Pak

Abstract:

In this paper, we propose texture feature-based language identification using wavelet-domain BDIP (block difference of inverse probabilities) and BVLC (block variance of local correlation coefficients) features and an FFT (fast Fourier transform) feature. In the proposed method, wavelet subbands are first obtained by wavelet transform from a test image and denoised by Donoho's soft-thresholding. BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by two-dimensional FFT from the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected, and the magnitude operator is applied to them. Moments for each subband of BDIP and BVLC and for each magnitude of the significant FFT coefficients are then computed and fused into a feature vector. In classification, a stabilized Bayesian classifier, which adopts variance thresholding, searches for the training feature vector most similar to the test feature vector. Experimental results show that the proposed method with the three operations yields excellent language identification even with a rather low feature dimension.

Keywords: BDIP, BVLC, FFT, language identification, texture feature, wavelet transform.

378 Approximate Range-Sum Queries over Data Cubes Using Cosine Transform

Authors: Wen-Chi Hou, Cheng Luo, Zhewei Jiang, Feng Yan

Abstract:

In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and thus can approximate data distribution functions easily with a small number of coefficients. The derived estimator is accurate and easy to update. We perform experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of updating.
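
A minimal sketch of the energy-compaction idea (hypothetical bucket counts): the empirical cumulative distribution of a measure column is DCT-transformed, truncated to a few coefficients, and inverted, and the reconstruction error stays small.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
counts, _ = np.histogram(rng.lognormal(0.0, 0.5, 10000), bins=64)
cdf = np.cumsum(counts) / counts.sum()      # empirical cumulative distribution

coeffs = dct(cdf, norm='ortho')
k = 8
coeffs[k:] = 0.0                            # keep only the first k coefficients
approx = idct(coeffs, norm='ortho')

print(np.abs(approx - cdf).max())           # small error from 8 of 64 coefficients
```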

Keywords: DCT, Data Cube

377 Dissociation of CDS from CVA Valuation under Notation Changes

Authors: R. Henry, J-B. Paulin, St. Fauchille, Ph. Delord, K. Benkirane, A. Brunel

Abstract:

In this paper, the CVA computation of an interest rate swap is presented based on its rating. The rating and probability of default given by Moody's Investors Service are used to calculate the CVA for a specific swap with different maturities. With this computation, the influence of rating variation on CVA can be shown. An application is made to the analysis of Greek CDS variation during the Greek crisis, between 2008 and 2011. The main point is the determination of the correlation between the fluctuation of the Greek CDS cumulative value and the variation of the swap CVA due to rating changes.
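
For orientation, a minimal sketch of the standard unilateral CVA sum, with a flat hazard rate standing in for the rating-implied default probabilities; the exposure profile and hazard below are hypothetical, not the Moody's inputs used in the paper.

```python
import numpy as np

# CVA ~ (1 - R) * sum_i EE(t_i) * PD(t_{i-1}, t_i)
recovery = 0.4
hazard = 0.02                                  # flat default intensity for the rating
t = np.linspace(0.5, 5.0, 10)                  # semi-annual grid out to 5 years
ee = 1e6 * 0.03 * np.sin(np.pi * t / 5.0)      # swap-like expected exposure profile
surv = np.exp(-hazard * np.insert(t, 0, 0.0))  # survival probabilities
marginal_pd = surv[:-1] - surv[1:]             # default probability per period
print((1 - recovery) * np.sum(ee * marginal_pd))
```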

Keywords: CDS, Computation, CVA, Greek Crisis, Interest Rate Swap, Maturity, Rating, Swap.

376 An Improved Algorithm for Channel Estimation of OFDM Systems Based on Pilot Signals

Authors: Ahmed N. H. Alnuaimy, Mahamod Ismail, Mohd. A. M. Ali, Kasmiran Jumari, Ayman A. El-Saleh

Abstract:

This paper presents a new algorithm for channel estimation in OFDM systems based on pilot signals, aimed at the new generation of high-data-rate communication systems. In orthogonal frequency division multiplexing (OFDM) systems over fast-varying fading channels, channel estimation and tracking are generally carried out by transmitting known pilot symbols at given positions of the frequency-time grid. We propose an improved algorithm based on calculating the mean and the variance of adjacent pilot signals for a specific distribution of the pilots in the OFDM frequency-time grid, and then calculating all of the unknown channel coefficients from the equations for the mean and the variance. Simulation results show that the performance of the OFDM system increases with channel length, with the accuracy of the estimated channel improved by this low-complexity algorithm. The number of pilot signals that must be inserted into the OFDM signal is also reduced, which increases throughput over the OFDM system compared with other pilot distributions such as comb-type and block-type channel estimation.

Keywords: Channel estimation, orthogonal frequency division multiplexing (OFDM), comb-type channel estimation, block-type channel estimation.

375 Image Segmentation by Mathematical Morphology: An Approach through Linear, Bilinear and Conformal Transformation

Authors: Dibyendu Ghoshal, Pinaki Pratim Acharjya

Abstract:

An image segmentation process based on mathematical morphology is studied in this paper. It is established from the first principles of the morphological process that, although the entire segmentation is a nonlinear signal processing task, its constituent intermediate steps are linear, bilinear and conformal transformations, which give rise to a nonlinear effect in a cumulative manner.

Keywords: Image segmentation, linear transform, bilinear transform, conformal transform, mathematical morphology.

374 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of a quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. Analytical maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm over other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm approximates the matrix of second derivatives using the information identity. However, parameter estimation in (a)symmetric GARCH(1,1) models assuming a normal distribution of returns is not simple, i.e., it is difficult to solve analytically. The maximum of the likelihood function can be found by iterating until no further increase is achieved. Because the solutions of the numerical optimization are very sensitive to the initial values, starting parameters for the GARCH(1,1) model are defined; the number of iterations can be reduced by using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling daily volatility of the most liquid stocks on the Croatian capital market: Podravka (food industry), Petrokemija (fertilizer industry) and Ericsson Nikola Tesla (information and communications industry).
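
A minimal sketch of the objective being maximized: the Gaussian GARCH(1,1) log-likelihood, handed here to a generic quasi-Newton routine (L-BFGS-B as a stand-in for BHHH, which instead approximates the Hessian by the outer product of the score contributions). The return series is placeholder data.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    if alpha + beta >= 1.0:                # enforce covariance stationarity
        return 1e10
    s2 = np.empty_like(r)
    s2[0] = r.var()                        # common choice of starting variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t-1]**2 + beta * s2[t-1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * s2) + r**2 / s2)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1500)       # placeholder daily returns
res = minimize(garch11_nll, x0=[1e-6, 0.05, 0.90], args=(r,),
               method='L-BFGS-B',
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
print(res.x)                               # estimated (omega, alpha, beta)
```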

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.

373 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

Authors: B. Marasović, S. Pivac, S. V. Vukasović

Abstract:

Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing one must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. The model allows different costs for different securities, and different costs for buying and selling. In order to find an efficient portfolio with this model, the solution of a quadratic programming problem of similar size to the Markowitz model must be found first, followed by the solution of a linear programming problem. Furthermore, the impact of transaction costs on the efficient frontier is investigated. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
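
As a baseline for the extension described above, a minimal sketch of the standard mean-variance problem without transaction costs (hypothetical expected returns and covariances; long-only weights summing to one):

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],      # hypothetical covariance matrix
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.06]])
target = 0.10                              # required portfolio return

res = minimize(lambda w: w @ Sigma @ w,    # minimize portfolio variance
               x0=np.ones(3) / 3, method='SLSQP', bounds=[(0, 1)] * 3,
               constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
                            {'type': 'ineq', 'fun': lambda w: w @ mu - target}])
print(res.x, res.x @ Sigma @ res.x)        # weights and minimal variance
```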

Keywords: Croatian capital market, Fractional quadratic programming, Markowitz model, Portfolio optimization, Transaction costs.

372 The Development of a Teachers' Self-Efficacy Instrument for High School Physical Education Teachers

Authors: Yi-Hsiang Pan

Abstract:

The purpose of this study was to develop a Teachers' Self-Efficacy Scale for High School Physical Education Teachers (TSES-HSPET) in Taiwan. The scale is based on the self-efficacy theory of Bandura [1], [2]. Exploratory and confirmatory factor analyses were used to test reliability and validity. The participants were high school physical education teachers in Taiwan, sampled by both stratified random sampling and cluster sampling. In the first stage, 350 teachers were sampled and 234 valid scales (133 male, 101 female) were returned. In the second stage, 350 teachers were sampled and 257 valid scales (143 male, 110 female, 4 gender not indicated) were returned. Exploratory factor analysis in the first stage yielded a solution accounting for 60.77% of the total variance, supporting construct validity. The Cronbach's alpha coefficient of internal consistency was 0.91 for the sum scale, and 0.84 and 0.90 for the subscales. In the second stage, confirmatory factor analysis was used to test construct validity. The results showed an acceptable fit (χ2 (75) = 167.94, p < .05, RMSEA = 0.07, SRMR = 0.05, GFI = 0.92, NNFI = 0.97, CFI = 0.98, PNFI = 0.79). The average variance extracted of the latent variables was 0.43 and 0.53, with composite reliabilities of 0.78 and 0.90. It is concluded that the TSES-HSPET is a well-constructed measurement instrument with acceptable validity and reliability, and it may be used to estimate high school physical education teachers' self-efficacy.

Keywords: teaching in physical education, teacher's self-efficacy, teacher's belief

371 Design of Auto Exposure Unit Based On 2-Way Histogram Equalization

Authors: Junghwan Choi, Seongsoo Lee

Abstract:

Histogram equalization is often used in image enhancement, but it can also be used in auto exposure. However, conventional histogram equalization does not work well when many pixels are concentrated in a narrow luminance range. This paper proposes an auto exposure method based on 2-way histogram equalization. Two cumulative distribution functions are used, one running from dark to bright and the other from bright to dark. The proposed auto exposure method is also designed and implemented for image signal processors working with full-HD images.
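
A minimal sketch of the two cumulative distribution functions on a hypothetical underexposed frame; how the proposed unit combines the two curves into an exposure decision is not reproduced here.

```python
import numpy as np

def two_way_cdfs(img, bins=256):
    """Cumulative histograms in both directions: dark-to-bright and
    bright-to-dark."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    cdf_fwd = np.cumsum(p)                  # dark -> bright
    cdf_bwd = np.cumsum(p[::-1])[::-1]      # bright -> dark
    return cdf_fwd, cdf_bwd

rng = np.random.default_rng(0)
img = rng.normal(40, 10, (480, 640)).clip(0, 255)   # underexposed scene
fwd, bwd = two_way_cdfs(img)
print(fwd[64], bwd[64])    # luminance mass is concentrated in the dark range
```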

Keywords: Histogram equalization, Auto exposure, Image signal processor, Low-cost, Full HD Video.

370 A Novel QoS Optimization Architecture for 4G Networks

Authors: Aaqif Afzaal Abbasi, Javaid Iqbal, Akhtar Nawaz Malik

Abstract:

4G communication networks provide heterogeneous wireless technologies to mobile subscribers through IP-based networks, and users can enjoy high-speed access while roaming across multiple wireless channels; this is made possible by an organized way of managing the Quality of Service (QoS) functionalities in these networks. This paper proposes a novel QoS optimization architecture that judges user requirements and, by knowing the peak times of service utilization, can save bandwidth and cost. The proposed architecture can be customized according to network usage priorities so as to considerably improve a network's QoS performance.

Keywords: QoS, Network Coverage Boundary, Services Archives Units (SAU), Cumulative Services Archives Units (CSAU).

369 No One Set of Parameter Values Can Simulate the Epidemics Due to SARS Occurring at Different Localities

Authors: Weerachi Sarakorn, I-Ming Tang

Abstract:

A mathematical model for the transmission of SARS is developed. In addition to dividing the population into susceptible (high and low risk), exposed, infected, quarantined, diagnosed and recovered classes, we have included a class called untraced. The model simulates the Gompertz curves which are the best representation of the cumulative numbers of probable SARS cases in Hong Kong and Singapore. The values of the parameters in the model which produce the best fit of the observed data for each city are obtained by using a differential evolution algorithm. It is seen that the parameter values needed to simulate the observed daily behaviors of the two epidemics are different.
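
A minimal sketch of the fitting step: a Gompertz curve N(t) = K*exp(-b*exp(-c*t)) is fitted to a hypothetical cumulative case series by minimizing squared error with SciPy's differential evolution optimizer.

```python
import numpy as np
from scipy.optimize import differential_evolution

def gompertz(t, K, b, c):
    """Gompertz curve for cumulative case counts."""
    return K * np.exp(-b * np.exp(-c * t))

t = np.arange(60.0)                        # days since outbreak (hypothetical)
rng = np.random.default_rng(0)
obs = gompertz(t, 1750.0, 8.0, 0.12) + rng.normal(0.0, 15.0, t.size)

def sse(params):
    return np.sum((gompertz(t, *params) - obs) ** 2)

res = differential_evolution(sse, bounds=[(500, 5000), (1, 20), (0.01, 1.0)],
                             seed=1)
print(res.x)                               # recovered (K, b, c)
```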

Keywords: SARS, mathematical modelling, differential evolution algorithm.

368 The Effectiveness of Solution-Focused Group Therapy on Improving Depressed Mothers of Child Abuser Families

Authors: Roya Maqami, Kaveh Qaderi Bagajan, Mohammad Mahdi Yousefi, Saeed Moradi

Abstract:

The purpose of this study is to investigate the efficacy of solution-focused group therapy in improving the mood of depressed mothers in child-abuser families. The study was carried out as a semi-pilot pre-test/post-test design with two groups (experimental and control). Subjects comprised the mothers and children who are members of the Shush and Naser Khosro children's homes. The Beck Depression Inventory and the Child Trauma Questionnaire were used to collect data. First, the child abuse questionnaire was completed by the children; then the Beck Depression Inventory was completed by their mothers, of whom 22 were identified as depressed and randomly divided into experimental and control groups. After a pre-test for both groups, the solution-focused group therapy intervention was delivered to the experimental group in five sessions. Finally, a post-test was applied to both groups, followed a month later by a follow-up test. T-tests, multivariate analysis of variance, and repeated-measures analysis of variance were used to analyze the data. According to the findings, it can be concluded that this therapy improves the depressed mothers' mood. As a result, solution-focused group therapy is a useful intervention for improving the mood of depressed mothers in child-abuser families.

Keywords: Child Abuse, Depressed Mothers, Child Abuser Families, Solution-focused Group Therapy.

367 The Relationship of Private Savings and Economic Growth: Case of Croatia

Authors: Irena Palić

Abstract:

The main objective of the research in this paper is to empirically assess the causal relationship between private savings and economic growth in the Republic of Croatia. Household savings are approximated by household deposits in banks, while domestic income is approximated by industrial production volume indices. A vector autoregression (VAR) model and Granger causality tests are used to analyse the relationship between private savings and economic growth. Since ADF unit root tests have shown that both series are non-stationary in levels, the series are first-differenced in order to become stationary. The VAR model is therefore estimated with the percentage change in private savings and the percentage change in domestic income, which can be interpreted as economic growth when the change in domestic income is positive. The Granger causality test shows no causal relationship between private savings and economic growth in Croatia. The impulse response functions show that the impact of a shock in domestic income on the change in private savings is stronger than the impact of private savings on growth. Variance decompositions show that both economic growth and the change in private savings explain the largest part of their own forecast variance. The research shows that the link between private savings and economic growth in Croatia is weak, which is in line with relevant empirical research on small open economies.
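
A minimal sketch of the testing sequence (ADF on the differenced series, then Granger causality in a bivariate system) using statsmodels, on hypothetical series standing in for household deposits and the industrial production index:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
n = 121
income = np.cumsum(rng.normal(0.2, 1.0, n))            # I(1) proxy for income
savings = 50 + 0.5 * income + rng.normal(0.0, 1.0, n)  # related deposits series

d_sav, d_inc = np.diff(savings), np.diff(income)       # first differences
print(adfuller(d_inc)[1])                              # small p-value: stationary

# Does income growth Granger-cause the change in savings?
# (the second column is tested as a cause of the first)
grangercausalitytests(np.column_stack([d_sav, d_inc]), maxlag=4)
```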

Keywords: Economic growth, Granger causality, innovation analysis, private savings, Vector Autoregression model.

366 Power System Damping Using Hierarchical Fuzzy Multi-Input PSS and Communication Lines Active Power Deviations Input and SVC

Authors: Mohammad Hasan Raouf, Ahmad Rouhani, Mohammad Abedini, Ebrahim Rasooli Anarmarzi

Abstract:

In this paper, the application of a hierarchical fuzzy system (HFS) based on a multi-input PSS (MPSS) and an SVC in a multi-machine environment is studied. The effect of the tie-line active power deviation signal (ΔPTie-line) between two regions, used as one of the inputs of the hierarchical fuzzy multi-input PSS and SVC (HFMPSS & SVC), on increasing the damping of low-frequency oscillations is also examined. In the MPSS, to achieve better performance, an auxiliary reactive power deviation signal (ΔQ) is added to the ΔP + Δω input-type PSS. The number of rules grows exponentially with the number of variables in a classical fuzzy system; to reduce the number of rules, the HFS consists of a number of low-dimensional fuzzy systems in a hierarchical structure. A phasor model of the SVC is described and used in this paper. The performance of the MPSS and the ΔPTie-line-based HFMPSS, and of the proposed method in damping the inter-area mode of oscillation, is examined in response to disturbances. The efficiency of the proposed model is examined by simulating a four-machine power system. Results show that the proposed method performs satisfactorily over the whole range of disturbances and reduces system cost.

Keywords: Communication lines active power variance signal, Hierarchical fuzzy system (HFS), Multi-input power system stabilizer (MPSS), Static VAR compensator (SVC).

365 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire

Abstract:

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, it is an indication that some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the line of organic correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records; however, the results indicate an underestimation of the variance in the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids underestimating the variance. It is concluded from this study that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
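
A minimal sketch of the two line fits on synthetic data: OLS uses slope r*sy/sx and gives the best individual predictions but shrinks the variance of the extended record by a factor of r^2, while LOC uses slope sign(r)*sy/sx and preserves the mean and variance.

```python
import numpy as np

def extend_records(x, y, x_new):
    """Extend discontinued variable y from continued variable x."""
    r = np.corrcoef(x, y)[0, 1]
    sy, sx = y.std(ddof=1), x.std(ddof=1)
    ols = y.mean() + r * sy / sx * (x_new - x.mean())            # OLS line
    loc = y.mean() + np.sign(r) * sy / sx * (x_new - x.mean())   # LOC line
    return ols, loc

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)                     # overlapping calibration period
y = 0.8 * x + rng.normal(0, 1, 200)
x_new = rng.normal(10, 2, 500)                 # period with y discontinued
ols, loc = extend_records(x, y, x_new)
print(y.var(ddof=1), ols.var(ddof=1), loc.var(ddof=1))  # OLS variance shrinks
```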

Keywords: Record extension, record augmentation, monitoring networks, water quality indicators.

364 A Study on the Performance Characteristics of Variable Valve for Reverse Continuous Damper

Authors: Se Kyung Oh, Young Hwan Yoon, Ary Bachtiar Krishna

Abstract:

Nowadays, a passenger car suspension must meet high performance criteria with light weight, low cost, and low energy consumption. A pilot-controlled proportional valve is designed and analyzed to obtain a small rate of pressure change after blow-off and, to achieve a fast damper response, a reverse damping mechanism is adopted. The reverse continuous variable damper is designed as an HS-SH damper, which offers good body control with a reduced input force transferred from the tire, compared with other types of suspension system. The damper structure is designed so that the rebound and compression damping forces can be tuned independently, with the variable valve placed externally. The rate of pressure change with respect to flow rate after blow-off becomes smoother as the fixed orifice size increases, which means that the blow-off slope is controllable through the fixed orifice size. Damping forces are measured while varying the solenoid current at different piston velocities to confirm a maximum hysteresis of 20 N, linearity, and the range of damping force variation. The damping force range is wide and continuous, and is controlled by the spool opening, a scheme usually adopted in proportional valves. The reverse continuous variable damper developed in this study is expected to be used in semi-active suspension systems of passenger cars once its performance and the simplicity of its design are confirmed through a real car test.

Keywords: Blow-off, damping force, pilot-controlled proportional valve, reverse continuous damper.

363 Implementation of the Recursive Formula for Evaluation of the Strength of Daniels’ Model

Authors: Václav Sadílek, Miroslav Vořechovský

Abstract:

The paper deals with the classical fiber bundle model with equal load sharing, sometimes referred to as the Daniels bundle or the democratic bundle. Daniels formulated a multidimensional integral and also a recursive formula for evaluating the cumulative distribution function of the bundle strength. This paper describes three algorithms for evaluating the recursive formula, together with their implementations, with source code in the Python high-level programming language. The algorithms are compared with respect to execution time, and an analysis of the orders of magnitude of the addends in the recursion is also provided.
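
A minimal sketch of the recursion as it is commonly stated, taking G_n(x) as the CDF that a bundle of n fibers under equal load sharing fails at load x per fiber, with an illustrative exponential fiber-strength CDF. Extended precision via mpmath matters because the alternating binomial terms cancel badly as n grows.

```python
from functools import lru_cache
import mpmath as mp

mp.mp.dps = 50                       # high working precision

def F(x):
    """Illustrative fiber-strength CDF (unit-mean exponential)."""
    return 1 - mp.exp(-x)

@lru_cache(maxsize=None)
def G(n, x_key):
    """Daniels' recursion:
    G_n(x) = sum_{k=1..n} (-1)^(k+1) C(n,k) F(x)^k G_{n-k}(n x / (n-k)),
    with G_0 = 1. Loads are passed as strings so calls can be cached."""
    x = mp.mpf(x_key)
    if n == 0:
        return mp.mpf(1)
    total = mp.mpf(0)
    for k in range(1, n + 1):
        tail = mp.mpf(1) if k == n else G(n - k, str(n * x / (n - k)))
        total += (-1) ** (k + 1) * mp.binomial(n, k) * F(x) ** k * tail
    return total

print(G(10, '0.3'))   # strength CDF of a 10-fiber bundle at load 0.3 per fiber
```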

Keywords: Daniels bundle model, equal load sharing, Python, mpmath.

362 Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents an objective, quantitative methodology for evaluating an individual's scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index summarizes a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations); together with the cumulative research citation index, it is included in citation databases to cover the multidimensional complexity of scholarly research performance and to support objective evaluations. The scholar index, one of the publication activity indexes, is considered here to be the most appropriate scientometric indicator, as it smooths over many drawbacks of assessing scholarly output by mere counts of publications (quantity) and citations (quality). Hence, this study uses a set of indicators based on the scholar index to evaluate researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully applied to rank researchers. The proposed methodology covers the selection of the data on which the evaluation is based, the analysis of the data, the ranking, and the presentation of the multiple criteria analysis results.
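
By the abstract's own definition (s publications with at least s citations each), the index can be computed directly from a list of per-publication citation counts; a minimal sketch:

```python
def scholar_index(citations):
    """Largest s such that at least s publications have >= s citations."""
    cites = sorted(citations, reverse=True)
    s = 0
    while s < len(cites) and cites[s] >= s + 1:
        s += 1
    return s

# Hypothetical citation counts for one researcher's publications
print(scholar_index([42, 17, 11, 9, 6, 6, 3, 1, 0]))   # -> 6
```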

Keywords: Multiple Criteria Decision Making Analysis, MCDMA, Research Performance Evaluation, Scholar Index, h index, Science Citation Index, Science Efficiency, Cumulative Citation Index, Sciencemetrics

361 Evaluation of the Distribution of Implant-Supported Prostheses between 2005 and 2009

Authors: A. Atay, B. T. Suer

Abstract:

The aim of this retrospective study was to evaluate parameters of dental implants such as patient gender, number of implants, implant failures before prosthetic restoration, failures after implantation, and failures after prosthetic restoration. A total of 234 implant patients (135 male and 99 female) were treated with 450 implants between 2005 and 2009 in the GATA Haydarpasa Training Hospital Dental Service. Twelve implants failed before prosthetic restoration, and four implants failed after fixed prosthetic restoration. The cumulative survival rate after prosthetic treatment was 97.56% over the six-year period.

Keywords: Dental implants, implant supported prostheses, single implants, single crown
