Search results for: linearly constrained minimum variance

1359 Classification of Initial Stripe Height Patterns using Radial Basis Function Neural Network for Proportional Gain Prediction

Authors: Prasit Wonglersak, Prakarnkiat Youngkong, Ittipon Cheowanish

Abstract:

This paper aims to improve the fine lapping process of hard disk drive (HDD) lapping machines by removing material from each slider while keeping the stripe height (SH) variation to a minimum. The standard deviation is the key parameter for evaluating SH variation, hence it is minimized. In this paper, a design of experiments (DOE) with factorial analysis by two-way analysis of variance (ANOVA) is adopted to obtain statistical information. The statistical results reveal that initial stripe height patterns affect the final SH variation. Therefore, initial SH classification using a radial basis function neural network is implemented to achieve proportional gain prediction.
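
A minimal sketch of an RBF-network classifier for such pattern classification, assuming Gaussian basis functions with k-means centers and a least-squares linear readout; the feature data and class labels below are hypothetical, not the paper's:

```python
# Hedged sketch: RBF network classifier (Gaussian bases, linear readout).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def rbf_design(X, centers, width):
    """Gaussian activations of every sample against every RBF center."""
    return np.exp(-cdist(X, centers, "sqeuclidean") / (2.0 * width**2))

def fit_rbf_classifier(X, y, n_centers=10, width=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    Phi = rbf_design(X, centers, width)
    T = np.eye(y.max() + 1)[y]                 # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return centers, W

def predict(X, centers, W, width=1.0):
    return rbf_design(X, centers, width).dot(W).argmax(axis=1)

# Hypothetical usage: X holds initial SH pattern features, y the pattern class.
X = np.random.rand(200, 8); y = np.random.randint(0, 3, 200)
centers, W = fit_rbf_classifier(X, y)
labels = predict(X, centers, W)
```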

Keywords: Stripe height variation, Two-way analysis of variance (ANOVA), Radial basis function neural network, Proportional gain prediction.

1358 A Budget and Deadline Constrained Fault Tolerant Load Balanced Scheduling Algorithm for Computational Grids

Authors: P. Keerthika, P. Suresh

Abstract:

A grid is an environment with millions of resources which are dynamic and heterogeneous in nature. A computational grid is one in which the resources are computing nodes, meant for applications that involve large computations. A scheduling algorithm is efficient only if it allocates resources well even in the case of resource failure. Resource allocation is a tedious issue since it has to consider several requirements such as system load, processing cost and time, the user's deadline, and resource failure. This work designs a resource allocation algorithm which is cost-effective and also targets load balancing, fault tolerance and user satisfaction by considering the above requirements. The proposed Budget Constrained Load Balancing Fault Tolerant algorithm with user satisfaction (BLBFT) reduces the schedule makespan, schedule cost and task failure rate and improves resource utilization. The proposed BLBFT algorithm is evaluated using the GridSim toolkit, and the results are compared with algorithms that concentrate on these factors separately. The comparison results show that the proposed algorithm outperforms its counterparts.
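
For illustration only, a hedged sketch of the kind of budget- and deadline-aware dispatching such an algorithm addresses; this greedy rule, the node model, and all numbers are assumptions, not the BLBFT algorithm itself:

```python
# Hedged sketch: greedy dispatch under budget and deadline constraints.
from dataclasses import dataclass

@dataclass
class Node:
    speed: float          # work units per second
    cost_rate: float      # cost per second of use
    busy_until: float = 0.0

def schedule(tasks, nodes, deadline, budget):
    """tasks: list of work amounts; returns list of (task, node index)."""
    plan, spent = [], 0.0
    for work in sorted(tasks, reverse=True):      # largest tasks first
        best, best_cost, best_finish = None, float("inf"), 0.0
        for i, n in enumerate(nodes):
            run = work / n.speed
            finish = n.busy_until + run
            cost = run * n.cost_rate
            if finish <= deadline and spent + cost <= budget and cost < best_cost:
                best, best_cost, best_finish = i, cost, finish
        if best is None:
            raise RuntimeError("infeasible under current budget/deadline")
        nodes[best].busy_until = best_finish
        spent += best_cost
        plan.append((work, best))
    return plan, spent

nodes = [Node(2.0, 3.0), Node(1.0, 1.0), Node(1.5, 2.0)]
print(schedule([4, 8, 6, 2], nodes, deadline=10.0, budget=40.0))
```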

Keywords: Grid Scheduling, Load Balancing, fault tolerance, makespan, cost, resource utilization.

1357 Finding Pareto Optimal Front for the Multi-Mode Time, Cost, Quality Trade-off in Project Scheduling

Authors: H. Iranmanesh, M. R. Skandari, M. Allahverdiloo

Abstract:

Project managers are ultimately responsible for the overall characteristics of a project, i.e. they should deliver the project on time, with minimum cost and with maximum quality. It is vital for any manager to decide on a trade-off between these conflicting objectives, and they would benefit from any scientific decision support tool. Our work determines a set of optimal solutions (rather than a single optimal solution) from which the project manager can select the choice he prefers for running the project. In this paper, the project scheduling problem denoted (1,T|cpm,disc,mu|curve:quality,time,cost) is studied. The problem is multi-objective and the purpose is finding the Pareto optimal front of time, cost and quality of a project (curve:quality,time,cost), whose activities belong to a start-to-finish activity relationship network (cpm) and can be done in different possible modes (mu) which are non-continuous or discrete (disc), each mode having a different cost, time and quality. The project is constrained by a non-renewable resource, i.e. money (1,T). Because the problem is NP-hard, a meta-heuristic is developed based on a version of the genetic algorithm specially adapted to multi-objective problems, namely FastPGA. A sample project with 30 activities is generated and then solved by the proposed method.
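
A minimal sketch of the Pareto dominance filter at the core of such multi-objective methods; candidates are (time, cost, -quality) triples so every objective is minimized, and the values are hypothetical. This illustrates the dominance test only, not FastPGA:

```python
# Hedged sketch: extracting the Pareto optimal front from candidates.
def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (time, cost, -quality) triples for a few schedules
schedules = [(10, 100, -0.9), (12, 80, -0.8), (10, 90, -0.7),
             (11, 95, -0.95), (13, 110, -0.6)]     # last one is dominated
print(pareto_front(schedules))
```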

Keywords: FastPGA, Multi-Execution Activity Mode, Pareto Optimality, Project Scheduling, Time-Cost-Quality Trade-Off.

1356 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal; it determines how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
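
A hedged sketch of per-frame DWT denoising with a frame-dependent decomposition level; the level-selection rule below (comparing frame variance with a crude global noise-floor proxy) and the wavelet choice are assumptions, not the paper's exact criterion:

```python
# Hedged sketch: variable-level DWT denoising, frame by frame.
import numpy as np
import pywt

def denoise_frame(frame, level, wavelet="db4"):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    # Universal threshold estimated from the finest detail band
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(frame)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

def variable_level_denoise(x, frame_len=256):
    floor = np.var(x) * 0.1            # crude noise-floor proxy (assumption)
    out = []
    for i in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[i:i + frame_len]
        # Noisier-looking frames get a different decomposition depth
        level = 2 if np.var(frame) > floor else 4
        out.append(denoise_frame(frame, level))
    return np.concatenate(out)

noisy = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.3 * np.random.randn(4096)
clean = variable_level_denoise(noisy)
```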

Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.

1355 An Insurer’s Investment Model with Reinsurance Strategy under the Modified Constant Elasticity of Variance Process

Authors: K. N. C. Njoku, Chinwendu Best Eleje, Christian Chukwuemeka Nwandu

Abstract:

One of the problems facing most insurance companies is how best to manage the burden of paying claims to policy holders whenever the need arises. Hence the insurer needs to buy a reinsurance contract in order to reduce risk, enabling the insurer to share the financial burden with the reinsurer. In this paper, the insurer's and reinsurer's strategies are investigated under the modified constant elasticity of variance (M-CEV) process and proportional administrative charges. The insurer invests in one risky asset and one risk-free asset, where the risky asset is modeled by the M-CEV process, an extension of the constant elasticity of variance (CEV) process. Next, a nonlinear partial differential equation in the form of a Hamilton-Jacobi-Bellman equation is obtained by the dynamic programming approach. Using a power transformation technique and a change of variables, explicit solutions for the optimal investment strategy and optimal reinsurance strategy are obtained. Finally, numerical simulations of some sensitive parameters are presented and discussed in detail, where we observe that the modification factor affects only the optimal investment strategy and not the reinsurance strategy for an insurer with exponential utility.
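
For orientation, a hedged LaTeX sketch of the model ingredients: the CEV dynamics below are standard, while the M-CEV modification and the exact controls are only described generically, since the paper's specification is not reproduced here:

```latex
% Risky asset under the constant elasticity of variance (CEV) process:
\[
  dS_t = \mu S_t\,dt + \sigma S_t^{\beta+1}\,dW_t ,
\]
% the M-CEV process modifies these dynamics by a modification factor
% (exact form as specified in the paper). Dynamic programming on the value
% function V(t, x) yields an HJB equation of the generic form, with
% exponential terminal utility:
\[
  V_t + \sup_{\pi,\, q}\, \mathcal{L}^{\pi, q} V = 0,
  \qquad V(T, x) = -\tfrac{1}{\eta}\, e^{-\eta x},
\]
% where \pi is the amount invested in the risky asset and q the retained
% proportion under proportional reinsurance.
```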

Keywords: Reinsurance strategy, Hamilton-Jacobi-Bellman equation, power transformation, M-CEV process, exponential utility.

1354 A Note on the Minimum Cardinality of Critical Sets of Inertias for Irreducible Zero-nonzero Patterns of Order 4

Authors: Ber-Lin Yu, Ting-Zhu Huang

Abstract:

If there exists a nonempty, proper subset S of the set of all (n+1)(n+2)/2 inertias such that S ⊆ i(A) is sufficient for any n×n zero-nonzero pattern A to be inertially arbitrary, then S is called a critical set of inertias for zero-nonzero patterns of order n. If no proper subset of S is a critical set, then S is called a minimal critical set of inertias. In [Kim, Olesky and Driessche, Critical sets of inertias for matrix patterns, Linear and Multilinear Algebra, 57 (3) (2009) 293-306], identifying all minimal critical sets of inertias for n×n zero-nonzero patterns with n ≥ 3 and the minimum cardinality of such a set are posed as two open questions. In this note, the minimum cardinality of all critical sets of inertias for 4×4 irreducible zero-nonzero patterns is identified.
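
As a quick check of the count used above, a worked instance of the formula (a standard stars-and-bars count):

```latex
% Number of possible inertias (n_+, n_-, n_0) with n_+ + n_- + n_0 = n:
\[
  \#\{(n_+, n_-, n_0) : n_+ + n_- + n_0 = n\}
  = \binom{n+2}{2} = \frac{(n+1)(n+2)}{2},
\]
% so for the order-4 patterns considered here there are
\[
  \frac{5 \cdot 6}{2} = 15 \ \text{inertias in total.}
\]
```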

Keywords: Zero-nonzero pattern, inertia, critical set of inertias, inertially arbitrary.

1353 Analysis of Precipitation Time Series of Urban Centers of Northeastern Brazil using Wavelet Transform

Authors: Celso A. G. Santos, Paula K. M. M. Freire

Abstract:

The urban centers of northeastern Brazil are strongly influenced by intense rainfalls, which can occur after long periods of drought and produce flood events. Thus, this paper studies the rainfall frequencies in that region through the wavelet transform. Wavelet analysis is applied to long time series of total monthly rainfall at the capital cities of northeastern Brazil. The main frequency components of the time series are studied via the global wavelet spectrum, and the modulation in separate periodicity bands is examined in order to extract additional information; e.g., the 8-16 month band is averaged over all scales within it, giving a measure of the average annual variance versus time, in which periods of low or high variance can be identified. Important increases in the average variance were identified for some periods, e.g. 1947 to 1952 at Teresina, which can be considered wet periods. Although the precipitation at those sites showed similar global wavelet spectra, the local wavelet spectra revealed particular features. This approach is an important tool for time series analysis and can support studies of flood control, especially when applied together with rainfall-runoff simulations.
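
A hedged sketch of a global wavelet spectrum and a band average for a monthly rainfall series, using the continuous wavelet transform from PyWavelets with a Morlet wavelet; the synthetic series, scale range and band edges are illustrative choices, not the study's data:

```python
# Hedged sketch: global wavelet spectrum and 8-16 month band average.
import numpy as np
import pywt

def global_wavelet_spectrum(series, scales, wavelet="morl", dt=1.0):
    coef, freqs = pywt.cwt(series, scales, wavelet, sampling_period=dt)
    power = np.abs(coef) ** 2
    return power.mean(axis=1), 1.0 / freqs   # time-averaged power, periods

def band_average(power, periods, lo=8, hi=16):
    """Average wavelet power over the lo-hi month periodicity band."""
    mask = (periods >= lo) & (periods <= hi)
    return power[mask].mean(axis=0)

rng = np.random.default_rng(0)
months = np.arange(600)                       # 50 years of monthly totals
rain = 100 + 40 * np.sin(2 * np.pi * months / 12) \
       + 15 * rng.standard_normal(600)
scales = np.arange(1, 128)
gws, periods = global_wavelet_spectrum(rain, scales)
coef, freqs = pywt.cwt(rain, scales, "morl", sampling_period=1.0)
annual_band = band_average(np.abs(coef) ** 2, 1.0 / freqs)
```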

Keywords: rainfall data, urban center, wavelet transform.

1352 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Sets of vertices in the MST with the same degree are regarded as a whole and used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points taking account of both degree and Euclidean distance is presented. Finally, an MST-based initialization method for the k-means algorithm is presented, and the corresponding time complexity is analyzed as well. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
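
A hedged sketch of the pipeline: build the MST of the data, use vertex degrees to pick "skeleton" points as seeds, then run k-means from those seeds. The highest-degree selection rule here is an assumption; the paper's skeleton construction and degree-aware distance are more elaborate:

```python
# Hedged sketch: MST-based seeding for k-means.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans

def mst_seeds(X, k):
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    adj = (mst > 0) | (mst.T > 0)            # undirected MST edges
    degree = adj.sum(axis=1)
    # Assumed rule: take the k highest-degree vertices as initial centers
    return X[np.argsort(degree)[-k:]]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
init = mst_seeds(X, 3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
```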

Keywords: Degree, initial cluster center, k-means, minimum spanning tree.

1351 K-Means for Spherical Clusters with Large Variance in Sizes

Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan

Abstract:

Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is suitable for spherical clusters of similar sizes and densities; the quality of the resulting clusters decreases when the data set contains spherical clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.
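
A hedged sketch of the center-shifting idea: after a k-means pass, nudge the large cluster's center toward the small one and reassign points. The shift fraction and single-pass reassignment are assumptions for illustration, not the paper's exact rule:

```python
# Hedged sketch: shift the large cluster's center, then reassign points.
import numpy as np
from sklearn.cluster import KMeans

def shift_and_reassign(X, labels, centers, frac=0.3):
    sizes = np.bincount(labels, minlength=len(centers))
    big, small = np.argmax(sizes), np.argmin(sizes)
    centers = centers.copy()
    # Move the big cluster's center part-way toward the small one
    centers[big] += frac * (centers[small] - centers[big])
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1), centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1.5, (300, 2)), rng.normal((4, 0), 0.2, (20, 2))])
km = KMeans(n_clusters=2, n_init=10).fit(X)
labels, centers = shift_and_reassign(X, km.labels_, km.cluster_centers_)
```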

Keywords: K-Means, Data Clustering, Cluster Analysis.

1350 Entropy Generation and Heat Transfer of Cu–Water Nanofluid Mixed Convection in a Cavity

Authors: Mliki Bouchmel, Belgacem Nabil, Abbassi Mohamed Ammar, Geudri Kamel, Omri Ahmed

Abstract:

In this numerical work, mixed convection and entropy generation of a Cu-water nanofluid in a lid-driven square cavity are investigated using the lattice Boltzmann method. The horizontal walls of the cavity are adiabatic, and the vertical walls are held at different constant temperatures. The top wall moves from left to right at a constant speed, U0. The effects of different parameters such as nanoparticle volume concentration (0-0.05), Rayleigh number (10^4-10^6) and Reynolds number (1, 10 and 100) on the entropy generation, flow and temperature fields are studied. The results show that the addition of nanoparticles to the base fluid affects the entropy generation, flow pattern and thermal behavior, especially at higher Rayleigh and low Reynolds numbers. For the pure fluid as well as the nanofluid, increasing the Reynolds number linearly increases the average Nusselt number and the total entropy generation. The maximum entropy generation occurs in the nanofluid at low Rayleigh number and high Reynolds number; the minimum occurs in the pure fluid at low Rayleigh and Reynolds numbers. Also, at higher Reynolds number the enhancement of heat transfer by the Cu nanoparticles is reduced, because the lid-driven effect becomes dominant. The present results are validated by favorable comparisons with previously published results, and are presented in graphical and tabular forms and discussed.

Keywords: Entropy generation, mixed convection, nanofluid, lattice Boltzmann method.

1349 Determining the Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin

Authors: Naci Büyükkaracığan

Abstract:

Today, the need for water resources is swiftly increasing due to population growth, and it is known that some regions will face water shortage and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends and the prediction of drought and short-term low flows, is of great importance. Selecting the most accurate probability distribution is important for describing low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be affected by drought, which will decrease the amount of usable water. The aim of this study is to derive appropriate probability distributions for frequency analysis of annual minimum flows at six gauging stations of the Gediz Basin. After applying ten different probability distributions, six different parameter estimation methods and three goodness-of-fit tests, the Pearson type 3 and generalized extreme value distributions were found to give the best results.
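
A hedged sketch of fitting candidate distributions to annual minimum flows and ranking them with a Kolmogorov-Smirnov test; the flow data below are synthetic stand-ins for the gauging-station records:

```python
# Hedged sketch: candidate distribution fitting for annual minimum flows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
min_flows = stats.pearson3.rvs(skew=1.0, loc=5.0, scale=2.0,
                               size=40, random_state=rng)

candidates = {
    "pearson3": stats.pearson3,
    "genextreme": stats.genextreme,   # generalized extreme value
    "lognorm": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(min_flows)
    ks_stat, p_value = stats.kstest(min_flows, dist.cdf, args=params)
    print(f"{name:12s} KS={ks_stat:.3f} p={p_value:.3f}")
```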

Keywords: Gediz Basin, goodness-of-fit tests, Minimum flows, probability distribution.

1348 Composition Dependent Formation of Sputtered Co-Cu Film on Cr Under-Layer

Authors: Watcharee Rattanasakulthong, Pichai Sirisangsawang, Supree Pinitsoontorn

Abstract:

Sputtered CoxCu100-x films with different compositions (x = 57.7, 45.8, 25.5, 13.8, 8.8, 7.5 and 1.8) were deposited on a Cr under-layer by RF sputtering. SEM results reveal that the average thicknesses of the Co-Cu film and the Cr under-layer are 92 nm and 22 nm, respectively. All Co-Cu films are composed of Co (FCC) and Cu (FCC) phases in the (111) direction on BCC-Cr (110) under-layers. The magnetic properties, surface roughness and morphology of the Co-Cu films depend on the film composition. The maximum and minimum surface roughnesses, 3.24 nm and 1.16 nm, were observed on the Co7.5Cu92.5 and Co45.8Cu54.2 films, respectively. The variation in surface roughness can be attributed to the different agglomeration rates of Co and Cu atoms on the Cr under-layer. The Co57.7Cu42.3, Co45.8Cu54.2 and Co25.5Cu74.5 films show a ferromagnetic phase, whereas the rest of the films exhibit a paramagnetic phase at room temperature. The saturation magnetization, remanent magnetization and coercive field of the Co-Cu films on the Cr under-layer increase slightly with increasing Co content. It can be concluded that the required magnetic properties and surface roughness of the Co-Cu film can be obtained by adjusting the film composition.

Keywords: Co-Cu films, Under-layers, Sputtering, Surface roughness, Magnetic properties, Atomic force microscopy (AFM).

1341 Iterative Solutions to the Linear Matrix Equation AXB + CX^T D = E

Authors: Yongxin Yuan, Jiashang Jiang

Abstract:

In this paper, a gradient-based iterative algorithm is presented to solve the linear matrix equation AXB + CX^T D = E, where X is the unknown matrix and A, B, C, D, E are given constant matrices. It is proved that if the equation has a solution, then the unique minimum-norm solution can be obtained by choosing a special kind of initial matrix. Two numerical examples show that the introduced iterative algorithm is quite efficient.
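
A hedged sketch of a plain gradient iteration for min_X ||AXB + CX^T D - E||_F^2, whose gradient is 2(A^T R B^T + D R^T C) with R the residual; the fixed step size and zero start are simplifications, not the paper's algorithm or its convergence conditions:

```python
# Hedged sketch: gradient descent for A X B + C X^T D = E.
import numpy as np

def gradient_solve(A, B, C, D, E, mu=1e-3, iters=20000):
    X = np.zeros((A.shape[1], B.shape[0]))   # zero start (min-norm tendency)
    for _ in range(iters):
        R = A @ X @ B + C @ X.T @ D - E      # residual
        G = A.T @ R @ B.T + D @ R.T @ C      # gradient of 0.5 * ||R||_F^2
        X -= mu * G
    return X

rng = np.random.default_rng(4)
n = 4
A, B, C, D = [rng.standard_normal((n, n)) for _ in range(4)]
X_true = rng.standard_normal((n, n))
E = A @ X_true @ B + C @ X_true.T @ D
X = gradient_solve(A, B, C, D, E)
print(np.linalg.norm(A @ X @ B + C @ X.T @ D - E))   # residual norm
```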

Keywords: matrix equation, iterative algorithm, parameter estimation, minimum norm solution.

1346 Climatic Factors Affecting Influenza Cases in Southern Thailand

Authors: S. Youthao, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

This study investigated climatic factors associated with influenza cases in Southern Thailand. The main aim was to use regression analysis to investigate possible causal relationships between climatic factors and influenza, and their variability between the Andaman Sea and Gulf of Thailand coasts. Southern Thailand had the highest influenza incidence among the four regions of the country (north, northeast, central and southern Thailand). In this study, 14 climatic factors were considered: mean relative humidity, maximum relative humidity, minimum relative humidity, rainfall, rainy days, daily maximum rainfall, pressure, maximum wind speed, mean wind speed, sunshine duration, mean temperature, maximum temperature, minimum temperature, and temperature difference (maximum minus minimum temperature). Multiple stepwise regression was used to fit the statistical model. The results indicated that mean wind speed and minimum relative humidity were positively associated with the number of influenza cases on the Andaman Sea side, and maximum wind speed was positively associated with the number of influenza cases on the Gulf of Thailand side.
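
A hedged sketch of forward stepwise selection with OLS over a set of climatic predictors; the data frame, predictor names, and entry threshold below are hypothetical, not the study's records:

```python
# Hedged sketch: forward stepwise regression with statsmodels OLS.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(y, X, p_enter=0.05):
    chosen, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[chosen + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(5)
X = pd.DataFrame(rng.standard_normal((120, 4)),
                 columns=["mean_wind", "min_rh", "rainfall", "pressure"])
y = 3.0 * X["mean_wind"] + 2.0 * X["min_rh"] + rng.standard_normal(120)
print(forward_stepwise(y, X))   # expect mean_wind and min_rh to enter
```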

Keywords: Influenza, Climatic Factor, Relative Humidity, Rainfall, Pressure, Wind Speed, sunshine duration, Temperature, Andaman Sea, Gulf of Thailand, Southern Thailand.

1345 A Chaotic Study on Tremor Behavior of Parkinsonian Patients under Deep Brain Stimulation

Authors: M. Sadeghi, A.H. Jafari, S.M.P. Firoozabadi

Abstract:

Deep brain stimulation (DBS) is a surgical treatment for Parkinson's disease with three stimulation parameters: frequency, pulse width, and voltage. The parameters should be selected appropriately to achieve effective treatment; this selection is currently performed clinically. The aim of this research is to study the chaotic behavior of the recorded tremor of patients under DBS in order to present a computational method for recognizing the optimum stimulation voltage. We extracted chaotic features of the tremor signal and found that its embedding space has an attractor and that its largest Lyapunov exponent is positive, which shows that the tremor signal behaves chaotically. We also found that at the optimal voltage, the entropy and the embedding-space variance of the tremor signal take minimum values in comparison with other voltages. These differences can help neurologists recognize the optimal voltage numerically, which reduces the patient's role and discomfort in optimizing stimulation parameters and allows treatment with high accuracy.
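
A hedged sketch of two of the features mentioned above: a delay embedding of the tremor signal, its total variance in embedding space, and a simple histogram (Shannon) entropy. The delay, dimension, bin count and test signal are illustrative choices, not the paper's settings:

```python
# Hedged sketch: embedding-space variance and histogram entropy.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def embedding_variance(x, dim=3, tau=5):
    Y = delay_embed(x, dim, tau)
    return np.trace(np.cov(Y.T))          # total variance of the attractor

def signal_entropy(x, bins=32):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

tremor = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)
print(embedding_variance(tremor), signal_entropy(tremor))
```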

Keywords: Chaos, Deep Brain Stimulation, Parkinson's Disease, Stimulation Parameters, tremor.

1344 The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions

Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani

Abstract:

To understand the internal relationships among characteristics and determine the factors that explain them in rapeseed varieties, 10 rapeseed genotypes were grown in a completely randomized design with three replications under drought stress in 2009-2010 at the research field of the agriculture college, Islamic Azad University, Karaj branch. In this research, 11 characteristics related to the growth, production and yield stages were considered. Analysis of variance showed that there is a significant difference among the rapeseed varieties' characteristics. Simple correlation coefficients under both conditions, normal and drought stress, indicate that seed yield per plant and pod number have a positive and significant correlation at the 1% probability level with seed yield, so selection based on these characteristics would be effective for improving yield. Factor analysis showed that, under normal conditions, five factors had eigenvalues greater than one, together explaining 82.72% of the total variance, while under drought stress four factors were identified, explaining 76.78% of the total variance. Considering the overall results, the effective characteristics identified by factor analysis and selection on different components of these characteristics can be used in breeding programs to select adapted and tolerant genotypes for drought stress conditions.
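
A hedged sketch of the eigenvalue-greater-than-one (Kaiser) rule used above to decide how many factors to retain; the trait matrix here is synthetic, not the trial data:

```python
# Hedged sketch: Kaiser criterion on the correlation matrix of traits.
import numpy as np

rng = np.random.default_rng(6)
traits = rng.standard_normal((30, 11))        # 30 plots x 11 characteristics
corr = np.corrcoef(traits, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending order

n_factors = int((eigvals > 1.0).sum())
explained = eigvals[:n_factors].sum() / eigvals.sum() * 100
print(f"retained {n_factors} factors explaining {explained:.1f}% of variance")
```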

Keywords: Correlation, drought stress, factor analysis, rapeseed.

1343 Steering Velocity Bounded Mobile Robots in Environments with Partially Known Obstacles

Authors: Reza Hossseynie, Amir Jafari

Abstract:

This paper presents a method for steering velocity-bounded mobile robots in environments with partially known stationary obstacles. The exact locations of the obstacles are unknown; only a probability distribution associated with each obstacle's location is known. The kinematic model of a two-wheeled differential drive robot is used as the model of the mobile robot. The presented control strategy uses the artificial potential field (APF) method to devise a desired direction of movement for the robot at each instant of time, while constrained directions control (CDC) uses the generated direction to produce the control signals required for steering the robot. The location of each obstacle is taken as the mean of its 2D probability distribution and, similarly, the magnitude of the electric charge in the APF is set to the trace of the covariance matrix of the location distribution. The method not only captures the challenges of planning the path (i.e. the probabilistic nature of the locations of unknown obstacles), but also addresses output saturation, an important issue from the control perspective. Moreover, the velocity of the robot can be controlled during steering; for example, it can be reduced in the close vicinity of obstacles and the target to ensure safety. Finally, the control strategy is simulated for different scenarios to show how the method can be put into practice.
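
A hedged sketch of the APF direction with uncertainty-scaled repulsion: each obstacle repels with a charge equal to the trace of its location covariance, as described above; the gains and the inverse-square potential shape are illustrative assumptions:

```python
# Hedged sketch: APF desired direction with covariance-trace charges.
import numpy as np

def apf_direction(pos, goal, obstacles, k_att=1.0, k_rep=1.0):
    """obstacles: list of (mean 2-vector, 2x2 covariance)."""
    force = k_att * (goal - pos)                  # attractive term
    for mean, cov in obstacles:
        charge = np.trace(cov)                    # uncertainty as charge
        d = pos - mean
        r = np.linalg.norm(d)
        if r > 1e-9:
            force += k_rep * charge * d / r**3    # inverse-square repulsion
    n = np.linalg.norm(force)
    return force / n if n > 0 else force          # unit desired direction

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [(np.array([2.5, 2.4]), np.diag([0.3, 0.1]))]
print(apf_direction(pos, goal, obstacles))
```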

Keywords: Steering, obstacle avoidance, mobile robots, constrained directions control, artificial potential field.

1342 Detecting the Nonlinearity in Time Series from Continuous Dynamic Systems Based on Delay Vector Variance Method

Authors: Shumin Hou, Yourong Li, Sanxing Zhao

Abstract:

Much time series data is generated by continuous dynamic systems. This paper first studies the detection of nonlinearity in time series from continuous dynamic systems by applying the phase-randomized surrogate algorithm. Then, the delay vector variance (DVV) method is introduced into the nonlinearity test. The results show that, under different sampling conditions, contradictory detections of nonlinearity are obtained with traditional test statistics, namely the third-order autocovariance and the asymmetry due to time reversal, whereas the DVV method performs well in determining the nonlinearity of the Lorenz signal. This indicates that the proposed method can characterize continuous dynamic signals effectively.
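
A hedged sketch of phase-randomized surrogate generation: keep the amplitude spectrum, randomize the Fourier phases, and invert. Surrogates share the linear properties of the original series, so a statistic that separates data from surrogates indicates nonlinearity; the test signal is illustrative:

```python
# Hedged sketch: phase-randomized surrogate of a time series.
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                       # keep the DC term real
    phases[-1] = 0.0                      # keep Nyquist bin real (even length)
    surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))
    return surrogate + x.mean()

t = np.linspace(0, 50, 2000)
x = np.sin(t) ** 3                        # a mildly nonlinear test signal
s = phase_randomized_surrogate(x)
```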

Keywords: Nonlinearity, Time series, continuous dynamic system, DVV method.

1341 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flood risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being from nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. The consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the contribution of each independent input's variation and generates a weighted output that favors the distribution with minimum variation; this is especially useful if the input distributions have dissimilar variances. The conflation is defined as the single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. This is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weight to each input distribution; the main disadvantage of those traditional methods is that the resulting measure of central tendency is exactly the average of the input distributions' means, without the additional information provided by each distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, the conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. Combining severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
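
A hedged numerical sketch of conflation: the conflated density is the normalized product of the input densities evaluated on a common grid. The exponential recovery-time densities and their scales below are stand-ins, not the study's fitted distributions:

```python
# Hedged sketch: conflation of two recovery-time densities.
import numpy as np
from scipy import stats

t = np.linspace(0, 30, 3000)                 # recovery time grid (days)
severe = stats.expon(scale=10).pdf(t)        # severe-event recovery
nuisance = stats.expon(scale=2).pdf(t)       # nuisance-event recovery

product = severe * nuisance
conflated = product / np.trapz(product, t)   # normalize to a density

# For exponentials the conflation is again exponential with summed rates:
# rate = 1/10 + 1/2 = 0.6, i.e. mean recovery time 1/0.6 ~ 1.67 days.
print(np.trapz(t * conflated, t))
```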

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

1340 Optimization of Thermal and Discretization Parameters in Laser Welding Simulation Nd:YAG Applied for Thin Plate Transparent Mode of DP600

Authors: Chansopheak Seang, Afia David Kouadri, Eric Ragneau

Abstract:

A three-dimensional thermal analysis of laser full-penetration welding (Nd:YAG, transparent mode) of DP600 alloy steel, 1.25 mm thick with a gap of 0.1 mm, is presented. Three models studied the influence of temperature-dependent thermal properties, temperature-independent properties, and the peak value of the specific heat at the phase transformation temperature, AC1, on the transient temperature. Another seven models studied the influence of the discretization (mesh) on the temperature distribution in the welded plate. For the effects of the thermal properties, errors of less than 4% in the maximum temperature in the fusion zone (FZ) and heat-affected zone (HAZ) were identified. The minimum discretization is at least one third of an increment per radius for the temporal discretization, while the spatial discretization requires two elements per radius and four elements through the thickness of the assembled plate; these represent the minimum modeling requirements for laser welding in order to keep errors below 5% compared with a fine mesh.

Keywords: FEA, welding, discretization, ABAQUS user subroutine DFLUX.

1339 Kernel’s Parameter Selection for Support Vector Domain Description

Authors: Mohamed EL Boujnouni, Mohamed Jedra, Noureddine Zahid

Abstract:

Support vector domain description (SVDD) is one of the best-known one-class support vector learning methods, in which one uses balls defined in the feature space to distinguish a set of normal data from all other possible abnormal objects. As with all kernel-based learning algorithms, its performance depends heavily on the proper choice of the kernel parameter. This paper proposes a new approach for selecting the kernel parameter based on maximizing the distance between the gravity centers of the normal and abnormal classes while at the same time minimizing the variance within each class. The performance of the proposed algorithm is evaluated on several benchmarks, and the experimental results demonstrate the feasibility and effectiveness of the presented method.
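
A hedged sketch of the selection criterion: in the RBF feature space, the squared distance between class gravity centers and the within-class variance can be computed from kernel evaluations alone (the kernel trick). The particular combination of the two terms below is an illustrative choice, not the paper's exact objective:

```python
# Hedged sketch: kernel-parameter selection from class separation/spread.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def center_distance_sq(X1, X2, gamma):
    K11 = rbf_kernel(X1, X1, gamma=gamma)
    K22 = rbf_kernel(X2, X2, gamma=gamma)
    K12 = rbf_kernel(X1, X2, gamma=gamma)
    return K11.mean() + K22.mean() - 2 * K12.mean()

def within_variance(X, gamma):
    K = rbf_kernel(X, X, gamma=gamma)
    return np.mean(np.diag(K)) - K.mean()

def criterion(X_norm, X_abn, gamma):
    sep = center_distance_sq(X_norm, X_abn, gamma)
    spread = within_variance(X_norm, gamma) + within_variance(X_abn, gamma)
    return sep - spread          # large separation, small spread

rng = np.random.default_rng(7)
Xn, Xa = rng.normal(0, 1, (80, 2)), rng.normal(4, 1, (30, 2))
gammas = [0.01, 0.1, 1.0, 10.0]
best = max(gammas, key=lambda g: criterion(Xn, Xa, g))
print("selected gamma:", best)
```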

Keywords: Gravity centers, Kernel’s parameter, Support Vector Domain Description, Variance.

1338 Neural Network Imputation in Complex Survey Design

Authors: Safaa R. Amer

Abstract:

Missing data poses many analysis challenges. With a complex survey design, in addition to dealing with missing data, researchers need to account for the sampling design to achieve useful inferences. Methods for incorporating sampling weights into neural network imputation were investigated to account for complex survey designs. An estimate of variance accounting for the imputation uncertainty as well as the sampling design using neural networks is provided. A simulation study was conducted to compare estimation results based on complete-case analysis, multiple imputation using Markov chain Monte Carlo, and neural network imputation. Furthermore, a public-use dataset was used as an example to illustrate neural network imputation under a complex survey design.
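
For illustration only, a hedged sketch of weight-aware imputation: a tiny one-hidden-layer network trained with a survey-weighted squared loss predicts the missing variable from observed covariates. The architecture, training loop, and data are all assumptions, not the paper's method:

```python
# Hedged sketch: survey-weighted neural network imputation.
import numpy as np

def train_weighted_net(X, y, w, hidden=8, lr=1e-2, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    wn = w / w.sum()                          # normalized survey weights
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        pred = H @ W2 + b2
        err = wn * (pred - y)                 # weighted residuals
        # Backpropagation of the weighted squared loss
        gW2 = H.T @ err; gb2 = err.sum()
        dH = np.outer(err, W2) * (1 - H**2)
        gW1 = X.T @ dH; gb1 = dH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
w = rng.uniform(0.5, 2.0, 200)                # survey weights
impute = train_weighted_net(X, y, w)
missing_rows = rng.standard_normal((5, 3))
print(impute(missing_rows))                   # imputed values
```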

Keywords: Complex survey, estimate, imputation, neural networks, variance.

1337 Resource Leveling in Construction Projects Using Re-Modified Minimum Moment Approach

Authors: Abhay Tawalare, Rajesh Lalwani

Abstract:

This paper proposes a re-modification of the modified minimum moment approach to resource leveling, which in turn modified the traditional minimum moment method of Harris. The method is based on the critical path method. The approaches differ in the criterion used to select the activity to be shifted when leveling the resource histogram. In the traditional method, an improvement factor is found first for each possible day of shifting. In the modified method, the maximum value of the product of resource rate and free float is found first, and the improvement factor is then calculated for the activity to be shifted. In the proposed method, the activity to be shifted is selected first, based on the largest value of resource rate. The process is repeated for all the remaining activities to obtain the updated histogram. The proposed method significantly reduces the number of iterations and is easier for manual computation.
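
A hedged sketch of the selection rule described above: pick the activity with the largest resource rate, then shift it within its free float to the position that minimizes the histogram's sum of squares (the moment). The activity data and day-step granularity are illustrative:

```python
# Hedged sketch: largest-resource-rate-first leveling within free float.
def histogram(activities, horizon):
    h = [0.0] * horizon
    for a in activities:
        for d in range(a["start"], a["start"] + a["dur"]):
            h[d] += a["rate"]
    return h

def moment(h):
    return sum(v * v for v in h)

def level(activities, horizon):
    # Largest resource rate first (the proposed selection criterion)
    for a in sorted(activities, key=lambda a: -a["rate"]):
        best_start = a["start"]
        best_m = moment(histogram(activities, horizon))
        for s in range(a["start"], a["start"] + a["free_float"] + 1):
            old = a["start"]; a["start"] = s
            m = moment(histogram(activities, horizon))
            if m < best_m:
                best_start, best_m = s, m
            a["start"] = old
        a["start"] = best_start
    return activities

acts = [{"start": 0, "dur": 4, "rate": 6, "free_float": 0},
        {"start": 0, "dur": 2, "rate": 4, "free_float": 3},
        {"start": 2, "dur": 3, "rate": 5, "free_float": 2}]
print(histogram(level(acts, 10), 10))
```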

Keywords: Re-Modified, Resource Leveling, Resources Rate, Free Float, Resource Histogram.

1336 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

In order to enhance contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize such regions, so that pixels become too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram within a limited extent of equalization that considers its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since separate equalization is applied to those areas. The paper also describes how to determine the segmentation points of the histogram. The proposed algorithm has been tested on more than 100 images of various contrasts, and the results are compared with conventional approaches to show its superiority.
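
A hedged sketch of sub-histogram equalization: split the gray range at the image mean and equalize each part only within its own range, which keeps one segment from spilling into the other. This simple two-segment split is an assumption; the paper's segmentation and weighting are more refined:

```python
# Hedged sketch: two-segment sub-histogram equalization.
import numpy as np

def sub_histogram_equalize(img):
    out = np.zeros_like(img)
    split = int(img.mean())
    for lo, hi in ((0, split), (split + 1, 255)):
        mask = (img >= lo) & (img <= hi)
        if not mask.any():
            continue
        vals = img[mask]
        hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = hist.cumsum() / hist.sum()
        # Map each level into [lo, hi] only (limited extent of equalization)
        out[mask] = lo + np.round(cdf[vals - lo] * (hi - lo)).astype(img.dtype)
    return out

img = np.clip(np.random.normal(120, 20, (64, 64)), 0, 255).astype(np.uint8)
enhanced = sub_histogram_equalize(img)
```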

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise.

1335 A Computational Model of Minimal Consciousness Functions

Authors: Nabila Charkaoui

Abstract:

Interest in human consciousness was revived in the late 20th century across different scientific disciplines. Consciousness studies involve both its understanding and its application. In this paper, a computational model of the minimum consciousness functions necessary, in our view, for artificial intelligence applications is presented, with the aim of improving the way computations will be made in the future. In Section I, human consciousness is briefly described within the scope of this paper. In Section II, a minimum set of consciousness functions is defined, based on the literature reviewed, to be modelled; a computational model of these functions is then presented in Section III. In Section IV, an analysis of the model is carried out to describe its functioning in detail.

Keywords: Consciousness, perception, attention.

1334 Assessment of Water Pollution of Kowsar Dam Reservoir

Authors: Mohammad Mahdi Jabbari, Fardin Boustani

Abstract:

The reservoir of the Kowsar dam supplies water for different uses, such as aquaculture farms, drinking water, and agricultural and industrial consumption, for some provinces in the south of Iran. The Kowsar dam is located next to the city of Dehdasht in Kohgiluyeh and Boyer-Ahmad province in southern Iran. There are some towns and villages on the Kowsar dam watershed, of which Dehdasht and Choram are the most important and most populated, and these can be sources of pollution for the reservoir. This study was done to determine the water pollution of the Kowsar dam reservoir, one of the most important water resources of Kohgiluyeh and Boyer-Ahmad and Bushehr provinces. Water samples were collected over 12 months to examine biochemical oxygen demand (BOD) and dissolved oxygen (DO) as criteria for evaluating the pollution of the reservoir. In summary, the maximum, average and minimum BOD levels observed were 25.9, 9.15 and 2.3 mg/L, respectively, and the standard deviation, variance and skewness of the data were 7.88, 62 and 1.54, respectively. Finally, the results were compared with the Iranian national standards: the maximum BOD value (25.9 mg/L), observed in May 2010, was within the maximum admissible limit of the Iranian standards.

Keywords: Kowsar dam, Biochemical Oxygen Demand, water pollution.

1333 Statistically Significant Differences of Carbon Dioxide and Carbon Monoxide Emission in Photocopying Process

Authors: Kiurski S. Jelena, Kecić S. Vesna, Oros B. Ivana

Abstract:

Experimental results confirmed the temporal variation of carbon dioxide and carbon monoxide concentrations during the working shift of the photocopying process in a small photocopying shop in Novi Sad, Serbia. The statistically significant differences in the target gases were examined with two-way analysis of variance without replication, followed by Scheffe's post hoc test. Statistically significant differences were obtained for carbon monoxide emission, indicated by F-values (12.37 and 31.88) greater than Fcrit (6.94), in contrast to carbon dioxide emission (F-values of 1.23 and 3.12, less than Fcrit). Scheffe's post hoc test indicated that sampling point A (near the photocopier machine) and the second time interval contributed the most to carbon monoxide emission.
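
A hedged sketch of a two-way ANOVA without replication: one observation per (sampling point, time interval) cell, with main-effect F statistics tested against the residual. The 3x3 data layout below is an assumption, chosen so that the critical value matches the F(0.95; 2, 4) = 6.94 quoted above:

```python
# Hedged sketch: two-way ANOVA without replication.
import numpy as np
from scipy import stats

data = np.array([[5.1, 6.0, 7.2],     # rows: sampling points
                 [4.0, 4.8, 5.9],     # cols: time intervals
                 [3.2, 3.9, 4.4]])
r, c = data.shape
grand = data.mean()
ss_rows = c * ((data.mean(axis=1) - grand) ** 2).sum()
ss_cols = r * ((data.mean(axis=0) - grand) ** 2).sum()
ss_tot = ((data - grand) ** 2).sum()
ss_err = ss_tot - ss_rows - ss_cols

df_r, df_c, df_e = r - 1, c - 1, (r - 1) * (c - 1)
F_rows = (ss_rows / df_r) / (ss_err / df_e)
F_cols = (ss_cols / df_c) / (ss_err / df_e)
F_crit = stats.f.ppf(0.95, df_r, df_e)        # = 6.94 for df (2, 4)
print(F_rows, F_cols, F_crit)
```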

Keywords: Analysis of variance, carbon dioxide, carbon monoxide, photocopying indoor, Scheffe's test.

1332 Thermodynamic Study for Aggregation Behavior of Hydrotropic Solution

Authors: Meghal Desai, Jigisha Parikh

Abstract:

The aggregation behavior of sodium salicylate and sodium cumene sulfonate in aqueous solution was studied at different temperatures. Specific conductivity and relative viscosity were measured at different temperatures to find the minimum hydrotropic concentration. The thermodynamic parameters (free energy, enthalpy and entropy) were evaluated in the temperature range 30-70°C. The free energy decreased with increasing temperature. The aggregation was found to be exothermic in nature and favored by a positive entropy.
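
A hedged sketch of the standard relations typically used for such aggregation thermodynamics; the paper's exact conventions are assumed, not reproduced:

```latex
% Free energy of aggregation from the minimum hydrotropic concentration
% (MHC, expressed as a mole fraction x_{\mathrm{MHC}}):
\[
  \Delta G^{\circ}_{\mathrm{agg}} = RT \ln x_{\mathrm{MHC}}
\]
% Enthalpy via the van 't Hoff relation, and entropy from the usual identity:
\[
  \Delta H^{\circ}_{\mathrm{agg}} = -RT^{2}\,
      \frac{\partial \ln x_{\mathrm{MHC}}}{\partial T},
  \qquad
  \Delta S^{\circ}_{\mathrm{agg}} = \frac{\Delta H^{\circ}_{\mathrm{agg}}
      - \Delta G^{\circ}_{\mathrm{agg}}}{T}.
\]
```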

Keywords: Hydrotropes, Enthalpy, Entropy, Free Energy, Minimum Hydrotropic Concentration.

1331 Investigations Into the Turning Parameters Effect on the Surface Roughness of Flame Hardened Medium Carbon Steel with TiN-Al2O3-TiCN Coated Inserts based on Taguchi Techniques

Authors: Samir Khrais, Adel Mahammod Hassan, Amro Gazawi

Abstract:

The aim of this research is to evaluate surface roughness and develop a multiple regression model for surface roughness as a function of cutting parameters during the turning of flame-hardened medium carbon steel with TiN-Al2O3-TiCN coated inserts. An experimental plan and the signal-to-noise (S/N) ratio were used to relate the influence of the turning parameters to the workpiece surface finish, following the Taguchi methodology. The effects of the turning parameters were studied using the analysis of variance (ANOVA) method. The evaluated parameters were feed, cutting speed, and depth of cut. It was found that the most significant interaction among the considered turning parameters was between depth of cut and feed. The average surface roughness (Ra) obtained with the TiN-Al2O3-TiCN coated inserts was about 2.44 μm, and the minimum value was 0.74 μm. In addition, the regression model was able to predict surface roughness values that agreed with the experimental values within reasonable limits.
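
A hedged sketch of the Taguchi smaller-the-better S/N ratio, the usual choice for surface roughness; the replicate Ra values below are illustrative, not the experiment's measurements:

```python
# Hedged sketch: Taguchi smaller-the-better signal-to-noise ratio.
import numpy as np

def sn_smaller_is_better(y):
    """S/N = -10 log10(mean(y^2)) for a set of replicate responses."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Ra replicates (um) for two hypothetical parameter combinations
print(sn_smaller_is_better([0.74, 0.81, 0.78]))   # better (higher S/N)
print(sn_smaller_is_better([2.44, 2.60, 2.38]))   # worse  (lower S/N)
```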

Keywords: Medium carbon steel, Prediction, Surface roughness, Taguchi method

1330 Higher Order Statistics for Identification of Minimum Phase Channels

Authors: Mohammed Zidane, Said Safi, Mohamed Sabri, Ahmed Boumezzough

Abstract:

This paper describes a blind algorithm, compared with two other algorithms proposed in the literature, for estimating minimum phase channel parameters. In order to identify the impulse response of these channels blindly, we use higher order statistics (HOS) to build our algorithm. The simulation results in a noisy environment demonstrate that the proposed method can blindly estimate the phase and magnitude of these channels with high accuracy, without any information about the input except that the input excitation is independent and identically distributed (i.i.d.) and non-Gaussian.
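
A hedged sketch of one higher-order statistic such estimators are built from: an estimator of the fourth-order cumulant slice c4(tau, tau, tau) of a zero-mean sequence. The channel taps and i.i.d. non-Gaussian input below are illustrative, not the paper's algorithm:

```python
# Hedged sketch: fourth-order cumulant slice of a channel output.
import numpy as np

def cum4_slice(x, tau):
    """cum(x_n, x_{n+tau}, x_{n+tau}, x_{n+tau}) for zero-mean x."""
    x = x - x.mean()
    n = len(x) - abs(tau)
    a, b = x[:n], x[abs(tau):abs(tau) + n]
    m13 = np.mean(a * b ** 3)                 # E[x_n x_{n+tau}^3]
    m11 = np.mean(a * b)                      # E[x_n x_{n+tau}]
    m02 = np.mean(b ** 2)                     # E[x_{n+tau}^2]
    return m13 - 3.0 * m11 * m02

rng = np.random.default_rng(9)
s = rng.uniform(-1, 1, 20000)                 # i.i.d. non-Gaussian input
h = np.array([1.0, 0.5, 0.2])                 # a minimum phase channel
y = np.convolve(s, h, mode="full")[: len(s)]
print([round(cum4_slice(y, t), 4) for t in range(4)])
```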

Keywords: System Identification, Higher Order Statistics, Communication Channels.
