Search results for: Slant weighted Toeplitz operator

135 Performance Evaluation of Minimum Quantity Lubrication on EN3 Mild Steel Turning

Authors: Swapnil Rajan Jadhav, Ajay Vasantrao Kashikar

Abstract:

Lubrication, cooling, and chip removal are the desired functions of any cutting fluid. Conventional (flood) lubrication requires a high volume flow rate, and the associated cost is high. In addition, flood lubrication poses health risks to the machine operator. To avoid these consequences, dry machining and minimum quantity lubrication (MQL) are two alternatives. Dry machining is not a well-suited alternative, as it generates more heat and a poorer surface finish. Here, turning work is carried out on a lathe using EN3 mild steel. Variable cutting speeds and depths of cut are applied, and the corresponding temperatures and surface roughness values are recorded. The experimental results are analyzed in Minitab software: regression analysis, main effect plots, and interaction plots are produced, and conclusions are drawn using ANOVA. There is a 95.83% reduction in the use of cutting fluid. MQL gives a 9.88% reduction in tool temperature, which will improve tool life, and produces a 17.64% better surface finish. MQL therefore appears to be an economical and environmentally compatible lubrication technique for sustainable manufacturing.
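
The kind of factorial regression and ANOVA analysis described above can be sketched outside Minitab; the snippet below uses statsmodels with hypothetical speed/depth/temperature data, not the study's measurements.

```python
# Sketch: regression + ANOVA for turning responses (hypothetical data,
# statsmodels used here in place of Minitab).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical observations: cutting speed (m/min), depth of cut (mm),
# measured tool temperature (deg C).
df = pd.DataFrame({
    "speed": [55, 55, 70, 70, 85, 85, 55, 70, 85],
    "depth": [0.5, 1.0, 0.5, 1.0, 0.5, 1.0, 1.5, 1.5, 1.5],
    "temp":  [62, 71, 68, 80, 75, 88, 78, 86, 95],
})

# Regression model with main effects and the speed x depth interaction.
model = smf.ols("temp ~ speed * depth", data=df).fit()
print(model.summary())

# ANOVA table (Type II sums of squares) for the fitted model.
print(sm.stats.anova_lm(model, typ=2))
```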

Keywords: ANOVA, MQL, regression analysis, surface roughness

134 Digital Library Evaluation by SWARA-WASPAS Method

Authors: Mehmet Yörükoğlu, Serhat Aydın

Abstract:

Since the discovery of the manuscript, mechanical methods for storing, transferring, and using information have evolved over time into digital methods. In this process, libraries, as centers of information, have also become digitized; having no physical boundaries, they are now accessible from anywhere in the world at any time. In this context, certain criteria for the information obtained from digital libraries have become more important to users. This paper evaluates, from different perspectives, the user criteria that make a digital library more useful. The Step-Wise Weight Assessment Ratio Analysis-Weighted Aggregated Sum Product Assessment (SWARA-WASPAS) method is used for the evaluation of digital library criteria because of its flexibility and easy calculation steps. Three digital libraries are evaluated by information technology experts according to five conflicting main criteria: ‘interface design’, ‘effects on users’, ‘services’, ‘user engagement’ and ‘context’. Finally, the alternatives are ranked in descending order.
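
The WASPAS aggregation step can be illustrated with a short sketch; the decision matrix, SWARA-style weights, and the λ = 0.5 trade-off below are hypothetical placeholders, not the experts' actual evaluations.

```python
# Sketch of the WASPAS aggregation step (criteria weights would come from SWARA;
# the decision matrix and weights below are hypothetical).
import numpy as np

# Rows: 3 digital libraries; columns: 5 benefit-type criteria
# (interface design, effects on users, services, user engagement, context).
X = np.array([
    [7.0, 8.0, 6.5, 7.5, 8.0],
    [8.5, 7.0, 7.0, 6.5, 7.5],
    [6.0, 9.0, 8.0, 8.0, 6.5],
])
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # SWARA-style weights (hypothetical)
lam = 0.5                                      # trade-off between WSM and WPM

X_norm = X / X.max(axis=0)                     # benefit criteria: divide by column max
wsm = (X_norm * w).sum(axis=1)                 # weighted sum model score
wpm = np.prod(X_norm ** w, axis=1)             # weighted product model score
Q = lam * wsm + (1 - lam) * wpm                # joint WASPAS score

ranking = np.argsort(-Q)                       # descending order, best first
print("WASPAS scores:", Q.round(4), "ranking:", ranking + 1)
```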

Keywords: Digital library, multi criteria decision making, SWARA-WASPAS method.

133 Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards

Authors: Jonghyun Park, Toan Nguyen Dinh, Gueesang Lee

Abstract:

In this paper, we present a novel approach for accurately detecting text regions, including shop names, in signboard images with complex backgrounds for mobile system applications. The proposed method combines text detection using edge profiles with region segmentation using the fuzzy c-means method. In the first step, the Canny edge operator is applied to extract all possible object edges. Edge profile analysis in the vertical and horizontal directions is then performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and the geometrical characteristics of each object contour are carefully examined to construct candidate text regions and to separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that the proposed method is robust to different character sizes and colors and provides reliable text binarization results.
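
A minimal fuzzy c-means sketch for the final binarization step, clustering the gray levels of a detected region into text and background; the initialization and the "darker cluster is text" assumption are simplifications, not the paper's exact settings.

```python
# Minimal fuzzy c-means sketch for binarizing a detected text region by clustering
# gray levels into two classes; the region is assumed to be a 2-D uint8 crop.
import numpy as np

def fcm_binarize(region, m=2.0, n_iter=50, eps=1e-5):
    x = region.astype(float).ravel()                 # gray levels as 1-D samples
    centers = np.array([x.min(), x.max()])           # initial cluster centers
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # distances to centers
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)     # membership degrees
        new_centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    labels = u.argmax(axis=1).reshape(region.shape)
    text_cluster = centers.argmin()                  # assumption: darker cluster is text
    return (labels == text_cluster).astype(np.uint8) * 255

# Example: binary = fcm_binarize(cropped_text_region)
```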

Keywords: Text detection, edge profile, signboard image, fuzzy clustering.

132 Medical Image Fusion Based On Redundant Wavelet Transform and Morphological Processing

Authors: P. S. Gomathi, B. Kalaavathi

Abstract:

Image fusion is the process in which complementary information from multiple images is integrated to produce a composite image that contains more information than any of the original inputs. Medical image fusion combines multimodality medical images and provides additional information that helps the doctor diagnose diseases more effectively. This paper presents a wavelet-based medical image fusion algorithm applied to different multimodality medical images. To fuse the medical images, the images are decomposed using the Redundant Wavelet Transform (RWT). The high-frequency coefficients are convolved with a morphological operator and then combined using the maximum-selection (MS) rule; the low-frequency coefficients are processed by the MS rule directly. The fused image is reconstructed by the inverse RWT. Quantitative measures including mean, standard deviation, average gradient, spatial frequency, and edge-based similarity measures are used to evaluate the fused images. The performance of the proposed method is compared with pixel averaging, PCA, and DWT fusion methods. Compared with these conventional methods, the proposed framework provides better performance for the analysis of multimodality medical images.

Keywords: Discrete Wavelet Transform (DWT), Image Fusion, Morphological Processing, Redundant Wavelet Transform (RWT).

131 Classification of Soil Aptness for Establishment of Panicum virgatum in Mississippi using Sensitivity Analysis and GIS

Authors: Eduardo F. Arias, William Cooke III, Zhaofei Fan, William Kingery

Abstract:

During the last decade Panicum virgatum, known as Switchgrass, has been broadly studied because of its remarkable attributes as a substitute pasture and as a functional biofuel source. The objective of this investigation was to establish soil suitability for Switchgrass in the State of Mississippi. A linear weighted additive model was developed to forecast soil suitability. Multicriteria analysis and sensitivity analysis were used to adjust and optimize the model. The model was fit using seven years of field data together with soil characteristics obtained from the Natural Resources Conservation Service of the United States Department of Agriculture (NRCS-USDA). The best model was selected by correlating calculated biomass yield with each model's soils-based output for Switchgrass suitability. The coefficient of determination (r2) was the decisive factor used to select the 'best' soil suitability model. The coefficients associated with the 'best' model were implemented within a Geographic Information System (GIS) to create a map of relative soil suitability for Switchgrass in Mississippi. A geodatabase of the soil parameters was built and is available for future GIS use.
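
A minimal sketch of a linear weighted additive suitability score of the kind used here; the soil factors, their [0, 1] rescaling, and the weights are hypothetical placeholders rather than the study's calibrated coefficients.

```python
# Sketch of a linear weighted additive suitability model (hypothetical factors
# and weights, not the study's calibrated coefficients).
import numpy as np

# One row per soil map unit; columns are soil factors already rescaled to [0, 1]
# (e.g. pH suitability, drainage class, available water capacity, slope score).
factors = np.array([
    [0.9, 0.7, 0.8, 0.6],
    [0.4, 0.9, 0.5, 0.8],
    [0.7, 0.6, 0.9, 0.9],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])   # must sum to 1

suitability = factors @ weights                # linear weighted additive score
print(suitability.round(3))                    # relative suitability per map unit
```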

Keywords: Aptness, GIS, sensitivity analysis, switchgrass, soil.

130 A TFETI Domain Decomposition Solver for Von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening

Authors: Martin Cermak, Stanislav Sysala

Abstract:

In this paper we present an efficient parallel implementation for elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve the nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs. We consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. The model is discretized by the implicit Euler method in time and by the finite element method in space. The resulting system of nonlinear equations has a strongly semismooth and strongly monotone operator, and the semismooth Newton method is applied to solve it. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB.

Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.

129 A New Heuristic Approach for Large Size Zero-One Multi Knapsack Problem Using Intercept Matrix

Authors: K. Krishna Veni, S. Raja Balachandar

Abstract:

This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, which are known as redundant variables. In this heuristic, the dominance property of the intercept matrix of the constraints is first exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, the solution is improved using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP with this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used to find good solutions for highly constrained NP-hard problems.
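
The surrogate-constraint pseudo-utility idea can be sketched as a greedy packing rule; uniform surrogate multipliers are assumed below, which is a simplification of the paper's intercept-matrix construction.

```python
# Sketch of a pseudo-utility greedy pass for the 0-1 multi-constrained knapsack:
# items are ranked by profit divided by surrogate-weighted resource consumption
# and packed while every constraint stays feasible. Uniform multipliers are an
# assumption here.
import numpy as np

def greedy_mkp(profits, A, b, mu=None):
    """profits: (n,), A: (m, n) consumption matrix, b: (m,) capacities."""
    m, n = A.shape
    mu = np.ones(m) if mu is None else mu          # surrogate multipliers
    utility = profits / (mu @ A + 1e-12)           # pseudo-utility ratio per item
    order = np.argsort(-utility)                   # most attractive items first
    x = np.zeros(n, dtype=int)
    used = np.zeros(m)
    for j in order:
        if np.all(used + A[:, j] <= b):            # keep every constraint feasible
            x[j] = 1
            used += A[:, j]
    return x, profits @ x

profits = np.array([10, 13, 7, 8, 6])
A = np.array([[2, 3, 1, 4, 2],
              [3, 1, 2, 2, 1]])
b = np.array([6, 5])
x, value = greedy_mkp(profits, A, b)
print(x, value)
```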

Keywords: 0-1 Multi constrained Knapsack problem, heuristic, computational complexity, NP-Hard problems.

128 Generational PipeLined Genetic Algorithm (PLGA) using Stochastic Selection

Authors: Malay K. Pakhira, Rajat K. De

Abstract:

In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection; this helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. The SA-selection scheme gives performance comparable to the classical roulette-wheel selection scheme on all instances when the quality of solutions and the rate of convergence are considered. The speedups obtained by PLGA on the different benchmarks are found to be significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible; in this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform a parallel GA (PGA).
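
A Boltzmann-style stochastic selection is one plausible reading of the SA-selection operator; the sketch below is an illustration under that assumption, not the paper's exact operator.

```python
# Simulated-annealing-style stochastic selection sketch: a challenger is chosen at
# random and accepted over the incumbent with a Boltzmann probability that depends
# on a temperature parameter.
import math
import random

def sa_select(population, fitness, temperature):
    """Return one parent. Higher fitness is better; temperature > 0."""
    incumbent = random.choice(population)
    challenger = random.choice(population)
    delta = fitness(challenger) - fitness(incumbent)
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return challenger      # accept better, or worse with Boltzmann probability
    return incumbent

# Example usage with a toy fitness function and a cooling temperature.
pop = [random.uniform(-5, 5) for _ in range(20)]
fit = lambda x: -(x ** 2)                      # maximize -> prefer x near 0
for gen, T in enumerate([2.0, 1.0, 0.5, 0.1]):
    parents = [sa_select(pop, fit, T) for _ in range(len(pop))]
```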

Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.

127 A Robust TVD-WENO Scheme for Conservation Laws

Authors: A. Abdalla, A. Kaltayev

Abstract:

The ultimate goal of this article is to develop a robust and accurate numerical method for solving hyperbolic conservation laws in one and two dimensions. A hybrid numerical method is considered, coupling a cheap fourth-order total variation diminishing (TVD) scheme [1] in smooth regions with a robust seventh-order weighted essentially non-oscillatory (WENO) scheme [2] near discontinuities. High-order multi-resolution analysis is used to detect the high-gradient regions of the numerical solution so that shocks are captured with the WENO scheme, while the smooth regions are computed with the fourth-order TVD scheme. For time integration, the third-order TVD Runge-Kutta scheme is used. The accuracy of the resulting hybrid high-order scheme is comparable with that of WENO, but with a significant decrease in CPU cost. Numerical results demonstrate that the proposed scheme is comparable to the high-order WENO scheme and superior to the fourth-order TVD scheme, with the added advantages of simplicity and computational efficiency. Numerical tests are presented which show the robustness and effectiveness of the proposed scheme.

Keywords: WENO scheme, TVD schemes, smoothness indicators, multi-resolution.

126 Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

Authors: Hyunsup Yoon, Youngjoon Han, Hernsoo Hahn

Abstract:

In order to enhance the contrast in regions where pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize such regions, producing pixels that are too bright or too dark, while local equalization schemes produce unexpected discontinuities at block boundaries. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram with a limited extent of equalization that takes its mean and variance into account. The final image is the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these remaining areas are equalized separately. The paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images with various contrasts, and the results are compared to conventional approaches to show its superiority.
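
The mean-based splitting and range-limited equalization can be sketched as follows; the simple clip ratio stands in for the paper's mean/variance-based limits and is an assumption.

```python
# Sketch of mean-based sub-histogram equalization: the histogram is split at the
# image mean and each part is equalized only within its own intensity range, which
# limits the over-equalization a single global pass would cause.
import numpy as np

def sub_histogram_equalize(img, clip_ratio=3.0):
    img = img.astype(np.uint8)
    m = int(img.mean())                                    # segmentation point
    out = np.empty_like(img)
    for lo, hi, mask in [(0, m, img <= m), (m + 1, 255, img > m)]:
        if not mask.any() or hi <= lo:
            out[mask] = img[mask]
            continue
        hist = np.bincount(img[mask].ravel(), minlength=256)[lo:hi + 1].astype(float)
        hist = np.minimum(hist, clip_ratio * hist.mean())  # limit extent of equalization
        cdf = hist.cumsum() / hist.sum()
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)      # map back into [lo, hi]
        out[mask] = lut[img[mask] - lo]
    return out
```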

Keywords: Contrast Enhancement, Histogram Equalization, Histogram Region Equalization, Equalization Noise

125 Study of EEGs from Somatosensory Cortex and Alzheimer's Disease Sources

Authors: Md R. Bashar, Yan Li, Peng Wen

Abstract:

This study investigates the electroencephalogram (EEG) differences generated from normal and Alzheimer's disease (AD) sources, as well as the effects on the EEG of brain tissue distortions due to AD. We develop a realistic head model from T1-weighted magnetic resonance imaging (MRI) using the finite element method (FEM) for a normal source (somatosensory cortex (SC) in the parietal lobe) and AD sources (right amygdala (RA) and left amygdala (LA) in the medial temporal lobe). We then compare the AD-sourced EEGs to the SC-sourced EEG to study the nature of the potential changes due to the sources and to 5% to 20% brain tissue distortions. We find an average magnification error of 0.15 produced by the AD-sourced EEGs, while the different brain tissue distortion models generate a maximum magnification of 0.07. EEGs obtained from AD sources and different brain tissue distortion levels show scalp potentials that differ from those of the normal source, and the electrodes residing over the parietal and temporal lobes are more sensitive than other electrodes to AD-sourced EEG.

Keywords: Alzheimer's disease (AD), brain tissue distortion, electroencephalogram, finite element method.

124 Level of Service Based Methodology for Municipal Infrastructure Management

Authors: Z. Khan, O. Moselhi, T. Zayed

Abstract:

The development of levels of service in a municipal context is a flexible vehicle for performing quality-cost trade-off analysis for municipal services. This trade-off depends on the willingness of a community to pay as well as on the condition of the assets. The community's perspective on the performance of an asset, from a service point of view, may be quite different from the municipality's perspective on the performance of the same asset from a condition point of view. This paper presents a three-phased level-of-service based methodology for water mains that consists of: 1) development of an Analytical Hierarchy model of level of service, 2) development of a Fuzzy Weighted Sum model of the water main condition index, and 3) derivation of a fuzzy-logic based function that maps level of service to asset condition index. This mapping will assist asset managers in quantifying the condition improvement required to meet service goals and in making more informed decisions on interventions and related priorities.
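
A crisp sketch of the two weighting steps, deriving criterion weights from an AHP pairwise matrix and forming a weighted-sum condition index; the pairwise judgments and factor scores are hypothetical, and the fuzzy membership handling of the paper is simplified away.

```python
# Sketch: AHP priority vector (principal eigenvector of a pairwise matrix) and a
# weighted-sum water-main condition index. Crisp scores replace the paper's fuzzy
# memberships; all numbers are hypothetical.
import numpy as np

# Hypothetical pairwise comparisons for three condition factors
# (e.g. breakage history, pipe age, soil corrosivity) on Saaty's 1-9 scale.
P = np.array([
    [1.0, 3.0, 5.0],
    [1/3., 1.0, 2.0],
    [1/5., 1/2., 1.0],
])
eigvals, eigvecs = np.linalg.eig(P)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()            # AHP priority vector

scores = np.array([0.6, 0.8, 0.4])               # factor scores for one water main, 0-1
condition_index = float(weights @ scores)        # weighted-sum condition index
print(weights.round(3), round(condition_index, 3))
```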

Keywords: Asset Management, Level of Service, Condition Index, Analytical Hierarchy, Fuzzy Logic.

123 Repair and Maintenance Capability and Facilities Availability for MF 285 Tractor Operators in North of Khouzestan Province

Authors: Fatemeh Afsharnia, Mohammad Amin Asoodar, Abbas Abdeshahi, Afshin Marzban

Abstract:

A repairable mechanical system such as an agricultural tractor is subject to deterioration or repeated failure and needs repair shops as well as operator capability for repair and maintenance operations. The data are based on field visits and interviews with 48 MF 285 tractor operators from 14 villages in the north of Khouzestan province. The results showed that most operators lacked the technical skill to service and repair tractors due to insufficient training, specific education, and work experience. Inadequate repair and maintenance facilities, such as workshops, mechanics, and spare parts depots, cause delays in repair work in the survey areas. Farmers do not keep accurate service records, and most of them disregard proper maintenance and service of their tractors, for example changing engine oil without following the manufacturer's recommendations. Repair and maintenance facilities should therefore be established in village areas to guarantee timely repair in case of breakdowns and to make spare parts available at low prices. Operators should keep service records accurately and adhere to maintenance and service schedules according to the manufacturer's instructions. They should also be encouraged to service and maintain their tractors properly.

Keywords: Operators’ capability, Facilities availability, Repair and maintenance, MF 285 tractors.

122 A 24-Bit, 8.1-MS/s D/A Converter for Audio Baseband Channel Applications

Authors: N. Ben Ameur, M. Loulou

Abstract:

This paper studies the high-level modelling and design of delta-sigma (ΔΣ) noise shapers for an audio Digital-to-Analog Converter (DAC), so as to eliminate the in-band Signal-to-Noise Ratio (SNR) degradation that accompanies channel mismatch in the audio signal. The converter combines cascaded digital signal interpolation with a single-loop noise-shaping delta-sigma modulator using a 5-bit quantizer in the final stage. To reduce sensitivity to DAC nonlinearities in the last stage, a second-order high-pass Data Weighted Averaging (R2DWA) scheme is introduced. This paper presents a MATLAB modelling approach for the proposed DAC architecture with low-distortion and swing-suppression integrator designs. The ΔΣ modulator can be configured as a third-order design and supports 24-bit PCM at a sampling rate of 64 kHz for Digital Video Disc (DVD) audio applications. The modelling approach provides 139.38 dB of dynamic range for a 32 kHz signal band at a -1.6 dBFS input signal level.

Keywords: DVD-audio, DAC, Interpolator and Interpolation Filter, Single-Loop ΔΣ Modulation, R2DWA, Clock Jitter

121 Influence of Mass Flow Rate on Forced Convective Heat Transfer through a Nanofluid Filled Direct Absorption Solar Collector

Authors: Salma Parvin, M. A. Alim

Abstract:

The convective and radiative heat transfer performance and the entropy generation of forced convection through a direct absorption solar collector (DASC) are investigated numerically. Four different fluids, namely Cu-water nanofluid, Al2O3-water nanofluid, TiO2-water nanofluid, and pure water, are used as the working fluid. Entropy production is taken into account in addition to the collector efficiency and heat transfer enhancement. The penalty finite element method with Galerkin's weighted residual technique is used to solve the governing nonlinear partial differential equations. Numerical simulations are performed for varying mass flow rate. The outcomes are presented in the form of isotherms, average output temperature, average Nusselt number, collector efficiency, average entropy generation, and Bejan number. The results show that the rate of heat transfer and the collector efficiency increase significantly as the mass flow rate m is raised, up to a certain range.

Keywords: DASC, forced convection, mass flow rate, nanofluid.

120 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Current real-estate value estimation is difficult for laypeople and is usually performed by specialists. This paper presents an automated estimation process, based on big data and machine-learning technology, that calculates the influence of building conditions on real-estate price measurement. The study analyzed actual building sales data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Based on that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated from previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.
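
The prediction step can be sketched with scikit-learn in place of RapidMiner Studio; the building attributes and synthetic prices below are hypothetical stand-ins for the Nonhyeon-dong sales data.

```python
# Sketch of a building-value prediction model (hypothetical features and prices,
# scikit-learn used here in place of RapidMiner Studio).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(1980, 2015, n),     # construction year
    rng.uniform(30, 300, n),        # floor area (m^2)
    rng.integers(1, 25, n),         # floor level
    rng.uniform(0.1, 2.0, n),       # distance to subway (km)
])
# Hypothetical price rule with noise, only to make the sketch runnable.
y = 0.8 * X[:, 1] + 5 * (X[:, 0] - 1980) - 20 * X[:, 3] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
print("feature importances:", model.feature_importances_.round(3))  # weighted influence of each condition
```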

Keywords: Big data, building-value analysis, machine learning, price prediction.

119 A Hybrid Multi Objective Algorithm for Flexible Job Shop Scheduling

Authors: Parviz Fattahi

Abstract:

Scheduling for the flexible job shop is very important in both production management and combinatorial optimization. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity, and the combination of several optimization criteria induces additional complexity and new problems. In this paper, a Pareto approach for solving multi-objective flexible job shop scheduling problems is proposed. The objectives considered are to minimize the overall completion time (makespan) and the total weighted tardiness (TWT). An effective simulated annealing algorithm based on the proposed approach is presented to solve the multi-objective flexible job shop scheduling problem. An external memory of non-dominated solutions is maintained to save and update the non-dominated solutions during the solution process. Numerical examples are used to evaluate and study the performance of the proposed algorithm. The proposed algorithm can be applied easily under real factory conditions and to large problems, so it should be useful to both practitioners and researchers.
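
The external memory of non-dominated solutions reduces to a simple archive update on (makespan, TWT) pairs, sketched below.

```python
# Sketch of the external archive used by the multi-objective simulated annealing:
# a candidate schedule, summarized by its (makespan, total weighted tardiness)
# pair, is added only if no archived point dominates it, and any archived points
# it dominates are removed.
def dominates(a, b):
    """True if objective vector a dominates b (both minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """archive: list of (makespan, twt) tuples; returns the updated archive."""
    if any(dominates(other, candidate) for other in archive):
        return archive                                  # candidate is dominated
    archive = [other for other in archive if not dominates(candidate, other)]
    archive.append(candidate)
    return archive

# Example: successive candidates produced during the annealing run.
archive = []
for point in [(120, 45), (110, 60), (115, 40), (130, 35), (110, 38)]:
    archive = update_archive(archive, point)
print(sorted(archive))   # the current Pareto front approximation
```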

Keywords: Flexible job shop, Scheduling, Hierarchical approach, simulated annealing, tabu search, multi objective.

118 Kinetic model and Simulation Analysis for Propane Dehydrogenation in an Industrial Moving Bed Reactor

Authors: Chin S. Y., Radzi, S. N. R., Maharon, I. H., Shafawi, M. A.

Abstract:

A kinetic model for propane dehydrogenation in an industrial moving bed reactor is developed based on the reported reaction scheme. The kinetic parameters and activity constant are fine-tuned with several sets of balanced plant data. Plant data at different operating conditions are used to validate the model, and the results show good agreement between the model predictions and plant observations in terms of the amount of the main product, propylene, produced. A simulation analysis of the key variables affecting process performance, such as the inlet temperature of each reactor (Tinrx) and the hydrogen to total hydrocarbon ratio (H2/THC), is performed to identify the operating condition that maximizes propylene production. Within the range of operating conditions applied in the present study, the operating condition that maximizes propylene production at the same weighted average inlet temperature (WAIT) is ΔTinrx1 = -2, ΔTinrx2 = +1, ΔTinrx3 = +1, ΔTinrx4 = +2 and ΔH2/THC = -0.02. Under this condition, the surplus propylene produced is 7.07 tons/day compared with the base case.

Keywords: kinetic model, dehydrogenation, simulation, modeling, propane

117 Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals

Authors: Ahmad Tariq Sheikh, Shahzad Amin Sheikh

Abstract:

A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, which create regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria that is scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms in the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented and the system is configured to operate in VSCA error signal mode, for square QAM signals up to 64-QAM.

Keywords: Adaptive filtering, Blind Equalization, Square Contour Algorithm.

116 Power Flow Tracing Based Reactive Power Ancillary Service (AS) in Restructured Power Market

Authors: M. Susithra, R. Gnanadass

Abstract:

Ancillary services are support services which are essential for enhancing the reliability and security of the electric power system. Reactive power ancillary service is one of the important ancillary services in a restructured electricity market; it determines the cost of supplying ancillary services and how this cost changes with respect to operating decisions. This paper presents a new formulation that can be used to minimize the Independent System Operator (ISO)'s total payment for reactive power ancillary service. A modified power flow tracing algorithm estimates the availability of reserve reactive power for the ancillary service. To find the optimum reactive power dispatch, the biogeography-based optimization (BBO) method is proposed. The Market Reactive Clearing Price (MRCP) is then estimated, which encourages generator companies (GENCOs) to participate in the ancillary service. Finally, the optimal weighting factor and the real-time utilization factor of reactive power give the minimum ISO total payment. The effectiveness of the proposed design is verified using the IEEE 30 bus system.

Keywords: Biogeography based optimization method, Power flow tracing method, Reactive generation capability curve and Reactive power ancillary service.

115 Simulation of Non-Linear Behavior of Shear Wall under Seismic Loading

Authors: M. A. Ghorbani, M. Pasbani Khiavi

Abstract:

The seismic response of a steel shear wall system, taking nonlinearity effects into account, is investigated in this paper using the finite element method. With the availability of computer technology, nonlinear finite element analysis has become a usable and reliable means of analyzing civil structures. A numerical model based on the finite element method for the seismic analysis of shear walls, capturing large displacements and materially nonlinear behavior, is presented through the development of a finite element code. The code uses the standard Galerkin weighted residual formulation. A two-dimensional plane stress model and a total Lagrangian formulation are employed to represent the shear wall response, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The presented model can be extended to the analysis of civil engineering structures with different material behavior and complicated geometry.
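
The Newton-Raphson iteration used for the nonlinear transient equations can be sketched in a few lines; the two-DOF toy system below is only an illustration, not the shear wall model.

```python
# Sketch of a Newton-Raphson iteration: at each step the residual R(u) and the
# tangent (Jacobian) K_T(u) are evaluated and the increment solves K_T * du = -R.
import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=25):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        R = residual(u)
        if np.linalg.norm(R) < tol:
            break
        du = np.linalg.solve(tangent(u), -R)   # linearized correction
        u = u + du
    return u

# Toy nonlinear system: R(u) = [u0 + u1**3 - 1, u0**2 + u1 - 1].
residual = lambda u: np.array([u[0] + u[1] ** 3 - 1.0, u[0] ** 2 + u[1] - 1.0])
tangent = lambda u: np.array([[1.0, 3.0 * u[1] ** 2],
                              [2.0 * u[0], 1.0]])
print(newton_raphson(residual, tangent, [0.5, 0.5]))
```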

Keywords: Finite element, steel shear wall, nonlinear, earthquake

114 The Application of Queuing Theory in Multi-Stage Production Lines

Authors: Hani Shafeek, Muhammed Marsudi

Abstract:

The purpose of this work is to examine a multi-product, multi-stage battery production line and to improve the performance of the assembly line by determining the efficiency of each workstation. Data were collected from every workstation: the throughput rate, the number of operators, and the number of parts that arrive and leave during part processing. At least ten samples of the arrival and departure counts were collected so that the data could be analyzed with the chi-squared goodness-of-fit test and queuing theory. The measures obtained from this model were compared with the standard data available in the company, and the task time values were validated by comparing them with the task times in the company database. Several performance factors for the multi-product, multi-stage battery production line are reported, together with the efficiency of each workstation. The total production time for each part can be determined by adding the task times across the workstations. Based on the analysis, improvements should be made to reduce queuing time and increase efficiency; one possible action is to increase the number of operators at manually operated workstations.
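
One way to quantify per-workstation efficiency and queuing time is to treat each workstation as an M/M/c queue, which is an assumption consistent with the Poisson-arrival check above; the rates below are hypothetical.

```python
# Sketch: each workstation modeled as an M/M/c queue (an assumption, not the
# paper's exact model). Standard Erlang-C formulas give utilization, Lq and Wq.
import math

def mmc_metrics(arrival_rate, service_rate, servers):
    """Return utilization, expected queue length Lq and waiting time Wq."""
    rho = arrival_rate / (servers * service_rate)       # utilization per operator
    a = arrival_rate / service_rate                     # offered load
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(servers))
                + a**servers / (math.factorial(servers) * (1 - rho)))
    lq = p0 * a**servers * rho / (math.factorial(servers) * (1 - rho) ** 2)
    wq = lq / arrival_rate
    return rho, lq, wq

# Hypothetical workstations: (parts/hour in, parts/hour per operator, operators).
for name, lam, mu, c in [("pasting", 55, 60, 1), ("assembly", 55, 20, 3), ("formation", 55, 30, 2)]:
    rho, lq, wq = mmc_metrics(lam, mu, c)
    print(f"{name}: utilization={rho:.2f}, Lq={lq:.2f}, Wq={wq * 60:.1f} min")
```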

Keywords: Production line, manufacturing, performance measurement, queuing theory.

113 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation

Authors: P. Luangpaiboon, S. Boonhao

Abstract:

This study formulates a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed models attempt to maximize dual process responses, the mean of parts between failures on the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and their advantages are discussed. Numerical results demonstrate that the hybridization is superior to the conventional method: the mean of parts between failures on the left and right lines improves by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.

Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.

112 Dynamic Correlations and Portfolio Optimization between Islamic and Conventional Equity Indexes: A Vine Copula-Based Approach

Authors: Imen Dhaou

Abstract:

This study examines conditional Value at Risk by applying a GJR-EVT-copula model and finds the optimal portfolio for eight Dow Jones Islamic-conventional pairs. Our methodology consists of modeling the data with a bivariate GJR-GARCH model, from which we extract the filtered residuals, and then applying the peaks-over-threshold (POT) model to fit the residual tails in order to model the marginal distributions. After that, we use pair-copulas to find the optimal portfolio risk dependence structure. Finally, with Monte Carlo simulations, we estimate the Value at Risk (VaR) and the conditional Value at Risk (CVaR). The empirical results report the VaR and CVaR values for an equally weighted portfolio of each Dow Jones Islamic-conventional pair. We find that the optimal investment focuses on the Islamic-conventional US market index pair because of its high investment proportion, whereas all other index pairs have low investment proportions. These results have practical implications for portfolio managers and policymakers concerning optimal asset allocation, portfolio risk management, and the diversification advantages of these markets.
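
The final Monte Carlo VaR/CVaR step for an equally weighted pair can be sketched as follows; the toy correlated normal draws stand in for the GJR-GARCH-EVT-copula simulations of the paper.

```python
# Sketch of the Monte Carlo VaR/CVaR computation for an equally weighted
# Islamic-conventional pair; the correlated normal sample below is a placeholder
# for the model's simulated joint returns.
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[0.0004, 0.00025],
                [0.00025, 0.0005]])            # toy daily covariance of the pair
returns = rng.multivariate_normal([0.0002, 0.0001], cov, size=100_000)

weights = np.array([0.5, 0.5])                 # equally weighted portfolio
port = returns @ weights                       # simulated portfolio returns

alpha = 0.99
var = -np.quantile(port, 1 - alpha)            # Value at Risk (loss, positive number)
cvar = -port[port <= -var].mean()              # expected loss beyond the VaR level
print(f"99% VaR = {var:.4%}, 99% CVaR = {cvar:.4%}")
```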

Keywords: CVaR, Dow Jones Islamic index, GJR-GARCH-EVT-pair copula, portfolio optimization.

111 Nonlinear Analysis of Shear Wall Using Finite Element Model

Authors: M. A. Ghorbani, M. Pasbani Khiavi, F. Rezaie Moghaddam

Abstract:

In the analysis of structures, the nonlinear effects due to large displacements, large rotations, and materially nonlinear behavior are very important and must be considered for a reliable analysis. With the availability of computer technology, nonlinear finite element analysis has become a usable and reliable means of analyzing civil structures. In this research, the large displacements and materially nonlinear behavior of a shear wall are modeled through the development of a finite element code using the standard Galerkin weighted residual formulation. A two-dimensional plane stress model is used to represent the shear wall response. The total Lagrangian formulation, which is computationally more effective, is used in the formulation of the stiffness matrices, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The details of the program formulation are highlighted, and the results of the analyses are presented along with a comparison of the structural response with ANSYS results. The presented model can be extended to the nonlinear analysis of civil engineering structures with different material behavior and complicated geometry.

Keywords: Finite element, large displacements, materially nonlinear, shear wall.

110 An Improved k Nearest Neighbor Classifier Using Interestingness Measures for Medical Image Mining

Authors: J. Alamelu Mangai, Satej Wagle, V. Santhosh Kumar

Abstract:

The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment, and monitoring. With the advent of data mining and machine learning techniques, it is possible to automate and/or assist physicians in clinical diagnosis. In this research, a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization, and classification. In the classification phase, the performance of the traditional k nearest neighbor (kNN) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. The feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
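
Both modifications, feature-weighted distances and distance-weighted voting, can be sketched compactly; the feature weights below are supplied directly, whereas in the paper they come from association-rule interestingness measures.

```python
# Sketch of a kNN classifier with per-feature weights in the distance metric and
# distance-weighted voting in place of simple majority voting.
import numpy as np
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x, feature_weights, k=5):
    diff = X_train - x
    dist = np.sqrt(((diff ** 2) * feature_weights).sum(axis=1))   # weighted Euclidean
    nearest = np.argsort(dist)[:k]
    votes = defaultdict(float)
    for idx in nearest:
        votes[y_train[idx]] += 1.0 / (dist[idx] + 1e-9)           # distance-weighted vote
    return max(votes, key=votes.get)

# Tiny example with two features, the first weighted much more heavily.
X = np.array([[0.1, 5.0], [0.2, 4.0], [0.9, 5.5], [1.0, 4.5], [0.85, 4.8]])
y = np.array([0, 0, 1, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.15, 6.0]),
                           feature_weights=np.array([0.9, 0.1]), k=3))  # -> 0
```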

Keywords: Medical Image Mining, Data Mining, Feature Weighting, Association Rule Mining, k nearest neighbor classifier.

109 Dynamic Variational Multiscale LES of Bluff Body Flows on Unstructured Grids

Authors: Carine Moussaed, Stephen Wornom, Bruno Koobus, Maria Vittoria Salvetti, Alain Dervieux

Abstract:

The effects of dynamic subgrid scale (SGS) models are investigated in variational multiscale (VMS) LES simulations of bluff body flows. The spatial discretization is based on a mixed finite element/finite volume formulation on unstructured grids. In the VMS approach used in this work, the separation between the largest and the smallest resolved scales is obtained through a variational projection operator and finite volume cell agglomeration. The dynamic versions of the Smagorinsky and WALE SGS models are used to account for the effects of the unresolved scales; in the VMS approach, these effects are modeled only in the smallest resolved scales. The dynamic VMS-LES approach is applied to the simulation of the flow around a circular cylinder at Reynolds numbers 3900 and 20000 and of the flow around a square cylinder at Reynolds numbers 22000 and 175000. As in previous studies, the dynamic SGS procedure is observed to have a smaller impact on the results within the VMS approach than in LES, but improvements are demonstrated for important features such as the recirculating part of the flow. The global prediction is improved for a small additional computational cost.

Keywords: variational multiscale LES, dynamic SGS model, unstructured grids, circular cylinder, square cylinder.

108 Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Authors: Sunita Dhingra, Satinder Bal Gupta, Ranjit Biswas

Abstract:

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven to be an excellent technique for finding an optimal solution. In the past, several GA-based methods have been proposed for this problem, but they consider a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective is considered, namely a weighted sum of makespan and total completion time. The efficiency and effectiveness of a genetic algorithm can be improved by optimizing its parameters, such as the crossover operator, mutation operator, crossover probability, and selection function. The effects of the GA parameters on minimizing the bi-criteria fitness function, and the subsequent setting of those parameters, are established by the central composite design (CCD) approach of response surface methodology (RSM) in Design of Experiments. The experiments were performed with different levels of the GA parameters, and an analysis of variance was performed to identify the significant parameters for simultaneously minimizing makespan and total completion time.

Keywords: Multiprocessor task scheduling, Design of experiments, Genetic Algorithm, Makespan, Total completion time.

107 Artificial Intelligence Model to Predict Surface Roughness of Ti-15-3 Alloy in EDM Process

Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama, M.A. Maleque, Rosli A. Bakar

Abstract:

Conventionally, the selection of parameters depends heavily on the operator's experience or on conservative technological data provided by the EDM equipment manufacturers, which leads to inconsistent machining performance; the parameter settings given by the manufacturers are only relevant to common steel grades, and a single parameter change influences the process in a complex way. Hence, the present research proposes artificial neural network (ANN) models for the prediction of surface roughness of Ti-15-3 alloy in the electrical discharge machining (EDM) process. The proposed models use peak current, pulse on time, pulse off time, and servo voltage as input parameters. Multilayer perceptron (MLP) feedforward networks with three hidden layers are applied, and an assessment is carried out on models with distinct hidden-layer configurations. The models are trained with data from an extensive series of experiments using a copper electrode with positive polarity. The predictions of the developed models have been verified with another set of experiments and are found to be in good agreement with the experimental results. The models can therefore serve as valuable tools for process planning in EDM.
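
An MLP regressor with three hidden layers over the four inputs above can be sketched with scikit-learn; the training data and layer sizes are hypothetical, not the experimental EDM data.

```python
# Sketch of an MLP surface-roughness model over the four EDM inputs
# (peak current, pulse on time, pulse off time, servo voltage); data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 120
X = np.column_stack([
    rng.uniform(1, 20, n),      # peak current (A)
    rng.uniform(10, 400, n),    # pulse on time (us)
    rng.uniform(10, 200, n),    # pulse off time (us)
    rng.uniform(40, 120, n),    # servo voltage (V)
])
# Hypothetical roughness response with noise, only to make the sketch runnable.
y = 0.4 * X[:, 0] + 0.01 * X[:, 1] - 0.005 * X[:, 2] + rng.normal(0, 0.3, n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("predicted Ra:", model.predict([[12.0, 200.0, 50.0, 80.0]]).round(3))
```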

Keywords: Ti-15-3, surface roughness, copper, positive polarity, multi-layered perceptron.

106 Data Envelopment Analysis with Partially Perfect Objects

Authors: Alexander Y. Vaninsky

Abstract:

This paper presents a simplified version of Data Envelopment Analysis (DEA), a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object, the one having the greatest outputs and the smallest inputs. This allows an explicit analytical solution to be obtained and is a step toward absolute efficiency. This paper develops the approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are then combined into a weighted efficiency score. The computational scheme remains as simple as that of DEA PO, but DEA PPO has the advantage of taking into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.

Keywords: Data Envelopment Analysis, Perfect object, Partially perfect object, Partial efficiency, Explicit solution, Simplified algorithm.
