Search results for: Applied industrial engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3298


1108 A Neuro Adaptive Control Strategy for Movable Power Source of Proton Exchange Membrane Fuel Cell Using Wavelets

Authors: M. Sedighizadeh, A. Rezazadeh

Abstract:

Movable power sources based on proton exchange membrane fuel cells (PEMFC) are an important research topic in the current fuel cell (FC) field. The control of a PEMFC system strongly influences cell performance, and it is an industrially complex control problem because of the imprecision, uncertainty, partial truth and intrinsically nonlinear characteristics of PEMFCs. In this paper, an adaptive PI control strategy using a neural network with adaptive Morlet wavelets is proposed. It is based on a single-layer feed-forward neural network whose hidden nodes use adaptive Morlet wavelet activation functions, combined with an infinite impulse response (IIR) recurrent structure. The IIR filter is cascaded with the network to provide a doubly local structure that improves the speed of learning. The proposed method is applied to a typical 1 kW PEMFC system, and the results show that it is more accurate than a multilayer perceptron (MLP) controller.
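
A minimal illustrative sketch (Python) of the kind of Morlet-wavelet hidden node such a controller relies on. The real-valued mother wavelet cos(omega0*x)*exp(-x^2/2), the node count and all parameter values are assumptions for illustration, not the authors' adaptive PI design:

import numpy as np

def morlet(x, omega0=5.0):
    # Real-valued Morlet mother wavelet: cos(omega0*x) * exp(-x^2/2)
    return np.cos(omega0 * x) * np.exp(-0.5 * x ** 2)

def wavelet_layer(u, translations, dilations, weights):
    # One hidden layer of wavelet nodes: each node applies a dilated and
    # translated Morlet wavelet to the scalar input u; the output is a weighted sum.
    z = (u - translations) / dilations
    return np.dot(weights, morlet(z))

# Example: five hidden wavelet nodes acting on a scalar error signal
rng = np.random.default_rng(0)
t, d, w = rng.normal(size=5), np.ones(5), 0.1 * rng.normal(size=5)
print(wavelet_layer(0.3, t, d, w))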

Keywords: Adaptive Control, Morlet Wavelets, PEMFC.

1107 Feasibility of a Biopolymer as Lightweight Aggregate in Perlite Concrete

Authors: Ali A. Sayadi, Thomas R. Neitzert, G. Charles Clifton

Abstract:

Lightweight concrete is being used in the construction industry as a building material in its own right. Ultra-lightweight concrete can be applied as a filler and support material for the manufacturing of composite building materials. This paper concerns the development of a stable and reproducible ultra-lightweight concrete incorporating poly-lactic acid (PLA) beads, and assesses the feasibility of PLA as a lightweight aggregate that would deliver advantages such as a more eco-friendly concrete and a non-petroleum polymer aggregate. In total, sixty-three samples were prepared, and the effects of mineral admixture, curing conditions, water-cement ratio, PLA ratio, EPS ratio and perlite ratio on the compressive strength of perlite concrete were studied. The results show that PLA particles are sensitive to the alkaline environment of the cement paste and shrink considerably, losing their strength. A higher compressive strength and a lower density were observed when expanded polystyrene (EPS) particles replaced the PLA beads. In addition, a set of equations is proposed to estimate the water-cement ratio, cement content and compressive strength of perlite concrete.

Keywords: Perlite concrete, poly-lactic acid, expanded polystyrene, concrete.

1106 Heat Transfer Modeling in Multi-Layer Cookware using Finite Element Method

Authors: Mohammad Reza Sedighi, Behnam Nilforooshan Dardashti

Abstract:

A high temperature and a uniform Temperature Distribution (TD) on the cookware surface in contact with food are key factors for improving cookware performance. The ability of the pan material to retain heat and its non-reactivity with food are other significant properties. It is difficult for a single material to meet such a wide variety of demands for superior thermal and chemical properties, whereas a Multi-Layer Plate (MLP) produces a more uniform TD. The main objectives of this study are to find the best structure (single- or multi-layer) and the best materials to provide the maximum temperature and the most uniform TD on the upper surface of the pan, and to assess the heat retention of the metals used, with the goal of improving the thermal quality of the pan and economizing energy. To achieve this aim, the Finite Element Method (FEM) was employed to analyze the transient thermal behavior of the candidate materials. The analysis was extended to different metals, and the best temperature profile and heat retention were achieved with a copper/stainless-steel MLP.
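
A rough one-dimensional sketch (Python) of transient conduction through a two-layer copper/stainless-steel plate. It uses an explicit finite-difference scheme as a stand-in for the paper's FEM analysis; the material properties, plate thickness and boundary temperatures are generic textbook assumptions rather than the authors' data:

import numpy as np

# Two-layer plate, 4 mm total: copper (bottom half) and stainless steel (top half)
x = np.linspace(0, 1, 101)
alpha = np.where(x < 0.5, 1.1e-4, 4.0e-6)   # thermal diffusivity, m^2/s (Cu, then steel)
dx, dt, steps = 0.004 / 100, 5e-6, 40000    # grid spacing, stable time step, 0.2 s total
T = np.full(101, 25.0)                      # initial temperature, deg C
T[0] = 200.0                                # heated bottom face held at 200 deg C
for _ in range(steps):
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
print(f"top-surface temperature after {steps * dt:.2f} s: {T[-1]:.1f} C")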

Keywords: Cookware, Energy optimization, Heat retaining, Laminated plate, Temperature distribution

1105 Preconcentration and Determination of Cyproheptadine in Biological Samples by Hollow Fiber Liquid Phase Microextraction Coupled with High Performance Liquid Chromatography

Authors: Najari Moghadam Sh., Qomi M., Raofie F., Khadiv J.

Abstract:

In this study, hollow fiber liquid phase microextraction (HF-LPME) combined with high performance liquid chromatography with UV detection was applied to preconcentrate and determine trace levels of cyproheptadine in human urine and plasma samples. Cyproheptadine was extracted from 10 mL of alkaline aqueous solution (pH 9.81) into an organic solvent (n-octanol) immobilized in the wall pores of a hollow fiber, and was then back-extracted into an acidified aqueous solution (pH 2.59) located inside the lumen of the hollow fiber. This method is simple, efficient and cost-effective; it is based on a pH gradient and the differences between the two aqueous phases. In order to optimize the HF-LPME, the affecting parameters, including the pH of the donor and acceptor phases, the type of organic solvent, ionic strength, stirring rate, extraction time and temperature, were studied and optimized. Under optimal conditions, the enrichment factor, limit of detection (LOD) and relative standard deviation (RSD%, n = 3) were up to 112, 15 μg L⁻¹ and 2.7%, respectively.

Keywords: Biological samples, Cyproheptadine, hollow fiber, liquid phase microextraction.

1104 Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on preconditioned conjugate gradient based training methods that originate from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
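
For reference, a minimal Python sketch of the preconditioned conjugate gradient iteration itself (the linear-algebra core behind the training methods named above), using a simple Jacobi preconditioner on a toy symmetric positive-definite system; it is not the authors' network-training code:

import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    # Preconditioned CG for A x = b: equivalent to solving M^-1 A x = M^-1 b,
    # which lowers the condition number and clusters the eigenvalues.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)   # Fletcher-Reeves-style update
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M_inv = np.diag(1.0 / np.diag(A))          # Jacobi (diagonal) preconditioner
print(pcg(A, b, M_inv))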

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

1103 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices

Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues

Abstract:

This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, which is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value; this is accomplished using three different keys. Decoding/decompression uses a search method, the Quick Sequential Search (QSS) Decoding Algorithm presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, can retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results show that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
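
One plausible reading of the encode/decode idea, sketched in Python. The key triple, the coefficient range and the exact search order are assumptions chosen so that every triple maps to a unique value; they are not the actual Matrix Minimization parameters used in the paper:

import itertools

KEYS = (1, 11, 121)            # assumed keys, chosen so each coefficient triple maps uniquely
COEFF_RANGE = range(-5, 6)     # assumed limited range of quantised high-frequency coefficients

def encode(triple):
    # Fold three coefficients into a single keyed value
    return sum(k * c for k, c in zip(KEYS, triple))

def qss_decode(value):
    # Quick sequential search: scan candidate triples until the encoded value matches
    for triple in itertools.product(COEFF_RANGE, repeat=3):
        if encode(triple) == value:
            return triple
    return None

original = (3, -2, 4)
print(qss_decode(encode(original)))   # recovers (3, -2, 4)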

Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.

1102 Effect of Ground Subsidence on Load Sharing and Settlement of Raft and Piled Raft Foundations

Authors: T.V. Tran, S. Teramoto, M. Kimura, T. Boonyatee, Le Ba Vinh

Abstract:

In this paper, two centrifugal model tests (case 1: a raft foundation; case 2: a 2x2 piled raft foundation) were conducted in order to evaluate the effect of ground subsidence on the load sharing between piles and raft and on the settlement of raft and piled raft foundations. For each case, two conditions were considered: undrained (without groundwater pumping) and drained (with groundwater pumping). Vertical loads were applied to the models after the foundations had completely consolidated under self-weight at 50 g. The results show that the load share carried by the piles in the piled raft foundation decreases faster under the drained condition than under the undrained condition. The settlement of both the raft and the piled raft foundations increases more quickly under the drained condition than under the undrained condition. In addition, under the drained condition the settlement of the raft foundation increases more than that of the piled raft foundation.

Keywords: Ground subsidence, Piled raft, Load sharing, Centrifugal model test.

1101 Optimal Economic Restructuring Aimed at an Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions

Authors: Alexander Y. Vaninsky

Abstract:

The objective of this paper is to find the direction of economic restructuring, that is, the change in the shares of sectoral gross outputs, that results in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for energy consumption and CO2 emissions to determine the projections of the gradient of GDP, and of the antigradients of energy consumption and CO2 emissions, onto a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of steepest increase (decrease) of the objective function, and the projections retain this property for the restriction of these functions to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of the structural change. The national economy of the United States is used as an example application.
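
A small Python sketch of the two geometric steps described above: projecting direction vectors onto the structure-related subspace, then searching for a convex combination that correlates positively with all three projected directions. The data are random placeholders, and the grid search is only one plausible way to realise the "maximal possible positive correlation" step:

import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(6, 3))                   # basis of the structure-related subspace
P = S @ np.linalg.inv(S.T @ S) @ S.T          # orthogonal projector onto span(S)
g, e, c = (P @ rng.normal(size=6) for _ in range(3))   # projected gradient / antigradients

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

best = None
for w1 in np.linspace(0, 1, 51):
    for w2 in np.linspace(0, 1 - w1, 51):
        d = w1 * g + w2 * e + (1 - w1 - w2) * c          # convex combination
        score = min(cosine(d, v) for v in (g, e, c))     # worst-case correlation
        if best is None or score > best[0]:
            best = (score, (w1, w2, 1 - w1 - w2))
print(best)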

Keywords: Economic restructuring, Input-Output analysis, Divisia index, Factorial decomposition, E3 models.

1100 Application of IED to Condition Based Maintenance of Medium Voltage GCB/VCB

Authors: Ming-Ta Yang, Jyh-Cherng Gu, Chun-Wei Huang, Jin-Lung Guan

Abstract:

Time-based maintenance (TBM) is conventionally applied by power utilities to maintain circuit breakers (CBs), transformers, bus bars and cables, which may result in under-maintenance or over-maintenance. As the information and communication technology (ICT) industry develops, the maintenance policies of many power utilities have gradually changed from TBM to condition-based maintenance (CBM) to improve system operating efficiency, operation cost and power supply reliability. This paper discusses the feasibility of using intelligent electronic devices (IEDs) to construct a CB CBM management platform. CBs in power substations can be monitored using IEDs with additional logic configuration and wire connections. The CB monitoring data can be sent through an intranet to a control center and analyzed and integrated by the Elipse Power Studio software. Finally, a human-machine interface (HMI) of a supervisory control and data acquisition (SCADA) system can be designed to construct a CBM management platform that provides maintenance decision information for maintenance personnel, management personnel and CB manufacturers.

Keywords: Circuit breaker, Condition-based maintenance, Intelligent electronic device, Time-based maintenance, SCADA.

1099 Design for Safety: Safety Consideration in Planning and Design of Airport Airsides

Authors: Maithem Al-Saadi, Min An

Abstract:

During the airport planning and design stages, the major issues of capacity and safety in the construction and operation of an airport need to be taken into consideration. The airside of an airport is a major and critical infrastructure that usually consists of runway(s), a taxiway system and apron(s), which have to be designed according to international standards, recommendations and local limitations to accommodate the forecast demand. However, in many cases airport airsides suffer from unexpected risks that occur during airport operations. Therefore, safety risk assessment should be applied in the planning and design of airsides to address the probability of risks and their consequences, and to make decisions that reduce the risks to as low as reasonably practicable (ALARP). This paper presents a combined approach of Failure Modes, Effects and Criticality Analysis (FMECA), the Fuzzy Reasoning Approach (FRA) and the Fuzzy Analytic Hierarchy Process (FAHP) to develop a risk analysis model for safety risk assessment. An illustrative example is used to demonstrate the risk assessment process and to show how the design of an airport airside can be analysed using the proposed safety design risk assessment model.

Keywords: Airport airside planning and design, design for safety, fuzzy reasoning approach, fuzzy AHP, risk assessment.

1098 Lattice Boltzmann Simulation of MHD Natural Convection in a Nanofluid-Filled Enclosure with Non-Uniform Heating on Both Side Walls

Authors: Imen Mejri, Ahmed Mahmoudi, Mohamed A. Abbassi, Ahmed Omri

Abstract:

This paper examines natural convection in a square enclosure filled with a water-Al2O3 nanofluid and subjected to a magnetic field. The side walls of the cavity have spatially varying sinusoidal temperature distributions, while the horizontal walls are adiabatic. The lattice Boltzmann method (LBM) is applied to solve the coupled equations of the flow and temperature fields. This study has been carried out for the pertinent parameters in the following ranges: Rayleigh number of the base fluid Ra = 10³ to 10⁶, Hartmann number Ha = 0 to 90, phase deviation γ = 0, π/4, π/2, 3π/4 and π, and solid volume fraction of the nanoparticles between φ = 0 and 6%. The results show that the heat transfer rate increases with an increase of the Rayleigh number but decreases with an increase of the Hartmann number. For γ = π/2 and Ra = 10⁵, the magnetic field augments the effect of the nanoparticles. At Ha = 0, the greatest effects of the nanoparticles are obtained at γ = 0 and π/4 for Ra = 10⁴ and 10⁵, respectively.


Keywords: Lattice Boltzmann Method, magnetic field, Natural convection, nanofluid, Sinusoidal temperature distribution.

1097 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardiographic Images

Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin

Abstract:

Echocardiography is one of the most common diagnostic tests, widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D-echocardiography (2DE) is to address two major problems: speckle noise and low image quality. Speckle noise reduction is therefore an important pre-processing step used to reduce distortion effects in 2DE image segmentation. In this paper, we examine common filters based on some form of low-pass spatial smoothing, namely the Mean, Gaussian and Median filters; the Laplacian filter is used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after being applied to original 2DE images of the 4-chamber and 2-chamber views. Three statistical measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the enhanced output images.
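
A short Python sketch of this comparison pipeline using SciPy's standard spatial filters and a PSNR score. The synthetic image and the multiplicative noise model are stand-ins for a real 2DE frame, and the filter sizes are arbitrary illustrative choices:

import numpy as np
from scipy import ndimage

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128)) * 255       # smooth synthetic frame
noisy = clean * (1 + 0.3 * rng.standard_normal(clean.shape))   # multiplicative speckle

filtered = {
    "mean":     ndimage.uniform_filter(noisy, size=3),
    "gaussian": ndimage.gaussian_filter(noisy, sigma=1),
    "median":   ndimage.median_filter(noisy, size=3),
}
print(f"noisy    PSNR = {psnr(clean, noisy):.1f} dB")
for name, out in filtered.items():
    print(f"{name:8s} PSNR = {psnr(clean, out):.1f} dB")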

Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio

1096 A Hidden Markov Model-Based Isolated and Meaningful Hand Gesture Recognition

Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Jörg Appenrodt, Bernd Michaelis

Abstract:

Gesture recognition is a challenging task for extracting meaningful gestures from continuous hand motion. In this paper, we propose an automatic system that recognizes isolated gestures, as well as meaningful gestures within continuous hand motion, for Arabic numbers from 0 to 9 in real time based on Hidden Markov Models (HMM). To handle isolated gestures, HMMs with Ergodic, Left-Right (LR) and Left-Right Banded (LRB) topologies are applied to the discrete feature vectors extracted from stereo color image sequences. These topologies are evaluated for different numbers of states ranging from 3 to 10. A new system is developed to recognize meaningful gestures based on zero-codeword detection with static velocity motion for continuous gestures. The LRB topology, in conjunction with the Baum-Welch (BW) algorithm for training and the forward algorithm with the Viterbi path for testing, gives the best performance. Experimental results show that the proposed system can successfully recognize isolated and meaningful gestures with average recognition rates of 98.6% and 94.29%, respectively.
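
A compact Python sketch of the decoding step for a Left-Right Banded HMM: a standard Viterbi path computed over a toy 3-state LRB transition matrix (each state may only stay or move one step to the right). The probabilities and observation alphabet are invented; this is not the trained gesture model from the paper:

import numpy as np

def viterbi(obs, pi, A, B):
    # Most likely state path for a discrete HMM (pi: initial, A: transitions, B: emissions)
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

A = np.array([[0.6, 0.4, 0.0],      # Left-Right Banded: stay or advance one state
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0, 0.0])
print(viterbi([0, 0, 1, 1, 0], pi, A, B))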

Keywords: Computer Vision & Image Processing, Gesture Recognition, Pattern Recognition, Application

1095 Applying the Integrative Design Process in Architectural Firms: An Analytical Study on Egyptian Firms

Authors: Carole A. El Raheb, Hassan K. Abdel-Salam, Ingi Elcherif

Abstract:

An architect carrying out the design process alone is a main reason for the deterioration of the quality of the architectural product, as the complexity of projects makes design a multi-disciplinary task; therefore, the Integrative Design Process (IDP) must be applied in architectural firms, especially from the early design phases, to improve product quality and to avoid the neglect of design principles that leads to low-grade buildings. The research explores the Integrative Design (ID) principles that fit architectural practice. Constraints facing this application are presented, together with strategies and solutions to overcome them. A survey questionnaire was conducted to collect data from a number of recognized Egyptian Architecture, Engineering and Construction (AEC) firms and to explore their opinions on using the IDP. The survey emphasizes the importance of the IDP in firms and presents the reasons preventing firms from applying it. The aim is to investigate the potential of integrating this approach into architectural firms, as this application ensures the realization of the project's goal and prevents a reduction in the project's quality.

Keywords: Application, architectural firms, integrative design principles, integrative design process, the project quality.

1094 Artificial Accelerated Ageing Test of 22 kV XLPE Cable for Distribution System Applications in Thailand

Authors: A. Rawangpai, B. Maraungsri, N. Chomnawang

Abstract:

This paper presents the experimental results of an artificial ageing test of 22 kV XLPE cable for distribution system applications in Thailand. The XLPE insulating material of the 22 kV cable was sliced to 60-70 μm thickness and subjected to an AC high voltage at 23 °C, 60 °C and 75 °C. The test voltage was applied constantly to the specimen until breakdown. Breakdown voltage and time to breakdown were used to evaluate the lifetime of the insulating material. Furthermore, the physical model by J. P. Crine for predicting the lifetime of XLPE insulating material was adopted as the lifetime model and evaluated in order to compare with the experimental results. Acceptable lifetime results were obtained from Crine's model in comparison with the experimental results. In addition, Fourier transform infrared spectroscopy (FTIR) for chemical analysis and scanning electron microscopy (SEM) for physical analysis were conducted on the tested specimens.

Keywords: Artificial accelerated ageing test, XLPE cable, distribution system, insulating material, life time, life time model

1093 Simulation of Non-Linear Behavior of Shear Wall under Seismic Loading

Authors: M. A. Ghorbani, M. Pasbani Khiavi

Abstract:

The seismic response of a steel shear wall system, considering nonlinearity effects, is investigated in this paper using the finite element method. With the availability of modern computing, nonlinear finite element analysis is a usable and reliable means of analyzing civil structures. In this research, a numerical model based on the finite element method, accounting for large displacements and materially nonlinear behavior of the shear wall, is presented through the development of a finite element code. The standard Galerkin weighted residual formulation is used to develop the code. A two-dimensional plane stress model with a total Lagrangian formulation is used to represent the shear wall response, and the Newton-Raphson method is applied for the solution of the nonlinear transient equations. The presented model can be extended to the analysis of civil engineering structures with different material behavior and complicated geometry.
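
A minimal Python sketch of the Newton-Raphson iteration used for such nonlinear solutions, applied to a one-degree-of-freedom hardening spring as a stand-in for the assembled shear-wall system (the spring constants and load are invented for illustration):

import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    # Solve R(u) = 0 by iterating u <- u - K(u)^-1 R(u), with K the tangent stiffness
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u += np.linalg.solve(tangent(u), -r)
    return u

# Toy 1-DOF example: hardening spring k*u + a*u^3 = f
k, a, f = 2.0, 0.5, 4.0
residual = lambda u: np.array([k * u[0] + a * u[0] ** 3 - f])
tangent  = lambda u: np.array([[k + 3 * a * u[0] ** 2]])
print(newton_raphson(residual, tangent, [0.0]))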

Keywords: Finite element, steel shear wall, nonlinear, earthquake

1092 A Fragile Watermarking Scheme for Color Image Authentication

Authors: M. Hamad Hassan, S.A.M. Gilani

Abstract:

In this paper, a fragile watermarking scheme is proposed for the authentication of specified objects in color images. The color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. The T channel corresponds to the chrominance component of the color image and YS ⊥ T; it is therefore selected for embedding the watermark. The T channel is divided into 2×2 non-overlapping blocks and the two LSBs of each block are set to zero. The object to be authenticated is also divided into 2×2 non-overlapping blocks, and each block's intensity mean is computed and encoded into eight bits. The generated watermark is then embedded into the LSBs of randomly selected 2×2 blocks of the T channel using a 2D torus automorphism. The selection of the block size is paramount for exact localization and recovery of the work. The proposed scheme is blind, efficient and secure, with the ability to detect and localize even minor tampering applied to the image, with full recovery of the original work. The quality of the watermarked media is quite high, both subjectively and objectively. The technique is suitable for the class of images stored in formats such as GIF, TIFF or bitmap.
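
For illustration, a Python sketch of a 2D torus automorphism used to scatter block positions, here the Arnold cat-map instance (x, y) -> (x + y, x + 2y) mod n. The specific map, the grid size and the iteration count are illustrative assumptions, not the scheme's actual secret parameters:

def torus_automorphism(coords, n, k=1, iterations=1):
    # Area-preserving map on an n x n grid; k = 1 gives the Arnold cat map
    x, y = coords
    for _ in range(iterations):
        x, y = (x + y) % n, (k * x + (k + 1) * y) % n
    return x, y

# Shuffle the positions of 2x2 blocks arranged on an 8x8 grid of blocks
n = 8
positions = [(i, j) for i in range(n) for j in range(n)]
shuffled = [torus_automorphism(p, n, k=1, iterations=3) for p in positions]
print(positions[:4], "->", shuffled[:4])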

Keywords: Image Authentication, LSBs, PSNR, 2D-Torus Automorphism, YST Color Space.

1091 Photoimpedance Spectroscopy Analysis of Planar and Nano-Textured Thin-Film Silicon Solar Cells

Authors: P. Kumar, D. Eisenhauer, M. M. K. Yousef, Q. Shi, A. S. G. Khalil, M. R. Saber, C. Becker, T. Pullerits, K. J. Karki

Abstract:

In impedance spectroscopy (IS), the response of a photo-active device is analysed as a function of AC bias. It is widely applied in a broad class of material systems and devices and gives access to the fundamental mechanisms of operation of solar cells. We have implemented a variant of IS in which we modulate the light instead of the bias. This scheme allows us to analyse not only the carrier dynamics but also the impedance of the device locally. Using this scheme, we have measured the frequency-dependent photocurrent response of thin-film planar and nano-textured Si solar cells. The photocurrent response is measured in the range of 50 Hz to 50 kHz. Bode and Nyquist plots are used to determine the characteristic lifetime of both cells. Interestingly, the carrier lifetime of both the planar and the nano-textured solar cells depends on the positions of the back and front contacts. This is due either to heterogeneity of the device or to contacts that are not optimized. The estimated average lifetime is found to be shorter for the nano-textured cell, which could be due to the influence of the textured interface on the carrier relaxation dynamics.

Keywords: Carrier lifetime, Impedance, nano-textured, and photocurrent.

1090 Modelling, Simulation and Validation of Plastic Zone Size during Deformation of Mild Steel

Authors: S. O. Adeosun, E. I. Akpan, S. A. Balogun, O. O. Taiwo

Abstract:

A model to predict the plastic zone size for a material under plane stress conditions has been developed and verified experimentally. The developed model is a function of crack size, crack angle and a material property (dislocation density). Simulation and validation results show that the developed model is in good agreement with the experimental results. Samples of low-carbon steel (0.035% C) with included surface crack angles of 45°, 50°, 60°, 70° and 90° and crack depths of 2 mm and 4 mm were subjected to low strain rates between 0.48 × 10⁻³ s⁻¹ and 2.38 × 10⁻³ s⁻¹. The mechanical properties studied were ductility, tensile strength, modulus of elasticity, yield strength, yield strain, stress at fracture and fracture toughness. The experimental study shows that strain rate has no appreciable effect on the size of the plastic zone, while crack depth and crack angle play an imperative role in determining the size of the plastic zone of mild steel materials.

Keywords: Applied stress, crack angle, crack size, material property, plastic zone size, strain rate.

1089 Long Term Variability of Temperature in Armenia in the Context of Climate Change

Authors: Hrachuhi Galstyan, Lucian Sfîcă, Pavel Ichim

Abstract:

The purpose of this study is to analyze the temporal and spatial variability of thermal conditions in the Republic of Armenia. The paper describes annual fluctuations in air temperature. Research has focused on the case study region of Armenia and surrounding areas, where long-term measurements and observations of weather conditions have been performed within the National Meteorological Service of Armenia. The study uses yearly air temperature data recorded between 1961 and 2012. The Mann-Kendall test and the autocorrelation function were applied to detect the trend of the annual mean temperature, together with other parametric and non-parametric tests searching for breaks in the long-term evolution of temperature. The analysis of all records reveals a tendency mostly towards warmer years, with increased temperatures especially in valleys and inner basins. The maximum temperature increase is up to 1.5 °C. Negative trends have not been observed in Armenia. The patterns of temperature change have been observed since the 1990s over much of the Armenian territory. The climate in Armenia was influenced by global change in the last two decades, as the results of the methods employed within the study show.
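
A small Python sketch of the Mann-Kendall trend test applied to a synthetic annual temperature series (the warming rate, noise level and years are invented; the normal approximation below ignores tie corrections):

import numpy as np
from scipy import stats

def mann_kendall(x):
    # S statistic, standardised Z and two-sided p-value (no tie correction)
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(4)
years = np.arange(1961, 2013)
temps = 10 + 0.02 * (years - 1961) + rng.normal(0, 0.5, len(years))   # synthetic series
print(mann_kendall(temps))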

Keywords: Air temperature, long-term variability, trend, climate change.

1088 M-band Wavelet and Cosine Transform Based Watermark Algorithm Using Randomization and Principal Component Analysis

Authors: Tong Liu, Xuan Xu, Xiaodi Wang

Abstract:

Computational techniques derived from digital image processing are playing a significant role in the security and digital copyright protection of multimedia and visual arts. This research presents a discrete M-band wavelet transform (MWT) and discrete cosine transform (DCT) based watermarking algorithm that incorporates principal component analysis (PCA). The proposed algorithm is expected to achieve higher perceptual transparency. Specifically, the developed watermarking scheme can successfully resist common signal processing attacks such as geometric distortions and Gaussian noise. In addition, the proposed algorithm can be parameterized, resulting in more security. To meet these requirements, the image is transformed by a combination of MWT and DCT. In order to further improve security, we randomize the watermark image to create three code books. During watermark embedding, PCA is applied to the coefficients in the approximation sub-band. Finally, the first few component bands represent an excellent domain for inserting the watermark.

Keywords: Discrete M-band wavelet transform, randomized watermark, principal component analysis

1087 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers

Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice

Abstract:

In recent years, there has been ongoing change and development in the telecommunications sector in the global market. In this sector, churn analysis techniques are commonly used to analyse why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes substantial business losses. Many companies carry out various studies in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts needed to predict customer churn. The framework is an optional cost-based pre-processing stage that removes redundant features for churn management. In addition, this cost-sensitive feature selection algorithm is applied in a telecommunications company in Turkey, and the results obtained with the algorithm are presented.

Keywords: Churn prediction, data mining, decision-theoretic rough set, feature selection.

1086 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data

Authors: Wei Lei, Hui Chen, Lin Lu

Abstract:

Microscopic emission and fuel consumption models have been widely recognized as an effective way to quantify real traffic emissions and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model to capture the effects of historical accelerations, interacting with current speed, on emissions and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China, with a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared with actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, NOx emissions and fuel consumption, respectively.
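
A schematic Python sketch of the calibration and scoring idea: fit a rate model by least squares on speed and a history-weighted "composite" acceleration term, then report MAPE. All data, the model form and the coefficients are synthetic placeholders, not the paper's PEMS data or the actual MEF specification:

import numpy as np

rng = np.random.default_rng(7)
n = 500
speed = rng.uniform(0, 20, n)                         # m/s
accel = rng.normal(0, 1, n)                           # m/s^2
composite = 0.5 * accel + 0.5 * np.roll(accel, 1)     # crude history-weighted acceleration
fuel = 0.2 + 0.05 * speed + 0.3 * np.maximum(composite, 0) * speed + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), speed, np.maximum(composite, 0) * speed])
beta, *_ = np.linalg.lstsq(X, fuel, rcond=None)       # multivariate least squares
pred = X @ beta
mape = np.mean(np.abs((fuel - pred) / fuel)) * 100
print(f"coefficients: {beta.round(3)}, MAPE = {mape:.1f}%")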

Keywords: Emission, Fuel consumption, Light-duty vehicle, Microscopic, Modeling.

1085 An Efficient Algorithm for Computing all Program Forward Static Slices

Authors: Jehad Al Dallal

Abstract:

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program backward slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. Existing algorithms for computing program slices are designed to compute a slice at a single program point: the program, or the model that represents it, is traversed completely or partially once, and to compute more than one slice the same algorithm is applied to every point of interest, so the same program representation is traversed several times. In this paper, an algorithm is introduced to compute all forward static slices of a computer program by traversing the program representation graph once. The introduced algorithm is therefore useful for software engineering applications that require computing program slices at different points of a program. The program representation graph used in this paper is the Program Dependence Graph (PDG).
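
A toy Python sketch of a single forward slice computed by traversing a program dependence graph from the slicing criterion (the PDG below is invented; the paper's contribution, computing all forward slices in one traversal, is not reproduced here):

from collections import deque

edges = {            # node -> nodes that depend on it (data or control dependence)
    1: [2, 3],
    2: [4],
    3: [4, 5],
    4: [6],
    5: [],
    6: [],
}

def forward_slice(pdg, criterion):
    # Collect every statement reachable from the criterion along dependence edges
    seen, queue = {criterion}, deque([criterion])
    while queue:
        node = queue.popleft()
        for succ in pdg.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return sorted(seen)

print(forward_slice(edges, 3))   # -> [3, 4, 5, 6]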

Keywords: Program slicing, static slicing, forward slicing, program dependence graph (PDG).

1084 An Investigation on Overstrength Factor (Ω) of Reinforced Concrete Buildings in Turkish Earthquake Draft Code (TEC-2016)

Authors: M. Hakan Arslan, I. Hakkı Erkan

Abstract:

The overstrength factor is an important component of the load reduction factor. In this research, the overstrength factor (Ω) of reinforced concrete (RC) buildings and the parameters affecting Ω in the TEC-2016 draft version have been explored. For this aim, 48 RC buildings were modelled according to the criteria of the current seismic code TEC-2007 and the Turkish Building Code 500-2000. After the modelling step, nonlinear static pushover analyses were applied to these buildings using TEC-2007 Section 7. From the nonlinear pushover analyses, capacity curves (lateral load versus lateral top displacement) were plotted for the 48 RC buildings, and overstrength factors (Ω) were derived for each building from these curves. The obtained overstrength factor (Ω) values were compared with the TEC-2016 values for the related building types, and the results were interpreted. According to the values obtained in the study, the overstrength factor (Ω) given in the TEC-2016 draft code is found to be quite suitable.

Keywords: Reinforced concrete buildings, overstrength factor, earthquake, static pushover analysis.

1083 Partial Stabilization of a Class of Nonlinear Systems Via Center Manifold Theory

Authors: Ping He

Abstract:

This paper addresses the problem of partial state feedback stabilization of a class of nonlinear systems. In order to stabilize this class of systems, the particular contribution of this paper is to reverse-engineer the state feedback control law from the method of judging system stability with center manifold theory. First, center manifold theory is applied to discuss sufficient conditions for stabilization and to design stabilizing state feedback control laws for a class of nonlinear systems. Second, the problem of partial stabilization for a class of planar nonlinear systems is discussed using Lyapunov's second method and center manifold theory. Third, we specifically investigate the stabilization of a class of homogeneous planar nonlinear systems, a class of nonlinear systems with dual zero eigenvalues and a class of nonlinear systems with a zero center, using the method of Lyapunov functions with homogeneous derivatives. At the end of the paper, examples and simulation results are given to show that the approach of this paper to this class of nonlinear systems is effective and convenient.

Keywords: Partial stabilization, Nonlinear critical systems, Center manifold theory, Lyapunov function, System reduction.

1082 Alignment of Emission Gamma Ray Sources with NaI(Tl) Scintillation Detectors by Two Laser Beams Prior to Operation Using an Alternating Minimization Technique

Authors: Abbas Ali Mahmood Karwi

Abstract:

Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. Signal outputs from the detectors must therefore be centered with respect to the two isotopes prior to operation and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual-source computed tomography system consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either a single photo peak or multiple photo peaks. This allows the use of energy-discrimination electronic hardware associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. It also helps to avoid counting Compton scatter events, especially if a single discrete gamma photo peak is emitted by the source, as in the case of Cs-137. In this study, the polyenergetic version of the alternating minimization algorithm is applied to the dual-energy gamma computed tomography problem.

Keywords: Alignment, Spectrum, Laser, Detectors, Image

1081 Adequacy of Object-Oriented Framework System-Based Testing Techniques

Authors: Jehad Al Dallal

Abstract:

An application framework provides a reusable design and implementation for a family of software systems. If the framework contains defects, the defects will be passed on to the applications developed from the framework. Framework defects are hard to discover at the time the framework is instantiated; therefore, it is important to remove all defects before instantiating the framework. In this paper, two measures for the adequacy of an object-oriented system-based testing technique are introduced. The measures assess the usefulness and uniqueness of the testing technique. The two measures are applied to experimentally compare the adequacy of two testing techniques introduced to test object-oriented frameworks at the system level. The two considered techniques are the New Framework Test Approach and Testing Frameworks Through Hooks (TFTH). The techniques are also compared analytically in terms of their coverage of object-oriented aspects. The results of the comparison study show that the TFTH technique is better than the New Framework Test Approach in terms of usefulness degree, uniqueness degree and coverage power.

Keywords: Object-oriented framework, object-oriented framework testing, test case generation, testing adequacy.

1080 Viscous Potential Flow Analysis of Electrohydrodynamic Capillary Instability through Porous Media

Authors: Mukesh Kumar Awasth, Mohammad Tamsir

Abstract:

The effect of a porous medium on the capillary instability of a cylindrical interface in the presence of an axial electric field has been investigated using viscous potential flow theory. In viscous potential flow, the viscous term in the Navier-Stokes equation vanishes because the vorticity is zero, but the viscosity is not zero. Viscosity enters through the normal stress balance in viscous potential flow theory, and tangential stresses are not considered. A dispersion relation that accounts for the growth of axisymmetric waves is derived, and stability is discussed theoretically as well as numerically. The stability criterion is given by the critical value of the applied electric field as well as the critical wave number. Various graphs have been drawn to show the effect of physical parameters such as the electric field, viscosity ratio and permittivity ratio on the stability of the system. It has been observed that the axial electric field and the porous medium both have a stabilizing effect on the system.

Keywords: Capillary instability, Viscous potential flow, Porous media, Axial electric field.

1079 Formulation and in vitro Evaluation of Sustained Release Matrix Tablets of Levetiracetam for Better Epileptic Treatment

Authors: Nagasamy Venkatesh Dhandapani

Abstract:

The objective of the present study was to develop sustained-release oral matrix tablets of the antiepileptic drug levetiracetam. The sustained-release matrix tablets of levetiracetam were prepared by the wet granulation method using hydrophilic hydroxypropyl methylcellulose (HPMC) as the release-retarding matrix polymer. Prior to compression, FTIR studies were performed to assess the compatibility between the drug and the excipients; the study revealed no chemical interaction between the drug and the excipients used. The tablets were characterized by physical and chemical parameters, and the results were found to be within acceptable limits. The in vitro release study was carried out in 0.1 N HCl for 2 hours and in pH 7.4 phosphate buffer for the remaining time, up to 12 hours. The effect of polymer concentration was studied, and different dissolution models were applied to the drug release data in order to evaluate the release mechanisms and kinetics. The drug release data fitted zero-order kinetics well, and the release mechanism was found to be a complex mixture of diffusion, swelling and erosion.
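
A brief Python sketch of fitting a zero-order release model, Q(t) = k0*t, to cumulative release data and reporting the goodness of fit; the time points and release percentages below are invented for illustration, not the study's measurements:

import numpy as np

t = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)        # hours
q = np.array([8, 17, 33, 49, 66, 82, 97], dtype=float)     # cumulative % released

k0 = np.sum(t * q) / np.sum(t * t)     # least-squares slope of a line through the origin
pred = k0 * t
r2 = 1 - np.sum((q - pred) ** 2) / np.sum((q - q.mean()) ** 2)
print(f"k0 = {k0:.2f} %/h, R^2 = {r2:.3f}")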

Keywords: Levetiracetam, sustained-release, hydrophilic matrix tablet, HPMC grade K 100 MCR, wet granulation, zero order release kinetics.
