Search results for: 9/7 Wavelets Error Sensitivity WES

442 Numerical Optimization Design of PEM Fuel Cell Performance Applying the Taguchi Method

Authors: Shan-Jen Cheng, Jr-Ming Miao, Sheng-Ju Wu

Abstract:

The purpose of this paper is to apply the Taguchi method to the optimization of PEMFC performance, and a representative Computational Fluid Dynamics (CFD) model is used for the statistical analysis. The factors studied in this paper are fuel cell pressure, operating temperature, relative humidity of the anode and cathode, porosity of the gas diffusion electrode (GDE), and conductivity of the GDE. The optimal combination for maximum power density is obtained using a three-level statistical method. The results confirm that the robust optimum design parameters influencing fuel cell performance are a cell pressure of 3 atm, an operating temperature of 353 K, an anode relative humidity of 50%, and a GDE conductivity of 1000 S/m, whereas the cathode relative humidity and GDE porosity are pooled as error because of their small sums of squares. The simulation results give designers guidance and ratify the effectiveness of the proposed robust design methodology for fuel cell performance.
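
As a minimal illustration of the kind of calculation behind a Taguchi analysis, the sketch below computes the "larger-the-better" signal-to-noise ratio commonly used when a response such as power density is to be maximized; the sample responses are hypothetical and the formula is the standard textbook one, not necessarily the exact statistic used by the authors.

```python
# Illustrative sketch (not from the paper): the "larger-the-better" Taguchi
# signal-to-noise ratio used when maximizing a response such as power density.
# The example responses are hypothetical.
import numpy as np

def sn_larger_the_better(y):
    """S/N = -10*log10(mean(1/y^2)) for responses y that should be maximized."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical power-density responses (W/cm^2) from repeated runs of one trial.
print(sn_larger_the_better([0.61, 0.64, 0.63]))
```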

Keywords: PEMFC, numerical simulation, optimization, Taguchi method.

441 Deterioration Assessment Models for Water Pipelines

Authors: L. Parvizsedghy, I. Gkountis, A. Senouci, T. Zayed, M. Alsharqawi, H. El Chanati, M. El-Abbasy, F. Mosleh

Abstract:

The aging and deterioration of water pipelines in cities worldwide result in more frequent water main breaks, water service disruptions, and flooding damage. Therefore, there is an urgent need for proper maintenance procedures to avoid breaks and disastrous failures. However, due to budget limitations, the maintenance of water pipeline networks needs to be prioritized through efficient deterioration assessment models. Previous studies focused on the development of structural or physical deterioration assessment models, which require expensive inspection data. This paper, in contrast, aims at developing deterioration assessment models for water pipelines using statistical techniques. Several deterioration models were developed based on pipeline size, material type, and soil type using linear regression analysis. The categorical nature of some variables affecting pipeline deterioration was considered by developing several categorical models. The developed models were validated with an average validity percentage greater than 95%. Moreover, a sensitivity analysis carried out across the different classifications showed that pipe age is more influential than the other factors. The developed models will help water municipalities and asset managers assess the condition of their pipes and prioritize them for maintenance and inspection purposes.
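
For readers unfamiliar with the statistical side, the following sketch fits a simple linear deterioration model (condition versus pipe age) with ordinary least squares and reports a rough validity percentage; the data points, the single predictor, and the validity formula are hypothetical stand-ins for the paper's multi-variable models.

```python
# Minimal sketch (hypothetical data): a linear regression relating pipe condition
# to age, in the spirit of the statistical deterioration models described above.
import numpy as np

age = np.array([5, 12, 20, 28, 35, 42, 50], dtype=float)   # years (hypothetical)
condition = np.array([9.4, 8.8, 7.9, 7.1, 6.0, 5.2, 4.1])  # 10 = new, 0 = failed

A = np.column_stack([np.ones_like(age), age])               # design matrix [1, age]
coef, *_ = np.linalg.lstsq(A, condition, rcond=None)
pred = A @ coef
# one simple way to report an "average validity percentage"
validity = 100.0 * (1.0 - np.mean(np.abs(pred - condition) / condition))
print(f"intercept={coef[0]:.2f}, slope={coef[1]:.3f}, validity~{validity:.1f}%")
```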

Keywords: Water pipelines, deterioration assessment models, regression analysis.

440 A Proposed Managerial Framework for International Marketing Operations in the Fast Food Industry

Authors: Emmanuel Selase Asamoah, Miloslava Chovancová

Abstract:

When choosing marketing strategies for international markets, one of the factors that should be considered is the cultural differences that exist among consumers in different countries. If the branding strategy has to be contextual and in tune with the culture, then the brand positioning variables have to interact, adapt and respond to the cultural variables in which the brand is operating. This study provides an overview of the relevance of culture in the development of an effective branding strategy in the international business environment. Hence, the main objective of this study is to provide a managerial framework for developing strategies for cross-cultural brand management. The framework is useful because it incorporates the variables that are important to the competitiveness of fast food enterprises irrespective of their size. It provides practical, proactive and result-oriented analysis that will help fast food firms augment their strategies in international fast food markets. The proposed framework will enable managers to understand the intricacies involved in branding in the global fast food industry and decrease the use of 'trial and error' when entering unfamiliar markets.

Keywords: culture, branding strategy, marketing mix, mass customization, standardization

439 Performance Evaluation of Intelligent Controllers for AGC in Thermal Systems

Authors: Muhammad Muhsin, Abhishek Mishra, Shreyansh Vishwakarma, K. Dasaratha Babu, Anudevi Samuel

Abstract:

In an interconnected power system, any sudden small load perturbation in one of the interconnected areas causes deviations of the area frequencies, the tie line power and the voltage at the generator terminals. This paper studies the performance of intelligent fuzzy logic controllers coupled with conventional controllers (PI and PID) for load frequency control. For the analysis, an isolated single-area and an interconnected two-area thermal power system, with and without generation rate constraints (GRC), have been considered. The studies were performed with conventional PI and PID controllers, and their performance was compared with that of intelligent fuzzy controllers. The results demonstrate that the fuzzy controllers bring the excursions in area frequencies and tie line powers back within acceptable limits in shorter time and with smaller transients than the conventional controllers under the same load disturbance conditions. MATLAB simulations were used for the comparative studies.
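
As a rough illustration of conventional load frequency control, the sketch below closes a discrete PI loop around a first-order single-area model and drives the frequency deviation back toward zero after a step load disturbance; all gains, time constants and the simplified plant are hypothetical and are not the paper's thermal system model.

```python
# Illustrative sketch, not the paper's model: a discrete PI loop regulating the
# frequency deviation of a single-area system approximated by a first-order lag.
# All gains and time constants are hypothetical.
import numpy as np

dt, T_end = 0.01, 20.0
Tp, Kp_sys = 20.0, 120.0          # hypothetical power-system gain/time constant
Kp, Ki = 0.4, 0.3                 # hypothetical PI gains
dPL = 0.01                        # 1% step load disturbance (p.u.)

df, integ = 0.0, 0.0              # frequency deviation and integral state
for _ in range(int(T_end / dt)):
    error = -df                   # drive the frequency deviation back to zero
    integ += error * dt
    dPc = Kp * error + Ki * integ # supplementary control action
    # first-order area dynamics: Tp*d(df)/dt = -df + Kp_sys*(dPc - dPL)
    df += dt / Tp * (-df + Kp_sys * (dPc - dPL))
print(f"frequency deviation after {T_end:.0f} s ~ {df:.5f} p.u.")
```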

Keywords: Area Control Error, Fuzzy Logic, Generation rate constraint, Load Frequency, Tie line Power.

438 Current Distribution and Cathode Flooding Prediction in a PEM Fuel Cell

Authors: A. Jamekhorshid, G. Karimi, I. Noshadi, A. Jahangiri

Abstract:

Non-uniform current distribution in polymer electrolyte membrane fuel cells results in local over-heating, accelerated ageing, and lower power output than expected. This issue is critical when the fuel cell experiences water flooding. In this work, the performance of a PEM fuel cell is investigated under cathode flooding conditions. Two-dimensional partially flooded GDL models based on the conservation laws and electrochemical relations are proposed to study local current density distributions along the flow fields over a wide range of cell operating conditions. The model results show that increasing the cathode inlet humidity increases the average current density but makes the system more sensitive to flooding; the anode inlet relative humidity has a similar effect. Operating the cell at higher temperatures leads to higher average current densities and reduces the chance of flooding. In addition, higher cathode stoichiometries prevent flooding while the average current density remains almost constant, whereas a higher anode stoichiometry leads to a higher average current density and higher sensitivity to cathode flooding.

Keywords: Current distribution, Flooding, Hydrogen energy system, PEM fuel cell.

437 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines

Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé

Abstract:

The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and to model it as realistically as possible. However, there are few empirical scavenging models, and these are highly specialized. In a design optimization process, they appear very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested depending on their fields of application: the NTF method and neural networks. Both appear highly appropriate, drastically reducing the model's size (over 90% reduction) with a low relative error (under 10%). Furthermore, each method produces a reduced model which can be used in a distinct specialized field of application: the distribution of a quantity (mass fraction, for example) in the cylinder at each time step (pseudo-dynamic model) or the qualification of scavenging at the end of the process (pseudo-static model).

Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.

436 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
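
The linearized least-squares inversion mentioned above is essentially a damped Gauss-Newton iteration; the sketch below shows one such update loop with a toy forward operator standing in for an actual DC-resistivity solver, so the model, data and damping factor are purely illustrative.

```python
# Minimal sketch of a damped (regularized) Gauss-Newton update of the kind used
# in linearized least-squares resistivity inversion. The forward model here is a
# toy placeholder, not an actual DC-resistivity solver.
import numpy as np

def forward(m):                       # hypothetical smooth forward operator
    return np.array([m[0] + 0.5 * m[1], 0.3 * m[0] * m[1], m[1] ** 2])

def jacobian(m, eps=1e-6):            # finite-difference sensitivities
    J = np.zeros((3, len(m)))
    for j in range(len(m)):
        dm = np.zeros_like(m); dm[j] = eps
        J[:, j] = (forward(m + dm) - forward(m)) / eps
    return J

d_obs = np.array([3.0, 1.2, 4.0])     # hypothetical observed data
m = np.array([1.0, 1.0])              # starting model
lam = 0.1                             # damping (regularization) factor
for _ in range(10):
    r = d_obs - forward(m)
    J = jacobian(m)
    m = m + np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
print("recovered model:", m)
```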

Keywords: Resistivity, inversion, optimization.

435 Palmprint based Cancelable Biometric Authentication System

Authors: Ying-Han Pang, Andrew Teoh Beng Jin, David Ngo Chek Ling

Abstract:

The cancelable palmprint authentication system proposed in this paper is specifically designed to overcome the limitations of contemporary biometric authentication systems. In the proposed system, geometric and pseudo-Zernike moments are employed as feature extractors to transform the palmprint image into a lower-dimensional compact feature representation. Before moment computation, a wavelet transform is adopted to decompose the palmprint image into lower-resolution, lower-dimensional frequency subbands, which drastically reduces the computational load of the moment calculation. The generated wavelet-moment-based feature representation is combined with a set of random data to generate a cancelable verification key. This private binary key can be canceled and replaced. In addition, the key possesses a high tolerance to data capture offsets, with highly correlated bit strings within the intra-class population. This property allows a clear separation of the genuine and impostor populations, as well as the achievement of a zero Equal Error Rate, which is hardly attainable in conventional biometric authentication systems.
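
A rough sketch of the random-projection-and-thresholding idea behind cancelable binary keys (BioHashing-style) is given below; the feature vector, token seed and key length are hypothetical, and this is not the authors' exact wavelet-moment formulation.

```python
# Sketch of a cancelable binary key from a feature vector and a user token.
# Feature vector, token seed and key length are hypothetical placeholders.
import numpy as np

def cancelable_key(features, token_seed, key_bits=64):
    rng = np.random.default_rng(token_seed)            # user-specific token
    proj = rng.standard_normal((features.size, key_bits))
    q, _ = np.linalg.qr(proj)                          # orthonormal projection basis
    return (features @ q > 0).astype(np.uint8)         # replaceable binary key

feat = np.random.default_rng(0).standard_normal(128)   # stand-in wavelet-moment features
key = cancelable_key(feat, token_seed=42)
print(key[:16], "... a new token_seed yields a different, replaceable key")
```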

Keywords: Cancelable biometric authenticator, Discrete-Hashing, Moments, Palmprint.

434 A Simulation-Optimization Approach to Control Production, Subcontracting and Maintenance Decisions for a Deteriorating Production System

Authors: Héctor Rivera-Gómez, Eva Selene Hernández-Gress, Oscar Montaño-Arango, Jose Ramon Corona-Armenta

Abstract:

This research studies the joint production, maintenance and subcontracting control policy for an unreliable deteriorating manufacturing system. Production activities are controlled by a derivation of the Hedging Point Policy, and because the system is subject to deterioration, its capacity to satisfy product demand decreases progressively. Multiple deterioration effects are considered, reflected mainly in the quality of the parts produced and the reliability of the machine. Subcontracting is available as support to satisfy product demand, and overhaul maintenance can be conducted to reduce the effects of deterioration. The main objective of the research is to determine simultaneously the production, maintenance and subcontracting rates that minimize the total incurred cost. A stochastic dynamic programming model is developed and solved through a simulation-based approach composed of statistical analysis and optimization with the response surface methodology. The results obtained highlight the strong interactions between production, deterioration and quality, which justify the development of an integrated model. A numerical example and a sensitivity analysis are presented to validate the results.

Keywords: Deterioration, simulation, subcontracting, production planning.

433 Determining of Threshold Levels of Burst by Burst AQAM/CDMA in Slow Rayleigh Fading Environments

Authors: F. Nejadebrahimi, M. ArdebiliPour

Abstract:

In this paper, we determine the threshold levels of adaptive modulation in a burst-by-burst CDMA system using a suboptimum method that increases the average bits per symbol (BPS) rate of the transceiver by switching between modulation modes as the channel condition varies. In this method, we choose the minimum values of average bit error rate (BER) and maximum values of average BPS over different values of average channel signal-to-noise ratio (SNR) and then calculate the corresponding threshold levels, so that when the instantaneous SNR increases, a higher-order modulation is employed to increase throughput, and conversely, when the instantaneous SNR decreases, a lower-order modulation is employed to improve the BER. In the transmission step, the adaptive modulation scheme compares the estimates obtained from pilot symbols with the set of suboptimum threshold levels and selects one of the states no transmission, BPSK, 4QAM or square 16QAM for modulating the data. The channel considered in this paper is slow Rayleigh fading.
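
The burst-by-burst switching logic amounts to comparing the estimated instantaneous SNR against a small set of thresholds; the sketch below shows that selection step with placeholder threshold values, not the suboptimum levels derived in the paper.

```python
# Sketch of burst-by-burst mode switching against SNR thresholds. The threshold
# values below are placeholders, not the suboptimum levels derived in the paper.
def select_mode(inst_snr_db, thresholds=(3.0, 9.0, 15.0)):
    """Return (mode, bits per symbol) for the estimated instantaneous SNR."""
    t1, t2, t3 = thresholds
    if inst_snr_db < t1:
        return "no transmission", 0
    if inst_snr_db < t2:
        return "BPSK", 1
    if inst_snr_db < t3:
        return "4QAM", 2
    return "16QAM", 4

for snr in (1.0, 6.0, 12.0, 20.0):
    print(snr, select_mode(snr))
```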

Keywords: AQAM, burst, BER, BPS, CDMA, threshold.

432 A Computer Aided Detection (CAD) System for Microcalcifications in Mammograms - MammoScan μCaD

Authors: Kjersti Engan, Thor Ole Gulsrud, Karl Fredrik Fretheim, Barbro Furebotten Iversen, Liv Eriksen

Abstract:

Clusters of microcalcifications in mammograms are an important sign of breast cancer. This paper presents a complete Computer Aided Detection (CAD) scheme for automatic detection of clustered microcalcifications in digital mammograms. The proposed system, MammoScan μCaD, consists of three main steps. First, all potential microcalcifications are detected using a feature extraction method, VarMet, and adaptive thresholding; this also yields a number of false detections. The goal of the second step, Classifier level 1, is to remove everything but microcalcifications. The last step, Classifier level 2, uses learned dictionaries and sparse representations as a texture classification technique to distinguish single, benign microcalcifications from clustered microcalcifications, and to remove some remaining false detections. The system is trained and tested on true digital data from Stavanger University Hospital, and the results are evaluated by radiologists. The overall results are promising, with a sensitivity > 90% and a low false detection rate (approximately 1 unwanted detection per image, or 0.3 false detections per image).
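
As an illustration of the adaptive-thresholding step, the sketch below flags pixels that exceed their local mean by a multiple of the local standard deviation; the window size, the k factor and the synthetic image are hypothetical, and this is not the VarMet feature extractor itself.

```python
# Illustrative local adaptive thresholding of the kind used to flag bright
# microcalcification candidates; window size and k are hypothetical choices.
import numpy as np

def adaptive_threshold(img, win=15, k=3.0):
    """Mark pixels brighter than the local mean by k local standard deviations."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            out[i, j] = img[i, j] > patch.mean() + k * patch.std()
    return out

demo = np.random.default_rng(1).normal(100, 5, (64, 64))
demo[32, 32] += 60                        # a synthetic bright spot
print(adaptive_threshold(demo).sum(), "candidate pixel(s) flagged")
```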

Keywords: mammogram, microcalcifications, detection, CAD, MammoScan μCaD, VarMet, dictionary learning, texture, FTCM, classification, adaptive thresholding

431 Techno-Economic Prospects of High Wind Energy Share in Remote vs. Interconnected Island Grids

Authors: Marina Kapsali, John S. Anagnostopoulos

Abstract:

On the basis of a comparative analysis of alternative "development scenarios" for electricity generation, the main objective of the present study is to investigate the techno-economic viability of high wind energy (WE) use at the local (island) level. An integrated theoretical model is developed from first principles assuming two main scenarios for covering the future electrification needs of a medium-sized Greek island, i.e., Lesbos. The first scenario (S1) assumes that the island will keep using oil products as the main source of electricity generation. The second scenario (S2) involves the interconnection of the island with the mainland grid to satisfy part of the electricity demand, while remarkable WE penetration is also achieved. The economic feasibility of these solutions is investigated by determining their Levelized Cost of Energy (LCOE) for the period 2020-2045, including a sensitivity analysis on worst/reference/best cases. According to the results obtained, interconnection of Lesbos Island with the mainland grid (S2) presents considerable economic interest in comparison to autonomous development (S1), with WE playing a prominent role in this outcome.
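
For reference, a minimal levelized cost of energy calculation of the kind used to compare such scenarios is sketched below; all cash-flow figures, the discount rate and the horizon are hypothetical and unrelated to the Lesbos case study.

```python
# A minimal LCOE calculation: discounted lifetime cost over discounted lifetime
# energy. All cash-flow numbers below are hypothetical.
def lcoe(capex, annual_opex, annual_energy_mwh, years, discount_rate):
    disc = [(1 + discount_rate) ** -t for t in range(1, years + 1)]
    cost = capex + sum(annual_opex * d for d in disc)
    energy = sum(annual_energy_mwh * d for d in disc)
    return cost / energy                    # currency units per MWh

print(f"{lcoe(250e6, 8e6, 450_000, 25, 0.07):.1f} EUR/MWh")
```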

Keywords: Electricity generation cost, levelized cost of energy, mainland, wind energy surplus.

430 Comparison of Three Turbulence Models in Wear Prediction of Multi-Size Particulate Flow through Rotating Channel

Authors: Pankaj K. Gupta, Krishnan V. Pagalthivarthi

Abstract:

The present work compares the performance of three turbulence modeling approaches (based on the two-equation k-ε model) in predicting erosive wear in multi-size dense slurry flow through a rotating channel. All three turbulence models include a rotation modification to the production term in the turbulent kinetic energy equation. The two-phase flow field, obtained numerically using a Galerkin finite element methodology, relates the local flow velocity and concentration to the wear rate via a suitable wear model. The wear models for both the sliding wear and impact wear mechanisms account for particle size dependence. Predicted wear rates using the three turbulence models are compared for a large number of cases spanning operating parameters such as rotation rate, solids concentration, flow rate and particle size distribution. The root-mean-square error between the FE-generated data and the correlation between maximum wear rate and the operating parameters is found to be less than 2.5% for all three models.

Keywords: Rotating channel, maximum wear rate, multi-size particulate flow, k-ε turbulence models.

429 Comparison of Real-Time PCR and FTIR with Chemometrics Technique in Analysing Halal Supplement Capsules

Authors: Mohd Sukri Hassan, Ahlam Inayatullah Badrul Munir, M. Husaini A. Rahman

Abstract:

Halal authentication and verification of supplement capsules are highly required, as the gelatine available in the market can be from halal or non-halal sources. It is an obligation for Muslims to consume and use halal consumer goods. At present, real-time polymerase chain reaction (RT-PCR) is the most common technique used for the detection of porcine and bovine DNA in gelatine, due to the high sensitivity of the technique and the higher stability of DNA compared to protein. In this study, twenty samples of supplement capsules from different products with different Halal logos were analyzed for porcine and bovine DNA using RT-PCR. Standard bovine and porcine gelatine from Eurofins, at concentrations ranging from 10⁻¹ to 10⁻⁵ ng/µl, were used to determine the linearity range, limit of detection and specificity of RT-PCR (SYBR Green method). RT-PCR detected porcine DNA (two samples), bovine DNA (four samples) and a mixture of porcine and bovine DNA (six samples). The samples were also tested using the FT-IR technique, where the normalized IR spectra were pre-processed using the Savitzky-Golay method before Principal Component Analysis (PCA) was performed on the database. The PCA scores plot shows three clusters of samples: bovine, porcine and mixture (bovine and porcine). The RT-PCR and FT-IR with chemometrics techniques were found to give the same results for porcine gelatine samples and can therefore be used for Halal authentication.
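
The chemometrics step described above can be sketched as Savitzky-Golay smoothing followed by PCA; the random "spectra" below are placeholders for real FTIR measurements, and the window length and polynomial order are arbitrary choices.

```python
# Sketch of the chemometrics pipeline: Savitzky-Golay pre-processing of spectra
# followed by PCA via the SVD. The random data stand in for real FTIR spectra.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(3)
spectra = rng.normal(size=(20, 600))                 # 20 samples x 600 wavenumbers
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)

X = smoothed - smoothed.mean(axis=0)                 # mean-center
U, s, Vt = np.linalg.svd(X, full_matrices=False)     # PCA via SVD
scores = U * s                                       # PCA scores (samples x PCs)
explained = s**2 / np.sum(s**2)
print("first two PCs explain a fraction of", explained[:2].sum())
```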

Keywords: Halal, real-time PCR, gelatin, FTIR and chemometrics.

428 A Simulation Study of Bullwhip Effect in a Closed-Loop Supply Chain with Fuzzy Demand and Fuzzy Collection Rate under Possibility Constraints

Authors: Debabrata Das, Pankaj Dutta

Abstract:

Along with the forward supply chain, organizations need to consider the impact of reverse logistics because of its economic advantages, social awareness and strict legislation. In this paper, we develop a system dynamics framework for a closed-loop supply chain with fuzzy demand and fuzzy collection rate by incorporating a product exchange policy in the forward channel and various recovery options in the reverse channel. The uncertainty issues associated with the acquisition and collection of used products are quantified using possibility measures. In the simulation study, we analyze order variation at both the retailer and distributor levels and compare the bullwhip effects of the different logistics participants over time between the traditional forward supply chain and the closed-loop supply chain. Our results suggest that the integration of reverse logistics can reduce the order variation and bullwhip effect of a closed-loop system. Finally, a sensitivity analysis is performed to examine the impact of various parameters on the recovery process and the bullwhip effect.

Keywords: Bullwhip Effect, Fuzzy Possibility Measures, Reverse Supply Chain, System Dynamics.

427 Sectoral Energy Consumption in South Africa and Its Implication for Economic Growth

Authors: Kehinde Damilola Ilesanmi, Dev Datt Tewari

Abstract:

South Africa is in its post-industrial era, moving from the primary and secondary sectors to the tertiary sector. The study investigated the impact of disaggregated energy consumption (coal, oil, and electricity) on the primary, secondary and tertiary sectors of the economy between 1980 and 2012 in South Africa. Using a vector error correction model, it was established that South Africa is an energy-dependent economy and that energy (especially electricity and oil) is a limiting factor of growth. This implies that the implementation of energy conservation policies may hamper economic growth. Output growth is significantly outpacing energy supply, which has necessitated load shedding. To meet the excess energy demand, there is a need to increase generating capacity, which will necessitate increased investment in the electricity sector as well as strategic steps to increase oil production. There is also a need to explore more renewable energy sources in order to meet the growing energy demand without compromising growth and environmental sustainability. Policy makers should also pursue energy efficiency policies, especially at the sectoral level of the economy.

Keywords: Causality, economic growth, energy consumption, hypothesis, sectoral output.

426 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or underestimation of construction cost. The development of standard sets of measurement rules that are understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological and machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported and recommendations for future studies proposed.

Keywords: BIM, Construction projects, Cost estimation, NRM, Ontology.

425 A Hybrid Classification Method using Artificial Neural Network Based Decision Tree for Automatic Sleep Scoring

Authors: Haoyu Ma, Bin Hu, Mike Jackson, Jingzhi Yan, Wen Zhao

Abstract:

In this paper we propose a new classification method for automatic sleep scoring using an artificial neural network based decision tree. It treats sleep scoring as a series of two-class problems and solves them with a decision tree made up of a group of neural network classifiers, each of which uses a special feature set and is aimed at only one specific sleep stage in order to maximize the classification effect. A single electroencephalogram (EEG) signal is used for the analysis rather than multiple biological signals, which greatly simplifies the data acquisition process. Experimental results demonstrate that the average epoch-by-epoch agreement between visual scoring and the proposed method in separating 30 s wakefulness+S1, REM, S2 and SWS epochs was 88.83%. This study shows that the proposed method performed well in all four stages and can effectively limit error propagation at the same time. It could, therefore, be an efficient method for automatic sleep scoring. Additionally, since it requires only a small volume of data, it could be suited to pervasive applications.
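
The decision-tree-of-binary-classifiers idea can be sketched as an ordered cascade in which each node answers a single "is it stage X?" question and passes the remaining epochs down; the stage order and the one-feature stand-in classifiers below are illustrative, not the paper's trained networks.

```python
# Schematic cascade of per-stage binary classifiers; the stand-in lambdas and
# feature names are hypothetical placeholders for trained neural networks.
from typing import Callable, List, Tuple

def cascade_classify(epoch_features,
                     nodes: List[Tuple[str, Callable]]) -> str:
    """nodes: ordered (stage_name, binary_classifier) pairs; a classifier
    returns True when the epoch belongs to that stage."""
    for stage, is_stage in nodes:
        if is_stage(epoch_features):
            return stage
    return "SWS"                       # last remaining class by elimination

nodes = [
    ("Wake+S1", lambda f: f["alpha_power"] > 0.6),
    ("REM",     lambda f: f["eye_movement_index"] > 0.5),
    ("S2",      lambda f: f["spindle_density"] > 0.3),
]
print(cascade_classify({"alpha_power": 0.2, "eye_movement_index": 0.1,
                        "spindle_density": 0.4}, nodes))
```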

Keywords: Sleep, Sleep stage, Automatic sleep scoring, Electroencephalography, Decision tree, Artificial neural network

424 Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network

Authors: Insung Jung, Gi-Nam Wang

Abstract:

The objective of this paper is to design a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model fully connects every node between layers from input to output, and therefore requires considerable computing time and many training iterations to reach good performance and an acceptable error rate during pattern generation or network training. The proposed model, in contrast, uses exclusive connections between the hidden-layer nodes and the output nodes. The advantages of this model are a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification data sets and different settings of the network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). In our simulations, we found that most cases were handled better by the BP model with the exclusive connection network than by the standard BP model. We expect that this algorithm can be applied to user face identification, data analysis, and the mapping between environmental data and information.

Keywords: Neural network, Back-propagation, classification.

423 Efficient Variants of Square Contour Algorithm for Blind Equalization of QAM Signals

Authors: Ahmad Tariq Sheikh, Shahzad Amin Sheikh

Abstract:

A new distance-adjusted approach is proposed in which static square contours are defined around an estimated symbol in a QAM constellation, which create regions that correspond to fixed step sizes and weighting factors. As a result, the equalizer tap adjustment consists of a linearly weighted sum of adaptation criteria that is scaled by a variable step size. This approach is the basis of two new algorithms: the Variable step size Square Contour Algorithm (VSCA) and the Variable step size Square Contour Decision-Directed Algorithm (VSDA). The proposed schemes are compared with existing blind equalization algorithms in the SCA family in terms of convergence speed, constellation eye opening and residual ISI suppression. Simulation results for 64-QAM signaling over empirically derived microwave radio channels confirm the efficacy of the proposed algorithms. An RTL implementation of the blind adaptive equalizer based on the proposed schemes is presented and the system is configured to operate in VSCA error signal mode, for square QAM signals up to 64-QAM.

Keywords: Adaptive filtering, Blind Equalization, Square Contour Algorithm.

422 Application of PSO Technique for Seismic Control of Tall Building

Authors: A. Shayeghi, H. Shayeghi, H. Eimani Kalasar

Abstract:

In recent years, tuned mass damper (TMD) control systems for civil engineering structures have attracted considerable attention. This paper emphasizes the application of particle swarm optimization (PSO) to design and optimize the parameters of the TMD control scheme so as to achieve the best reduction of the building response under earthquake excitations. The Integral of the Time multiplied Absolute value of the Error (ITAE), based on the relative displacement of all floors in the building, is taken as the performance index of the optimization criterion. The problem of robust TMD controller design is formulated as an optimization problem based on the ITAE performance index, to be solved using the PSO technique, which has a strong ability to find near-optimal results. An 11-story realistic building, located in the city of Rasht, Iran, is considered as a test system to demonstrate the effectiveness of the proposed method. Analysis of the results through time-domain simulation and several performance indices reveals that the designed PSO-based TMD controller has an excellent capability to reduce the seismic response of the example building.
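
A bare-bones PSO loop of the kind used to tune TMD parameters against an ITAE-style index is sketched below; the objective function is a stand-in for the structural simulation, and the bounds, swarm size and coefficients are hypothetical.

```python
# Minimal PSO sketch. The objective is a surrogate for the ITAE returned by a
# structural simulation; bounds, swarm size and coefficients are hypothetical.
import numpy as np

def itae_surrogate(x):                       # placeholder for ITAE from simulation
    return np.sum((x - np.array([0.98, 0.05])) ** 2)

rng = np.random.default_rng(7)
n, dim, iters = 20, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5
lo, hi = np.array([0.5, 0.01]), np.array([1.5, 0.2])   # e.g. frequency ratio, damping

pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([itae_surrogate(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([itae_surrogate(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best TMD parameters found:", gbest)
```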

Keywords: TMD, Particle Swarm Optimization, Tall Buildings, Structural Dynamics.

421 Effect Comparison of Speckle Noise Reduction Filters on 2D-Echocardigraphic Images

Authors: Faten A. Dawood, Rahmita W. Rahmat, Suhaini B. Kadiman, Lili N. Abdullah, Mohd D. Zamrin

Abstract:

Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D-echocardiography (2DE) is to address two major problems: speckle noise and low quality. Therefore, speckle noise reduction is an important pre-processing step used to reduce distortion effects in 2DE image segmentation. In this paper, we present the common filters that are based on some form of low-pass spatial smoothing, such as the Mean, Gaussian, and Median filters. The Laplacian filter was used as a high-pass sharpening filter. A comparative analysis was presented to test the effectiveness of these filters after being applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the output enhanced image.
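
For completeness, the three quantitative measures can be written out directly; the sketch below implements RMSE, PSNR and SNR for 8-bit images, with two small random arrays standing in for real 2DE frames.

```python
# The three quantitative measures used to score filtered images, written out for
# 8-bit images; the two small arrays are placeholders for real 2DE frames.
import numpy as np

def rmse(ref, test):
    return np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))

def psnr(ref, test, peak=255.0):
    return 20.0 * np.log10(peak / rmse(ref, test))

def snr(ref, test):
    noise = ref.astype(float) - test.astype(float)
    return 10.0 * np.log10(np.sum(ref.astype(float) ** 2) / np.sum(noise ** 2))

a = np.random.default_rng(0).integers(0, 256, (64, 64))
b = np.clip(a + np.random.default_rng(1).normal(0, 5, a.shape), 0, 255)
print(rmse(a, b), psnr(a, b), snr(a, b))
```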

Keywords: Gaussian operator, median filter, speckle texture, peak signal-to-noise ratio

420 Effect of Unbound Granular Materials Nonlinear Resilient Behavior on Pavement Response and Performance of Low Volume Roads

Authors: K. Sandjak, B. Tiliouine

Abstract:

Structural analysis of flexible pavements has been, and still is, performed using multi-layer elastic theory. However, for thinly surfaced pavements subjected to low to medium volumes of traffic, the importance of the non-linear stress-strain behavior of unbound granular materials (UGM) requires the use of more sophisticated numerical models for the structural design and performance of such pavements. In the present work, a nonlinear unbound aggregates constitutive model is implemented within an axisymmetric finite element code developed to simulate the nonlinear behavior of pavement structures including two local aggregates of different mineralogical nature, typically used in Algerian pavements. The performance of the mechanical model is examined with regard to its capability of adequately representing, under various conditions, the granular material non-linearity in pavement analysis. In addition, deflection data collected by a Falling Weight Deflectometer (FWD) are incorporated into the analysis in order to assess the sensitivity of critical pavement design criteria and pavement design life to the constitutive model. Finally, conclusions of engineering significance are formulated.

Keywords: Nonlinear resilient behavior, unbound granular materials, RLT test results, FWD backcalculations, finite element simulations, pavement response and performance.

419 PAPR Reduction Method for OFDM Signal by Using Dummy Sub-carriers

Authors: Pisit Boonsrimuang, Arjin Numsomran, Tawil Paungma, Hideo Kobayashi

Abstract:

One of the disadvantages of OFDM is the large peak-to-average power ratio (PAPR) of its time-domain signal. A signal with a large PAPR suffers severe degradation of bit error rate (BER) performance due to inter-modulation noise in a nonlinear channel. This paper proposes an improved DSI (Dummy Sequence Insertion) method, which can achieve better PAPR and BER performance. The feature of the proposed method is to optimize the phase of each dummy sub-carrier so as to reduce the PAPR by changing the predetermined phase coefficients in the time domain signal, which is calculated for the data sub-carriers and dummy sub-carriers separately. To further improve the PAPR performance, this paper also proposes to employ a time-frequency domain swapping algorithm for fine adjustment of the phase coefficients of the dummy sub-carriers, which requires less processing complexity and achieves better PAPR and BER performance than the conventional DSI method. Various computer simulation results are presented to verify the effectiveness of the proposed method in comparison with conventional methods in a nonlinear channel.
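
As background, the PAPR of an OFDM time-domain symbol is simply its peak power over its mean power after the IFFT; the sketch below measures it for a random QPSK-loaded symbol, with the subcarrier count and mapping chosen arbitrarily rather than taken from the paper's DSI configuration.

```python
# How PAPR of one OFDM time-domain symbol is typically measured (peak power over
# mean power of the IFFT output); subcarrier count and QPSK mapping are
# illustrative, not the paper's DSI configuration.
import numpy as np

rng = np.random.default_rng(5)
n_sc = 256
symbols = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
time_signal = np.fft.ifft(symbols) * np.sqrt(n_sc)     # one OFDM symbol

power = np.abs(time_signal) ** 2
papr_db = 10.0 * np.log10(power.max() / power.mean())
print(f"PAPR ~ {papr_db:.2f} dB")
```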

Keywords: OFDM, PAPR, dummy sub-carriers, non-linear

418 CAD/CAM Algorithms for 3D Woven Multilayer Textile Structures

Authors: Martin A. Smith, Xiaogang Chen

Abstract:

This paper proposes new algorithms for the computer-aided design and manufacture (CAD/CAM) of 3D woven multi-layer textile structures. Existing commercial CAD/CAM systems are often restricted to the design and manufacture of 2D weaves. Those CAD/CAM systems that do support the design and manufacture of 3D multi-layer weaves are often limited to manual editing of design paper grids on the computer display and weave retrieval from stored archives. This complex design activity is time-consuming, tedious and error-prone, and requires considerable experience and skill of a technical weaver. Recent research reported in the literature has addressed some of the shortcomings of commercial 3D multi-layer weave CAD/CAM systems. However, earlier research results have shown the need for further work on weave specification, weave generation, yarn path editing and layer binding. Analysis of 3D multi-layer weaves in this research has led to the design and development of efficient and robust algorithms for the CAD/CAM of 3D woven multi-layer textile structures. The resulting algorithmically generated weave designs can be used as a basis for lifting plans that can be loaded onto looms equipped with electronic shedding mechanisms for the CAM of 3D woven multi-layer textile structures.

Keywords: CAD/CAM, Multi-layer, Textile, Weave.

417 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

A method of modelling topography used in the simulation of riverbeds is proposed in this paper which removes the need for datapoints and measurements of a physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method, and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
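
One common way to generate a fractional-Brownian-motion profile is random midpoint displacement, sketched below for a 1D bed profile; the Hurst exponent and scale are placeholders for the parameters such a bed-generation method would tune, and this is not the authors' exact algorithm.

```python
# Random midpoint displacement as one way to generate an FBM-like bed profile.
# Hurst exponent, scale and depth of recursion are hypothetical choices.
import numpy as np

def fbm_profile(n_levels=10, hurst=0.7, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    profile = np.array([0.0, 0.0])
    sigma = scale
    for _ in range(n_levels):
        midpoints = 0.5 * (profile[:-1] + profile[1:])
        midpoints += rng.normal(0.0, sigma, midpoints.size)
        merged = np.empty(profile.size + midpoints.size)
        merged[0::2], merged[1::2] = profile, midpoints
        profile = merged
        sigma *= 0.5 ** hurst            # displacement amplitude decays with scale
    return profile

bed = fbm_profile()
print(bed.size, "points, elevation range", bed.min(), "to", bed.max())
```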

Keywords: Bed topography, FBM, LBM, shallow water, simulations.

416 Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data

Authors: Wei Lei, Hui Chen, Lin Lu

Abstract:

Microscopic emission and fuel consumption models are widely recognized as an effective way to quantify real traffic emissions and energy consumption when they are applied with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model with the purpose of capturing the effects of historical accelerations, interacting with the current speed, on emissions and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than the other two models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, and NOx emissions and fuel consumption, respectively.

Keywords: Emission, Fuel consumption, Light-duty vehicle, Microscopic, Modeling.

415 Estimation of Real Power Transfer Allocation Using Intelligent Systems

Authors: H. Shareef, A. Mohamed, S. A. Khalid, Aziah Khamis

Abstract:

This paper presents the application of artificial intelligence (AI) techniques, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), to estimate the real power transfer between generators and loads. Since these AI techniques adopt supervised learning, the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads. The results of the MNE method and load flow information are then used to estimate the power transfer with the AI techniques. The 25-bus equivalent system of south Malaysia is used as a test system to illustrate the effectiveness of both AI methods compared to the MNE method. The mean squared errors of the ANN and ANFIS power transfer allocation estimates are 1.19E-05 and 2.97E-05, respectively. Furthermore, the ANN and ANFIS methods compute the generator contributions to loads within 20.99 and 39.37 ms, respectively, whereas the MNE method takes 360 ms for the same real power transfer allocation.

Keywords: Artificial intelligence, Power tracing, Artificial neural network, ANFIS, Power system deregulation.

414 On the Variability of Tool Wear and Life at Disparate Operating Parameters

Authors: S. E. Oraby, A.M. Alaskari

Abstract:

The stochastic nature of tool life using conventional discrete-wear data from experimental tests usually exists due to many individual and interacting parameters. It is a common practice in batch production to continually use the same tool to machine different parts, using disparate machining parameters. In such an environment, the optimal points at which tools have to be changed, while achieving minimum production cost and maximum production rate within the surface roughness specifications, have not been adequately studied. In the current study, two relevant aspects are investigated using coated and uncoated inserts in turning operations: (i) the accuracy of using machinability information, from fixed parameters testing procedures, when variable parameters situations are emerged, and (ii) the credibility of tool life machinability data from prior discrete testing procedures in a non-stop machining. A novel technique is proposed and verified to normalize the conventional fixed parameters machinability data to suit the cases when parameters have to be changed for the same tool. Also, an experimental investigation has been established to evaluate the error in the tool life assessment when machinability from discrete testing procedures is employed in uninterrupted practical machining.

Keywords: Machinability, tool life, tool wear, wear variability

413 Capability Prediction of Machining Processes Based on Uncertainty Analysis

Authors: Hamed Afrasiab, Saeed Khodaygan

Abstract:

Prediction of machining process capability at the design stage plays a key role in achieving precise design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part, and strongly affect the process capability in the final quality of the product. In this paper, an efficient systematic approach is given to investigate machining errors, predict the manufacturing errors of the parts, and predict the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and their related sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece-fixture system. The method is developed for uncertainty analysis based on both the worst-case and statistical approaches. The application of the presented method is illustrated through an example, and the computational results are compared with Monte Carlo simulation results.
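
The statistical branch of the uncertainty analysis can be illustrated with a small Monte Carlo tolerance stack-up, compared against the worst-case sum; the three contributor tolerances and the linear sensitivity coefficients below are hypothetical.

```python
# A small Monte Carlo tolerance stack-up versus the worst-case sum; contributor
# tolerances and sensitivity coefficients are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
# treat each contributor as normal with its tolerance taken as +/-3 sigma
tol = np.array([0.02, 0.05, 0.03])            # mm (hypothetical)
sens = np.array([1.0, -0.5, 0.8])             # linear sensitivities of the feature error
samples = rng.normal(0.0, tol / 3.0, size=(n, 3))
feature_error = samples @ sens

worst_case = np.sum(np.abs(sens) * tol)
print(f"worst case +/-{worst_case:.3f} mm, "
      f"statistical 3-sigma +/-{3 * feature_error.std():.3f} mm")
```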

Keywords: Process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis.
