Search results for: Quality of Work Life
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7770

570 Wear and Friction Analysis of Sintered Metal Powder Self Lubricating Bush Bearing

Authors: J. K. Khare, Abhay Kumar Sharma, Ajay Tiwari, Amol A. Talankar

Abstract:

Powder metallurgy (P/M) is the only economical way to produce porous parts and products. P/M can produce near-net-shape parts, hence reducing wastage of raw material and energy and avoiding various machining operations. The most vital uses of P/M are in the production of metallic filters, self-lubricating bush bearings and sliding surfaces. The porosity of the part can be controlled by varying the compaction pressure, sintering temperature and composition of the metal powder mix. The present work is aimed at experimental analysis of the friction and wear properties of self-lubricating copper-tin bush bearings. Experimental results confirm that the wear rate of the sintered component is lower for components containing 10% tin by weight. The wear rate increases for higher tin percentages (tested at 20% and 30% tin) at the same sintering temperature. The results also confirm that the wear rate of the sintered component depends on the sintering temperature, soaking period, composition of the preform, compacting pressure, and powder particle shape and size. Interfacial friction between die and punch, between powder particles, and between the die face and the powder particles depends on the compaction pressure, the powder particle size and shape, the size and shape of the component (which decides the size and shape of the die and punch), the die and punch material, and the powder material.

Keywords: Interfacial friction, porous bronze bearing, sintering temperature, wear rate.

Downloads: 3938
569 Effect of Manganese Doping on Ferroelectric Properties of (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 Lead-Free Piezoceramic

Authors: Chongtham Jiten, Radhapiyari Laishram, K. Chandramani Singh

Abstract:

The alkaline niobate (Na0.5K0.5)NbO3 ceramic system has attracted major attention in view of its potential for replacing the highly toxic but superior lead zirconate titanate (PZT) system for piezoelectric applications. Recently, a more detailed study of this system revealed that the ferroelectric and piezoelectric properties are optimized in the Li- and V-modified system having the composition (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3. In the present work, we further study the ferroelectric behaviour of this composition along with another doped with Mn4+. Accordingly, (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 + x MnO2 (x = 0 and 0.01 wt.%) ceramic compositions were synthesized by the conventional ceramic processing route. X-ray diffraction study reveals that both the undoped and Mn4+-doped ceramic samples crystallize into a perovskite structure with orthorhombic symmetry. Dielectric study indicates that Mn4+ doping has little effect on both the Curie temperature (Tc) and the tetragonal-orthorhombic phase transition temperature (Tot). The bulk density and room-temperature dielectric constant (εRT) were also characterized, and the room-temperature coercive field (Ec) is observed to be lower in the Mn4+-doped sample. Detailed analysis of the P-E hysteresis loops over the range of temperature from about room temperature to Tot points out that enhanced ferroelectric properties exist in this temperature range, with better thermal stability for the Mn4+-doped ceramic. The study reveals that small traces of Mn4+ can modify the (K0.485Na0.5Li0.015)(Nb0.98V0.02)O3 system so as to improve its ferroelectric properties with good thermal stability over a wide range of temperature.

Keywords: Ceramics, dielectric properties, ferroelectric properties, lead-free, sintering, thermal stability.

Downloads: 962
568 Synthesis and Electrochemical Characterization of Iron Oxide / Activated Carbon Composite Electrode for Symmetrical Supercapacitor

Authors: PoiSim Khiew, MuiYen Ho, ThianKhoon Tan, WeeSiong Chiu, Roslinda Shamsudin, Muhammad Azmi Abd-Hamid, ChinHua Chia

Abstract:

In the present work, we have developed a symmetric electrochemical capacitor based on the nanostructured iron oxide (Fe3O4)-activated carbon (AC) nanocomposite materials. The physical properties of the nanocomposites were characterized by Scanning Electron Microscopy (SEM) and Brunauer-Emmett-Teller (BET) analysis. The electrochemical performances of the composite electrode in 1.0 M Na2SO3 and 1.0 M Na2SO4 aqueous solutions were evaluated using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The composite electrode with 4 wt% of iron oxide nanomaterials exhibits the highest capacitance of 86 F/g. The experimental results clearly indicate that the incorporation of iron oxide nanomaterials at low concentration to the composite can improve the capacitive performance, mainly attributed to the contribution of the pseudocapacitance charge storage mechanism and the enhancement on the effective surface area of the electrode. Nevertheless, there is an optimum threshold on the amount of iron oxide that needs to be incorporated into the composite system. When this optimum threshold is exceeded, the capacitive performance of the electrode starts to deteriorate, as a result of the undesired particle aggregation, which is clearly indicated in the SEM analysis. The electrochemical performance of the composite electrode is found to be superior when Na2SO3 is used as the electrolyte, if compared to the Na2SO4 solution. It is believed that Fe3O4 nanoparticles can provide favourable surface adsorption sites for sulphite (SO3 2-) anions which act as catalysts for subsequent redox and intercalation reactions.
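As a hedged illustration of how a specific capacitance such as the 86 F/g value above is typically extracted from cyclic voltammetry data (the exact procedure used by the authors is not stated), the sketch below integrates the current over one CV cycle; the synthetic trace, electrode mass and scan rate are assumptions.

```python
import numpy as np

def specific_capacitance_from_cv(time_s, current_a, mass_g, window_v):
    """Specific capacitance (F/g) from one full CV cycle.

    Common relation: C = q / (2 * m * dV), where q is the charge passed over
    the cycle (integral of |i| dt) and dV is the potential window; the factor
    2 accounts for the forward and backward sweeps.
    """
    q = np.trapz(np.abs(current_a), time_s)   # charge passed in the cycle, C
    return q / (2.0 * mass_g * window_v)

# Synthetic, near-rectangular CV cycle (illustrative numbers only).
scan_rate = 0.01                                       # V/s
window = 0.8                                           # V
t = np.linspace(0.0, 2 * window / scan_rate, 800)      # one full cycle
i = np.where(t < window / scan_rate, 3.5e-3, -3.5e-3)  # current, A

c_sp = specific_capacitance_from_cv(t, i, mass_g=5e-3, window_v=window)
print(f"specific capacitance ~ {c_sp:.0f} F/g")
```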

Keywords: Metal oxide nanomaterials, electrochemical capacitor, double layer capacitance, pseudocapacitance.

Downloads: 5588
567 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes capable of producing high quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role. It is suitable for detecting, isolating and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is presented, highlighting the properties a system must ensure to be considered faultless. In addition, a survey of the main FTC techniques and a classification based on their characteristics are presented in two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages, one focused on detection, isolation and identification of the fault source and the other in charge of re-designing the control algorithm by one of two approaches: fault accommodation or control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been chosen as the methodology the system follows in the fault recovery process. In a first instance, the fault is detected, isolated and identified by means of a neural network. In a second instance, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
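The AFTC loop described above (fault detection, isolation and identification followed by controller re-configuration) can be illustrated with a deliberately simplified sketch: a first-order actuator model loses effectiveness mid-simulation, a residual threshold stands in for the neural-network FDI module, and the control gain is re-scaled as a stand-in for control re-design. All dynamics, gains and thresholds are assumptions for illustration, not the hydraulic-press model or algorithm of the paper.

```python
import numpy as np

# Minimal AFTC sketch: detect a loss-of-effectiveness fault from the residual
# between a nominal model and the measurement, then re-scale the control gain.
dt, T = 0.01, 10.0
a, b_nominal = 1.0, 2.0            # plant: x' = -a*x + b*u
k = 3.0                            # nominal proportional gain
x, x_model = 0.0, 0.0
b_true = b_nominal                 # actuator effectiveness (drops at t = 5 s)
effectiveness_est = 1.0
threshold = 0.05

for step in range(int(T / dt)):
    t = step * dt
    if t >= 5.0:
        b_true = 0.5 * b_nominal   # fault: 50 % effectiveness loss

    ref = 1.0
    u = k / effectiveness_est * (ref - x)          # re-configured control law

    # Plant and nominal model, integrated with forward Euler.
    x += dt * (-a * x + b_true * u)
    x_model += dt * (-a * x_model + b_nominal * u)

    residual = x_model - x
    if abs(residual) > threshold:                  # FDI step (a neural network in the paper)
        effectiveness_est = max(0.1, effectiveness_est - 0.5 * residual)
        x_model = x                                # re-synchronise the model

print(f"final effectiveness estimate ~ {effectiveness_est:.2f}")
```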

Keywords: Fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time.

Downloads: 780
566 Route Training in Mobile Robotics through System Identification

Authors: Roberto Iglesias, Theocharis Kyriacou, Ulrich Nehmzow, Steve Billings

Abstract:

Fundamental sensor-motor couplings form the backbone of most mobile robot control tasks, and often need to be implemented fast, efficiently and nevertheless reliably. Machine learning techniques are therefore often used to obtain the desired sensor-motor competences. In this paper we present an alternative to established machine learning methods such as artificial neural networks, that is very fast, easy to implement, and has the distinct advantage that it generates transparent, analysable sensor-motor couplings: system identification through nonlinear polynomial mapping. This work, which is part of the RobotMODIC project at the universities of Essex and Sheffield, aims to develop a theoretical understanding of the interaction between the robot and its environment. One of the purposes of this research is to enable the principled design of robot control programs. As a first step towards this aim we model the behaviour of the robot, as this emerges from its interaction with the environment, with the NARMAX modelling method (Nonlinear, Auto-Regressive, Moving Average models with eXogenous inputs). This method produces explicit polynomial functions that can be subsequently analysed using established mathematical methods. In this paper we demonstrate the fidelity of the obtained NARMAX models in the challenging task of robot route learning; we present a set of experiments in which a Magellan Pro mobile robot was taught to follow four different routes, always using the same mechanism to obtain the required control law.
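As a hedged illustration of the kind of transparent polynomial model NARMAX produces, the sketch below builds polynomial NARX-style regressors from lagged inputs and outputs and fits their coefficients by least squares; the data are synthetic, and the full NARMAX structure-selection procedure and moving-average noise terms are not reproduced.

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_regressors(u, y, lag=2, degree=2):
    """Polynomial NARX-style regressors built from lagged inputs/outputs."""
    rows = []
    for t in range(lag, len(y)):
        base = [y[t - i] for i in range(1, lag + 1)] + [u[t - i] for i in range(1, lag + 1)]
        terms = [1.0]                                   # constant term
        for d in range(1, degree + 1):
            for combo in combinations_with_replacement(base, d):
                terms.append(np.prod(combo))
        rows.append(terms)
    return np.array(rows), np.array(y[lag:])

# Illustrative data: a sensor reading u drives a steering output y.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.8 * u[t - 1] + 0.2 * u[t - 1] ** 2

X, target = polynomial_regressors(u, y)
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
print("number of polynomial terms:", X.shape[1])
print("largest coefficient magnitudes:", np.round(np.sort(np.abs(coeffs))[-4:], 3))
```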

Keywords: Mobile robotics, system identification, non-linear modelling, NARMAX.

Downloads: 1682
565 Investigations of Metals and Metal-Antibrowning Agents Effects on Polyphenol Oxidase Activity from Red Poppy Leaf

Authors: G. Arabaci

Abstract:

Heavy metals are one of the major groups of contaminants in the environment, and many of them are toxic even at very low concentrations in plants and animals. However, some metals play important roles in the biological function of many enzymes in living organisms. Metals such as zinc, iron, and copper are important for the survival and activity of enzymes in plants; heavy metals, however, can inhibit the enzymes responsible for the defence systems of plants. Polyphenol oxidase (PPO) is a copper-containing metalloenzyme responsible for the enzymatic browning reaction of plants. Enzymatic browning is a major problem for the handling of vegetables and fruits in the food industry. It can be increased and affected by many different factors, such as metals present in nature and in the ground. In the present work, PPO was isolated and characterized from green leaves of the red poppy plant (Papaver rhoeas). Then, the effects of metals and of some known antibrowning agents which can form complexes with metals were investigated on red poppy PPO activity. The results showed that glutathione had the most potent inhibitory effect on PPO activity. Cu(II) and Fe(II) increased the enzyme activity, whereas Sn(II) had the maximum inhibitory effect, and Zn(II) and Pb(II) had no significant effect on the enzyme activity. In order to reduce the effect of heavy metals, the effects of metal-antibrowning agent complexes on PPO activity were determined. EDTA-metal complexes had no significant effect on the enzyme. L-ascorbic acid-metal complexes decreased the activity, although the L-ascorbic acid-Cu(II) complex had no effect. Glutathione-metal complexes had the strongest inhibitory effect on red poppy leaf PPO activity.

Keywords: Inhibition, metal, red poppy, Polyphenol oxidase (PPO).

Downloads: 3418
564 The Relationship between Procurement Strategies and Sustainability Outcomes: A Systematic Literature Review

Authors: Cathy T. Mpanga Kowet, Aghaegbuna Obinna U. Ozumba

Abstract:

This study examined and identified the inconsistencies, relationships, gaps and recurring themes in the literature regarding the relationship between procurement strategies employed in construction projects for sustainable buildings and the realization of sustainability goals. A systematic literature review of studies on the relationship between various procurement strategies and the attainment of sustainability outcomes was conducted. Using specific terms, papers published between 2002 and 2018 were identified and screened according to inclusion and exclusion criteria. Current findings reveal that, although the attainment of sustainability goals is achievable with both traditional and contemporary procurement strategies, only projects delivered using modern procurement strategies are capable of meeting and exceeding targeted sustainability objectives. However, the traditional procurement strategy remains the preferred method for most green building construction projects. The results suggest implications for decision makers in considering the impact of selected procurement strategies on targeted sustainability goals in the early stages of sustainable building construction projects. The study shows that there is a gap between the reported appropriate procurement strategies and what is currently being practiced. Theoretically, the study expands the literature on the adoption and diffusion of contemporary procurement strategies by consolidating existing studies to highlight the current gaps. While the study is at the literature review stage, its deductions will serve as a basis for field work involving empirical data.

Keywords: Green building, green construction, procurement method, procurement strategy, sustainability objectives, sustainability outcomes.

Downloads: 903
563 Bus Transit Demand Modeling and Fare Structure Analysis of Kabul City

Authors: Ramin Mirzada, Takuya Maruyama

Abstract:

Kabul is the heart of political, commercial, cultural, educational and social life in Afghanistan and the fifth fastest growing city in the world. Low incomes incline most Kabul residents to use public transport, especially buses, although there is no proper bus system and, owing to the wars, no proper fare structure exists in Kabul city. From 1992 to 2001, during the civil wars, Kabul suffered damage and destruction of its transportation facilities, including pavements, sidewalks, traffic circles, drainage systems, traffic signs and signals, trolleybuses and almost all of the public transport system (e.g. the Millie bus). This research is mainly focused on Kabul city's transportation system. The data used were gathered by the Japan International Cooperation Agency (JICA) in 2008 and are used here to estimate demand and the fare structure; additionally, a survey was conducted in 2016 to find the satisfaction level of Kabul residents with the fare structure. The aim of this research is to observe the demand for large buses, compare it to the actual supply from the government, analyze the current fare structure and compare it with the proposed (distance-based) fare structure which has already been analyzed. The outcome of this research shows that the demand of Kabul city residents for public transport (large buses) exceeds the current supply, so the current public transportation (large buses) is not sufficient to serve the city; it is worth mentioning that, in order to overcome this problem, there is no need to build new roads or exclusive busways. This research proposes that the government change the fare from a fixed fare to a distance-based fare, invest in public transportation and increase the number of large buses so that the current demand for public transport is met.

Keywords: Transportation, planning, public transport, large buses, fixed fare, distance based fare, Kabul, Afghanistan.

Downloads: 1620
562 Petri Nets Manipulation to Reduce Roaming Duration: Criterion to Improve Handoff Management

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

IETF RFC 2002 originally introduced the wireless Mobile-IP protocol to support portable IP addresses for mobile devices that often change their network access points to the Internet. The inefficiency of this protocol, mainly within handoff management, produces large end-to-end packet delays during the registration process and further degrades the system efficiency due to packet losses between subnets. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, to reduce the roaming duration, is the main issue considered in this paper. State-transition Petri nets of the modeling scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco 1760 router running IOS 12.3(15)T. Finally, stand-alone performance simulation results from Simulink (MATLAB), within each subnet and also between subnets, are presented, reporting better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower values of end-to-end packet delay for Mobile-IP using the CIA procedure. Furthermore, the packet flow between subnets was reported, showing reduced packet losses between subnets.

Keywords: Cisco configuration, handoff, packet delay, Petri-Nets, registration process, Simulink.

Downloads: 1277
561 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump

Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar

Abstract:

Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rate depends largely on the circulating flow, bearing in mind that entry flows must give way to circulating flows. Where an existing roundabout has a road hump installed at the entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed-reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). Road humps in their various forms achieved the biggest mean speed reduction (based on a mean speed before traffic calming of 30 mph) of up to 10 mph (16 km/h), according to the UK Department for Transport. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds which reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies of road hump impact have focused mainly on speed reduction, traffic volume, noise and vibrations, discomfort and delay from the use of road humps. This paper is aimed at the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flows perform better in circumstances where there is no road hump at the entrance.

Keywords: Road hump, Roundabout, Speed Reduction

Downloads: 2965
560 Time-Domain Stator Current Condition Monitoring: Analyzing Point Failures Detection by Kolmogorov-Smirnov (K-S) Test

Authors: Najmeh Bolbolamiri, Maryam Setayesh Sanai, Ahmad Mirabadi

Abstract:

This paper deals with condition monitoring of electric switch machines for railway points. A point machine, as a complex electro-mechanical device, switches the track between two alternative routes. There has been increasing interest in railway safety and in the optimal management of railway equipment maintenance, e.g. of point machines, in order to enhance railway service quality and reduce system failure. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect certain point failures (external to the machine: slide chairs, fixings, stretchers, etc.) while the point machine itself (inside the machine) is in proper condition. Time-domain stator current signatures of normal (healthy) and faulty points are acquired by three Hall-effect sensors and are analyzed by the K-S test. The test is simulated by creating three types of such failures, namely placing a hard stone and a soft stone between the stock rail and the switch blades as obstacles, and also slide-chair friction. The test has been applied to these three faults, and the results show that the K-S test can effectively be developed for detecting other point failures whose current signatures deviate parametrically from the healthy current signature. The K-S test, as an analysis technique, assumes that any defect has a specific probability distribution. Empirical cumulative distribution functions (ECDFs) are used to differentiate these probability distributions. The test works on the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (the target signal) from an unknown switch state to a number of template signatures (the reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
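A minimal sketch of the comparison described above, using SciPy's two-sample Kolmogorov-Smirnov test on synthetic stand-ins for healthy and faulty stator-current signatures; the real signatures come from the Hall-effect sensors, and the amplitudes and the 0.05 significance level used here are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Illustrative stator-current "signatures": amplitude samples from a healthy
# template and from two target recordings (one healthy, one with an obstacle).
healthy_template = 5.0 + 0.3 * rng.standard_normal(2000)
target_healthy = 5.0 + 0.3 * rng.standard_normal(2000)
target_obstacle = 5.6 + 0.45 * rng.standard_normal(2000)   # shifted, noisier

for name, target in [("healthy", target_healthy), ("obstacle", target_obstacle)]:
    stat, p = ks_2samp(target, healthy_template)
    verdict = ("ECDFs similar (no fault indicated)" if p > 0.05
               else "ECDFs differ (possible point failure)")
    print(f"{name}: D = {stat:.3f}, p = {p:.3g} -> {verdict}")
```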

Keywords: stator currents monitoring, railway points, point failures, fault detection and diagnosis, Kolmogorov-Smirnov test, time-domain analysis.

Downloads: 1780
559 Comparison of Different Gas Turbine Inlet Air Cooling Methods

Authors: Ana Paula P. dos Santos, Claudia R. Andrade, Edson L. Zaparoli

Abstract:

Gas turbine inlet air cooling is a useful method for increasing output in regions where significant power demand and the highest electricity prices occur during the warm months. Inlet air cooling increases the power output by taking advantage of the gas turbine's higher mass flow rate when the compressor inlet temperature decreases. Different methods are available for reducing the gas turbine inlet temperature. There are two basic systems currently available for inlet cooling. The first and most cost-effective system is evaporative cooling. Evaporative coolers make use of the evaporation of water to reduce the gas turbine's inlet air temperature. The second system employs various ways to chill the inlet air. In this method, the cooling medium flows through a heat exchanger located in the inlet duct to remove heat from the inlet air. However, evaporative cooling is limited by the wet-bulb temperature, while chilling can cool the inlet air to temperatures lower than the wet-bulb temperature. In the present work, a thermodynamic model of a gas turbine is built to calculate heat rate, power output and thermal efficiency at different inlet air temperature conditions. Computational results are compared with ISO conditions, herein called the "base case". The two cooling methods are then implemented and solved for different inlet conditions (inlet temperature and relative humidity). Evaporative cooler and absorption chiller results show that when the ambient temperature is extremely high with low relative humidity (requiring a large temperature reduction), the chiller is the more suitable cooling solution. The net increment in the power output as a function of the temperature decrease for each cooling method is also obtained.
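A back-of-envelope sketch of why inlet cooling raises output: at fixed pressure the inlet air density, and hence the mass flow and (roughly) the power, scale inversely with the absolute inlet temperature. The scaling and the example temperatures below are simplifying assumptions, not the thermodynamic model of the paper.

```python
# Rough sketch: relative power gain from inlet-air cooling, assuming power
# scales with inlet air density (ideal gas at fixed pressure).

P_ISO_T = 288.15          # ISO inlet temperature, K (15 C)

def relative_power(inlet_T_K):
    """Power relative to ISO conditions, assuming power ~ mass flow ~ density ~ 1/T."""
    return P_ISO_T / inlet_T_K

ambient_T = 313.15        # hot day, 40 C
wet_bulb_T = 298.15       # 25 C: approximate lower limit for evaporative cooling
chiller_T = 283.15        # 10 C: achievable with an absorption chiller

for label, T in [("ambient (no cooling)", ambient_T),
                 ("evaporative cooler (wet-bulb limit)", wet_bulb_T),
                 ("absorption chiller", chiller_T)]:
    print(f"{label:38s} inlet {T - 273.15:5.1f} C -> {100 * relative_power(T):5.1f}% of ISO power")
```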

Keywords: Absorption chiller, evaporative cooling, gas turbine, turbine inlet cooling.

Downloads: 7506
558 Effect of Fines on Liquefaction Susceptibility of Sandy Soil

Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz

Abstract:

Investigation of the liquefaction susceptibility of materials that are used in embankments, slopes, dams, and foundations is essential. Many catastrophic geo-hazards such as flow slides, settlement of foundations, and damage to earth structures are associated with static liquefaction that may occur during abrupt shearing of these materials. Many artificial backfill materials are mixtures of sand with fines and other constituents. In order to provide some clarification and evaluation of the role of fines in the static liquefaction behaviour of sandy soils, the effect of fines on the liquefaction susceptibility of sand was experimentally examined in the present work over a range of fines contents, relative densities, and initial confining pressures. The results of an experimental study on various sand-fines mixtures are presented. Undrained static triaxial compression tests were conducted on saturated Perth sand containing 5% bentonite at three different relative densities (10, 50, and 90%), and on saturated Perth sand containing both 5% bentonite and slag (2%, 4%, and 6%) at a single relative density of 10%. The undrained static triaxial tests were performed at three different initial confining pressures (100, 150, and 200 kPa). The brittleness index was used to quantify the liquefaction potential of the sand-bentonite-slag mixtures. The results demonstrated that the liquefaction susceptibility of the sand-5% bentonite mixture was greater than that of the clean sandy soil. However, the liquefaction potential decreased when both fines (bentonite and slag) were used. The liquefaction susceptibility of all mixtures decreased with increasing relative density and initial confining pressure.
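The brittleness index is commonly defined (after Bishop) from the peak and minimum post-peak undrained resistances; the sketch below applies that definition, assumed here to match the index used in the study, to two illustrative deviator-stress records.

```python
import numpy as np

def brittleness_index(deviator_stress):
    """Brittleness index IB = (q_peak - q_min_post_peak) / q_peak.

    q_peak is the peak undrained shear resistance and q_min_post_peak the
    minimum resistance reached after the peak (Bishop-type definition,
    assumed here to match the index used in the abstract).
    """
    q = np.asarray(deviator_stress, dtype=float)
    i_peak = int(np.argmax(q))
    q_peak = q[i_peak]
    q_min = q[i_peak:].min()
    return (q_peak - q_min) / q_peak

# Illustrative undrained triaxial curves (kPa): strain-softening vs stable.
softening = [0, 30, 55, 70, 62, 40, 25, 18, 15]     # liquefiable, strain-softening response
stable    = [0, 25, 45, 60, 68, 72, 74, 75, 75]     # stable, non-softening response

print("IB (softening):", round(brittleness_index(softening), 2))  # high -> high susceptibility
print("IB (stable):   ", round(brittleness_index(stable), 2))     # 0 -> no susceptibility
```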

Keywords: Bentonite, brittleness index, liquefaction, slag.

Downloads: 1161
557 Case Study Approach Using Scenario Analysis to Analyze Unabsorbed Head Office Overheads

Authors: K. C. Iyer, T. Gupta, Y. M. Bindal

Abstract:

Head office overhead (HOOH) is an indirect cost and is recovered through individual project billings by the contractor. Delay in a project impacts the absorption of the HOOH cost allocated to that particular project and thus diminishes the expected profit of the contractor. This unabsorbed HOOH cost is later claimed by contractors as damages. The subjective nature of the available formulae to compute unabsorbed HOOH is a difficulty that contractors and owners face, and they thus dispute it. The paper attempts to bring together the rationale of the various HOOH formulae by gathering a contractor's HOOH cost data on all of its projects, using a case study approach and comparing variations in HOOH values using scenario analysis. The case study approach uses project data collected from four construction projects of a contractor in India to calculate unabsorbed HOOH costs from the various available formulae. Scenario analysis provides further variations in HOOH values after considering two independent situations, namely scope changes and new projects during the delay period. Interestingly, one of the findings in this study reveals that, in spite of HOOH getting absorbed by additional works available during the period of delay, a few formulae depict an increase in the value of unabsorbed HOOH, neglecting any absorption by the increase in scope. This indicates that these formulae are inappropriate for use in case of a change to the scope of work. Results of this study can help both parties decide on an appropriate formula more objectively, considering the events causing the delay on a project and the contractor's position with respect to obtaining new projects.
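Two of the formulae usually cited in this context are Hudson's and Eichleay's; the sketch below evaluates both in their commonly published forms on invented project figures, simply to show how different formulae can return different unabsorbed-HOOH values for the same delay. The exact variants and data used in the study are not reproduced.

```python
def hudson_formula(ho_profit_percent, contract_sum, contract_period_days, delay_days):
    """Hudson: (HO/profit % / 100) * (contract sum / contract period) * delay period."""
    return (ho_profit_percent / 100.0) * (contract_sum / contract_period_days) * delay_days

def eichleay_formula(contract_billings, total_billings, total_ho_overhead,
                     actual_performance_days, delay_days):
    """Eichleay: allocate company overhead to the contract, then pro-rate per day of delay."""
    allocable = (contract_billings / total_billings) * total_ho_overhead
    daily_rate = allocable / actual_performance_days
    return daily_rate * delay_days

# Illustrative project figures (currency units are arbitrary).
print("Hudson  :", round(hudson_formula(8.0, 50_000_000, 730, 90)))
print("Eichleay:", round(eichleay_formula(50_000_000, 400_000_000, 30_000_000, 820, 90)))
```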

Keywords: Absorbed and unabsorbed overheads, head office overheads, scenario analysis, scope variation

Downloads: 768
556 Determining G-γ Degradation Curve in Cohesive Soils by Dilatometer and in situ Seismic Tests

Authors: Ivandic Kreso, Spiranec Miljenko, Kavur Boris, Strelec Stjepan

Abstract:

This article discusses the possibility of using dilatometer tests (DMT) together with in situ seismic tests (MASW) in order to obtain the shape of the G-γ degradation curve in cohesive soils (clay, silty clay, silt, clayey silt and sandy silt). The MASW test provides the small-strain soil stiffness (G0 from vs) at very small strains, and the DMT provides the stiffness of the soil at 'working strains' (MDMT). At different test locations, the dilatometer shear stiffness of the soil has been determined by the theory of elasticity. The dilatometer shear stiffness has been compared with the theoretical G-γ degradation curve in order to determine the typical range of shear deformation for different types of cohesive soil. The analysis also includes factors that influence the shape of the degradation curve (G-γ) and the dilatometer modulus (MDMT), such as the overconsolidation ratio (OCR), the plasticity index (IP) and the vertical effective stress in the soil (σ'v0). A parametric study in this article defines the range of shear strain γDMT and the GDMT/G0 relation depending on the classification of the cohesive soil (clay, silty clay, clayey silt, silt and sandy silt), as a function of density (loose, medium dense and dense) and of the stiffness of the soil (soft, medium hard and hard). The article illustrates the potential of using MASW and DMT to obtain the G-γ degradation curve in cohesive soils.
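A hedged sketch of the two anchor points discussed above: G0 = ρ·vs² from the MASW shear-wave velocity, GDMT derived from the DMT constrained modulus via elasticity, and the DMT working strain located on an assumed hyperbolic degradation law. The hyperbolic form and all soil values are illustrative assumptions, not the article's parametric results.

```python
import numpy as np

def small_strain_modulus(vs_m_s, density_kg_m3):
    """G0 = rho * vs^2 (theory of elasticity), returned in MPa."""
    return density_kg_m3 * vs_m_s ** 2 / 1e6

def hyperbolic_degradation(gamma, gamma_ref):
    """Assumed hyperbolic law G/G0 = 1 / (1 + gamma/gamma_ref)."""
    return 1.0 / (1.0 + gamma / gamma_ref)

# Illustrative values for a stiff silty clay (assumptions, not measured data).
vs, rho = 220.0, 1900.0                  # shear-wave velocity m/s, density kg/m3
G0 = small_strain_modulus(vs, rho)       # MPa
M_DMT, nu = 45.0, 0.3                    # constrained modulus from DMT (MPa), Poisson ratio
G_DMT = M_DMT * (1 - 2 * nu) / (2 * (1 - nu))   # shear modulus via elasticity

# Shear strain at which the assumed degradation curve matches G_DMT/G0.
gamma_ref = 1e-3
ratio = G_DMT / G0
gamma_DMT = gamma_ref * (1.0 / ratio - 1.0)

print(f"G0 = {G0:.0f} MPa, G_DMT = {G_DMT:.1f} MPa, G_DMT/G0 = {ratio:.2f}")
print(f"equivalent DMT shear strain ~ {gamma_DMT:.2%}")
```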

Keywords: Dilatometer testing, MASW testing, shear wave, soil stiffness, stiffness reduction, shear strain.

Downloads: 835
555 Dynamic Threshold Adjustment Approach For Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks for recognition applications is generally constrained by the inflexibility of their inherent parameters after the training phase; no adaptation is accommodated for input variations that have any influence on the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism that adjusts the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs have shown quite satisfactory performance. The supportive-layer approach achieved over 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by accommodating capabilities involving all the innate structures in conjunction with intelligent abilities and the needs of further advanced learning phases.

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.

Downloads: 1590
554 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach

Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar

Abstract:

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; timely diagnosis, detection and prediction reduce the reliance on treatment options carrying the risk of invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. The lung cavities are extracted and the background portion other than the two lung cavities is completely removed, with the right and left lungs segmented separately. Region-property measurements (area, perimeter, diameter, centroid and eccentricity) are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifiers. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique yields encouraging results for real-time information and online detection, motivating future research.
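A compact sketch of the GLCM-texture-plus-KNN step of the pipeline described above, using scikit-image and scikit-learn; the synthetic patches and labels are placeholders for the segmented CT ROIs, and the DWT features and the ANN staging step are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch_uint8):
    """Contrast / homogeneity / energy / correlation from a GLCM (distance 1, 0 deg)."""
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Synthetic stand-ins for segmented lung ROIs: smooth "normal" patches versus
# high-contrast "abnormal" patches (real CT ROIs would be used in practice).
rng = np.random.default_rng(0)
normals = [np.clip(120 + 5 * rng.standard_normal((32, 32)), 0, 255).astype(np.uint8)
           for _ in range(20)]
abnormals = [np.clip(120 + 60 * rng.standard_normal((32, 32)), 0, 255).astype(np.uint8)
             for _ in range(20)]

X = np.array([glcm_features(p) for p in normals + abnormals])
y = np.array([0] * 20 + [1] * 20)          # 0 = normal, 1 = abnormal

knn = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])   # train on half
print("held-out accuracy:", knn.score(X[1::2], y[1::2]))
```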

Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.

Downloads: 907
553 STLF Based on Optimized Neural Network Using PSO

Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi

Abstract:

The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for optimal design and training of an artificial neural network; one often has to resort to the trial-and-error approach. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e. optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connecting weights for the one-day-ahead electric load forecasting problem. PSO is a novel random optimization method based on swarm intelligence, which has a powerful ability for global optimization. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF of the local utility. Data are clustered due to the differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO can quicken the learning speed of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to calculate, but also practical and effective. It also provides a greater degree of accuracy in many cases and gives consistently lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
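A minimal sketch of the central idea, PSO searching the weights of a small feed-forward network for one-day-ahead forecasting; the network size, PSO parameters and synthetic load data are assumptions, and the paper's structure design, clustering and special-day handling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "previous 24 hourly loads -> next-day peak load" toy data set.
X = rng.uniform(0.3, 1.0, size=(200, 24))
y = 0.6 * X.mean(axis=1) + 0.3 * X[:, -1] + 0.02 * rng.standard_normal(200)

n_in, n_hidden = 24, 6
n_weights = n_in * n_hidden + n_hidden + n_hidden + 1   # W1, b1, W2, b2

def forecast(weights, X):
    W1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden: n_in * n_hidden + n_hidden]
    W2 = weights[-n_hidden - 1: -1]
    b2 = weights[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def mse(weights):
    return float(np.mean((forecast(weights, X) - y) ** 2))

# Plain global-best PSO over the flattened network weights.
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.standard_normal((n_particles, n_weights)) * 0.1
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_weights))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE found by PSO:", round(pbest_val.min(), 5))
```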

Keywords: Large Neural Network, Short-Term Load Forecasting, Particle Swarm Optimization.

Downloads: 2187
552 Early Registration: Criterion to Improve Communication Inter-Agents in Mobile-IP Protocol

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, the packet loss that comes from switching subnets arises because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, to reduce the roaming duration, is the main issue considered in this paper. State-transition Petri nets of the modeling scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco 1760 router running IOS 12.3(15)T with TFTP server software. Finally, stand-alone performance simulations from Simulink (MATLAB), within each subnet and also between subnets, are presented, reporting better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower values of end-to-end packet delay for Mobile-IP using the CIA procedure based on early registration. Furthermore, the packet flow between subnets was reported, showing reduced losses between subnets.

Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri-Nets, registration process, Simulink.

Downloads: 1349
551 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha. S.L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform Power Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence behaviour of the LMS filter for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate compared to the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good ortho-normal, separable, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued transform and thus can be effectively used in real-time operation. The advantages of DCT-LMS as compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and co-workers. The computer simulation results show superior convergence characteristics of the proposed algorithm, improving the SNR by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
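A hedged sketch of a transform-domain (DCT) power-normalized LMS update, shown here as identification of an unknown FIR system rather than hearing-aid speech enhancement; the filter length, step size and signals are illustrative, and SciPy's DCT is used in place of the fast factored implementation.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)

# DCT-domain power-normalized LMS, sketched as system identification:
# the adaptive filter tries to match an unknown FIR response from noisy data.
N = 16                                    # filter length / DCT block size
mu, eps, beta = 0.1, 1e-6, 0.95           # step size, regularizer, power smoothing
unknown = rng.standard_normal(N) * 0.5    # "channel" to identify

x = rng.standard_normal(5000)             # input signal (white here for simplicity)
d = np.convolve(x, unknown, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))

w = np.zeros(N)                            # weights in the DCT domain
power = np.ones(N)                         # running per-bin power estimates
errors = []

for n in range(N, len(x)):
    tap = x[n - N + 1: n + 1][::-1]        # current input vector
    u = dct(tap, norm="ortho")             # decorrelating DCT transform
    power = beta * power + (1 - beta) * u ** 2
    e = d[n] - w @ u
    w += mu * e * u / (power + eps)        # power-normalized LMS update
    errors.append(e ** 2)

print("mean squared error, first 500 vs last 500 samples:",
      round(np.mean(errors[:500]), 4), round(np.mean(errors[-500:]), 4))
```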

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

Downloads: 2137
550 Managing Iterations in Product Design and Development

Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De

Abstract:

The inherent iterative nature of product design and development (PD) poses a significant challenge to reducing PD time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand the iteration probability is an open research area where significant contributions can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.

Keywords: Decision Points, Iteration, Product Design, Rework.

Downloads: 2154
549 An Evaluation Method for Two-Dimensional Position Errors and Assembly Errors of a Rotational Table on a 4 Axis Machine Tool

Authors: Jooho Hwang, Chang-Kyu Song, Chun-Hong Park

Abstract:

This paper describes a method to measure and compensate a 4-axis ultra-precision machine tool that generates micro patterns on large surfaces. The grooving machine is usually used for making micro molds for many electrical parts such as light guide plates for LCDs and fuel cells. The ultra-precision machine tool has three linear axes and one rotational table. Shaping is usually used to generate micro patterns. In the case of machining a pyramid pattern with 50 μm pitch and 25 μm height using a 90° wedge-angle bite, one linear axis is used for the long-stroke motion needed for high cutting speed, and the other linear axes are used for feeding. The triangular patterns can be generated with many repetitions of the long stroke of one axis. A 90° rotation of the workpiece is then needed to make pyramid patterns by superposition of the two machined triangular patterns. For the two-dimensional positioning error, the out-of-plane straightness of the two axes and the squareness between the axes are important. Positioning errors, straightness and squareness were measured by a laser interferometer system. These were compensated and confirmed according to ISO 230-6. One of the difficult error motions to measure is the squareness or parallelism between the rotational table and the linear axes. It was investigated by simultaneously moving the rotary table and the XY axes. This compensation method is introduced in this paper.

Keywords: Ultra-precision machine tool, multi-axis errors, squareness, positioning errors.

Downloads: 1527
548 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps Public Water Bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected as well. Strictly speaking, a water concession can be considered a continuous drawing from the source and causes a mean annual streamflow reduction. Therefore, deciding whether a water concession is appropriate or inappropriate seems to be easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Furthermore, comparing the generic water demand to the mean daily discharge is far from completely satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year. Consequently, such a comparison appears to be of little significance for preserving the quality and the quantity of the river. In order to overcome this limit, this study aims to complete the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, and aims to show the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in north-eastern Italy for which daily data are available. The results identify groups of catchments as hydrologically homogeneous, having the lower part of the FDCs (the streamflow interval between Q(300) and Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for those catchments which show a similar hydrological response, and can be used for a focused regionalization approach on low-flow data. A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In such a way, by introducing assumptions on recession curves, the chronological sequence of low-flow data can also be attributed to FDCs, which are known to lack this information by nature.
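A short sketch of the two objects being linked: a flow duration curve built by ranking daily flows against duration, and an exponential recession Q(t) = Q0·exp(-t/k) passed through its Q(300)-Q(335) portion. The synthetic flow series and the exponential recession form are assumptions, not the study's regional data or fitted curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-year daily streamflow record (m3/s): seasonal signal + noise,
# a stand-in for the observed series of one catchment.
days = np.arange(365)
q = 8 + 6 * np.sin(2 * np.pi * (days - 60) / 365) + rng.gamma(2.0, 1.0, 365)
q = np.clip(q, 0.3, None)

# Flow duration curve: flows sorted in decreasing order against duration (days).
q_sorted = np.sort(q)[::-1]

def q_at(d):
    """FDC value at duration d days (flow equalled or exceeded d days per year)."""
    return q_sorted[d - 1]

print(f"Q(300) = {q_at(300):.2f} m3/s, Q(335) = {q_at(335):.2f} m3/s")

# Exponential recession Q(t) = Q0 * exp(-t/k) linking the two FDC points:
# solve for the recession constant k from the 35-day separation (an assumption
# about the chronological sequence of the low flows).
k = (335 - 300) / np.log(q_at(300) / q_at(335))
print(f"implied recession constant k ~ {k:.0f} days")
```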

Keywords: Chronological sequence of discharges, recession curves, streamflow duration curves, water concession.

Downloads: 531
547 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs and the steep computational cost of evaluating these integrals poses a major numerical challenge in efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using a recursive feature elimination approach, which performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
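A hedged sketch of the described pipeline with scikit-learn: random-forest recursive feature elimination selects features and predicts the coefficient sign, a small ensemble of two-hidden-layer MLPs predicts the magnitude, and a median rule fuses the ensemble. The synthetic features and all sizes are placeholders for the moment-based descriptors, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for moment-based descriptors of electron-repulsion
# integrals: 40 features, of which only a handful actually matter.
X = rng.standard_normal((600, 40))
coef_true = X[:, 0] - 0.5 * X[:, 3] + 0.2 * X[:, 7]     # target expansion coefficient

# 1) Random-forest RFE to pick promising features for the *sign* of the coefficient.
sign_selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                    n_features_to_select=5)
sign_selector.fit(X, (coef_true > 0).astype(int))
selected = sign_selector.support_

# 2) Small ensemble of two-hidden-layer MLPs for the coefficient *magnitude*,
#    fused with a median rule.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=s)
    .fit(X[:500, selected], np.abs(coef_true[:500]))
    for s in range(3)
]
pred_mag = np.median([m.predict(X[500:, selected]) for m in ensemble], axis=0)
pred_sign = np.where(sign_selector.predict(X[500:]) == 1, 1.0, -1.0)

rmse = np.sqrt(np.mean((pred_sign * pred_mag - coef_true[500:]) ** 2))
print("ensemble RMSE on held-out coefficients:", round(rmse, 3))
```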

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.

Downloads: 69
546 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical data obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest in the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to use computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to over-simplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases, and it will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
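Discrete POD of a snapshot matrix reduces to a singular value decomposition; the sketch below extracts POD modes and modal energies from synthetic snapshots containing two coupled space-time patterns, as a stand-in for the finite-element databases discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: displacement of 200 points along a beam at 400
# time instants, built from two coupled space-time patterns plus noise
# (a stand-in for finite-element output, not actual simulation data).
x = np.linspace(0, 1, 200)
t = np.linspace(0, 10, 400)
bending = np.sin(np.pi * x)[:, None] * np.sin(2 * np.pi * 1.3 * t)[None, :]
twist = np.sin(2 * np.pi * x)[:, None] * np.sin(2 * np.pi * 1.3 * t + 0.8)[None, :]
snapshots = 1.0 * bending + 0.4 * twist + 0.02 * rng.standard_normal((200, 400))

# Discrete POD via the singular value decomposition of the mean-subtracted snapshots.
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                         full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)

print("energy captured by the first three POD modes:", np.round(energy[:3], 3))
# When two displacement fields share a temporal coefficient, the dominant
# spatial mode mixes them, which is how coupling shows up in the POD basis.
print("dominant spatial mode, first samples:", np.round(U[:5, 0], 3))
```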

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

Downloads: 978
545 Effect of Initial Conditions on Aerodynamic and Acoustic Characteristics of High Subsonic Jets from Sharp Edged Circular Orifice

Authors: Murugan, K. N. Sharma, S. D.

Abstract:

The present work involves measurements to examine the effects of initial conditions on aerodynamic and acoustic characteristics of a Jet at M=0.8 by changing the orientation of sharp edged orifice plate. A thick plate with chamfered orifice presented divergent and convergent openings when it was flipped over. The centerline velocity was found to decay more rapidly for divergent orifice and that was consistent with the enhanced mass entrainment suggesting quicker spread of the jet compared with that from the convergent orifice. The mixing layer region elucidated this effect of initial conditions at an early stage – the growth was found to be comparatively more pronounced for the divergent orifice resulting in reduced potential core size. The acoustic measurements, carried out in the near field noise region outside the jet within potential core length, showed the jet from the divergent orifice to be less noisy. The frequency spectra of the noise signal exhibited that in the initial region of comparatively thin mixing layer for the convergent orifice, the peak registered a higher SPL and a higher frequency as well. The noise spectra and the mixing layer development suggested a direct correlation between the coherent structures developing in the initial region of the jet and the noise captured in the surrounding near field.

Keywords: Convergent orifice jet, Divergent orifice jet, Mass entrainment, mixing layer, near field noise, frequency spectrum, SPL, Strouhal number, wave number, reactive pressure field, propagating pressure field.

Downloads: 1526
544 Meta Model Based EA for Complex Optimization

Authors: Maumita Bhattacharya

Abstract:

Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems often require finding the optimal solution to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. Use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, or approximations of the actual fitness functions, to be evaluated. These meta-models are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which involve the use of meta-models for fitness function evaluation. The first framework, namely the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by Support Vector Machine regression) to partially replace the actual function evaluation by approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not take into account uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks using several benchmark functions demonstrate their efficiency.
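A generic sketch of the surrogate-assisted idea (not the DAFHEA control strategy itself): an SVM-regression meta-model trained on previously evaluated individuals pre-screens offspring so that the expensive fitness function is spent only on the most promising ones. The test function and all parameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def expensive_fitness(x):                 # stand-in for a costly simulation
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)   # Rastrigin function

dim, pop_size, generations = 5, 20, 30
pop = rng.uniform(-5.12, 5.12, (pop_size, dim))
fit = np.array([expensive_fitness(p) for p in pop])
archive_X, archive_y = pop.copy(), fit.copy()
true_evals = pop_size

for g in range(generations):
    surrogate = SVR(C=10.0, gamma="scale").fit(archive_X, archive_y)

    # Generate more offspring than needed, pre-screen them on the surrogate,
    # and spend true evaluations only on the most promising half.
    parents = pop[rng.integers(0, pop_size, (2 * pop_size, 2))].mean(axis=1)
    offspring = np.clip(parents + 0.3 * rng.standard_normal(parents.shape), -5.12, 5.12)
    predicted = surrogate.predict(offspring)
    chosen = offspring[np.argsort(predicted)[:pop_size]]

    chosen_fit = np.array([expensive_fitness(c) for c in chosen])
    true_evals += pop_size
    archive_X = np.vstack([archive_X, chosen])
    archive_y = np.concatenate([archive_y, chosen_fit])

    # Elitist replacement on true fitness.
    merged = np.vstack([pop, chosen])
    merged_fit = np.concatenate([fit, chosen_fit])
    best = np.argsort(merged_fit)[:pop_size]
    pop, fit = merged[best], merged_fit[best]

print(f"best fitness {fit.min():.3f} after {true_evals} true evaluations")
```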

Keywords: Meta model, evolutionary algorithm, stochastic technique, fitness function, optimization, support vector machine.

Downloads: 2018
543 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes

Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany

Abstract:

In this paper, the seismic fragility of a recently developed hybrid structural system, known as the strongback system (SBS), is assessed. In this system, an elastic vertical truss is formed to mitigate the occurrence of the soft-story mechanism and improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic during levels of excitation where soft-story mechanisms are likely to occur and to impose a nearly uniform story drift distribution. Due to the distinctive characteristics of near-field ground motions, it is necessary to study the effect of these records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For fragility assessment, nonlinear dynamic analyses are carried out in OpenSEES based on the procedure recommended in the HAZUS technical manual. Four damage states, including slight, moderate, extensive, and complete damage (collapse), are considered. To evaluate each damage state, the inter-story drift ratio and floor acceleration are used as engineering demand parameters. Further, to extend the evaluation of the collapse state of the system, a different collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values presented in performance-based design methods, it is found that using the elastic vertical truss improves the structural response effectively.
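Fragility curves of the kind used in such assessments are commonly fitted as lognormal CDFs of the exceedance probability versus an intensity measure; the sketch below fits a lognormal fragility by maximum likelihood to synthetic exceedance outcomes, purely as an illustration of the curve-fitting step. The IDA results and damage-state thresholds of the study are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic IDA-style outcomes: for each ground-motion/intensity pair, whether
# the "extensive damage" drift limit was exceeded (stand-in for analysis results).
im = np.repeat(np.linspace(0.1, 2.0, 20), 10)          # spectral acceleration, g
true_theta, true_beta = 0.9, 0.4                        # median and dispersion
exceeded = rng.random(im.size) < norm.cdf(np.log(im / true_theta) / true_beta)

def neg_log_likelihood(params):
    theta, beta = params
    p = norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(exceeded * np.log(p) + (~exceeded) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=[1.0, 0.5],
               bounds=[(0.05, 5.0), (0.05, 1.5)])
theta_hat, beta_hat = res.x
print(f"fitted fragility: median = {theta_hat:.2f} g, dispersion = {beta_hat:.2f}")
print("P(extensive damage | Sa = 1.0 g) =",
      round(float(norm.cdf(np.log(1.0 / theta_hat) / beta_hat)), 2))
```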

Keywords: Strongback System, Near-fault, Seismic fragility, Uncertainty, IDA, Probabilistic performance assessment.

Downloads: 516
542 Natural Gas Dehydration Process Simulation and Optimization: A Case Study of Khurmala Field in Iraqi Kurdistan Region

Authors: R. Abdulrahman, I. Sebastine

Abstract:

Natural gas is the most popular fossil fuel of the current era and the future as well. Natural gas exists in underground reservoirs, so it may contain many non-hydrocarbon components, for instance hydrogen sulfide, nitrogen and water vapor. These impurities are undesirable compounds and cause several technical problems, for example corrosion and environmental pollution. Therefore, these impurities should be reduced or removed from the natural gas stream. The Khurmala dome is located southwest of Erbil, in the Iraqi Kurdistan region. The Kurdistan regional government has paid great attention to this dome in order to provide fuel for the region. However, the Khurmala associated natural gas is currently flared at the field. Nowadays there is a plan to recover and trade this gas and to use it either as feedstock for a power station or to sell it on the global market. Laboratory analysis has shown that the Khurmala sour gas contains large quantities of H2S (about 5.3%) and CO2 (about 4.4%). The Khurmala gas sweetening process was addressed in a previous study using Aspen HYSYS. However, the Khurmala sweet gas still contains some water, about 23 ppm, in the sweet gas stream. This amount of water should be removed or reduced, since water content in natural gas causes several technical problems such as hydrates and corrosion. Therefore, this study aims to simulate the prospective Khurmala gas dehydration process by using the Aspen HYSYS V7.3 program. The simulation succeeded in reducing the water content to less than 0.1 ppm. In addition, the simulation work also achieved process optimization by using several desiccant types, for example TEG and DEG, and studied the relationship between the absorbent type and its circulation rate and the HC losses from the glycol regenerator tower.

Keywords: Aspen HYSYS, process simulation, gas dehydration, process optimization.

Downloads: 8920
541 Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing

Authors: Zaineb Ben Messaoud, Dorra Gargouri, Saida Zribi, Ahmed Ben Hamida

Abstract:

This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMM), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for modelling and utilization of a time sequence of peaks which satisfies continuity constraints on the parameters; the peaks themselves are modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on the formants; (2) an estimation stage where an initial estimate of the LP model of speech for each frame is obtained; and (3) a formant classification using probability models of formants and Viterbi decoders. The evaluation results for the estimation of the formant-tracking LP model, tested against a Gaussian white noise background, demonstrate that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from international French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.
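A minimal sketch of the LP estimation stage only: fit linear-prediction coefficients to one frame by the autocorrelation method and take formant candidates from the complex roots of the LP polynomial. The synthetic vowel-like frame and LP order are illustrative, and the spectral-subtraction and HMM/Viterbi tracking stages are not reproduced.

```python
import numpy as np

fs = 8000.0
rng = np.random.default_rng(0)

# Synthetic vowel-like frame: two resonances (~700 Hz and ~1200 Hz) excited by
# noise, a stand-in for a 30 ms speech frame.
t = np.arange(int(0.03 * fs)) / fs
frame = (np.sin(2 * np.pi * 700 * t) + 0.7 * np.sin(2 * np.pi * 1200 * t)
         + 0.05 * rng.standard_normal(t.size)) * np.hamming(t.size)

def lp_coefficients(x, order):
    """LP coefficients by the autocorrelation method (Toeplitz normal equations)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1: len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1: order + 1])
    return np.concatenate([[1.0], -a])            # A(z) = 1 - sum a_k z^-k

def formant_candidates(a, fs):
    """Formant candidates from the angles of the complex roots of A(z)."""
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]             # keep one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    return np.sort(freqs[(freqs > 90) & (freqs < fs / 2 - 90)])

a = lp_coefficients(frame, order=10)
print("formant candidates (Hz):", np.round(formant_candidates(a, fs)))
```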

Keywords: Formant estimation, HMM, multi-band spectral subtraction, variable-order LPC coding, white Gaussian noise.

Downloads: 1930