Search results for: Parameter reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2597

2027 CFD Modeling of Reduction in NOX Emission Using HiTAC Technique

Authors: Abbas Khoshhal, Masoud Rahimi, Sayed Reza Shabanian, Ammar Abdulaziz Alsairafi

Abstract:

In the present study, the NOx emission rate in a combustion chamber operating under conventional combustion and under the High Temperature Air Combustion (HiTAC) regime is examined using CFD modeling. The effects of peak temperature, combustion air temperature and oxygen concentration on the NOx emission rate were investigated. Results show that, at a fixed oxygen concentration, increasing the preheated air temperature increases the peak temperature and the NOx emission rate. In addition, it was observed that reducing the oxygen concentration at a fixed preheated air temperature decreases the peak temperature and the NOx emission rate. The results also show that increasing the preheated air temperature at various oxygen concentrations increases the NOx emission rate; however, the rate of increase under HiTAC conditions is considerably lower than under conventional combustion. The modeling results show that the NOx emission rate in HiTAC combustion is 133% less than that of conventional combustion.

Keywords: CFD Modeling, HiTAC, NOx, Combustion.
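
As a rough illustration of why peak temperature and oxygen concentration dominate the trends reported above, the short Python sketch below compares relative thermal-NO formation rates at two hypothetical operating points. The Arrhenius-type temperature dependence and the square-root oxygen dependence are generic textbook assumptions (with an illustrative activation temperature), not quantities taken from the paper's CFD model.

```python
import math

def relative_thermal_no_rate(T_peak, x_O2, T_act=38000.0):
    """Relative thermal-NO formation rate ~ exp(-T_act / T) * sqrt(x_O2).

    T_act is an assumed activation temperature for the rate-limiting
    N2 + O step (illustrative textbook-order value, not from the paper).
    Only ratios between operating points are meaningful here.
    """
    return math.exp(-T_act / T_peak) * math.sqrt(x_O2)

# Hypothetical operating points: conventional vs. a HiTAC-like diluted case.
conventional = relative_thermal_no_rate(T_peak=2100.0, x_O2=0.21)
hitac_like = relative_thermal_no_rate(T_peak=1900.0, x_O2=0.10)

reduction = 100.0 * (1.0 - hitac_like / conventional)
print(f"illustrative thermal-NO rate reduction: {reduction:.0f}%")
```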

2026 Composite Distributed Generation and Transmission Expansion Planning Considering Security

Authors: Amir Lotfi, Seyed Hamid Hosseini

Abstract:

During the recent past, owing to the increase in electrical energy demand and governmental resource constraints on creating additional generation, transmission and distribution capacity, privatization and restructuring of the electrical industry have been pursued. In most countries, the generation, transmission and distribution segments have therefore been unbundled in order to create competition. Considering these changes, together with environmental issues, energy demand growth, private investment in generation units and the difficulties of expanding transmission lines, distributed generation (DG) units have been introduced into power systems. Moreover, the reduced need for transmission and distribution capacity, increased reliability, improved power quality and lower power losses have encouraged the placement of DG in power systems. In addition, because of the low liquidity required, private investors tend to invest in DG. The main goal of this work is to offer an algorithm for planning and placing DG units so as to reduce the need for expansion of the transmission and distribution network.

Keywords: Planning, transmission, distributed generation, power security, power systems.

2025 The Linkage of Urban and Energy Planning for Sustainable Cities: The Case of Denmark and Germany

Authors: Jens-Phillip Petersen

Abstract:

The reduction of GHG emissions in buildings is a focus area of national energy policies in Europe, because buildings are responsible for a major share of final energy consumption. It is at the local scale that policies to increase the share of renewable energies and energy efficiency measures are implemented. Municipalities, as local authorities and the entities responsible for land-use planning, have a direct influence on urban patterns and energy use, which makes them key actors in the transition towards sustainable cities. Hence, synchronizing urban planning with energy planning offers great potential to increase society's energy efficiency, which is highly significant for reaching GHG-reduction targets. In this paper, the actual linkage of urban planning and energy planning in Denmark and Germany was assessed; substantive barriers preventing their integration and driving factors that lead to successful transitions towards holistic urban energy planning procedures were identified.

Keywords: Energy planning, urban planning, renewable energies, sustainable cities.

2024 Characterization of Printed Reflectarray Elements on Variable Substrate Thicknesses

Authors: M. Y. Ismail, Arslan Kiyani

Abstract:

Narrow bandwidth and high loss performance limit the use of reflectarray antennas in some applications. This article reports on the feasibility of employing strategic reflectarray resonant elements to characterize the reflectivity performance of reflectarrays in the X-band frequency range. Strategic reflectarray resonant elements incorporating variable substrate thicknesses ranging from 0.016λ to 0.052λ have been analyzed in terms of reflection loss and reflection phase performance. The effect of substrate thickness has been validated using the waveguide scattering parameter technique. It has been demonstrated that as the substrate thickness is increased from 0.508 mm to 1.57 mm, the measured reflection loss of the dipole element decreases from 5.66 dB to 3.70 dB, with the 10% bandwidth increasing from 39 MHz to 64 MHz. Similarly, the measured reflection loss of the triangular loop element decreases from 20.25 dB to 7.02 dB, with the 10% bandwidth increasing from 12 MHz to 23 MHz. The results also show a significant decrease in the slope of the reflection phase curve. A Figure of Merit (FoM) has also been defined for comparing the static phase range of the resonant elements under consideration. Moreover, a novel numerical model based on analytical equations has been established, incorporating the material properties of the dielectric substrate and the electrical properties of different reflectarray resonant elements, to obtain the progressive phase distribution for each individual reflectarray resonant element.

Keywords: Numerical model, Reflectarray resonant elements, Scattering parameter measurements, Variable substrate thickness.

2023 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and should be performed at a reasonable rate. This paper presents an algorithm for discarding irrelevant feature points and maintaining the rest for future use so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy for modeling; corner detection is required only when the maintained feature points are insufficiently accurate for future modeling. Then, optical flows are computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers. The model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, thus significantly improving the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the computational cost. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated in a simple video stabilizer based on our proposed algorithm.

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.
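
The core of the point-reduction step described above (a least-squares fit of a simplified affine motion model, followed by iterative removal of outliers flagged by studentized residuals) can be sketched in a few lines of NumPy. The 4-parameter similarity form of the model, the residual-based studentization and the threshold value are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def fit_motion_model(src, dst, t_crit=2.5, max_iter=20):
    """Least-squares fit of a simplified (4-parameter similarity) affine
    motion model with iterative outlier removal via studentized residuals.

    src, dst: (N, 2) matched feature points in two consecutive frames.
    Per point: [x', y'] = [a*x - b*y + tx, b*x + a*y + ty].
    The model form, the threshold t_crit and the simple residual
    studentization are illustrative choices, not the paper's exact method.
    """
    keep = np.ones(len(src), dtype=bool)
    params = None
    for _ in range(max_iter):
        if keep.sum() < 5:                     # too few points to keep rejecting
            break
        s, d = src[keep], dst[keep]
        A = np.zeros((2 * len(s), 4))          # design matrix for p = [a, b, tx, ty]
        A[0::2, 0], A[0::2, 1], A[0::2, 2] = s[:, 0], -s[:, 1], 1.0
        A[1::2, 0], A[1::2, 1], A[1::2, 3] = s[:, 1],  s[:, 0], 1.0
        b = d.reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        resid = b - A @ params
        r = np.hypot(resid[0::2], resid[1::2])      # per-point residual size
        t = r / (r.std(ddof=4) + 1e-12)             # crude studentization
        if not (t > t_crit).any():
            break                                   # no more outliers
        idx = np.flatnonzero(keep)
        keep[idx[t > t_crit]] = False               # drop flagged outliers
    return params, keep   # motion parameters and the maintained-point mask
```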

2022 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal

Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das

Abstract:

The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with a high titania content (23.23% TiO2 in the present case). Due to the high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In this work, the TMO was collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore was collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chhattisgarh, of M/s. National Mineral Development Corporation. The preliminary characterization of the TMO and the hematite ore (HMO) was carried out by WDXRF, XRD and FESEM analyses. Similarly, good quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find out how lean grade coal can be utilized along with TMO in smelting to produce pig iron. The lean grade coal was characterized using TG/DTA, proximate and ultimate analyses. The boiler grade coal was found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) were separately agglomerated with lean grade coal fines (below 75 μm) in the form of briquettes using binders such as bentonite and molasses. These green briquettes were dried first in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at 1323 K, 1373 K and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes were characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples were taken and blended in three different weight ratios of TMO:HMO, namely 1:4, 1:8 and 1:12. Chemical analysis of the three blended samples was carried out, and the degree of metallization of iron was found to be 89.38%, 92.12% and 93.12%, respectively. These three blended samples were briquetted using binders such as bentonite and lime. Thereafter, the blended briquettes were separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed was characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5; that is, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.

Keywords: Briquetting reduction, lean grade coal, smelting reduction, TMO.

2021 A Novel Model for Simultaneously Minimising Costs and Risks in Just-in-Time Systems Using Multi-Backup Suppliers: Part 1- Modelling

Authors: Faraj El Dabee, Romeo Marian, Yousef Amer

Abstract:

Just-In-Time (JIT) is a lean manufacturing tool that provides the benefits of efficiency and of minimizing unnecessary costs for many organisations. However, the risks arising alongside these benefits have been disregarded; such risks impact system processes and disrupt the whole supply chain. This paper proposes an inventory model that can simultaneously reduce costs and risks in JIT systems. The model is developed to ascertain an optimal ordering strategy for procuring raw materials by using regular multiple external suppliers and local backup suppliers, in order to reduce the total cost of the products and, at the same time, to reduce the risks arising from this cost reduction within production systems. Some results, to be illustrated in the second part of this paper, are presented.

Keywords: Lean manufacturing, Just-in-Time (JIT), production system, cost-risk reduction, inventory model, external supplier, local backup supplier.

2020 The Application of an Experimental Design for the Defect Reduction of Electrodeposition Painting on Stainless Steel Washers

Authors: Chansiri Singhtaun, Nattaporn Prasartthong

Abstract:

The purpose of this research is to reduce the amount of incomplete coating of stainless steel washers in the electrodeposition painting process by using an experimental design technique. Surface preparation was found to be a major determinant of painted surface quality. The influence of pretreatment and painting process parameters, namely cleaning time, chemical concentration and shape of hanger, was studied. A 2³ factorial design with two replications was performed. The analysis of variance for the designed experiment showed the strong influence of cleaning time and shape of hanger. From this study, an optimized cleaning time was determined, and a newly designed electrically conductive hanger was proved to be superior to the original one. The experimental verification results showed that the amount of incomplete coating defects decreased from 4% to 1.02% and the operating cost decreased by 10.5%.

Keywords: Defect reduction, design of experiments, electrodeposition painting, stainless steel.
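
For reference, the analysis behind a 2³ factorial design with two replications can be run as an ANOVA in Python with statsmodels, as in the hedged sketch below; the factor names follow the abstract, but the defect responses are synthetic placeholders, not the study's data.

```python
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# 2^3 factorial design (two levels per factor) with two replications: 16 runs.
levels = [-1, 1]
runs = [{"clean_time": a, "concentration": b, "hanger": c}
        for a, b, c in product(levels, levels, levels)
        for _ in range(2)]                      # two replications per setting
df = pd.DataFrame(runs)
rng = np.random.default_rng(0)
df["defect_pct"] = (2.5 - 0.8 * df["clean_time"] - 1.0 * df["hanger"]
                    + 0.1 * df["concentration"]
                    + rng.normal(0, 0.2, len(df)))  # synthetic response

# Fit the full factorial model and print the ANOVA table
# (main effects and interactions with their F-tests).
model = smf.ols("defect_pct ~ C(clean_time) * C(concentration) * C(hanger)",
                data=df).fit()
print(anova_lm(model))
```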

2019 Reduction in Population Growth under Various Contraceptive Strategies in Uttar Pradesh, India

Authors: Prashant Verma, K. K. Singh, Anjali Singh, Ujjaval Srivastava

Abstract:

Contraceptive policies have been derived to achieve desired reductions in the growth rate and applied, for illustration, to data from Uttar Pradesh, India. Using Lotka's integral equation for the stable population, expressions for the proportion of contraceptive users at different ages have been obtained. At the age of 20 years, 42% contraceptive use is required to reduce the present annual growth rate of 0.036 to 0.02, assuming that 40% of the contraceptive users discontinue at the age of 25 years and 30% of those resume contraceptive use at age 30 years. Further, presuming that 75% of women start using contraceptives at the age of 23 years, 50% of the remaining women start at the age of 28 years and the rest start at the age of 32 years, and setting a minimum age of marriage of 20 years, a reduction of 0.019 in the growth rate is obtained. This study describes how the level of contraceptive use in different age groups of women reduces the growth rate in the state of Uttar Pradesh. The article also promotes delayed marriage in the region.

Keywords: Child bearing, contraceptive devices, contraceptive policies, population growth, stable population.
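
A minimal numerical sketch of the underlying machinery is given below: it solves the discretized Lotka integral equation for the intrinsic growth rate r, then re-solves it after scaling the maternity schedule by an age-specific contraceptive-use profile. The survivorship and maternity schedules are synthetic; only the use-profile fractions are taken from the abstract.

```python
import numpy as np
from scipy.optimize import brentq

ages = np.arange(15, 50)                             # reproductive ages (years)
p = np.exp(-0.004 * (ages - 15))                     # synthetic survivorship
m = 0.20 * np.exp(-0.5 * ((ages - 26) / 6.0) ** 2)   # synthetic maternity rates

def lotka_residual(r, maternity):
    """Discretized Lotka equation: sum_a exp(-r*a) * p(a) * m(a) = 1."""
    return np.sum(np.exp(-r * ages) * p * maternity) - 1.0

def growth_rate(maternity):
    return brentq(lotka_residual, -0.1, 0.1, args=(maternity,))

r0 = growth_rate(m)

# Contraceptive-use profile from the abstract: 42% use from age 20, 40% of
# users discontinue at 25, and 30% of those resume at 30.
use = np.zeros_like(ages, dtype=float)
use[ages >= 20] = 0.42
use[ages >= 25] = 0.42 * (1 - 0.40)
use[ages >= 30] = 0.42 * (1 - 0.40) + 0.42 * 0.40 * 0.30
r1 = growth_rate(m * (1 - use))

print(f"baseline r = {r0:.4f}, with contraception r = {r1:.4f}")
```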

2018 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission

Authors: Changyeop Lee, Sewon Kim

Abstract:

Reburning is a useful technology for reducing nitric oxide through injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner, which was mounted at the bottom of the furnace. Tests were conducted using LNG gas as the reburn fuel as well as the main fuel. The effects of the reburn fuel fraction and the manner of injecting the reburn fuel were studied when the fuel lean reburning system was applied. The paper reports data on flue gas emissions and temperature distribution in the furnace for a wide range of experimental conditions. At steady state, the temperature distribution and emission formation in the furnace were measured and compared. This paper makes clear that, in order to decrease both NOx and CO concentrations in the exhaust when the pulsated fuel lean reburning system is adopted, it is important to control factors such as frequency and duty ratio. It also shows that fuel lean reburning is as effective a method of reducing NOx as conventional reburning.

Keywords: Fuel lean reburn, NOx, CO, LNG flame.

2017 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

The instance selection (IS) technique is used to reduce the data size in order to improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets using the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.

Keywords: Instance selection, data reduction, MapReduce, kNN.
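
As a simplified stand-in for the condensed-nearest-neighbor idea behind FCNN-MR (the sequential, Hart-style rule rather than the paper's parallel MapReduce version), the sketch below selects a condensed subset such that every training instance is correctly classified by its nearest selected neighbor.

```python
import numpy as np

def condensed_nearest_neighbor(X, y, seed=0):
    """Hart-style condensed nearest neighbor instance selection (a simplified,
    sequential stand-in for the FCNN rule, not the parallel FCNN-MR).

    Returns indices of a subset S such that every training instance is
    correctly classified by its 1-NN within S.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    selected = [int(rng.integers(n))]          # seed with one random instance
    changed = True
    while changed:
        changed = False
        for i in rng.permutation(n):
            d = np.linalg.norm(X[selected] - X[i], axis=1)
            nearest = selected[int(np.argmin(d))]
            if y[nearest] != y[i]:             # misclassified -> absorb it
                selected.append(int(i))
                changed = True
    return np.array(selected)

# Tiny usage example with synthetic two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.repeat([0, 1], 200)
keep = condensed_nearest_neighbor(X, y)
print(f"reduction rate: {1 - len(keep) / len(X):.2%}")
```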

2016 Friction Stir Welded Joint Aluminum Alloy H20-H20 with Different Type of Tools Mechanical Properties

Authors: Omid A. Zargar

Abstract:

In this project, three types of tools (straight cylindrical, taper cylindrical and triangular), all made of high speed steel (WC-Co), were used for friction stir welding (FSW) of aluminum alloy H20-H20, and the mechanical properties of the welded joints were evaluated by tensile and Vickers hardness tests. The measured mechanical properties were then compared with each other to draw conclusions. The results support the optimization of welding parameters for different aspects of the friction stir process, such as rotational speed, depth of welding, travel speed, type of material, type of joint, workpiece dimensions, joint dimensions, tool material and tool geometry. Previous investigations have addressed different workpiece materials, joint types, machining parameters and preheating temperatures. In this investigation, the three tool types mentioned above, which are popular in FSW, were tested, and the results complement other aspects of the process. It is hoped that this paper can open a new horizon for experimental investigation of the mechanical properties of friction stir welded joints with other tool types, such as oval, paddle, three-flat-sided and three-sided re-entrant probes, and with other materials and alloys such as titanium or steel in the near future.

Keywords: Friction stir welding (FSW), tool, CNC milling machine, aluminum alloy H20, Vickers hardness test, tensile test, straight cylindrical tool, taper cylindrical tool, triangular tool.

2015 The Kinetic of Biodegradation Lignin in Water Hyacinth (Eichhornia Crassipes) by Phanerochaete Chrysosporium using Solid State Fermentation (SSF) Method for Bioethanol Production, Indonesia

Authors: Eka Sari, Siti Syamsiah, Hary Sulistyo, Muslikhin

Abstract:

Lignocellulosic materials are considered the most abundant renewable resource available for bioethanol production. Water hyacinth, one of the world's worst aquatic plants, is a potential raw material as a feedstock for producing bioethanol. The purpose of this research is to determine the reduction of lignin by biodegradation during biological pretreatment with a white rot fungus, e.g. Phanerochaete chrysosporium, using the solid state fermentation (SSF) method. Phanerochaete chrysosporium is known to have the best ability to degrade lignin, but it can simultaneously also degrade cellulose and hemicellulose. During 8 weeks of incubation, the water hyacinth lost 34.67% of its weight, while the loss of lignin reached 67.21%, the loss of cellulose 11.01% and the loss of hemicellulose 36.56%. Fitting the kinetics of lignin loss with a linear regression gives a rate constant (k) of lignin reduction of -0.1053, and the equation of lignin reduction is y = w0 - 0.1053x.

Keywords: Biodegradation, lignin, Phanerochaete Chrysosporium, SSF, Water Hyacinth, Bioethanol.
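
A hedged sketch of the kinetic fit described above, a linear regression of remaining lignin against incubation time whose slope is the rate constant k, is shown below with synthetic weekly measurements standing in for the experimental data.

```python
import numpy as np

# Synthetic weekly measurements (placeholders, not the paper's data):
# fraction of the initial lignin remaining in the water hyacinth substrate.
weeks = np.arange(0, 9)                       # 0 .. 8 weeks of incubation
lignin_remaining = np.array(
    [1.00, 0.93, 0.85, 0.76, 0.68, 0.59, 0.50, 0.41, 0.33])

# Linear (zero-order) degradation model: y = w0 + k * t, with k < 0.
k, w0 = np.polyfit(weeks, lignin_remaining, deg=1)
print(f"estimated rate constant k = {k:.4f} per week")
print(f"fitted model: y = {w0:.3f} {k:+.4f} * t")
```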

2014 Contribution of the Cogeneration Systems to Environment and Sustainability

Authors: Kemal Çomakli, Uğur Çakir, Ayşegül Çokgez Kuş, Erol Şahin

Abstract:

A lower consumption of thermal energy contributes not only to a reduction in running costs but also to a reduction in the pollutant emissions that contribute to the greenhouse effect. Cogeneration, or CHP (Combined Heat and Power), is a system that produces power and usable heat simultaneously, decreasing pollutant emissions and increasing efficiency. Combined production of mechanical or electrical and thermal energy from a single energy source, such as oil, coal, natural or liquefied gas, biomass or the sun, affords remarkable energy savings and frequently makes it possible to operate with greater efficiency than a system producing heat and power separately. This study aims to bring out the contributions of cogeneration systems to the environment and sustainability through energy savings and emission reductions. To this end, we carried out a comprehensive investigation of the literature, focusing on the environmental aspects of cogeneration systems. In light of these studies, we conclude that cogeneration systems must be considered in sustainability planning and that their benefits for protecting the environment must be investigated further.

Keywords: Sustainability, cogeneration systems, energy economy, energy saving.

2013 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs, for amusement, business activities and health. Thus, transport needs are continuously increasing, with a consequent increase in traffic congestion and pollution. Aeronautic efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to address the abovementioned topics is the development of a Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small-aircraft manufacturing of metallic fuselages using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more electric systems replacing pneumatic and hydraulic systems (high-voltage EPGDS (Electrical Power Generation and Distribution System), hybrid de-ice system, landing gear and brakes); 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) a new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) an alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the requested take-off power; the Green aircraft is designed integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the studies, analyses and validations performed for each technology, as well as the expected benefit at aircraft level, is reported in the present paper.

Keywords: Affordable, European, green, mobility, technologies development, travel time reduction.

2012 Spreading Dynamics of a Viral Infection in a Complex Network

Authors: Khemanand Moheeput, Smita S. D. Goorah, Satish K. Ramchurn

Abstract:

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0. The FESDs occurred on time-scales long enough for intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index cell and its neighbors was essential.

Keywords: Basic reproductive number, epidemic control, scale-free network, viral infection.
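
A minimal sketch of the kind of experiment described above, a discrete-time SIR epidemic on a Barabási-Albert (scale-free) network with the per-contact transmission probability reduced on contacts involving the highest-degree hubs, is given below using networkx. All parameter values (network size, beta, gamma, hub fraction) are illustrative, not the study's settings.

```python
import numpy as np
import networkx as nx

def simulate_sir(G, beta=0.05, gamma=0.2, hub_factor=1.0, hub_frac=0.05,
                 steps=200, seed=0):
    """Discrete-time SIR on a network; returns the final epidemic size.

    beta is the per-contact transmission probability; it is multiplied by
    hub_factor for contacts involving the top hub_frac highest-degree nodes.
    All parameter values are illustrative, not the study's settings.
    """
    rng = np.random.default_rng(seed)
    degrees = dict(G.degree())
    cutoff = np.quantile(list(degrees.values()), 1 - hub_frac)
    hubs = {n for n, d in degrees.items() if d >= cutoff}
    state = {n: "S" for n in G}
    state[int(rng.integers(len(G)))] = "I"          # index case
    for _ in range(steps):
        infected = [n for n, s in state.items() if s == "I"]
        if not infected:
            break
        for i in infected:
            for j in G.neighbors(i):
                if state[j] == "S":
                    b = beta * (hub_factor if (i in hubs or j in hubs) else 1.0)
                    if rng.random() < b:
                        state[j] = "I"
            if rng.random() < gamma:
                state[i] = "R"
    return sum(s != "S" for s in state.values())

G = nx.barabasi_albert_graph(2000, m=3, seed=1)
baseline = [simulate_sir(G, seed=s) for s in range(20)]
hub_ctrl = [simulate_sir(G, hub_factor=0.3, seed=s) for s in range(20)]
print("mean final size, no control :", np.mean(baseline))
print("mean final size, hub control:", np.mean(hub_ctrl))
```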

2011 A Numerical Study on the Effects of N2 Dilution on the Flame Structure and Temperature Distribution of Swirl Diffusion Flames

Authors: Yasaman Tohidi, Shidvash Vakilipour, Saeed Ebadi Tavallaee, Shahin Vakilipoor Takaloo, Hossein Amiri

Abstract:

Numerical modeling is performed to study the effects of N2 addition to the fuel stream on the flame structure and temperature distribution of methane-air swirl diffusion flames with different swirl intensities. The open-source Field Operation and Manipulation toolbox (OpenFOAM) has been utilized as the computational tool. The flamelet approach, along with a modified k-ε model, is employed to model the flame characteristics. The results indicate that the presence of N2 in the fuel stream leads to a reduction in flame temperature. With increasing swirl intensity, the flame structure changes significantly: the flame has a conical shape at low swirl intensity, whereas it has an hourglass shape with a shorter length at high swirl intensity. N2 dilution decreases the flame length at all swirl intensities; however, the rate of reduction is more noticeable at low swirl intensity.

Keywords: Swirl diffusion flame, N2 dilution, OpenFOAM, Swirl intensity.

2010 Acute Coronary Syndrome Prediction Using Data Mining Techniques- An Application

Authors: Tahseen A. Jilani, Huda Yasin, Madiha Yasin, C. Ardil

Abstract:

In this paper, we use data mining techniques to investigate factors that contribute significantly to enhancing the risk of acute coronary syndrome. We assume that the dependent variable is the diagnosis, with dichotomous values indicating the presence or absence of disease. We have applied binary logistic regression to the factors affecting the dependent variable. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. There are sixteen variables in total, of which one is taken as dependent and the other 15 as independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis are applied. Based on the results of the data reduction, we have considered only 14 of the sixteen factors.

Keywords: Acute coronary syndrome (ACS), binary logistic regression analysis, myocardial ischemia (MI), principal component analysis, unstable angina (U.A.).
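
The generic pipeline suggested by the abstract (standardize the predictors, apply PCA for data reduction, then fit a binary logistic regression) can be sketched as follows with scikit-learn. The data are synthetic stand-ins for the clinical predictors, and retaining 14 principal components is purely illustrative; the study retained 14 of the original factors based on its PCA results.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the clinical predictors and the binary diagnosis.
X, y = make_classification(n_samples=400, n_features=15, n_informative=8,
                           random_state=0)

# Standardize, reduce with PCA, then fit a binary logistic regression.
model = make_pipeline(StandardScaler(),
                      PCA(n_components=14),
                      LogisticRegression(max_iter=1000))

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```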

2009 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given another predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not related, or less related, to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

2008 Effect of Pre-Construction on Construction Schedule and Client Loyalty

Authors: Jong Hoon Kim, Hyun-Soo Lee, Moonseo Park, Min Jeong, Inbeom Lee

Abstract:

Pre-construction is essential in achieving the success of a construction project. Due to the early involvement of project participants in the construction phase, project managers are able to plan ahead and solve issues well in advance, leading to the success of the project and the satisfaction of the client. This research utilizes quantitative data derived from construction management projects in order to identify the relationship between pre-construction, construction schedule, and client satisfaction. A total of 65 construction projects and 93 clients were investigated in an attempt to identify (a) the relationship between pre-construction and schedule reduction, and (b) the relationship between pre-construction and client loyalty. Based on the quantitative analysis of the 65 construction projects, this research established that a negative correlation exists between pre-construction and project schedule; that is, the more pre-construction is performed for a given project, the shorter the overall construction schedule. Then, to determine the relationship between pre-construction and client satisfaction, the Net Promoter Score (NPS) of the 93 clients from the 65 projects was utilized. Pre-construction and NPS were further analyzed, and a positive correlation was found between the two. This implies that clients tend to be more satisfied with projects with a higher ratio of pre-construction than with those with less pre-construction.

Keywords: Client loyalty, NPS, pre-construction, schedule reduction.
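
For reference, the two quantities combined in the analysis above, the Net Promoter Score and a Pearson correlation against the pre-construction ratio, can be computed as below; the project-level numbers are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    r = np.asarray(ratings)
    return 100.0 * ((r >= 9).mean() - (r <= 6).mean())

# Synthetic stand-ins for the per-project data (not the study's data).
rng = np.random.default_rng(0)
pre_ratio = rng.uniform(0.05, 0.40, 65)              # pre-construction share
schedule = -30 * pre_ratio + rng.normal(0, 3, 65)    # schedule change (%)
nps = 100 * pre_ratio + rng.normal(0, 10, 65)        # per-project NPS

r_sched, p_sched = pearsonr(pre_ratio, schedule)
r_nps, p_nps = pearsonr(pre_ratio, nps)
print(f"pre-construction vs schedule: r = {r_sched:.2f} (p = {p_sched:.3g})")
print(f"pre-construction vs NPS:      r = {r_nps:.2f} (p = {p_nps:.3g})")
print("NPS from raw ratings:", net_promoter_score([10, 9, 9, 8, 7, 6, 10, 3, 9, 10]))
```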

2007 Prediction of Coast Down Time for Mechanical Faults in Rotating Machinery Using Artificial Neural Networks

Authors: G. R. Rameshkumar, B. V. A Rao, K. P. Ramachandran

Abstract:

Misalignment and unbalance are major concerns in rotating machinery. When the power supply to any rotating system is cut off, the system begins to lose the momentum gained during sustained operation and finally comes to rest. The exact time period from when the power is cut off until the rotor comes to rest is called the Coast Down Time (CDT). The CDTs for different shaft cutoff speeds were recorded at various misalignment and unbalance conditions. The CDT reduction percentages were calculated for each fault, and there is a specific correlation between the CDT reduction percentage and the severity of the fault. In this paper, a radial basis network, a new generation of artificial neural network, has been successfully employed for the prediction of CDT under misalignment and unbalance conditions. The radial basis network has been found to be successful in the prediction of CDT for mechanical faults in rotating machinery.

Keywords: Coast Down Time, Misalignment, Unbalance, Artificial Neural Networks, Radial Basis Network.
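
A minimal radial basis function network for this kind of regression task (k-means centers, Gaussian hidden units with a heuristic width, least-squares output weights) is sketched below, with synthetic fault-severity features standing in for the measured CDT data; it is a generic illustration, not the authors' trained network.

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNetwork:
    """Minimal Gaussian radial basis function network for regression:
    k-means centers, a shared width heuristic (mean inter-center distance)
    and least-squares output weights. A generic sketch only."""

    def __init__(self, n_centers=10, seed=0):
        self.n_centers = n_centers
        self.seed = seed

    def _design(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        Phi = np.exp(-d2 / (2 * self.width_ ** 2))
        return np.hstack([Phi, np.ones((len(X), 1))])   # bias column

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_centers, n_init=10,
                    random_state=self.seed).fit(X)
        self.centers_ = km.cluster_centers_
        d = np.linalg.norm(self.centers_[:, None] - self.centers_[None], axis=-1)
        self.width_ = d[d > 0].mean()                   # heuristic width
        self.w_, *_ = np.linalg.lstsq(self._design(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.w_

# Synthetic stand-in: predict CDT reduction (%) from fault-severity features
# (e.g. misalignment level, unbalance level, cutoff speed).
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 3))
y = 30 * X[:, 0] + 15 * X[:, 1] ** 2 + 5 * X[:, 2] + rng.normal(0, 1, 200)
model = RBFNetwork(n_centers=15).fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
print(f"hold-out RMSE: {rmse:.2f}")
```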

2006 Reducing Test Vectors Count Using Fault Based Optimization Schemes in VLSI Testing

Authors: Vinod Kumar Khera, R. K. Sharma, A. K. Gupta

Abstract:

Power dissipation increases exponentially during test mode as compared to normal operation of the circuit; in extreme cases, test power is more than twice the power consumed during normal operation. The test vector generation scheme is a key component in determining how power-hungry a circuit is during testing: the test vector count, and the consequent leakage current, are functions of the test vector generation scheme. A fault-based test vector count optimization is presented in this work, which helps in reducing both the test vector count and the leakage current. In the presented scheme, test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.

Keywords: Low power VLSI testing, independent fault, essential faults, test vector reduction.

2005 Encrypter Information Software Using Chaotic Generators

Authors: Cardoza-Avendaño L., López-Gutiérrez R.M., Inzunza-González E., Cruz-Hernández C., García-Guerrero E., Spirin V., Serrano H.

Abstract:

This paper presents software that implements different chaotic generators, both continuous- and discrete-time. The software allows the user to obtain the different signals using different parameters and initial condition values, and it displays the critical parameters for each model. All these models are capable of encrypting information, and the software demonstrates this as well.

Keywords: cryptography, chaotic attractors, software.
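
As a toy illustration of how a chaotic generator can encrypt information, the sketch below derives a keystream from the logistic map and XORs it with the message. The map and its parameters are one simple example, not necessarily the generators implemented in the software, and this is not a vetted cryptographic scheme.

```python
import numpy as np

def logistic_keystream(n_bytes, r=3.99, x0=0.613, discard=1000):
    """Keystream from the logistic map x <- r*x*(1-x); (r, x0) act as the key
    and `discard` iterations drop the transient. Toy example only."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_crypt(data: bytes, r=3.99, x0=0.613) -> bytes:
    """Encrypt/decrypt by XOR-ing the data with the chaotic keystream."""
    ks = logistic_keystream(len(data), r, x0)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)

msg = b"chaotic generators can drive a simple stream cipher"
ct = xor_crypt(msg)
assert xor_crypt(ct) == msg       # XOR with the same keystream decrypts
print(ct.hex())
```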

2004 Performance of Air Gap Membrane Distillation for Desalination of Ground Water and Seawater

Authors: Bhausaheb L. Pangarkar, M.G. Sane

Abstract:

Membrane distillation (MD) is an emerging technology for seawater or brine desalination. In this work, the performance of air gap membrane distillation (AGMD) was investigated for aqueous NaCl solutions along with natural ground water and seawater. In order to enhance the performance of the AGMD process in desalination, that is, to obtain more flux, it is necessary to study the effect of the operating parameters on the yield of distillate water. The influence of operational parameters such as feed flow rate, feed temperature, feed salt concentration, coolant temperature and air gap thickness on the MD permeation flux was investigated for low and high salt concentrations. In the application to natural ground water and seawater over 90 h of continuous operation, scale deposits were observed on the membrane surface, and the flux reduction over the 90 h was 23% for ground water and 60% for seawater. This reduction was largely eliminated (to less than 14%) by acidification of the feed water. These results promote research attention on applying AGMD to ground water as well as seawater desalination as an alternative to today's conventional RO operation.

Keywords: MD, ground water, seawater, AGMD.

2003 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by the residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, the different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determinations of onset and offset are related closely to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise level effects on SAECG. This study also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were set to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level can increase fQRSD and LAS40 and decrease RMS40, and can further increase the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce noise level effects.

Keywords: Signal-averaged electrocardiogram, ventricular late potentials, chronic renal failure, noise level effects.

2002 Water and Soil Environment Pollution Reduction by Filter Strips

Authors: Roy R. Gu, Mahesh Sahu, Xianggui Zhao

Abstract:

Contour filter strips planted with perennial vegetation can be used to improve surface and ground water quality by reducing pollutant (such as NO3-N) and sediment outflow from cropland to a river or lake. Meanwhile, filter strips of perennial grass with biofuel potential also have the economic benefit of producing ethanol. In this study, the Soil and Water Assessment Tool (SWAT) model was applied to the Walnut Creek Watershed to examine the effectiveness of contour strips in reducing NO3-N outflows from crop fields to the river or lake. The required input data include watershed topography, slope, soil type, land use, management practices in the watershed and climate parameters (precipitation, maximum/minimum air temperature, solar radiation, wind speed and relative humidity). Numerical experiments were conducted to identify potential subbasins in the watershed that have high water quality impact, and to examine the effects of strip size and location on NO3-N reduction in the subbasins under various meteorological conditions (dry, average and wet). Variable sizes of contour strips (10%, 20%, 30% and 50%, respectively, of the subbasin area) planted with perennial switchgrass were selected for simulating the effects of strip size and location on stream water quality. Simulation results showed that a filter strip occupying 10%-50% of the subbasin area could lead to a 55%-90% NO3-N reduction in the subbasin during an average rainfall year. Strips occupying 10-20% of the subbasin area were found to be more efficient in reducing NO3-N when placed along the contour than when placed along the river. The results of this study can assist in cost-benefit analysis and decision-making on best water resources management practices for environmental protection.

Keywords: modeling, SWAT, water quality, NO3-N, watershed.

2001 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise

Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen

Abstract:

The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses were carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of mitochondrial energy efficiency makes it difficult to carry out the maintenance of the muscle tissue, which in turn causes a decline in the muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, the exergy destruction is 1.49 times the work performance. This proportion becomes 2.50 and 5.21 times when the exergetic efficiency decreases to 0.30 and 0.17, respectively.

Keywords: Aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency.

2000 Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm

Authors: Mahmoud Saeidi, Khadijeh Saeidi, Mahmoud Khaleghi

Abstract:

In this paper, we propose a novel spatiotemporal fuzzy-based algorithm for noise filtering of image sequences. The proposed algorithm uses adaptive weights based on triangular membership functions, and a median filter is used to suppress noise. Experimental results show that when the images are corrupted by high-density salt-and-pepper noise, our fuzzy-based algorithm for noise filtering of image sequences is much more effective in suppressing noise and preserving edges than previously reported algorithms such as [1-7]. Indeed, the weights assigned to noisy pixels are highly adaptive, so they make good use of the correlation between pixels. On the other hand, motion estimation methods are error-prone and, under high-density noise, may degrade the filter performance; therefore, our proposed fuzzy algorithm does not need any estimation of the motion trajectory. The proposed algorithm satisfactorily removes noise without any knowledge of the salt-and-pepper noise density.

Keywords: Image sequences, noise reduction, fuzzy algorithm, triangular membership function.
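
A much-simplified, spatial-only sketch of the idea is given below: each pixel is blended with its local median according to a triangular membership of its deviation, so likely impulse-noise pixels are replaced while clean pixels are preserved. The membership breakpoints and the omission of the temporal dimension are simplifying assumptions, not the paper's tuned algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def triangular_membership(d, a=20.0, b=80.0):
    """Triangular membership of the deviation |pixel - local median|:
    0 below a (treated as signal), 1 above b (treated as impulse noise),
    linear ramp in between. Breakpoints are illustrative choices."""
    return np.clip((d - a) / (b - a), 0.0, 1.0)

def fuzzy_median_denoise(frame, size=3):
    """Spatial-only sketch of fuzzy-weighted median filtering: blend each
    pixel with its local median according to the membership of its deviation
    (the paper's algorithm is spatiotemporal and more elaborate)."""
    frame = frame.astype(np.float64)
    med = median_filter(frame, size=size)
    w = triangular_membership(np.abs(frame - med))
    return (1.0 - w) * frame + w * med

# Toy usage: corrupt a synthetic frame with salt-and-pepper noise and filter.
rng = np.random.default_rng(0)
frame = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = frame.copy()
mask = rng.random(frame.shape) < 0.2
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
print("MAE noisy   :", np.abs(noisy - frame).mean())
print("MAE filtered:", np.abs(fuzzy_median_denoise(noisy) - frame).mean())
```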

1999 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two partial distinct states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure for taking account of loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs subjected to earthquake ground motions. It deals with closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system considering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with definite statistical factors, namely median and standard deviation, and a presumed probability distribution. To supplement the proposed procedure, and also to display the straightforwardness of its application, a benchmark study has been conducted. Acceptable compatibility has been proven between the damage costs estimated for the entire building by the newly proposed modal approach and by the frequently used stochastic approach; at the story level, however, the insufficiency of employing a single modification factor for incorporating occurrence probability dependencies between stories has been revealed, owing to the discrepant amounts of dependency between the damage costs of different stories. A greater dependency contribution to the occurrence probability of loss can also be concluded from the greater compatibility of the loss results in the higher stories than in the lower ones, whereas including only a limited number of cost modes still provides an acceptable level of accuracy and avoids the time-consuming calculations of high-mode situations.

Keywords: Dependency, story-cost, cost modes, engineering demand parameter.

1998 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, in order to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper, a new approach is proposed based on a simple optimal loss calculation obtained by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph that consists of a set of nodes and branches; hence this problem can be viewed as the problem of determining an optimal tree of the graph which simultaneously ensures the radial structure of each candidate topology. In this method, a refined genetic algorithm is also set up, and some improvements are made to the chromosome coding. An implementation of the algorithm presented in [7] is also applied, with modifications to the load flow program, and that method is compared with the proposed method. In [7], an algorithm is proposed in which the choice of the switches to be opened is based on simple heuristic rules; this algorithm reduces the number of load flow runs, reduces the switching combinations to a smaller number and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on the 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] as well as other methods.

Keywords: Distribution System, Reconfiguration, Loss Reduction, Graph Theory, Optimization, Genetic Algorithm.
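
The graph-theoretic core of the reconfiguration problem (a candidate configuration is feasible only if the branches left closed form a spanning tree, i.e. radial and fully connected) can be sketched with networkx as below. The toy network, branch resistances and the crude loss proxy are illustrative; a real study, like the paper's genetic algorithm, would evaluate candidates with a load flow.

```python
import itertools
import networkx as nx

# Toy meshed network (not the 33-bus test system); edge attribute r is an
# illustrative branch resistance used as a crude loss proxy.
edges = {(0, 1): 0.10, (1, 2): 0.08, (2, 3): 0.09, (3, 4): 0.07,
         (4, 5): 0.06, (5, 0): 0.11, (1, 4): 0.05, (2, 5): 0.04}
G = nx.Graph()
for (u, v), r in edges.items():
    G.add_edge(u, v, r=r)

def is_radial(graph, open_branches):
    """Radial iff the closed branches form a spanning tree: connected and
    exactly n-1 branches (so no loops remain)."""
    H = graph.copy()
    H.remove_edges_from(open_branches)
    return nx.is_connected(H) and H.number_of_edges() == H.number_of_nodes() - 1

def loss_proxy(graph, open_branches):
    """Stand-in for resistive losses: total resistance of closed branches
    (a real study would run a load flow for each candidate topology)."""
    H = graph.copy()
    H.remove_edges_from(open_branches)
    return sum(d["r"] for _, _, d in H.edges(data=True))

n_open = G.number_of_edges() - (G.number_of_nodes() - 1)   # one per loop
best = min((c for c in itertools.combinations(G.edges(), n_open)
            if is_radial(G, c)),
           key=lambda c: loss_proxy(G, c))
print("branches to open for the best radial configuration:", best)
```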
