Search results for: Noise Reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2327

1637 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive, which limits the frame rate at which they can be run. This paper presents an algorithm that discards irrelevant feature points and maintains the useful ones across frames so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy in motion modeling, and corner detection is invoked only when the maintained points are insufficiently accurate for further modeling. Optical flows are then computed from the maintained feature points toward the consecutive frame. Next, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers, and the model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, reducing the cost further. In addition, the feature points remaining after reduction are sufficient for background object tracking, as demonstrated by the simple video stabilizer built on the proposed algorithm.
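
As an illustration of the maintain-and-prune loop described above, the following Python sketch uses standard OpenCV primitives. It is a minimal reconstruction under stated assumptions (the re-detection threshold, the studentized-residual cutoff and the four-parameter form of the simplified affine model are ours), not the authors' implementation:

```python
import cv2
import numpy as np

MIN_POINTS = 40        # assumed re-detection threshold
CUTOFF = 2.5           # assumed studentized-residual cutoff

def motion_models(frames):
    """Yield per-frame simplified affine parameters [a, b, tx, ty],
    maintaining a feature set between frames (illustrative sketch)."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, 200, 0.01, 10)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if pts is None or len(pts) < MIN_POINTS:          # corner detection only when needed
            pts = cv2.goodFeaturesToTrack(prev, 200, 0.01, 10)
        nxt, ok, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        mask = ok.ravel() == 1
        p, q = pts[mask].reshape(-1, 2), nxt[mask].reshape(-1, 2)
        while len(p) > 3:
            # x' = a*x - b*y + tx, y' = b*x + a*y + ty, solved by least squares
            A = np.empty((2 * len(p), 4))
            A[0::2] = np.c_[p[:, 0], -p[:, 1], np.ones(len(p)), np.zeros(len(p))]
            A[1::2] = np.c_[p[:, 1],  p[:, 0], np.zeros(len(p)), np.ones(len(p))]
            theta, *_ = np.linalg.lstsq(A, q.reshape(-1), rcond=None)
            r = np.linalg.norm((A @ theta - q.reshape(-1)).reshape(-1, 2), axis=1)
            t = (r - r.mean()) / (r.std() + 1e-9)         # crude studentization
            if (t < CUTOFF).all():
                break
            p, q = p[t < CUTOFF], q[t < CUTOFF]           # drop moving-object outliers
        yield theta
        prev, pts = gray, q.reshape(-1, 1, 2).astype(np.float32)   # maintained set
```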

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.

1636 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We have carried out the optimal synthesis of a root-mean-squared objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation appear in addition to the regular noises. The synthesis is performed for continuous-time linear stochastic systems.

Keywords: Mathematical expectation, filtration, anomalous noise, memory.

1635 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal

Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das

Abstract:

The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with a high titania content (23.23% TiO2 in this case). Due to its high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO was collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore was collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chhattisgarh, of M/s. National Mineral Development Corporation. The preliminary characterization of TMO and hematite ore (HMO) was carried out by WDXRF, XRD and FESEM analyses. Similarly, good-quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find out how lean grade coal can be utilised along with TMO in smelting to produce pig iron. The lean grade coal was characterised using TG/DTA, proximate and ultimate analyses; the boiler grade coal was found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) were separately agglomerated with lean grade coal fines (below 75 μm) in the form of briquettes using binders such as bentonite and molasses. These green briquettes were first dried in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at 1323 K, 1373 K and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes were characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples were blended in three different weight ratios of 1:4, 1:8 and 1:12 of TMO:HMO. Chemical analysis of the three blended samples gave degrees of metallisation of iron of 89.38%, 92.12% and 93.12%, respectively. The three blended samples were then briquetted using binders such as bentonite and lime, and the blended briquettes were separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed was characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5; that is, for a 90% yield the maximum TMO that could be used in the blend is 1/(1 + 4.5), or about 18%.

Keywords: Briquetting reduction, lean grade coal, smelting reduction, TMO.

1634 A Novel Model for Simultaneously Minimising Costs and Risks in Just-in-Time Systems Using Multi-Backup Suppliers: Part 1- Modelling

Authors: Faraj El Dabee, Romeo Marian, Yousef Amer

Abstract:

Just-In-Time (JIT) is a lean manufacturing tool that provides many organisations with the benefits of efficiency and of minimizing unnecessary costs. However, the risks that accompany these benefits have been disregarded; they impact system processes and can disrupt the whole supply chain. This paper proposes an inventory model that can simultaneously reduce costs and risks in JIT systems. The model is developed to ascertain an optimal ordering strategy for procuring raw materials from regular multi-external and local backup suppliers, so as to reduce the total cost of the products while also reducing the risks arising from this cost reduction within production systems. Selected results, to be illustrated in the second part of this paper, are presented.

Keywords: Lean manufacturing, Just-in-Time (JIT), production system, cost-risk reduction, inventory model, external supplier, local backup supplier.

1633 The Application of an Experimental Design for the Defect Reduction of Electrodeposition Painting on Stainless Steel Washers

Authors: Chansiri Singhtaun, Nattaporn Prasartthong

Abstract:

The purpose of this research is to reduce the amount of incomplete coating of stainless steel washers in the electrodeposition painting process by using an experimental design technique. Surface preparation was found to be a major determinant of painted surface quality. The influence of pretreatment and painting process parameters, namely cleaning time, chemical concentration and shape of hanger, was studied. A 2³ factorial design with two replications was performed. The analysis of variance for the designed experiment showed the strong influence of cleaning time and hanger shape. From this study, the optimal cleaning time was determined, and a newly designed electrically conductive hanger proved superior to the original one. The experimental verification showed that incomplete coating defects decreased from 4% to 1.02% and operating cost decreased by 10.5%.
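
A 2³ factorial design with two replicates of this kind can be laid out and analyzed with standard statistical tooling. The sketch below uses hypothetical response values (the factor names mirror the abstract; the numbers are illustrative, not the study's data):

```python
from itertools import product
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2^3 design with two replicates; the response is the percentage
# of washers with incomplete coating (values are illustrative only).
runs = [dict(time=t, conc=c, hanger=h)
        for t, c, h in product([-1, 1], repeat=3) for _ in range(2)]
df = pd.DataFrame(runs)
df["defects"] = [4.1, 3.9, 3.2, 3.4, 3.8, 4.0, 3.1, 2.9,
                 2.5, 2.7, 1.9, 2.1, 2.4, 2.2, 1.1, 1.0]

# Full factorial model with all interactions
model = ols("defects ~ time * conc * hanger", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # large F for time and hanger -> dominant factors
```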

Keywords: Defect reduction, design of experiments, electrodeposition painting, stainless steel.

1632 Reduction in Population Growth under Various Contraceptive Strategies in Uttar Pradesh, India

Authors: Prashant Verma, K. K. Singh, Anjali Singh, Ujjaval Srivastava

Abstract:

Contraceptive policies are derived to achieve desired reductions in the growth rate and applied to data from Uttar Pradesh, India, for illustration. Using Lotka's integral equation for the stable population, expressions for the proportion of contraceptive users at different ages are obtained. At the age of 20 years, a contraceptive prevalence of 42% is required to reduce the present annual growth rate of 0.036 to 0.02, assuming that 40% of contraceptive users discontinue at the age of 25 years and 30% resume contraceptive use at age 30 years. Further, presuming that 75% of women start using contraceptives at the age of 23 years, that 50% of the remaining women start at the age of 28 years and that the rest start at the age of 32 years, and setting a minimum age of marriage of 20 years, a reduction of 0.019 in the growth rate is obtained. This study describes how the level of contraceptive use in different age groups of women reduces the growth rate in the state of Uttar Pradesh; it also argues for delayed marriage in the region.
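
For reference, the stable-population framework invoked here rests on Lotka's characteristic integral equation, which ties the intrinsic growth rate r to age-specific survival and fertility; contraceptive use at a given age effectively scales down the maternity function m(a):

$$1 = \int_{\alpha}^{\beta} e^{-ra}\, p(a)\, m(a)\, da$$

where a is age, [α, β] is the reproductive age span and p(a) is the probability of surviving to age a. Lowering m(a) over chosen age intervals lowers the r that satisfies the equation, which is how the age-specific user proportions above translate into growth-rate reductions.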

Keywords: Child bearing, contraceptive devices, contraceptive policies, population growth, stable population.

1631 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission

Authors: Changyeop Lee, Sewon Kim

Abstract:

Reburning is a useful technology for reducing nitric oxide through injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner mounted at the bottom of the furnace. Tests were conducted using LNG as both the main and the reburn fuel. The effects of the reburn fuel fraction and the manner of reburn fuel injection were studied when the fuel lean reburning system was applied. The paper reports flue gas emissions and temperature distributions in the furnace for a wide range of experimental conditions. At steady state, temperature distribution and emission formation in the furnace were measured and compared. The results make clear that when the pulsated fuel lean reburning system is adopted, factors such as pulsation frequency and duty ratio must be controlled in order to decrease both NOx and CO concentrations in the exhaust. They also show that fuel lean reburning reduces NOx nearly as effectively as conventional reburning.

Keywords: Fuel lean reburn, NOx, CO, LNG flame.

1630 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several methods have been proposed that divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitation of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides maintaining prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
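
The condensation idea at the heart of FCNN can be sketched in a few lines. The following simplified sequential version (in the spirit of the classic condensed nearest neighbor rule, not the authors' MapReduce algorithm) keeps only the instances a 1-NN classifier actually needs:

```python
import numpy as np

def condense(X, y):
    """Simplified sequential condensation in the spirit of CNN/FCNN:
    keep only instances needed for 1-NN to classify the rest correctly.
    (Illustrative; the paper's FCNN-MR runs this idea under MapReduce.)"""
    keep = [0]                                   # seed with an arbitrary instance
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            S = np.array(keep)
            nn = S[np.argmin(np.linalg.norm(X[S] - X[i], axis=1))]
            if y[nn] != y[i]:                    # misclassified by current subset
                keep.append(i)                   # absorb it into the condensed set
                changed = True
    return np.array(keep)

# Usage: idx = condense(X_train, y_train); then run 1-NN on X_train[idx], y_train[idx].
```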

Keywords: Instance selection, data reduction, MapReduce, kNN.

1629 The Kinetics of Lignin Biodegradation in Water Hyacinth (Eichhornia crassipes) by Phanerochaete chrysosporium Using the Solid State Fermentation (SSF) Method for Bioethanol Production, Indonesia

Authors: Eka Sari, Siti Syamsiah, Hary Sulistyo, Muslikhin

Abstract:

Lignocellulosic materials are considered the most abundant renewable resource available for bioethanol production. Water hyacinth, one of the world's worst aquatic weeds, is a potential feedstock for producing bioethanol. The purpose of this research is to quantify the biodegradation of lignin in a biological pretreatment with a white rot fungus, Phanerochaete chrysosporium, using the solid state fermentation (SSF) method. Phanerochaete chrysosporium is known to have the best ability to degrade lignin, but it can simultaneously degrade cellulose and hemicellulose as well. During 8 weeks of incubation, the water hyacinth lost 34.67% of its weight, while the loss of lignin reached 67.21%, the loss of cellulose 11.01% and the loss of hemicellulose 36.56%. Fitting the lignin loss by linear regression gives a rate constant (k) of -0.1053 and the lignin reduction equation y = w0 - 0.1053x.
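
The reported linear decay model (y = w0 - 0.1053x) is an ordinary least-squares fit of lignin remaining against incubation time. A minimal reconstruction with hypothetical weekly measurements (illustrative values chosen to give a similar slope, not the study's data):

```python
import numpy as np
from scipy.stats import linregress

weeks = np.arange(9)
# Hypothetical fraction of lignin remaining (w0 = 1.0); illustrative only
lignin = np.array([1.00, 0.89, 0.79, 0.68, 0.58, 0.47, 0.37, 0.26, 0.16])

fit = linregress(weeks, lignin)
print(f"k = {fit.slope:.4f}, w0 = {fit.intercept:.3f}, R^2 = {fit.rvalue**2:.3f}")
# The slope (about -0.105 per week here) plays the role of the reported k = -0.1053.
```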

Keywords: Biodegradation, lignin, Phanerochaete chrysosporium, SSF, water hyacinth, bioethanol.

1628 Contribution of the Cogeneration Systems to Environment and Sustainability

Authors: Kemal Çomakli, Uğur Çakir, Ayşegül Çokgez Kuş, Erol Şahin

Abstract:

A lower consumption of thermal energy contributes not only to a reduction in running costs, but also to a reduction of the pollutant emissions that contribute to the greenhouse effect. Cogeneration, or CHP (Combined Heat and Power), is a system that produces power and usable heat simultaneously, decreasing pollutant emissions and increasing efficiency. Combined production of mechanical or electrical and thermal energy using a single energy source, such as oil, coal, natural or liquefied gas, biomass or the sun, affords remarkable energy savings and frequently makes it possible to operate with greater efficiency than a system producing heat and power separately. This study aims to bring out the contributions of cogeneration systems to the environment and sustainability through energy savings and emission reductions. To this end, we conducted a comprehensive literature survey focusing on the environmental aspects of cogeneration systems. In light of these studies, we conclude that cogeneration systems must be considered in sustainability planning and that their ecological benefits merit further investigation.

Keywords: Sustainability, cogeneration systems, energy economy, energy saving.

1627 Spreading Dynamics of a Viral Infection in a Complex Network

Authors: Khemanand Moheeput, Smita S. D. Goorah, Satish K. Ramchurn

Abstract:

We report a computational study of the spreading dynamics of a viral infection in a complex (scale-free) network. The final epidemic size distribution (FESD) was found to be unimodal or bimodal depending on the value of the basic reproductive number R0. The FESDs occurred on time scales long enough for intermediate-time epidemic size distributions (IESDs) to be important for control measures. The usefulness of R0 for deciding on the timeliness and intensity of control measures was found to be limited by the multimodal nature of the IESDs and by its inability to inform on the speed at which the infection spreads through the population. A reduction of the transmission probability at the hubs of the scale-free network decreased the occurrence of the larger-sized epidemic events of the multimodal distributions. For effective epidemic control, an early reduction in transmission at the index case and its neighbors was essential.
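
The qualitative experiment described, seeding an infection on a scale-free graph and damping transmission at the hubs, can be reproduced in a few lines. A minimal discrete-time simulation sketch (network size, transmission probability and the hub definition are assumptions, not the study's parameters):

```python
import random
import networkx as nx

def epidemic_size(n=2000, m=3, beta=0.05, hub_factor=1.0, steps=200, seed=None):
    """Discrete-time SIR on a Barabasi-Albert (scale-free) network. hub_factor < 1
    scales down the transmission probability at the top-degree 'hub' nodes."""
    rng = random.Random(seed)
    G = nx.barabasi_albert_graph(n, m, seed=seed)
    hubs = {v for v, _ in sorted(G.degree, key=lambda d: -d[1])[: n // 100]}
    state = {v: "S" for v in G}
    state[rng.randrange(n)] = "I"                      # index case
    for _ in range(steps):
        infected = [v for v in G if state[v] == "I"]
        if not infected:
            break
        for v in infected:
            p = beta * (hub_factor if v in hubs else 1.0)
            for w in G[v]:
                if state[w] == "S" and rng.random() < p:
                    state[w] = "I"
            state[v] = "R"                             # recover after one step
    return sum(s == "R" for s in state.values())

# Repeating epidemic_size() many times yields the final-size distribution;
# lowering hub_factor thins out the large-epidemic mode, as the paper reports.
```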

Keywords: Basic reproductive number, epidemic control, scale-free network, viral infection.

1626 A Numerical Study on the Effects of N2 Dilution on the Flame Structure and Temperature Distribution of Swirl Diffusion Flames

Authors: Yasaman Tohidi, Shidvash Vakilipour, Saeed Ebadi Tavallaee, Shahin Vakilipoor Takaloo, Hossein Amiri

Abstract:

Numerical modeling is performed to study the effects of N2 addition to the fuel stream on the flame structure and temperature distribution of methane-air swirl diffusion flames with different swirl intensities. The open-source Field Operation and Manipulation package (OpenFOAM) has been utilized as the computational tool. A flamelet approach along with a modified k-ε model is employed to model the flame characteristics. The results indicate that the presence of N2 in the fuel stream leads to a reduction in flame temperature. As the swirl intensity increases, the flame structure changes significantly: the flame is conical at low swirl intensity but takes on a shorter, hourglass shape at high swirl intensity. N2 dilution decreases the flame length at all swirl intensities, though the reduction is more noticeable at low swirl intensity.

Keywords: Swirl diffusion flame, N2 dilution, OpenFOAM, Swirl intensity.

1625 Acute Coronary Syndrome Prediction Using Data Mining Techniques - An Application

Authors: Tahseen A. Jilani, Huda Yasin, Madiha Yasin, C. Ardil

Abstract:

In this paper we use data mining techniques to investigate factors that contribute significantly to enhancing the risk of acute coronary syndrome. The dependent variable is the diagnosis, with dichotomous values showing the presence or absence of disease. We apply binary logistic regression to the factors affecting the dependent variable. The data set has been taken from two different cardiac hospitals in Karachi, Pakistan. There are sixteen variables in total, of which one is the dependent variable and the other fifteen are independent variables. For better performance of the regression model in predicting acute coronary syndrome, data reduction techniques such as principal component analysis are applied. Based on the results of data reduction, only fourteen of the sixteen factors are retained.
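
The reduction-then-regression workflow described here is the standard pipeline of principal component analysis followed by logistic regression. A sketch with modern tooling and synthetic stand-in data (the 15 predictors and the 14-component choice mirror the abstract; nothing else is from the study):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))                 # 15 independent variables (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)   # diagnosis 0/1

# Standardize, keep 14 principal components, then fit binary logistic regression
model = make_pipeline(StandardScaler(), PCA(n_components=14), LogisticRegression())
print(cross_val_score(model, X, y, cv=5).mean())
```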

Keywords: Acute coronary syndrome (ACS), binary logistic regression analysis, myocardial ischemia (MI), principal component analysis, unstable angina (U.A.).

1624 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, the NF parameterizes a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting conditions and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
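
For orientation, the density computation behind such a flow is the standard change-of-variables identity; with the invertible map z = f(y) and the latent split z = [zP, zN] described above, the conditional log-density decomposes as (our summary notation, not the paper's):

$$\log p(y \mid x) = \log p\left(z_P \mid x\right) + \log p\left(z_N\right) + \log\left|\det \frac{\partial f(y)}{\partial y}\right|$$

where zP follows the posterior of the elementary predictive model for x and zN is standard Gaussian.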

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

1623 T-Wave Detection Based on an Adjusted Wavelet Transform Modulus Maxima

Authors: Samar Krimi, Kaïs Ouni, Noureddine Ellouze

Abstract:

The method described in this paper deals with the problem of T-wave detection in an ECG. Determining the position of a T-wave is complicated by its low amplitude and the ambiguous, changing form of the complex. A wavelet transform approach handles these complications; therefore, a method based on this concept was developed. The resulting detector achieves a sensitivity of 93% and a correct-detection ratio of 93%, even in the presence of severe baseline drift and noise.
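
A modulus-maxima detector of this kind can be sketched with PyWavelets: compute a continuous wavelet transform at a scale matched to the T-wave, then pick the maximum of the coefficient modulus in a window after each QRS complex. This is a schematic illustration only; the scale, wavelet and search window below are assumptions, not the paper's tuned values:

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_t_waves(ecg, fs, r_peaks, scale=0.12):
    """Locate T-waves as modulus maxima of a CWT at a T-wave-sized scale.
    ecg: 1-D signal, fs: sampling rate (Hz), r_peaks: indices of R peaks."""
    coeffs, _ = pywt.cwt(ecg, [scale * fs], "mexh")        # one scale, Mexican hat
    modulus = np.abs(coeffs[0])
    t_locs = []
    for r in r_peaks:
        lo, hi = r + int(0.08 * fs), r + int(0.40 * fs)    # assumed ST-T search window
        if hi >= len(ecg):
            break
        peaks, _ = find_peaks(modulus[lo:hi])              # local modulus maxima
        if len(peaks):
            t_locs.append(lo + peaks[np.argmax(modulus[lo:hi][peaks])])
    return np.array(t_locs)
```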

Keywords: ECG, Modulus Maxima Wavelet Transform, Performance, T-wave detection

1622 Combining Bagging and Additive Regression

Authors: Sotiris B. Kotsiantis

Abstract:

Bagging and boosting are among the most popular re-sampling ensemble methods that generate and combine a diversity of regression models using the same learning algorithm as base learner. Boosting algorithms are considered stronger than bagging on noise-free data; however, there are strong empirical indications that bagging is much more robust than boosting in noisy settings. For this reason, in this work we build an ensemble by averaging a bagging ensemble and a boosting ensemble, each with 10 sub-learners. Compared against plain bagging and boosting ensembles with 25 sub-learners on standard benchmark datasets, the proposed ensemble gave better accuracy.
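
The averaging setup is easy to reproduce with scikit-learn analogues of the described configuration: a 10-learner bagging ensemble and a 10-learner boosting ensemble, averaged, against 25-learner baselines. This is an illustrative sketch; the base learner, datasets and boosting variant here are our assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor, VotingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, noise=10.0, random_state=0)
base = DecisionTreeRegressor(max_depth=4)

combined = VotingRegressor([                 # plain average of the two ensembles
    ("bag", BaggingRegressor(base, n_estimators=10, random_state=0)),
    ("boost", AdaBoostRegressor(base, n_estimators=10, random_state=0)),
])
for name, m in [("bagging-25", BaggingRegressor(base, n_estimators=25, random_state=0)),
                ("boosting-25", AdaBoostRegressor(base, n_estimators=25, random_state=0)),
                ("averaged 10+10", combined)]:
    print(name, cross_val_score(m, X, y, cv=5, scoring="r2").mean())
```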

Keywords: Regressors, statistical learning.

1621 Effect of Pre-Construction on Construction Schedule and Client Loyalty

Authors: Jong Hoon Kim, Hyun-Soo Lee, Moonseo Park, Min Jeong, Inbeom Lee

Abstract:

Pre-construction is essential to the success of a construction project. Owing to the early involvement of project participants in the construction phase, project managers are able to plan ahead and resolve issues well in advance, leading to project success and client satisfaction. This research uses quantitative data from construction management projects to identify the relationship between pre-construction, construction schedule and client satisfaction. A total of 65 construction projects and 93 clients were investigated in order to identify (a) the relationship between pre-construction and schedule reduction, and (b) the relationship between pre-construction and client loyalty. The quantitative analysis of the 65 construction projects established a negative correlation between pre-construction and project schedule: the more pre-construction performed for a project, the shorter the overall construction schedule. To determine the relationship between pre-construction and client satisfaction, the Net Promoter Score (NPS) of the 93 clients from the 65 projects was used. A positive correlation was found between pre-construction and NPS, which indicates that clients tend to be more satisfied with projects that have a higher ratio of pre-construction than with those that have less.
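
For clarity, the Net Promoter Score used as the loyalty measure is computed from 0-10 ratings as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6); a minimal helper:

```python
def nps(scores):
    """Net Promoter Score: %promoters (9-10) minus %detractors (0-6)."""
    n = len(scores)
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / n

print(nps([10, 9, 9, 8, 7, 6, 10, 3]))   # 4 promoters, 2 detractors of 8 -> 25.0
```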

Keywords: Client loyalty, NPS, pre-construction, schedule reduction.

1620 Prediction of Coast Down Time for Mechanical Faults in Rotating Machinery Using Artificial Neural Networks

Authors: G. R. Rameshkumar, B. V. A Rao, K. P. Ramachandran

Abstract:

Misalignment and unbalance are the major concerns in rotating machinery. When the power supply to any rotating system is cut off, the system begins to lose the momentum gained during sustained operation and finally comes to rest. The exact time period from when the power is cut off until the rotor comes to rest is called the Coast Down Time (CDT). The CDTs for different shaft cutoff speeds were recorded at various misalignment and unbalance conditions. The CDT reduction percentages were calculated for each fault, and there is a specific correlation between the CDT reduction percentage and the severity of the fault. In this paper, a radial basis network, a new generation of artificial neural networks, has been successfully employed for the prediction of CDT under misalignment and unbalance conditions. The radial basis network was found to be successful in predicting CDT for mechanical faults in rotating machinery.
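
A radial basis network of the kind applied here passes inputs through Gaussian basis functions centred on prototypes and solves a linear least-squares problem for the output weights. A minimal sketch, with centres chosen by k-means and an assumed common width (the feature layout in the final comment is illustrative, not the paper's):

```python
import numpy as np
from scipy.cluster.vq import kmeans

def rbf_fit(X, y, n_centers=10, width=1.0):
    centers, _ = kmeans(X.astype(float), n_centers)          # prototype centres
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))                         # Gaussian basis outputs
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # linear output layer
    return centers, w

def rbf_predict(X, centers, w, width=1.0):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))
    return Phi @ w

# e.g. X rows = [cutoff speed, misalignment, unbalance], y = coast down time (CDT)
```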

Keywords: Coast Down Time, Misalignment, Unbalance, Artificial Neural Networks, Radial Basis Network.

1619 Reducing Test Vectors Count Using Fault Based Optimization Schemes in VLSI Testing

Authors: Vinod Kumar Khera, R. K. Sharma, A. K. Gupta

Abstract:

Power dissipation increases exponentially during test mode as compared to normal operation of the circuit; in extreme cases, test power is more than twice the power consumed during normal operation. The test vector generation scheme is a key component in determining how power-hungry a circuit is during testing, since the test vector count and the consequent leakage current are functions of that scheme. Fault-based test vector count optimization is presented in this work; it helps in reducing both the test vector count and the leakage current. In the presented scheme, test vectors are reduced by extracting essential child vectors. The scheme has been tested experimentally using stuck-at fault models, and the results confirm the reduction in test vector count.
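
The essential-vector idea can be illustrated as a set-cover-style compaction: vectors that are the sole detectors of some fault are kept first, and the remaining faults are covered greedily. This generic sketch illustrates the principle, not the authors' exact child-vector extraction:

```python
import numpy as np

def compact(cover):
    """cover[i, j] = True if test vector i detects fault j.
    Returns indices of a reduced vector set with the same fault coverage."""
    n_vec, n_fault = cover.shape
    chosen = set()
    detectors = cover.sum(axis=0)
    for j in range(n_fault):                     # essential vectors first:
        if detectors[j] == 1:                    # faults with a single detector
            chosen.add(int(np.argmax(cover[:, j])))
    covered = cover[list(chosen)].any(axis=0) if chosen else np.zeros(n_fault, bool)
    while not covered.all():                     # greedy cover of remaining faults
        gains = (cover & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                                # some fault is undetectable
        chosen.add(best)
        covered |= cover[best]
    return sorted(chosen)
```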

Keywords: Low power VLSI testing, independent fault, essential faults, test vector reduction.

1618 Spread Spectrum Code Estimation by Genetic Algorithm

Authors: V. R. Asghari, M. Ardebilipour

Abstract:

In the context of spectrum surveillance, a method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. The approach is based on a genetic algorithm (GA), which is forced to model the received signal. Genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems. Experimental results show that the method provides a good estimation, even when the signal power is below the noise power.
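
The GA formulation lends itself to a compact sketch: chromosomes are candidate ±1 chip sequences and fitness measures how well a candidate despreads the observed signal. Code length, population size, rates and the correlation fitness below are assumptions for illustration; the noise level is deliberately set above the signal level, matching the low-SNR regime described:

```python
import numpy as np

rng = np.random.default_rng(1)
L, POP, GEN = 31, 60, 200                     # chip length, population, generations

true_code = rng.choice([-1, 1], L)
rx = np.tile(true_code, 20) + rng.normal(scale=1.5, size=20 * L)   # noisy periods

def fitness(code):
    # magnitude of the correlation with each received period, averaged
    return np.abs(rx.reshape(20, L) @ code).mean()

pop = rng.choice([-1, 1], (POP, L))
for _ in range(GEN):
    f = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(f)[-POP // 2:]]             # truncation selection
    cut = rng.integers(1, L, POP // 2)
    kids = np.array([np.r_[a[:k], b[k:]] for a, b, k in
                     zip(parents, parents[::-1], cut)])  # one-point crossover
    flip = rng.random(kids.shape) < 0.02                 # mutation: flip chips
    kids[flip] *= -1
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("match (up to sign):", abs(best @ true_code) / L)  # -> close to 1.0
```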

Keywords: Code estimation, genetic algorithms, spread spectrum.

1617 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive (e.g., LIDAR) or limited in operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
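
As a structural illustration of the second approach (pixel position as the measurement), here is a generic EKF predict/update cycle for a two-state model [distance, vertical velocity]. The constant-velocity process model, the pinhole measurement h = f·X/d and all numbers are our assumptions, not the paper's formulation:

```python
import numpy as np

f_px, X_off, dt = 800.0, 1.0, 0.05   # focal length (px), feature offset (m), step [assumed]

def ekf_step(x, P, u_meas, Q=np.diag([1e-4, 1e-3]), R=4.0):
    """One EKF cycle for state x = [distance d, vertical velocity v].
    Measurement: pixel coordinate u = f_px * X_off / d of a ground feature."""
    F = np.array([[1.0, dt], [0.0, 1.0]])           # constant-velocity descent model
    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    d = x[0]
    h = f_px * X_off / d                             # predicted pixel position
    H = np.array([[-f_px * X_off / d**2, 0.0]])      # Jacobian of h w.r.t. the state
    S = H @ P @ H.T + R                              # innovation covariance (1x1)
    K = (P @ H.T) / S                                # Kalman gain
    x = x + (K * (u_meas - h)).ravel()               # update with pixel measurement
    P = (np.eye(2) - K @ H) @ P
    return x, P
```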

Keywords: Automatic landing, multirotor, nonlinear control, parameter estimation, optical flow.

1616 Conceptual Design of the TransAtlantic as a Research Platform for the Development of “Green” Aircraft Technologies

Authors: Victor Maldonado

Abstract:

Recent concerns about the growing impact of aviation on climate change have prompted the emergence of a field referred to as Sustainable or "Green" Aviation, dedicated to mitigating the harmful impact of aviation-related CO2 emissions and noise pollution on the environment. In the current paper, a unique "green" business jet aircraft called the TransAtlantic was designed (using analytical formulations common in conceptual design) to show the feasibility of transatlantic passenger air travel with an aircraft of less than 10,000 pounds takeoff weight. Such an advance in fuel efficiency will require the development and integration of advanced and emerging aerospace technologies. The TransAtlantic design is intended to serve as a research platform for the development of technologies such as active flow control. Recent advances in the field of active flow control, and how this technology can be integrated on a sub-scale flight demonstrator, are discussed in this paper. Flow control is a technique for modifying the behavior of coherent structures in wall-bounded flows (over aerodynamic surfaces such as wings and turbine nozzles), resulting in improved aerodynamic cruise and flight control efficiency. One of the key challenges for application in manned aircraft is the development of a robust high-momentum actuator that can penetrate the boundary layer flowing over aerodynamic surfaces. These deficiencies may be overcome by the current development and testing of a novel electromagnetic synthetic jet actuator, which replaces piezoelectric materials as the driving diaphragm. One of the overarching goals of the TransAtlantic research platform is to foster national and international collaboration to demonstrate (in numerical and experimental models) reduced CO2 and noise pollution via the development and integration of technologies and methodologies in design optimization, fluid dynamics, structures/composites, propulsion, and controls.

Keywords: Aircraft Design, Sustainable “Green” Aviation, Active Flow Control, Aerodynamics.

1615 Performance of Air Gap Membrane Distillation for Desalination of Ground Water and Seawater

Authors: Bhausaheb L. Pangarkar, M.G. Sane

Abstract:

Membrane distillation (MD) is a rising technology for seawater and brine desalination. In this work, the performance of air gap membrane distillation (AGMD) was investigated for aqueous NaCl solutions along with natural ground water and seawater. In order to enhance the performance of the AGMD process in desalination, that is, to obtain more flux, it is necessary to study the effect of operating parameters on the yield of distillate water. The influence of operational parameters such as feed flow rate, feed temperature, feed salt concentration, coolant temperature and air gap thickness on the membrane distillation (MD) permeation flux has been investigated for low and high salt solutions. In the application to natural ground water and seawater over 90 h of continuous operation, scale deposits formed on the membrane surface, and the flux declined by 23% for ground water and 60% for seawater. This reduction was limited to less than 14% by acidification of the feed water. These results should promote research attention on applying AGMD to ground water and seawater desalination alongside today's conventional RO operation.

Keywords: MD, ground water, seawater, AGMD.

1614 Digital Image Watermarking in the Wavelet Transform Domain

Authors: Kamran Hameed, Adeel Mumtaz, S.A.M. Gilani

Abstract:

In this paper, we begin by characterizing the most important and distinguishing features of wavelet-based watermarking schemes, surveying the overwhelming number of algorithms proposed in the literature. The copyright protection application scenario is considered and, building on the experience gained, two representative watermarking schemes are implemented. A detailed comparison and the obtained results are presented and discussed. We conclude that Joo's [1] technique is more robust to standard noise attacks than Dote's [2] technique.
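
Wavelet-domain embedding of the kind surveyed perturbs selected subband coefficients. A minimal additive-embedding sketch with PyWavelets follows; the gain, subband choice and non-blind extraction rule are generic illustrations, not Joo's or Dote's specific schemes:

```python
import numpy as np
import pywt

def embed(image, bits, alpha=8.0):
    """Additively embed +-1 watermark bits into the horizontal detail subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    flat = cH.reshape(-1)
    flat[: len(bits)] += alpha * np.asarray(bits)       # +-1 bits scaled by gain
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract(marked, original, n_bits):
    """Non-blind extraction: compare subbands of the marked and original images."""
    _, (cH_m, _, _) = pywt.dwt2(marked.astype(float), "haar")
    _, (cH_o, _, _) = pywt.dwt2(original.astype(float), "haar")
    diff = (cH_m - cH_o).reshape(-1)[:n_bits]
    return np.sign(diff).astype(int)
```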

Keywords: Digital image, Copyright protection, Watermarking, Wavelet transform.

1613 Clustering Categorical Data Using Hierarchies (CLUCDUH)

Authors: Gökhan Silahtaroğlu

Abstract:

Clustering large populations is an important problem when the data contain noise and clusters of different shapes. A good clustering algorithm or approach should be efficient enough to detect clusters sensitively. Besides space complexity, time complexity also gains importance as the data size grows. Using hierarchies, we develop a new algorithm that splits attributes according to their values, choosing the splitting dimension so as to divide the database into roughly equal parts. At each node, we calculate descriptive statistical features of the data residing there, and by pruning we generate the natural clusters with a complexity of O(n).

Keywords: Clustering, tree, split, pruning, entropy, gini.

1612 High Resolution Methods Based On Rank Revealing Triangular Factorizations

Authors: M. Bouri, S. Bourennane

Abstract:

In this paper, we propose a novel subspace estimation method for high-resolution source localization that avoids eigendecomposition: the sample Cross-Spectral Matrix (CSM) is replaced by the upper triangular matrix obtained from LU factorization. This decreases the computational complexity. The method relies on a recently published result on Rank-Revealing LU (RRLU) factorization. Simulation results demonstrate that the new algorithm outperforms both the Householder rank-revealing QR (RRQR) factorization method and MUSIC in low Signal-to-Noise Ratio (SNR) scenarios.

Keywords: Factorization, localization, matrix, signal subspace.

1611 Water and Soil Environment Pollution Reduction by Filter Strips

Authors: Roy R. Gu, Mahesh Sahu, Xianggui Zhao

Abstract:

Contour filter strips planted with perennial vegetation can be used to improve surface and ground water quality by reducing pollutant (such as NO3-N) and sediment outflow from cropland to a river or lake. Filter strips of perennial grasses with biofuel potential also offer the economic benefit of ethanol production. In this study, the Soil and Water Assessment Tool (SWAT) model was applied to the Walnut Creek Watershed to examine the effectiveness of contour strips in reducing NO3-N outflows from crop fields to the river or lake. Required input data include watershed topography, slope, soil type, land use, management practices in the watershed and climate parameters (precipitation, maximum/minimum air temperature, solar radiation, wind speed and relative humidity). Numerical experiments were conducted to identify potential subbasins in the watershed that have high water quality impact, and to examine the effects of strip size and location on NO3-N reduction in the subbasins under various meteorological conditions (dry, average and wet). Variable sizes of contour strips (10%, 20%, 30% and 50% of a subbasin area) planted with perennial switchgrass were selected for simulating the effects of strip size and location on stream water quality. Simulation results showed that a filter strip occupying 10-50% of the subbasin area could lead to 55-90% NO3-N reduction in the subbasin during an average rainfall year. Strips occupying 10-20% of the subbasin area were found to be more efficient in reducing NO3-N when placed along the contour than when placed along the river. The results of this study can assist in cost-benefit analysis and decision-making in best water resources management practices for environmental protection.

Keywords: modeling, SWAT, water quality, NO3-N, watershed.

1610 Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise

Authors: Jale Çatak, Bayram Yılmaz, Mustafa Ozilgen

Abstract:

The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses are carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of mitochondrial energy efficiency makes it difficult to maintain the muscle tissue, which in turn causes a decline in muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, the exergy destruction is 1.49 times the work performance; this ratio becomes 2.50 and 5.21 when the exergetic efficiency decreases to 0.30 and 0.17, respectively.

Keywords: Aging mitochondria, entropy generation, exergy destruction, muscle work performance, second law efficiency.

1609 Intelligent Audio Watermarking using Genetic Algorithm in DWT Domain

Authors: M. Ketcham, S. Vongpradhip

Abstract:

In this paper, an innovative watermarking scheme for audio signals based on genetic algorithms (GA) in the discrete wavelet transform domain is proposed. It is robust against the watermarking attacks commonly employed in the literature. In addition, the watermarked audio quality is also considered. We employ a GA for the optimal localization and intensity of the watermark. The watermark detection process can be performed without using the original audio signal. The experimental results demonstrate that the watermark is inaudible and robust to many digital signal processing operations, such as cropping, low-pass filtering and additive noise.

Keywords: Intelligent audio watermarking, genetic algorithm, DWT domain.

1608 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, so this paper considers the problem of finding the optimal configuration of a radial distribution system for loss reduction. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed based on a simple optimum loss calculation by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches; the problem can thus be viewed as determining an optimal tree of the graph, which simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, with improvements made to the chromosome coding. For comparison, an implementation of the algorithm presented in [7], with modifications to the load flow program, is also applied. In [7], the choice of the switches to be opened is based on simple heuristic rules; that algorithm reduces the number of load flow runs, narrows the switching combinations to a smaller number and yields the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on the 33-bus test system. The results show that the proposed method performs better than the method of [7] as well as other methods.
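
The tree-model view lends itself to a toy illustration: each candidate configuration is a spanning tree of the network graph, and the branches absent from the tree are the opened switches. The sketch below samples random spanning trees and scores them with a crude resistive-loss proxy; these stand in for the paper's refined GA and proper load flow calculation:

```python
import random
import networkx as nx

def loss_proxy(tree, loads, root=0):
    """Rough I^2*R proxy: each node's load current flows along its path to the root."""
    total = 0.0
    for v, demand in loads.items():
        path = nx.shortest_path(tree, root, v)       # unique path in a tree
        for a, b in zip(path, path[1:]):
            total += tree[a][b]["r"] * demand ** 2
    return total

def reconfigure(G, loads, trials=500, seed=0):
    """G: graph with per-branch resistance attribute 'r'; loads: node -> demand."""
    rng = random.Random(seed)
    best, best_loss = None, float("inf")
    edges = list(G.edges)
    for _ in range(trials):
        rng.shuffle(edges)
        T = nx.Graph()
        T.add_nodes_from(G)
        for u, v in edges:                           # Kruskal-style random spanning tree
            if not nx.has_path(T, u, v):
                T.add_edge(u, v, r=G[u][v]["r"])
        loss = loss_proxy(T, loads)
        if loss < best_loss:
            best, best_loss = T, loss
    opened = set(G.edges) - set(best.edges)          # switches to open
    return opened, best_loss
```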

Keywords: Distribution system, reconfiguration, loss reduction, graph theory, optimization, genetic algorithm.
