Search results for: block type channel estimation.
372 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches
Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani
Abstract:
This paper deals with an adaptive multiuser detector for direct-sequence code-division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered in a time-varying channel environment. Detector updating is performed with two criteria: the mean square error (MSE) and the minimum output energy (MOE) optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile for the multipath replicas of the user of interest. The performance of both schemes is studied for practical SNR conditions. Results show that for poor SNR the MSE precombining LMMSE detector is better than the blind precombining LMMSE detector, but for higher SNR the MOE scheme performs better.
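A minimal sketch of the MSE-driven weight update behind such adaptive linear detectors, assuming a simple LMS-style stochastic-gradient step on a generic training sequence; this is illustrative only and not the authors' precombining detector, and all names (r, d, mu) are hypothetical:

```python
import numpy as np

def lms_mmse_filter(r, d, n_taps, mu=0.01):
    """Illustrative LMS adaptation of a linear detector's weights.

    r  : received chip-rate samples (1-D array)
    d  : known or decision-directed training symbols
    mu : step size controlling the MSE-descent update
    """
    w = np.zeros(n_taps)                 # detector (filter) weights
    for k in range(len(d)):
        x = r[k:k + n_taps]              # data vector for symbol k
        if len(x) < n_taps:
            break
        y = w @ x                        # detector output
        e = d[k] - y                     # error between data and adaptive output
        w = w + mu * e * x               # MSE-minimizing stochastic-gradient step
    return w
```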
371 An Attempt to Predict the Performances of a Rocket Thrust Chamber
Authors: A. Benarous, D. Karmed, R. Haoui, A. Liazid
Abstract:
The process of predicting the ballistic properties of a liquid rocket engine is based on the quantitative estimation of deviations from idealized performance. To this end, an equilibrium chemistry procedure is first developed and implemented in a Fortran routine. The thermodynamic formulation allows for the calculation of the theoretical performances of a rocket thrust chamber. In a second step, a computational fluid dynamic analysis of the turbulent reactive flow within the chamber is performed using a finite volume approach. The obtained values for the "quasi-real" performances account for both turbulent mixing and chemistry-turbulence coupling. In the present work, emphasis is placed on the combustion efficiency performance, for which the deviation is mainly due to radial gradients of static temperature and mixture ratio. Numerical values of the characteristic velocity are successfully compared with results from an industry-used code. The results are also confronted with the experimental data of a laboratory-scale rocket engine.
Keywords: JANAF methodology, Liquid rocket engine, Mascotte test-rig, Theoretical performances.
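For context, the characteristic velocity mentioned in the abstract is commonly defined from the chamber pressure, throat area and propellant mass flow rate, and the associated combustion efficiency as the ratio of measured to theoretical values (standard textbook relations, not equations reproduced from the paper):

$$ c^{*} = \frac{p_c\,A_t}{\dot{m}}, \qquad \eta_{c^{*}} = \frac{c^{*}_{\mathrm{exp}}}{c^{*}_{\mathrm{theo}}} $$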
370 Contribution of On-Site and Off-Site Processes to Greenhouse Gas (GHG) Emissions by Wastewater Treatment Plants
Authors: Laleh Yerushalmi, Fariborz Haghighat, Maziar Bani Shahabadi
Abstract:
The estimation of overall on-site and off-site greenhouse gas (GHG) emissions by wastewater treatment plants revealed that in anaerobic and hybrid treatment systems greater emissions result from off-site processes compared to on-site processes. However, in aerobic treatment systems, on-site processes make a higher contribution to the overall GHG emissions. The total GHG emissions were estimated to be 1.6, 3.3 and 3.8 kg CO2-e/kg BOD in the aerobic, anaerobic and hybrid treatment systems, respectively. In the aerobic treatment system without the recovery and use of the generated biogas, the off-site GHG emissions were 0.65 kg CO2-e/kg BOD, accounting for 40.2% of the overall GHG emissions. This value changed to 2.3 and 2.6 kg CO2-e/kg BOD, and accounted for 69.9% and 68.1% of the overall GHG emissions in the anaerobic and hybrid treatment systems, respectively. The increased off-site GHG emissions in the anaerobic and hybrid treatment systems are mainly due to material usage and energy demand in these systems. The anaerobic digester can contribute up to 100%, 55% and 60% of the overall energy needs of plants in the aerobic, anaerobic and hybrid treatment systems, respectively.
Keywords: On-site and off-site greenhouse gas (GHG) emissions, wastewater treatment plants, biogas recovery
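A quick back-of-the-envelope check of the reported on-site/off-site split, using only the figures quoted in the abstract (the quoted totals appear to be rounded, so the recomputed shares agree only approximately with the reported percentages):

```python
# Off-site share of overall GHG emissions, from the reported figures (kg CO2-e/kg BOD)
systems = {"aerobic": (1.6, 0.65), "anaerobic": (3.3, 2.3), "hybrid": (3.8, 2.6)}
for name, (total, off_site) in systems.items():
    share = 100 * off_site / total
    print(f"{name}: off-site {share:.1f}% of total")
    # ~40.6%, ~69.7%, ~68.4% vs. the reported 40.2%, 69.9%, 68.1% (input rounding)
```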
369 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals
Authors: Michal Natora, Felix Franke, Klaus Obermayer
Abstract:
Linear convolutive filters are fast in calculation and in application, and thus, often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
Keywords: Adaptive matched filter, minimum variance distortionless response, beamforming, Capon beamformer, linear filters, performance measure.
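A generic sketch of joint detection and arrival-time estimation with a linear (matched-filter-style) correlator, for orientation only; it is not the filter family proposed in the paper, and the threshold is a hypothetical parameter:

```python
import numpy as np

def detect_arrival(x, template, threshold):
    """Correlate a data stream x with a known transient waveform (template).

    A correlation peak above `threshold` is declared a detection, and the lag
    of that peak is the estimated arrival time (sample index).
    """
    # 'valid' mode: index k compares the template against x[k : k+len(template)]
    score = np.correlate(x, template, mode="valid")
    k = int(np.argmax(score))
    return (k, score[k]) if score[k] > threshold else (None, score[k])
```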
368 Semi-automatic Background Detection in Microscopic Images
Authors: Alessandro Bevilacqua, Alessandro Gherardi, Ludovico Carozza, Filippo Piccinini
Abstract:
Recent years have seen an increasing use of image analysis techniques in the field of biomedical imaging, in particular in microscopic imaging. Most image analysis techniques rely on a background image free of objects of interest, whether they are cells or histological samples, to perform further analysis such as segmentation or mosaicing. Commonly, this image consists of an empty field acquired in advance. However, acquiring an empty field is often not feasible. Moreover, it may differ from the background region of the sample actually being studied, because of the interaction with the organic matter. Finally, it can be expensive, for instance in the case of live cell analyses. We propose a non-parametric and general-purpose approach in which the background is built automatically from a sequence of images that may also contain objects of interest. The amount of object-free area in each image only affects the overall speed of obtaining the background. Experiments with different kinds of microscopic images prove the effectiveness of our approach.
Keywords: Microscopy, flat field correction, background estimation, image segmentation.
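A common baseline for building a background from a sequence in which objects move between frames is the per-pixel temporal median; the sketch below illustrates that idea only and is not the authors' exact non-parametric procedure:

```python
import numpy as np

def estimate_background(frames):
    """Estimate a background image from a stack of frames of shape (N, H, W).

    Pixels covered by objects of interest in some frames are still background
    in most others, so the temporal median recovers an object-free image.
    """
    stack = np.asarray(frames, dtype=np.float32)
    return np.median(stack, axis=0)
```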
367 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and a single output can be modeled with a shallow neural network. Gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient momentum contribute to faster convergence and enhance model ability. Lastly, experimental results proved the effectiveness of the second-order gradient descent algorithm and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
Keywords: Dynamic system modeling, neural network, normal equation, second order gradient descent.
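To make the two fitting routes contrasted above concrete, here is a minimal comparison of the closed-form normal equation and plain batch gradient descent on a linear least-squares model; this is generic illustration (X, y, lr are hypothetical names), not the paper's neural-network implementation:

```python
import numpy as np

def fit_normal_equation(X, y):
    """Closed-form least-squares fit of a linear model y ~ X w."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def fit_gradient_descent(X, y, lr=0.01, epochs=500):
    """Batch gradient descent on the same quadratic loss, for comparison."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5 * mean squared error
        w -= lr * grad
    return w
```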
366 Combining Color and Layout Features for the Identification of Low-resolution Documents
Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold
Abstract:
This paper proposes a method, combining color and layout features, for identifying documents captured with low-resolution handheld devices. On one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Finally, our experiments consider slide documents, which are often captured using handheld devices.
Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.
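As a reminder of the kernel density estimation listed in the keywords (standard definition, not a formula quoted from the paper), the density estimate from samples $x_1,\dots,x_n$ with kernel $K$ and bandwidth $h$ is:

$$ \hat{f}_h(x) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right) $$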
365 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property of the abundances: adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.
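A plausible form of the resulting objective, written here only to make the combined penalties concrete (the exact weights and constraints used in the paper may differ), with spectral library $A$, observed data $Y$ and abundance matrix $X$:

$$ \min_{X \ge 0}\ \tfrac{1}{2}\lVert A X - Y \rVert_F^2 \;+\; \lambda \lVert X \rVert_{2,p} \;+\; \tau \lVert X \rVert_* $$

Here the $\ell_{2,p}$ mixed norm enforces joint sparsity across adjacent pixels and the nuclear norm $\lVert X \rVert_*$ enforces the low-rank (rank-deficiency) assumption.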
364 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry
Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin
Abstract:
Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material in energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS samples from the local market, one red and one grey, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. From the experiments, it was discovered that the red XPS released more heat than the grey XPS and that both materials showed two mass-loss stages. Consequently, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, different activation energy profiles, resulting from the different chemical pathways, appeared when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.
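For orientation, the two isoconversional methods named above are usually written as follows (standard textbook forms with heating rate $\beta$, conversion $\alpha$, activation energy $E_\alpha$, pre-exponential factor $A_\alpha$ and gas constant $R$; these are not equations reproduced from the paper):

$$ \text{KAS:}\quad \ln\!\frac{\beta}{T_\alpha^{2}} = \ln\!\frac{A_\alpha R}{E_\alpha\, g(\alpha)} - \frac{E_\alpha}{R T_\alpha} $$

$$ \text{FWO:}\quad \ln\beta = \ln\!\frac{A_\alpha E_\alpha}{R\, g(\alpha)} - 5.331 - 1.052\,\frac{E_\alpha}{R T_\alpha} $$

In both cases $E_\alpha$ is obtained from the slope of a linear fit over several heating rates at fixed conversion.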
363 Spatio-Temporal Analysis and Mapping of Malaria in Thailand
Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit
Abstract:
This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling MCMC. A conditional autoregressive (CAR) model is assumed to represent the spatial effects. The temporal correlation is represented through the covariance matrix of the random effects. The quarterly malaria data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than the GLMM with spatial effects but without temporal terms.
Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation.
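A sketch of the kind of model described, assuming a Poisson GLMM for counts $y_{it}$ in area $i$ and quarter $t$ with expected counts $E_{it}$ and an intrinsic CAR prior on the spatial effects (the paper's exact specification may differ):

$$ y_{it} \sim \text{Poisson}(E_{it}\lambda_{it}), \qquad \log \lambda_{it} = \beta_0 + \beta_1\,\text{rain}_{it} + \beta_2\,\text{temp}_{it} + u_i + v_t $$

$$ u_i \mid u_{-i} \sim \mathcal{N}\!\Big(\tfrac{1}{n_i}\textstyle\sum_{j \sim i} u_j,\ \tfrac{\sigma_u^2}{n_i}\Big), \qquad \mathbf{v} \sim \mathcal{N}(\mathbf{0}, \Sigma_v) $$

where $j \sim i$ denotes neighbouring areas, $n_i$ is the number of neighbours of area $i$, and $\Sigma_v$ carries the temporal correlation of the random effects.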
362 Deployment of a Biocompatible International Space Station into Geostationary Orbit
Authors: Tim Falk, Chris Chatwin
Abstract:
This study explores the possibility of a space station that will occupy a geostationary equatorial orbit (GEO) and create artificial gravity using centripetal acceleration. The concept of the station is to create a habitable, safe environment that can increase the possibility of space tourism by reducing the wide variation of hazards associated with space exploration. The ability to control the intensity of artificial gravity through Hall-effect thrusters will allow experiments to be carried out at different levels of artificial gravity. A feasible prototype model was built to convey the concept and to enable cost estimation. The SpaceX Falcon Heavy rocket, with a 26,700 kg payload to GEO, was selected to take the 675 tonne spacecraft into orbit; space station construction will require up to 30 launches, which would be reduced to 5 launches when the SpaceX BFR becomes available. The estimated total cost of implementing the Sussex Biocompatible International Space Station (BISS) is approximately $47.039 billion, which is very attractive when compared to the cost of the International Space Station, which cost $150 billion.
Keywords: Artificial gravity, biocompatible, geostationary orbit, space station.
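A quick consistency check on the launch count using only the masses quoted above; the abstract's "up to 30 launches" leaves margin above this simple lower bound:

```python
import math

station_mass_kg = 675_000        # 675 tonne spacecraft
falcon_heavy_to_geo_kg = 26_700  # quoted Falcon Heavy payload to GEO

min_launches = math.ceil(station_mass_kg / falcon_heavy_to_geo_kg)
print(min_launches)   # 26 -- consistent with the quoted "up to 30 launches"
```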
361 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract:
The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism used to update the parameters over time is the scaled score of the likelihood function. The results revealed high persistence of the time-varying parameters, as the location parameter is higher, and the skewness parameter implied a departure of the scale parameter from normality, with the unconditional parameter at 1.5. The results also revealed persistence of leptokurtic behaviour in the stock returns, which implies that the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we propose further research, specifically to model the time-varying parameters with a more detailed model that captures the heavy-tailed distribution of the series and computes the risk measure associated with the returns.
Keywords: Generalized autoregressive score model, stock returns, time-varying.
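For reference, the generic GAS(1,1) updating recursion that underlies such models (standard form from the GAS literature, not copied from this paper) is:

$$ f_{t+1} = \omega + A\,s_t + B\,f_t, \qquad s_t = S_t\,\nabla_t, \qquad \nabla_t = \frac{\partial \ln p(y_t \mid f_t;\theta)}{\partial f_t} $$

where $f_t$ is the time-varying parameter, $\nabla_t$ is the score of the predictive density and $S_t$ is a scaling matrix, typically a power of the inverse Fisher information.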
360 The Estimation of Human Vital Signs Complexity
Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius
Abstract:
Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals with special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on the interpretation of changes in the human body's functional states, combined with an analytical method based on matrix theory for physiological signal analysis, which was applied to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes, peculiar to each individual, at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by data obtained from the suggested approach of evaluating the interactions of functional variables.
Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.
359 Nodal Load Profiles Estimation for Time Series Load Flow Using Independent Component Analysis
Authors: Mashitah Mohd Hussain, Salleh Serwan, Zuhaina Hj Zakaria
Abstract:
This paper presents a method to estimate load profiles from multiple power flow solutions computed for every minute over 24 hours per day. A method to calculate multiple solutions of a non-linear profile is introduced. The Power System Simulation/Engineering (PSS®E) software and Python have been used to solve the load power flow. The results of these power flow solutions have been used to estimate the load profile at each load bus using Independent Component Analysis (ICA), without any knowledge of the parameters and network topology of the system. The proposed algorithm is tested with the IEEE 69-bus test system, representing the distribution part, and the ICA method has been programmed in MATLAB R2012b. Simulation results and estimation errors are discussed in this paper.
Keywords: Electrical Distribution System, Power Flow Solution, Distribution Network, Independent Component Analysis, Newton Raphson, Power System Simulation for Engineering.
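A minimal sketch of the ICA step using scikit-learn's FastICA, assuming the per-minute bus quantities from the power flow runs are stacked into a matrix; the paper itself uses a MATLAB implementation, and the input file name and variable names here are hypothetical:

```python
import numpy as np
from sklearn.decomposition import FastICA

# X: (n_samples, n_buses) matrix of per-minute quantities from the power flow runs
X = np.loadtxt("bus_measurements.csv", delimiter=",")   # hypothetical input file

ica = FastICA(n_components=X.shape[1], random_state=0)
S = ica.fit_transform(X)       # estimated independent source (load) profiles
A = ica.mixing_                # estimated mixing matrix (sources -> bus measurements)
```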
358 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal and helps in understanding the information carried by the speech. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.
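An illustrative per-frame wavelet denoising step with PyWavelets; the decomposition level is assumed to come from the paper's signal/noise-dominance rule, which is not reproduced here, and the universal threshold below is only a common default rather than the variance-subtraction rule itself:

```python
import numpy as np
import pywt

def denoise_frame(frame, level, wavelet="db4"):
    """Soft-threshold the detail coefficients of one speech frame.

    `level` is assumed to be chosen per frame by an external decision rule
    (signal-dominant vs. noise-dominant frames).
    """
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(frame)))              # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]
```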
357 Effect of VA-Mycorrhiza on Growth and Yield of Sunflower (Helianthus annuus L.) at Different Phosphorus Levels
Authors: Hossein Soleimanzadeh
Abstract:
The effect of seed inoculation with VA-mycorrhiza and different levels of phosphorus fertilizer on the growth and yield of sunflower (Azargol cultivar) was studied at the experimental farm of Islamic Azad University, Karaj Branch, during the 2008 growing season. The experimental treatments were arranged as a factorial based on a complete randomized block design with three replications. Four phosphorus fertilizer levels of 25%, 50%, 75% and 100% of the recommended P, combined with two mycorrhiza levels, with and without mycorrhiza (control), were assigned in a factorial combination. Results showed that head diameter, number of seeds per head, seed yield and oil yield were significantly higher in inoculated plants than in non-inoculated plants. Head diameter, number of seeds per head, 1000-seed weight, biological yield, seed yield and oil yield increased with increasing P level up to 75% of the recommended P in non-inoculated plants, whereas no significant difference was observed between 75% and 100% of the recommended P. The positive effect of mycorrhizal inoculation decreased with increasing P levels due to decreased percent root colonization at higher P levels. According to the results of this experiment, application of mycorrhiza together with 50% of the recommended P performed well and could increase seed yield and oil production to an acceptable level, so it could be considered a suitable substitute for chemical phosphorus fertilizer in organic agricultural systems.
Keywords: Phosphorus fertilizer, seed yield, sunflower, VA-mycorrhiza.
356 Comparison of Hough Transform and Mean Shift Algorithm for Estimation of the Orientation Angle of Industrial Data Matrix Codes
Authors: Ion-Cosmin Dita, Vasile Gui, Franz Quint, Marius Otesteanu
Abstract:
In the automatic manufacturing and assembly of mechanical, electrical and electronic parts, one needs to reliably identify the position of components and to extract the information carried by these components. Data Matrix Codes (DMC) are nowadays well established in many areas of industrial manufacturing thanks to their concentration of information in small spaces. In today's typically order-driven industry, where increased tracing requirements prevail, they offer further advantages over other identification systems. This underlines in an impressive way the necessity of a robust code reading system for detecting DMC on components in factories. This paper compares two methods for estimating the orientation angle of Data Matrix Codes: one based on the Hough Transform and the other based on the Mean Shift algorithm. We concentrate on Data Matrix Codes in an industrial environment, punched, milled, lasered or etched on different materials in arbitrary orientations.
Keywords: Industrial data matrix code, Hough transform, mean shift.
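A minimal illustration of the Hough-transform route to an orientation estimate with OpenCV; this is a generic sketch rather than the authors' pipeline, and the edge-detection and accumulator thresholds are arbitrary placeholder values:

```python
import cv2
import numpy as np

def dmc_orientation_hough(gray):
    """Estimate a dominant line orientation (degrees) in a grayscale DMC image."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)   # (rho, theta) per detected line
    if lines is None:
        return None
    thetas = lines[:, 0, 1]                              # line angles in radians
    return float(np.degrees(np.median(thetas)))          # robust summary of the angles
```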
355 Calibration of Syringe Pumps Using Interferometry and Optical Methods
Authors: E. Batista, R. Mendes, A. Furtado, M. C. Ferreira, I. Godinho, J. A. Sousa, M. Alvares, R. Martins
Abstract:
Syringe pumps are commonly used for drug delivery in hospitals and clinical environments. These instruments are critical in neonatology and oncology, where any variation in the flow rate and drug dosing quantity can lead to severe incidents and even the death of the patient. Therefore, it is very important to determine the accuracy and precision of these devices using suitable calibration methods. The Volume Laboratory of the Portuguese Institute for Quality (LVC/IPQ) uses two different methods to calibrate syringe pumps from 16 nL/min up to 20 mL/min. The interferometric method uses an interferometer to monitor the distance travelled by the pusher block of the syringe pump in order to determine the flow rate. Knowing the internal diameter of the syringe with very high precision, the travelled distance, and the time needed for that travelled distance, it is possible to calculate the flow rate of the fluid inside the syringe and its uncertainty. As an alternative to the gravimetric and interferometric methods, a methodology based on optical technology was also developed to measure flow rates; it mainly relies on measuring the increase in volume of a drop over time. The objective of this work is to compare the results of the calibration of two syringe pumps using the different methodologies described above. The obtained results were consistent for the three methods used. The uncertainty values were very similar for all three methods, being higher for the optical drop method due to setup limitations.
Keywords: Calibration, interferometry, syringe pump, optical method, uncertainty.
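The interferometric calculation described above amounts to multiplying the syringe's cross-sectional area by the pusher speed; a small sketch with hypothetical numbers (not values from the paper):

```python
import math

def flow_rate_uL_per_min(inner_diameter_mm, distance_mm, time_s):
    """Flow rate from syringe bore, pusher travel and elapsed time (interferometric principle)."""
    area_mm2 = math.pi * (inner_diameter_mm / 2) ** 2
    volume_uL = area_mm2 * distance_mm            # 1 mm^3 == 1 microlitre
    return volume_uL / (time_s / 60.0)

# hypothetical example: 10 mm bore, 0.5 mm of travel in 60 s  ->  ~39.3 uL/min
print(flow_rate_uL_per_min(10.0, 0.5, 60.0))
```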
354 Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise
Authors: K. Kamalanand, P. Mannar Jawahar
Abstract:
Many systems in the natural world exhibit chaos or non-linear behavior, the complexity of which is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals which are corrupted by a random noise. In this work, a method for estimation of Lyapunov exponents from noisy time series using unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. In this paper, the objective of the work, the proposed methodology and validation results are discussed in detail.
Keywords: Lyapunov exponents, unscented transformation, chaos theory, neural networks.
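As a reminder of the quantity being estimated (standard definition, not a formula from the paper), the largest Lyapunov exponent measures the average exponential rate at which two initially close trajectories separate:

$$ \lambda_{\max} = \lim_{t \to \infty}\ \lim_{\lVert \delta \mathbf{Z}_0 \rVert \to 0}\ \frac{1}{t}\,\ln\frac{\lVert \delta \mathbf{Z}(t) \rVert}{\lVert \delta \mathbf{Z}_0 \rVert} $$

with $\lambda_{\max} > 0$ signalling sensitivity to initial conditions, i.e. chaos.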
353 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of the shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method for predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
352 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon
Authors: Haniye Dehestani, Yadollah Ordokhani
Abstract:
In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer and variable fractional order of integration by using some properties of the Riemann-Liouville fractional integral. These are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. Comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.
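For orientation, one common convention for the variable-order Riemann-Liouville fractional integral underlying such operational matrices is (a standard definition; the paper may adopt a slightly different variable-order convention):

$$ \bigl(I^{\alpha(t)} f\bigr)(t) = \frac{1}{\Gamma(\alpha(t))} \int_0^{t} (t-s)^{\alpha(t)-1} f(s)\,\mathrm{d}s, \qquad \alpha(t) > 0 $$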
351 Potential of Safflower (Carthamus tinctorius L.) for Phytoremediation of Soils Contaminated with Heavy Metals
Authors: Violina R. Angelova, Vanja I. Akova, Stefan V. Krustev, Krasimir I. Ivanov
Abstract:
A field study was conducted to evaluate the efficacy of the safflower plant for phytoremediation of contaminated soils. The experiment was performed on agricultural fields contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. Field experiments with a randomized complete block design with five treatments (control, compost amendments added at 20 and 40 t/daa, and vermicompost amendments added at 20 and 40 t/daa) were carried out. The quality of safflower seeds and oil (heavy metals and fatty acid composition) was determined. The tested organic amendments significantly influenced the chemical composition of safflower seeds and oil. The compost and vermicompost treatments significantly reduced the heavy metal concentrations in safflower seeds and oil, but the effect differed between them. Addition of vermicompost and compost led to an increase in the content of palmitic acid and linoleic acid, and a decrease in stearic and oleic acids compared with the control. A significant increase in the quantity of saturated acids was observed in the variants with 20 t/daa of compost and 20 t/daa of vermicompost (9.1 and 8.9% relative to the control). Safflower is a plant which is tolerant to heavy metals and can be successfully used in the phytoremediation of heavy metal contaminated soils. Processing the seeds to oil and using the obtained oil for nutritional purposes will greatly reduce the cost of phytoremediation.
Keywords: Heavy metals, organic amendments, phytoremediation, safflower.
350 Ionanofluids as Novel Fluids for Advanced Heat Transfer Applications
Authors: S. M. Sohel Murshed, C. A. Nieto de Castro, M. J. V. Lourenço, J. França, A. P. C. Ribeiro, S. I. C. Vieira, C. S. Queirós
Abstract:
Ionanofluids are a new and innovative class of heat transfer fluids which exhibit fascinating thermophysical properties compared to their base ionic liquids. This paper deals with the findings on the thermal conductivity and specific heat capacity of ionanofluids as a function of temperature and nanotube concentration. Simulation results using ionanofluids as coolants in a heat exchanger are also used to assess their feasibility and performance in heat transfer devices. Results on the thermal conductivity and heat capacity of ionanofluids, as well as the estimation of heat transfer areas for ionanofluids and ionic liquids in a model shell and tube heat exchanger, reveal that ionanofluids possess superior thermal conductivity and heat capacity and require considerably less heat transfer area compared to their base ionic liquids. This novel class of fluids shows great potential for advanced heat transfer applications.
Keywords: Heat transfer, Ionanofluids, Ionic liquids, Nanotubes, Thermal conductivity.
349 A Reliable FPGA-based Real-time Optical-flow Estimation
Authors: M. M. Abutaleb, A. Hamdy, M. E. Abuelwafa, E. M. Saad
Abstract:
Optical flow has been a research topic of interest for many years. It has, until recently, been largely inapplicable to real-time applications due to its computationally expensive nature. This paper presents a new reliable flow technique, combined with a motion detection algorithm operating on stationary-camera image streams, to allow flow-based analyses of moving entities, such as rigidity, in real time. The combination of the optical flow analysis with the motion detection technique greatly reduces the expensive computation of flow vectors compared with standard approaches, rendering the method applicable to real-time implementation. This paper also describes the hardware implementation of a proposed pipelined system to estimate the flow vectors from image sequences in real time. The design can process 768 x 576 images at a very high frame rate, reaching 156 fps, in a single low-cost FPGA chip, which is adequate for most real-time vision applications.
Keywords: Optical flow, motion detection, real-time systems, FPGA.
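A quick check of the pixel throughput implied by the quoted resolution and frame rate:

```python
width, height, fps = 768, 576, 156
pixels_per_second = width * height * fps
print(f"{pixels_per_second / 1e6:.1f} Mpixel/s")   # ~69.0 Mpixel/s sustained by the pipeline
```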
348 Investigating Performance of Numerical Distance Relay with Higher Order Antialiasing Filter
Authors: Venkatesh C., K. Shanti Swarup
Abstract:
This paper investigates the impact on operating time delay and relay maloperation when 1st, 2nd and 3rd order analog antialiasing filters are used in numerical distance protection. An RC filter with a cut-off frequency of 90 Hz is used. Simulations are carried out for different SIR (source to line impedance ratio), load, fault type and fault conditions using SIMULINK, where the voltage and current signals are fed online to the developed numerical distance relay model. Matlab is used for plotting the impedance trajectory. Investigation results show that in about 75% of the simulated cases the numerical distance relay operating time is not increased, even though there is a time delay when higher order filters are used. Relay maloperation also reduces (i.e., selectivity increases) when higher order filters are used in numerical distance protection.
Keywords: Antialiasing, capacitive voltage transformers, delay estimation, discrete Fourier transform (DFT), distance measurement, low-pass filters, source to line impedance ratio (SIR), protective relaying.
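For reference, the 90 Hz cut-off quoted above corresponds, for a single RC stage, to the textbook first-order low-pass relations below; higher-order filters steepen the roll-off to roughly -20n dB/decade beyond the cut-off (standard background, not formulas taken from the paper):

$$ f_c = \frac{1}{2\pi RC}, \qquad \lvert H_1(f)\rvert = \frac{1}{\sqrt{1 + (f/f_c)^2}} $$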
347 Affine Radial Basis Function Neural Networks for the Robust Control of Hyperbolic Distributed Parameter Systems
Authors: Eleni Aggelogiannaki, Haralambos Sarimveis
Abstract:
In this work, a radial basis function (RBF) neural network is developed for the identification of hyperbolic distributed parameter systems (DPSs). This empirical model is based only on process input-output data and is used for the estimation of the controlled variables at specific locations, without the need for an online solution of partial differential equations (PDEs). The nonlinear model obtained is suitably transformed to a nonlinear state space formulation that also takes the model mismatch into account. A stable robust control law is implemented for the attenuation of external disturbances. The proposed identification and control methodology is applied to a long duct, a common component of thermal systems, for flow-based control of the temperature distribution. The closed-loop performance is significantly improved in comparison to existing control methodologies.
Keywords: Hyperbolic Distributed Parameter Systems, Radial Basis Function Neural Networks, H∞ control, Thermal systems.
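As background, the output of an RBF network with Gaussian basis functions, centres $\mathbf{c}_i$ and widths $\sigma_i$ takes the standard form below; the affine variant proposed in the paper adds structure on top of this, which is not reproduced here:

$$ \hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{N} w_i \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^2}{2\sigma_i^2}\right) $$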
346 Synthesizing an Artificial Loess for Geotechnical Investigations of Collapsible Soil Behavior
Authors: Hamed Sadeghi, Pouya A. Panahi, Hamed Nasiri, Mohammad Sadeghi
Abstract:
Collapsible soils like loess comprise an important category of problematic soils for construction purposes and sustainable development. As a result, research on both the geological and geotechnical aspects of this type of soil has been in progress for decades. However, the considerable natural variability in the physical properties of in-situ loess strata, even within a single block sample, challenges fundamental laboratory investigations. The reason is that it is practically impossible to remove the effect of a specific factor, such as void ratio, from fair comparisons in order to come to a reliable conclusion. To cope with this limitation, two types of artificially made dispersive and calcareous loess are introduced, which can easily be reproduced in any soil mechanics laboratory provided that all of their constituents are known and controlled. The collapse potential is explored for a variety of soil water salinities and lime contents, and comparisons are made against the natural soil behavior. Trends are reported for the influence of pore water salinity on collapse potential under different osmotic flow conditions. The most important advantage of artificial loess is the ease of controlling the cementing agent content, such as calcite, or the dispersive potential when studying their influence on mechanical soil behavior.
Keywords: Artificial loess, unsaturated soils, collapse potential, dispersive clays, laboratory tests.
345 Evaluation of Wind Potential for the Lagoon of Venice (Italy) and Estimation of the Annual Energy Output for Two Candidate Horizontal-Axis Low-Wind Turbines
Authors: M. Raciti Castelli, L. M. Moglia, E. Benini
Abstract:
This paper presents an evaluation of the wind potential in the area of the Lagoon of Venice (Italy). A full two-year anemometric measurement campaign, performed by the "Osservatorio Bioclimatologico dell'Ospedale al Mare di Venezia", has been analyzed to obtain the Weibull wind speed distribution and the main wind directions. The annual energy outputs of two candidate horizontal-axis wind turbines ("Aventa AV-7 LoWind" and "Gaia Wind 133-11kW") have been estimated on the basis of the computed Weibull wind distribution, registering a better performance for the former turbine due to a higher ratio between rotor swept area and rated power of the electric generator, which determines a lower cut-in wind speed.
Keywords: Wind potential, Annual Energy Output (AEO), Weibull distribution, Horizontal-Axis Wind Turbine (HAWT).
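A compact sketch of how an annual energy output (AEO) estimate follows from a Weibull fit and a turbine power curve; the shape and scale parameters and the power curve below are placeholders, not the values derived in the paper:

```python
import numpy as np

def annual_energy_output(power_curve, k, c, hours=8760.0):
    """AEO [kWh/yr] = hours * integral of P(v) * Weibull pdf(v) dv."""
    v = np.linspace(0.0, 30.0, 3001)                          # wind speeds [m/s]
    pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)
    return hours * np.trapz(power_curve(v) * pdf, v)

def demo_power_curve(v):
    """Hypothetical 11 kW turbine: cut-in 3 m/s, rated at 10 m/s, cut-out 25 m/s."""
    p = np.clip((v - 3.0) / (10.0 - 3.0), 0.0, 1.0) ** 3 * 11.0   # kW
    return np.where((v < 3.0) | (v > 25.0), 0.0, p)

print(annual_energy_output(demo_power_curve, k=1.6, c=4.5))        # kWh per year
```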
344 Optimum Radio Capacity Estimation of a Single-Cell Spread Spectrum MIMO System under Rayleigh Fading Conditions
Authors: P. Varzakas
Abstract:
In this paper, the problem of estimating the optimal radio capacity of a single-cell spread spectrum (SS) multiple-input multiple-output (MIMO) system operating in a Rayleigh fading environment is examined. The optimisation between the radio capacity and the theoretically achievable average channel capacity (in the sense of information theory) per user of a MIMO single-cell SS system operating in a Rayleigh fading environment is presented. The spectral efficiency is then estimated in terms of the achievable average channel capacity per user during operation over a broadcast time-varying link, and this leads to a simple novel closed-form expression for the optimal radio capacity value based on the maximization of the achieved spectral efficiency. Numerical results are presented to illustrate the proposed analysis.
Keywords: Channel capacity, MIMO systems, radio capacity, Rayleigh fading, spectral efficiency.
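For context, the ergodic (average) channel capacity of an $N_t \times N_r$ MIMO link with an i.i.d. Rayleigh channel matrix $\mathbf{H}$ and equal power allocation is usually written as follows (standard result, not the paper's closed-form expression for the optimal radio capacity):

$$ \bar{C} = \mathbb{E}_{\mathbf{H}}\!\left[ \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{H} \right) \right] \quad \text{bits/s/Hz} $$

where $\rho$ is the average SNR per receive antenna.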
343 Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective
Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur
Abstract:
Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work, we perform a study of some genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives and the lower and upper bounds on them, and then provide a comparison of the introduced primitives.
Keywords: Sorting primitives, genome rearrangements, transpositions, block interchanges, strip exchanges.
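A tiny illustration of the two primitives defined above acting on a permutation (0-based list indices; generic illustration, not the authors' implementation):

```python
def reversal(perm, i, j):
    """Reverse the substring perm[i..j] (inclusive) -- the 'reversal' primitive."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def transposition(perm, i, j, k):
    """Swap the adjacent substrings perm[i..j-1] and perm[j..k-1] -- the 'transposition' primitive."""
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

# example: two operations on a permutation of 1..6
p = [3, 1, 2, 6, 5, 4]
print(reversal(p, 3, 5))          # [3, 1, 2, 4, 5, 6]
print(transposition(p, 0, 1, 3))  # [1, 2, 3, 6, 5, 4]
```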