Search results for: MV noise sources
1352 Utilization of Wheat Bran as Bed Material in Solid State Bacterial Production of Lactic Acid with Various Nitrogen Sources
Abstract:
The present experimental investigation provides a comparative study of lactic acid production by pure strains of Lactobacilli, (1) L. delbrueckii (NCIM 2025), (2) L. pentosus (NCIM 2912), (3) Lactobacillus sp. (NCIM 2734), (4) Lactobacillus sp. (NCIM 2084), and a coculture of Strain-1 and Strain-2, in a solid bed of wheat bran under the influence of different nitrogen sources such as baker's yeast, meat extract and proteose peptone. Among the pure cultures, Strain-3 attained the lowest pH value of 3.44 and hence the highest acid formation, 46.41 g/L, while the coculture attained the overall maximum of 47.56 g/L lactic acid (pH 3.38), at 15 g/L and 20 g/L levels of baker's yeast, respectively.
Keywords: Eco-friendly, lactic acid, lactobacilli, wheat bran
1351 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or suction/discharge recirculation, which can occur only when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging and, if it is excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. In this paper, efforts have been made to explain Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA, NPSHR and their significance, possible reasons for cavitation/internal recirculation, its diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented highlighting that the root cause of the unwanted noise and vibration is cavitation, caused by high specific speeds or inadequate net positive suction head available, which results in damage to the material surfaces of the impeller and suction bells and degradation of machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of circulating water (CW) pumps for future projects to provide sufficient NPSH margin ratios (>1.5) and to limit Nss to 8500-9000 for cavitation-free operation.
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
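To make the quantities above concrete, here is a minimal Python sketch (not from the paper) that computes the suction specific speed and the NPSH margin ratio the author recommends checking; the Nss formula assumes US units (rpm, gpm, ft), and all pump data below are hypothetical.

```python
import math

def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """Nss = N * sqrt(Q) / NPSHR^0.75 in US units (rpm, gpm, ft)."""
    return rpm * math.sqrt(flow_gpm) / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    """NPSH margin ratio NPSHA / NPSHR; the abstract recommends > 1.5."""
    return npsha_ft / npshr_ft

# Hypothetical circulating-water pump data (not from the case study).
rpm, flow_gpm, npsha_ft, npshr_ft = 450.0, 50_000.0, 40.0, 30.0

nss = suction_specific_speed(rpm, flow_gpm, npshr_ft)
ratio = npsh_margin_ratio(npsha_ft, npshr_ft)
print(f"Nss = {nss:.0f} (recommended limit: 8500-9000)")
print(f"NPSH margin ratio = {ratio:.2f} (recommended: > 1.5)")
```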
1350 Angles of Arrival Estimation with Unitary Partial Propagator
Authors: Youssef Khmou, Said Safi
Abstract:
In this paper, we investigate the effect of a real-valued transformation of the spectral matrix of the received data on the Angles of Arrival estimation problem. The Unitary transformation of the Partial Propagator (UPP) for narrowband sources is proposed and applied to a Uniform Linear Array (ULA).
Monte Carlo simulations demonstrate the performance of the UPP spectrum compared with the Forward-Backward Partial Propagator (FBPP) and the Unitary Propagator (UP). The results show that when some of the sources are fully correlated and closer than the Rayleigh angular resolution limit of the broadside array, the UPP method outperforms the FBPP in both spatial resolution and complexity.
Keywords: DOA, Uniform Linear Array, Narrowband, Propagator, Real valued transformation, Subspace, Unitary Operator.
1349 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that will improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
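The Taguchi analysis described above can be sketched as follows: a standard L9(3^4) orthogonal array over the four controllable factors, with a nominal-the-best signal-to-noise ratio computed per run across the two material brands (the noise factor). The shrinkage values below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Standard L9 (3^4) orthogonal array: columns are the four controllable
# factors (cooling time, melt temperature, holding time, metering stroke),
# each at three levels coded 1..3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

def sn_nominal_the_best(y):
    """Taguchi nominal-the-best S/N ratio: 10*log10(mean^2 / variance)."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# Hypothetical shrinkage measurements (%) for each run, one value per
# noise-factor level (material brand 1 and material brand 2).
shrinkage = np.array([
    [0.52, 0.55], [0.48, 0.50], [0.61, 0.66],
    [0.49, 0.51], [0.57, 0.60], [0.45, 0.47],
    [0.53, 0.58], [0.50, 0.52], [0.47, 0.49],
])

sn = np.array([sn_nominal_the_best(run) for run in shrinkage])
for factor in range(L9.shape[1]):
    means = [sn[L9[:, factor] == level].mean() for level in (1, 2, 3)]
    print(f"factor {factor + 1}: mean S/N by level = {np.round(means, 2)}")
```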
1348 Ant Colony Optimization for Optimal Distributed Generation in Distribution Systems
Authors: I. A. Farhat
Abstract:
The problem of optimal planning of multiple sources of distributed generation (DG) in distribution networks is treated in this paper using an improved Ant Colony Optimization (ACO) algorithm. The objective is to determine the optimal DG size and location in order to minimize the network real power losses. Considering the multiple sources of DG, both size and location are simultaneously optimized in a single run of the proposed ACO algorithm. The various practical constraints of the problem are taken into consideration in the problem formulation and the algorithm implementation. A radial power flow algorithm for distribution networks is adopted and applied to satisfy these constraints. To validate the proposed technique and demonstrate its effectiveness, the well-known 69-bus standard test feeder system is employed.
Keywords: Ant Colony Optimization (ACO), Distributed Generation (DG).
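As an illustration only (the abstract gives no algorithmic details), the sketch below shows how an ant colony search over candidate DG buses and sizes could be organized for a single DG unit; the loss function is a placeholder standing in for the radial power-flow evaluation, and all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

buses = np.arange(2, 70)                   # candidate buses of a 69-bus feeder
sizes_mw = np.array([0.5, 1.0, 1.5, 2.0])  # candidate DG sizes

def real_power_losses(bus, size_mw):
    """Placeholder for the radial power-flow loss evaluation (kW)."""
    return 225.0 - 80.0 * np.exp(-((bus - 61) ** 2) / 200.0) * (size_mw / 2.0)

# Pheromone trails over the (bus, size) decision grid.
tau = np.ones((buses.size, sizes_mw.size))
alpha, rho, n_ants, n_iter = 1.0, 0.1, 20, 50
best = (None, None, np.inf)

for _ in range(n_iter):
    for _ in range(n_ants):
        prob = tau ** alpha
        prob = prob / prob.sum()
        flat = rng.choice(prob.size, p=prob.ravel())
        i, j = np.unravel_index(flat, tau.shape)
        loss = real_power_losses(buses[i], sizes_mw[j])
        if loss < best[2]:
            best = (buses[i], sizes_mw[j], loss)
        tau[i, j] += 1.0 / loss            # deposit pheromone on good choices
    tau *= (1.0 - rho)                     # evaporation

print(f"best DG placement: bus {best[0]}, {best[1]} MW, losses {best[2]:.1f} kW")
```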
1347 Pervasive Differentiated Services: A QoS Model for Pervasive Systems
Authors: Sherif G. Aly
Abstract:
In this article, we introduce a mechanism by which the same concept of differentiated services used in network transmission can be applied to provide quality of service levels to pervasive systems applications. The components of the classical DiffServ model, including marking and classification, assured forwarding, and expedited forwarding, are all utilized to create quality of service guarantees for various pervasive applications requiring different levels of quality of service. Through a collection of various sensors, personal devices, and data sources, the transmission of context-sensitive data can automatically occur within a pervasive system with a given quality of service level. Triggers, initiators, sources, and receivers are the four entities labeled in our mechanism. An explanation is provided of the role of each and of how quality of service is guaranteed.
Keywords: Pervasive systems, quality of service, differentiated services, mobile devices.
1346 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics
Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu
Abstract:
Wind turbines have become one of the most popular energy production methods. However, blade failures and maintenance costs are significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and structure. This paper aims to apply modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz for strong wind. The main difference between this study and previous ones on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the VAM are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied due to low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a probe wideband noise signal. A cyclic load with a small amplitude was used as a pump wave to damage the tested sample, and a small transducer generated a wideband probe wave. The received signal demodulation was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique based on a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
Keywords: Damage detection, turbine blades, Vibro-Acoustic Structural Health Monitoring, SHM, Detecting of Envelope Modulation on Noise.
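A rough illustration of the DEMON-style processing mentioned in this abstract (a sketch under assumed parameters, not the authors' implementation): band-pass the wideband probe band, take the envelope via the Hilbert transform, and look for a low-frequency modulation line near the pump (rotation) frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 10_000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic received signal: wideband probe noise, amplitude-modulated at a
# 0.3 Hz "rotation" rate to mimic the breathing-crack effect.
pump_hz = 0.3
probe = np.random.default_rng(1).standard_normal(t.size)
received = (1.0 + 0.2 * np.sin(2 * np.pi * pump_hz * t)) * probe

# 1) Band-pass the high-frequency probe band (assumed 1-4 kHz here).
b, a = butter(4, [1000 / (fs / 2), 4000 / (fs / 2)], btype="bandpass")
band = filtfilt(b, a, received)

# 2) Envelope via the analytic signal.
envelope = np.abs(hilbert(band))
envelope -= envelope.mean()

# 3) Spectrum of the envelope: modulation shows up at the pump frequency.
spec = np.abs(np.fft.rfft(envelope)) / envelope.size
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
low = freqs < 2.0
print(f"envelope spectrum peak at {freqs[low][np.argmax(spec[low])]:.2f} Hz")
```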
1345 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users, in order to gain the most effective use of the systems. In the present study, a new phantom was used for evaluating dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was implemented compared with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower dose options. This limit prevented the tube current from being reduced further, and therefore the lower dose ATCM setting resembled a fixed mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
1344 Ethics in Negotiations: The Confrontation between Representation and Practices
Authors: Claude Alavoine
Abstract:
While in practice negotiation is always a mix of cooperation and competition, these two elements correspond to different approaches to the relationship and also different orientations in terms of strategy, techniques, tactics and arguments employed by the negotiators, with related effects and in the end leading to different outcomes. The levels of honesty, trust and therefore cooperation are influenced not only by the uncertainty of the situation, the objectives, stakes or power, but also by the orientation given from the very beginning of the relationship. When negotiation is reduced to a confrontation of power, participants rely on coercive measures, using different kinds of threats, making false promises, or bluffing in order to establish a more acceptable balance of power. Most negotiators have a tendency to complain about the unethical aspects of the tactics used by their counterparts while, at the same time, they are mostly unaware of the sources of influence of their own vision and practices. In this article, our intention is to clarify these sources and to try to understand what can lead negotiators to unethical practices.
Keywords: competition, cooperation, ethics, negotiation, power
1343 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196
Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek
Abstract:
It has been established that microRNAs (miRNAs) play an important role in gene expression by post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will render more insight for miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for Zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since validation of a large number of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the miRanda-predicted target pool. This is achieved by using techniques ranging from statistical tests to clustering and association rules. Our research focuses on Zebrafish. It was found that validated targets do not necessarily associate with the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics as validated target genes and therefore represent high-confidence target candidates.
Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.
1342 New Wavelet Indices to Assess Muscle Fatigue during Dynamic Contractions
Authors: González-Izal M., Rodríguez-Carreño I., Mallor-Giménez F., Malanda A., Izquierdo M.
Abstract:
The purpose of this study was to evaluate and compare new indices based on the discrete wavelet transform with other spectral parameters proposed in the literature, such as mean average voltage, median frequency and ratios between spectral moments, applied to estimate acute exercise-induced changes in power output, i.e., to assess peripheral muscle fatigue during a dynamic fatiguing protocol. Fifteen trained subjects performed 5 sets of 10 leg presses, with 2 minutes of rest between sets. Surface electromyography was recorded from the vastus medialis (VM) muscle. Several surface electromyographic parameters were compared to detect peripheral muscle fatigue. These were: mean average voltage (MAV), median spectral frequency (Fmed), the Dimitrov spectral index of muscle fatigue (FInsm5), as well as five other parameters obtained from the discrete wavelet transform (DWT) as ratios between different scales. The new wavelet indices achieved the best Pearson correlation coefficients with power output changes during acute dynamic contractions. Their regressions were significantly different from MAV and Fmed. On the other hand, they showed the highest robustness in the presence of additive white Gaussian noise for different signal-to-noise ratios (SNRs). Therefore, peripheral impairments assessed by sEMG wavelet indices may be a relevant factor involved in the loss of power output after a dynamic high-loading fatiguing task.
Keywords: Median Frequency, EMG, wavelet transform, muscle fatigue
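The abstract does not state the exact wavelet ratios used, so the following PyWavelets sketch only illustrates the general idea: compute DWT detail energies of an sEMG epoch, form a low-to-high scale energy ratio, and correlate it with power output across sets. The wavelet, scale grouping and all data are assumptions.

```python
import numpy as np
import pywt

def dwt_scale_ratio(emg_epoch, wavelet="db5", level=5, low=(4, 5), high=(1, 2, 3)):
    """Ratio of low-frequency to high-frequency DWT detail energy.
    The scale grouping is illustrative, not the exact indices of the paper."""
    coeffs = pywt.wavedec(emg_epoch, wavelet, level=level)
    details = coeffs[1:]                                  # [cD_level, ..., cD_1]
    energy = {level - i: float(np.sum(d ** 2)) for i, d in enumerate(details)}
    return sum(energy[s] for s in low) / sum(energy[s] for s in high)

rng = np.random.default_rng(2)
fs = 1000
power_output = np.array([420.0, 405.0, 385.0, 360.0, 330.0])  # W, hypothetical

ratios = []
for k in range(5):
    # Synthetic sEMG epoch whose dominant frequency shifts downward with fatigue.
    t = np.arange(fs) / fs
    epoch = np.sin(2 * np.pi * (120.0 - 20.0 * k) * t) + 0.5 * rng.standard_normal(fs)
    ratios.append(dwt_scale_ratio(epoch))

r = np.corrcoef(ratios, power_output)[0, 1]
print(f"Pearson r between wavelet ratio and power output: {r:.2f}")
```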
1341 Time Series Forecasting Using Independent Component Analysis
Authors: Theodor D. Popescu
Abstract:
The paper presents a method for multivariate time series forecasting using Independent Component Analysis (ICA) as a preprocessing tool. The idea of this approach is to do the forecasting in the space of independent components (sources), and then to transform the results back to the original time series space. The forecasting can be done separately and with a different method for each component, depending on its time structure. The paper also gives a review of the main algorithms for independent component analysis in the case of instantaneous mixture models, using second- and higher-order statistics. The method has been applied in simulation to an artificial multivariate time series with five components, generated from three sources and a randomly generated mixing matrix.
Keywords: Independent Component Analysis, second order statistics, simulation, time series forecasting
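A minimal sketch of the forecasting pipeline described above, using scikit-learn's FastICA and a simple least-squares AR model per component (the per-component forecasting method is this sketch's choice, not necessarily the paper's): unmix, forecast each source, and mix back.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Synthetic 5-channel series generated from 3 sources and a random mixing
# matrix, mirroring the simulation described in the abstract.
n = 500
t = np.arange(n)
sources = np.column_stack([
    np.sin(2 * np.pi * t / 40),
    np.sign(np.sin(2 * np.pi * t / 97)),
    0.05 * t + 0.1 * rng.standard_normal(n),
])
mixing = rng.standard_normal((3, 5))
X = sources @ mixing

# 1) Unmix with ICA.
ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                  # estimated sources, shape (n, 3)

# 2) Forecast each source separately with a simple AR(p) least-squares fit.
def ar_forecast(x, order=10, horizon=20):
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    coef, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    hist, out = list(x[-order:]), []
    for _ in range(horizon):
        nxt = float(np.dot(hist[-order:], coef))
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

S_future = np.column_stack([ar_forecast(S[:, k]) for k in range(S.shape[1])])

# 3) Transform the forecasts back to the original series space.
X_future = ica.inverse_transform(S_future)
print(X_future.shape)                     # (20, 5)
```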
1340 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model
Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok
Abstract:
The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections will be useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation. However, the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size; an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions by more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
Keywords: Functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity.
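One plausible reading of the hub-based construction, sketched with synthetic data (not the authors' exact formulation): regress every ROI on a few hub time series and assemble a model covariance from the hub loadings plus idiosyncratic residual variances, in the spirit of factor models from finance.

```python
import numpy as np

rng = np.random.default_rng(4)
n_time, n_roi, n_hub = 200, 90, 6

# Synthetic resting-state data: every ROI loads on a few "hub" signals
# plus idiosyncratic noise (stand-in for real fMRI time series).
hubs = rng.standard_normal((n_time, n_hub))
true_load = 0.5 * rng.standard_normal((n_roi, n_hub))
rois = hubs @ true_load.T + rng.standard_normal((n_time, n_roi))

# 1) Multivariate regression of each ROI on the hub time series.
X = np.column_stack([np.ones(n_time), hubs])          # add intercept
beta, *_ = np.linalg.lstsq(X, rois, rcond=None)       # (n_hub+1, n_roi)
loadings = beta[1:].T                                  # (n_roi, n_hub)
resid = rois - X @ beta
idio_var = resid.var(axis=0, ddof=X.shape[1])

# 2) Model covariance: systematic (hub-driven) part + idiosyncratic part.
hub_cov = np.cov(hubs, rowvar=False)
model_cov = loadings @ hub_cov @ loadings.T + np.diag(idio_var)
model_corr = model_cov / np.sqrt(np.outer(np.diag(model_cov), np.diag(model_cov)))

# 3) Compare with the empirical correlation matrix.
emp_corr = np.corrcoef(rois, rowvar=False)
mask = ~np.eye(n_roi, dtype=bool)
match = np.corrcoef(model_corr[mask], emp_corr[mask])[0, 1]
print(f"model vs. empirical off-diagonal correlation: {match:.2f}")
```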
1339 Image Transmission via Iterative Cellular-Turbo System
Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan
Abstract:
To compress, improve bit error performance and also enhance 2D images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, bit planes are re-assembled taking into consideration the neighborhood relationship of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. It yields satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved by using IC-TS systems. In compression, less memory storage is used and the data rate is increased up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.
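The bit-plane partitioning step can be illustrated with a short NumPy sketch that splits an 8-bit image into its bit planes for separate coding and reassembles them at the receiver; the Turbo coding and ICIPA stages are outside this sketch.

```python
import numpy as np

def to_bit_planes(image, n_bits=8):
    """Split an unsigned n_bits image into a list of binary bit planes,
    most significant plane first."""
    return [((image >> b) & 1).astype(np.uint8)
            for b in range(n_bits - 1, -1, -1)]

def from_bit_planes(planes):
    """Reassemble an image from bit planes (MSB first); dropping the last
    planes trades resolution for rate, as described in the abstract."""
    n_bits = len(planes)
    image = np.zeros_like(planes[0], dtype=np.uint16)
    for i, plane in enumerate(planes):
        image |= plane.astype(np.uint16) << (n_bits - 1 - i)
    return image

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

planes = to_bit_planes(img)              # 2^8 levels -> 8 bit planes
full = from_bit_planes(planes)
coarse = from_bit_planes(planes[:4])     # keep only the 4 MSB planes

assert np.array_equal(full, img)
err = np.abs(img.astype(int) - (coarse.astype(int) << 4)).max()
print("max error when keeping 4 of 8 planes:", int(err))
```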
1338 Use of Plant Antimicrobials for Food Preservation
Authors: Oladotun A. Fatoki, Deborah A. Onifade
Abstract:
Spoilage occurs in plant produce due to the action of field and storage microorganisms. The conditions of storage can also cause physiological spoilage. Various methods exist to ensure that these food substances maintain their quality long after harvesting. However, many of these methods either fail to preserve the produce for the required period or predispose it to other spoilage risks. The major shortcoming posed by the use of many antimicrobials is the chemical residues they deposit in the food substance. Plants have been used in preservation for a long time; though little understood then, the practice served its purposes. A better understanding of the roles of these plant parts in increasing the shelf life of farm produce has helped in the creation of more effective and safer means of pest and microbial control. This can be extended to plants that have not previously been used for these purposes. Microbial sources should also be investigated, as these have provided cheaper sources of secondary metabolites.
Keywords: Antimicrobials, Food preservation, Phytochemicals
1337 Screening of Potential Sources of Tannin and Its Therapeutic Application
Authors: Mamta Kumari, Shashi Jain
Abstract:
Tannins are a unique category of plant phytochemicals, especially in terms of their vast potential health-benefiting properties. Researchers have described the capacity of tannins to enhance glucose uptake and inhibit adipogenesis, making them potential drugs for the treatment of non-insulin dependent diabetes mellitus. Thus, the present research was conducted to determine the tannin content of food products. The percentage of tannin in the various analyzed sources ranged from 0.0 to 108.53%; highest in kathaa and lowest in ker and mango bark. The percentage of tannins present in the plants, however, varies. Numerous studies have confirmed that naturally occurring polyphenols are a key factor in the beneficial effects of herbal medicines. Isolation and identification of active constituents from plants, and preparation of a standardized dose and dosage regimen, can play a significant role in improving the hypoglycaemic action.
Keywords: Tannins, Diabetes, Polyphenols, Antioxidants, Hypoglycemia.
1336 Hysteresis Control of Power Conditioning Unit for Fuel Cell Distributed Generation System
Authors: Kanhu Charan Bhuyan, Subhransu Padhee, Rajesh Kumar Patjoshi, Kamalakanta Mahapatra
Abstract:
The fuel cell is an emerging technology in the field of renewable energy sources which has the capacity to replace conventional energy generation sources. A fuel cell utilizes hydrogen energy to produce electricity. The electricity generated by the fuel cell cannot be used directly for a specific application, as it needs proper power conditioning. Moreover, the output power fluctuates with different operating conditions. To get a stable output power at an economic rate, a power conditioning circuit is essential for the fuel cell. This paper implements a two-stage power conditioning unit for fuel cell based distributed generation using the hysteresis current control technique.
Keywords: Fuel cell, power conditioning unit, hysteresis control.
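A minimal simulation of the hysteresis current control idea used in the inverter stage (illustrative parameters, not the paper's design): the leg state toggles whenever the measured inductor current leaves a band around the sinusoidal reference.

```python
import numpy as np

# Illustrative single-phase hysteresis current controller: an inductor fed
# from a dc bus (boosted fuel-cell voltage) tracks a sinusoidal reference.
fs, f_ref = 1_000_000, 50            # simulation rate (Hz), reference freq (Hz)
dt = 1.0 / fs
V_dc, V_grid_pk = 400.0, 325.0       # dc-link and grid peak voltage (assumed)
L, R = 5e-3, 0.5                     # output inductor (H) and resistance (ohm)
band = 0.5                           # hysteresis band (A)
I_ref_pk = 10.0

t = np.arange(0, 0.04, dt)
i, state = 0.0, 1                    # +1: upper switch on, -1: lower switch on
currents = []

for tk in t:
    i_ref = I_ref_pk * np.sin(2 * np.pi * f_ref * tk)
    # Toggle the leg only when the current error leaves the band.
    if i > i_ref + band:
        state = -1
    elif i < i_ref - band:
        state = 1
    v_out = state * V_dc
    e_grid = V_grid_pk * np.sin(2 * np.pi * f_ref * tk)
    i += (v_out - e_grid - R * i) / L * dt      # simple L-R output filter
    currents.append(i)

err = np.array(currents) - I_ref_pk * np.sin(2 * np.pi * f_ref * t)
print(f"max tracking error after start-up: {np.abs(err[len(err)//2:]).max():.2f} A")
```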
1335 Modeling Spatial Distributions of Point and Nonpoint Source Pollution Loadings in the Great Lakes Watersheds
Authors: Chansheng He, Carlo DeMarchi
Abstract:
A physically based, spatially-distributed water quality model is being developed to simulate spatial and temporal distributions of material transport in the Great Lakes Watersheds of the U.S. Multiple databases of meteorology, land use, topography, hydrography, soils, agricultural statistics, and water quality were used to estimate nonpoint source loading potential in the study watersheds. Animal manure production was computed from tabulations of animals by zip code area for the census years of 1987, 1992, 1997, and 2002. Relative chemical loadings for agricultural land use were calculated from fertilizer and pesticide estimates by crop for the same periods. Comparison of these estimates to the monitored total phosphorus load indicates that both point and nonpoint sources are major contributors to the total nutrient loads in the study watersheds, with nonpoint sources being the largest contributor, particularly in the rural watersheds. These estimates are used as the input to the distributed water quality model for simulating pollutant transport through surface and subsurface processes to Great Lakes waters. Visualization and GIS interfaces are developed to visualize the spatial and temporal distribution of the pollutant transport in support of water management programs.
Keywords: Distributed Large Basin Runoff Model, Great Lakes Watersheds, nonpoint source pollution, point sources.
1334 Controlling of Multi-Level Inverter under Shading Conditions Using Artificial Neural Network
Authors: Abed Sami Qawasme, Sameer Khader
Abstract:
This paper describes the effects of photovoltaic voltage changes on a multi-level inverter (MLI) due to solar irradiation variations, and methods to overcome these changes. The irradiation variation affects the generated voltage, which in turn varies the switching angles required to turn on the inverter power switches in order to obtain minimum harmonic content in the output voltage profile. A Genetic Algorithm (GA) is used to solve the harmonic elimination equations of eleven-level inverters with equal and unequal dc sources. After that, an artificial neural network (ANN) algorithm is proposed to generate the appropriate set of switching angles for the MLI at any level of input dc source voltage, minimizing the total harmonic distortion (THD) to an acceptable limit. The MATLAB/Simulink platform is used as a simulation tool, and Fast Fourier Transform (FFT) analyses are carried out on the output voltage profile to verify the reliability and accuracy of the applied technique for controlling the MLI harmonic distortion. According to the simulation results, the obtained THD for equal dc sources is 9.38%, while for variable or unequal dc sources it varies between 10.26% and 12.93% as the input dc voltage varies between 4.47 V and 11.43 V, respectively. The proposed ANN algorithm provides satisfactory simulation results that match those obtained by alternative algorithms.
Keywords: Multi level inverter, genetic algorithm, artificial neural network, total harmonic distortion.
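The quantity the GA/ANN stage minimizes can be illustrated by evaluating the Fourier series of an eleven-level staircase waveform from its five switching angles; the angles and dc-source voltages below are placeholders, not the optimized values reported above.

```python
import numpy as np

def harmonic_amplitudes(theta_deg, v_dc, n_max=49):
    """Odd-harmonic peak amplitudes of a quarter-wave symmetric staircase
    waveform: V_n = (4/(n*pi)) * sum_k v_dc[k] * cos(n*theta_k)."""
    theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
    v_dc = np.asarray(v_dc, dtype=float)
    orders = np.arange(1, n_max + 1, 2)
    amps = np.array([4.0 / (n * np.pi) * np.sum(v_dc * np.cos(n * theta))
                     for n in orders])
    return orders, amps

def thd(amps):
    """THD of the harmonic series referenced to the fundamental."""
    return np.sqrt(np.sum(amps[1:] ** 2)) / np.abs(amps[0])

# Placeholder switching angles (degrees) and dc-source voltages (V) for an
# eleven-level inverter with five sources; not the GA/ANN-optimized values.
angles = [6.6, 18.9, 27.2, 45.1, 62.3]
v_sources_equal = [8.0] * 5
v_sources_unequal = [8.0, 7.4, 6.9, 8.3, 7.7]

for label, v in (("equal", v_sources_equal), ("unequal", v_sources_unequal)):
    orders, amps = harmonic_amplitudes(angles, v)
    print(f"{label} dc sources: fundamental {amps[0]:.1f} V, THD {100 * thd(amps):.2f}%")
```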
1333 Study Forecast Indoor Acoustics. A Case Study: the Auditorium Theatre-Hotel “Casa Tra Noi”
Authors: D. Germanò, D. Plutino, G. Cannistraro
Abstract:
Owing to the highly reflective characteristics of the materials used in it (marble, painted wood, smooth plaster, etc.), its architectural and structural features, and its highly multifunctional intended use (auditorium, theatre, cinema, musicals, conference room), the theatre-auditorium under investigation appears to be acoustically inadequate. This emerges from the analysis of the existing state carried out with the acoustic simulation software Ramsete, supported by data obtained through a campaign of on-site acoustic measurements made with a Svantek SVAN 957 sound level meter. After completion of the 3D model according to the specifications required by the prediction software, three simulations were performed: an acoustic simulation of the existing state and acoustic simulations of two design solutions. The improvement found in the first design solution, compared with the existing state, consists in lowering the reverberation time towards the most desirable value, while the clarity indices, the barycentric time, the lateral efficiency, the bass ratio (BR) and the speech intelligibility improve significantly. The improvement found in the second design solution, compared with the first, consists mainly in a more uniform distribution of Leq and in lowering the reverberation time to the optimum values; the clarity indices and the lateral efficiency improve further, but at the expense of a slightly worse BR, while the remaining indices vary only slightly.
Keywords: Indoor, Acoustic, Acoustic simulation
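A back-of-the-envelope check of the reverberation-time reductions described above can be made with the Sabine formula; the room volume and absorption data in this sketch are assumptions for illustration, not the measured values of the auditorium.

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / sum(S_i * alpha_i)."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

volume = 2500.0                       # hall volume in m^3 (assumed)

# (surface area m^2, absorption coefficient at 1 kHz) - illustrative values.
existing = [(600.0, 0.02),   # marble floor and walls
            (400.0, 0.10),   # painted wood
            (300.0, 0.03),   # smooth plaster ceiling
            (250.0, 0.30)]   # upholstered seating

treated = [(600.0, 0.02),
           (400.0, 0.10),
           (300.0, 0.60),    # ceiling replaced with absorbing panels
           (250.0, 0.30)]

print(f"existing state:  RT60 = {sabine_rt60(volume, existing):.2f} s")
print(f"design solution: RT60 = {sabine_rt60(volume, treated):.2f} s")
```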
1332 MAS Simulations of Optical Antenna Structures
Authors: K. Tavzarashvili, G. Ghvedashili
Abstract:
A semi-analytic boundary discretization method, the Method of Auxiliary Sources (MAS), is used to analyze optical antennas consisting of metallic parts. In addition to standard dipole-type antennas consisting of two pieces of metal, a new structure consisting of a single metal piece with a tiny groove in the center is analyzed. It is demonstrated that difficult numerical problems arise because optical antennas exhibit strong material dispersion, loss, and plasmon-polariton effects that require a very accurate numerical simulation. This structure takes advantage of the Channel Plasmon-Polariton (CPP) effect and exhibits a strong enhancement of the electric field in the groove. A primitive 3D antenna model with spherical nanoparticles is also analyzed.
Keywords: optical antenna, channel plasmon-polariton, computational physics, Method of Auxiliary Sources
1331 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique
Authors: Satyasen Panda, Urmila Bhanja
Abstract:
In this paper, we have presented and analyzed three-dimensional (3-D) matrices of wavelength/time/space codes for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurement of the signal-to-noise ratio (SNR), BER and eye diagram for different numbers of simultaneous users. In the analysis, various types of noise and multiple access interference (MAI) effects were also considered. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER, as compared to the OR and AND subtraction techniques. The received optical power is also measured at various levels of BER to analyze the effect of attenuation.
Keywords: Cross correlation, three-dimensional optical code division multiple access, spectral amplitude coding optical code division multiple access, multiple access interference, phase induced intensity noise, three-dimensional modified quadratic congruence/modified prime code.
1330 Quadrotor Black-Box System Identification
Authors: Ionel Stanculeanu, Theodor Borangiu
Abstract:
This paper presents a new approach to the identification of the quadrotor dynamic model using black-box system identification. The paper also considers the problems which appear during identification in closed loop and offers a technical solution for overcoming the correlation between the input and the noise present in the output.
Keywords: System identification, UAV, prediction error method, quadrotor.
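The prediction error method named in the keywords can be sketched in its simplest ARX/least-squares form on synthetic single-axis data; this is a generic illustration, not the quadrotor model identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic single-input single-output data standing in for one quadrotor
# axis: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + noise.
n = 1000
u = rng.standard_normal(n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.4 * u[k - 1] + 0.05 * rng.standard_normal()

# ARX(2,1) identification by minimizing the one-step prediction error
# (least squares on the regressor matrix).
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
pred_err = target - Phi @ theta

print("estimated [a1, a2, b1]:", np.round(theta, 3))
print("prediction-error variance:", float(np.var(pred_err)))
```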
1329 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue by using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so the accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. The parameters signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to compare performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, Gradient projection for sparse reconstruction.
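Independently of the DOT physics, the Levenberg-Marquardt regularization mentioned above amounts to a damped Gauss-Newton update; the sketch below applies it to a toy exponential model that merely stands in for the forward-model mismatch.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, n_iter=50):
    """Damped Gauss-Newton: x <- x - (J^T J + lam*I)^(-1) J^T r,
    with the damping lam adapted to the quality of each step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x_new = x - step
        if np.sum(residual(x_new) ** 2) < np.sum(r ** 2):
            x, lam = x_new, lam * 0.5          # good step: accept, trust more
        else:
            lam *= 2.0                         # bad step: increase damping
    return x

# Toy measurement model y = amp * exp(-atten * d), standing in for the DOT
# forward model; the goal is to recover (amp, atten) from noisy data.
d = np.linspace(1.0, 4.0, 20)
true = np.array([1.0, 0.3])
rng = np.random.default_rng(7)
meas = true[0] * np.exp(-true[1] * d) + 1e-3 * rng.standard_normal(d.size)

def residual(x):
    return x[0] * np.exp(-x[1] * d) - meas

def jacobian(x):
    e = np.exp(-x[1] * d)
    return np.column_stack([e, -x[0] * d * e])

print("recovered [amp, atten]:",
      np.round(levenberg_marquardt(residual, jacobian, [0.5, 0.1]), 3))
```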
1328 Numerical Study of Heat Release of the Symmetrically Arranged Extruded-Type Heat Sinks
Authors: Man Young Kim, Gyo Woo Lee
Abstract:
In this numerical study, we present the design of a highly efficient extruded-type heat sink. Symmetrically arranged extruded-type heat sinks are used instead of a single extruded or swaged-type heat sink. In this parametric study, the maximum temperatures, the base temperatures between heaters, and the heat release rates were investigated with respect to the arrangements of heat sources, air flow rates, and amounts of heat input. Based on the results, we believe that using both sides of the heat sink releases heat much better than using a single side. The results also indicate that a symmetric arrangement of heat sources is recommended to achieve higher heat transfer from the heat sink.
Keywords: Heat Sink, Forced Convection, Heat Transfer, Performance Evaluation, Symmetrically Arranged.
1327 Evaluation of Horizontal Seismic Hazard of Naghan, Iran
Authors: S. A. Razavian Amrei, G. Ghodrati Amiri, D. Rezaei
Abstract:
This paper presents a probabilistic horizontal seismic hazard assessment of Naghan, Iran. It provides probabilistic estimates of the Peak Ground Horizontal Acceleration (PGHA) for return periods of 475, 950 and 2475 years. The output of the probabilistic seismic hazard analysis is based on peak ground acceleration (PGA), which is the most common criterion in the design of buildings. A catalogue of seismic events that includes both historical and instrumental events was developed and covers the period from 840 to 2009. The seismic sources that affect the hazard in Naghan were identified within a radius of 200 km, and the recurrence relationships of these sources were generated using the method of Kijko and Sellevoll. Finally, the Peak Ground Horizontal Acceleration (PGHA) was computed to indicate the earthquake hazard of Naghan for different hazard levels using the SEISRISK III software.
Keywords: Seismic Hazard Assessment, Seismicity Parameters, PGA, Naghan, Iran
1326 DWT-SATS Based Detection of Image Region Cloning
Authors: Michael Zimba
Abstract:
A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs the discrete wavelet transform (DWT) of a suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency subband of the DWT of the suspicious image and thereby leave valuable information in the other three subbands, the proposed algorithm simultaneously extracts features from all four subbands. The extracted features are not only a more accurate representation of image regions but also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient Radix Sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to related CMIF detection algorithms. The experimental results show high detection rates.
Keywords: Affine Transformation, Discrete Wavelet Transform, Radix Sort, SATS.
1325 Digital Automatic Gain Control Integrated on WLAN Platform
Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel
Abstract:
In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standard. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC in this process is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is done in a few steps: measuring the power levels of baseband samples of the RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on the measurement results of the RSSI signal dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and reduces the complexity and power consumption of the DAGC. This DAGC is implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
Keywords: WLAN, AGC, RSSI, baseband processor
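The accumulate-and-adjust loop described in this abstract can be sketched as follows; the set point, block size and loop gain are illustrative values, not the register settings of the ASIC.

```python
import numpy as np

rng = np.random.default_rng(8)

set_point_db = -12.0      # desired average baseband power (dBFS), assumed
step = 0.1                # loop gain applied to the accumulated error
gain_db = 0.0             # current VGA/LNA gain setting (dB)
acc = 0.0                 # accumulator of power-error samples

# Burst of OFDM-like baseband samples arriving well below the set point.
signal = 0.05 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))

block = 64                # samples per power measurement
for k in range(0, signal.size, block):
    chunk = signal[k:k + block] * 10 ** (gain_db / 20.0)   # apply current gain
    power_db = 10.0 * np.log10(np.mean(np.abs(chunk) ** 2))
    acc += set_point_db - power_db          # accumulate the error ...
    gain_db = step * acc                    # ... and derive the new gain

final = signal[-block:] * 10 ** (gain_db / 20.0)
print(f"final gain: {gain_db:.1f} dB, "
      f"output power: {10 * np.log10(np.mean(np.abs(final) ** 2)):.1f} dBFS")
```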
1324 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem, and the resulting estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shape beam; it also suffers from main lobe deviation and high sidelobe levels in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference-subspace and noise-subspace eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing the divergence of the small eigenvalues in the noise subspace and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
Keywords: Multi-carrier frequency diverse array, adaptive beamforming, correction index, limited snapshot, robust.
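One plausible reading of the exponential eigenvalue correction, illustrated on an ordinary ULA snapshot model rather than the multi-carrier FDA geometry: decompose the sample covariance, compress the noise-subspace eigenvalues toward their mean with a correction index, and form distortionless (MVDR/LCMV-type) weights from the corrected matrix. The correction rule and all parameters below are assumptions, not the authors' formula.

```python
import numpy as np

rng = np.random.default_rng(9)

def steering(theta_deg, n_elem, d=0.5):
    """ULA steering vector for element spacing d (in wavelengths)."""
    k = np.arange(n_elem)
    return np.exp(1j * 2 * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

n_elem, n_snap, n_src = 16, 20, 2           # deliberately few snapshots
angles = [-20.0, 25.0]                      # desired source and interferer
A = np.column_stack([steering(a, n_elem) for a in angles])
S = (rng.standard_normal((n_src, n_snap)) + 1j * rng.standard_normal((n_src, n_snap))) / np.sqrt(2)
W = (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2)
X = A @ (3.0 * S) + W
R = X @ X.conj().T / n_snap                 # poorly estimated sample covariance

# Eigen-decompose and exponentially compress the noise-subspace eigenvalues.
eigvals, V = np.linalg.eigh(R)              # ascending order
n_noise = n_elem - n_src
noise_eigs = eigvals[:n_noise]
mu = 0.5                                    # correction index (assumed)
corrected = noise_eigs.mean() * (noise_eigs / noise_eigs.mean()) ** mu
eig_corr = np.concatenate([corrected, eigvals[n_noise:]])
R_corr = (V * eig_corr) @ V.conj().T

# Distortionless (MVDR-type) weights toward the desired source.
a0 = steering(angles[0], n_elem)
Ri_a = np.linalg.solve(R_corr, a0)
w = Ri_a / (a0.conj() @ Ri_a)
gain_src = 20 * np.log10(abs(w.conj() @ steering(angles[0], n_elem)))
gain_int = 20 * np.log10(abs(w.conj() @ steering(angles[1], n_elem)))
print(f"gain toward desired source: {gain_src:.1f} dB, toward interferer: {gain_int:.1f} dB")
```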
1323 System Performance Comparison of Turbo and Trellis Coded Optical CDMA Systems
Authors: M. Kulkarni, R. K. Sinha, D. R. Bhaskar
Abstract:
In this paper, we have compared the performance of Turbo and Trellis coded optical code division multiple access (OCDMA) systems. The comparison of the two codes has been accomplished by employing optical orthogonal codes (OOCs). The Bit Error Rate (BER) performances have been compared by varying the code weights of the address codes employed by the system. We have considered the effects of optical multiple access interference (OMAI), thermal noise and avalanche photodiode (APD) detector noise. The analysis has been carried out for the system with and without a double optical hard limiter (DHL). From the simulation results, it is observed that a clearer and more distinct comparison can be drawn between the performance of the Trellis and Turbo coded systems at lower code weights of the optical orthogonal codes for a fixed number of users. The BER performance of the Turbo coded system is found to be better than that of the Trellis coded system for all code weights considered in the simulation. Nevertheless, the Trellis coded OCDMA system is found to be better than the uncoded OCDMA system. Trellis coded OCDMA can be used in systems where decoding time has to be kept low, bandwidth is limited and high reliability is not a crucial factor, as in local area networks. The system hardware is also less complex in comparison to the Turbo coded system. The Trellis coded OCDMA system can be used without significant modification of existing chipsets. Turbo-coded OCDMA can, however, be employed in systems where high reliability is needed and bandwidth is not a limiting factor.
Keywords: avalanche photodiode, optical code division multiple access, optical multiple access interference, Trellis coded modulation, Turbo code