Search results for: discrete fractional Laplacian
181 Advantages of Multispectral Imaging for Accurate Gas Temperature Profile Retrieval from Fire Combustion Reactions
Authors: Jean-Philippe Gagnon, Benjamin Saute, Stéphane Boubanga-Tombet
Abstract:
Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. However, it is well known that most combustion gases such as carbon dioxide (CO₂), water vapor (H₂O), and carbon monoxide (CO) selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperature profiles of most combustion processes derived from conventional broadband imaging are inaccurate without prior knowledge or assumptions about the spectral emissivity properties of the combustion gases. Using spectral filters allows estimating these critical emissivity parameters in addition to providing selectivity regarding the chemical nature of the combustion gases. However, due to the turbulent nature of most flames, it is crucial that such information be obtained without sacrificing temporal resolution. For this reason, Telops has developed a time-resolved multispectral imaging system which combines a high-performance broadband camera synchronized with a rotating spectral filter wheel. In order to illustrate the benefits of using this system to characterize combustion experiments, measurements were carried out using a Telops MS-IR MW on a very simple combustion system: a wood fire. The temperature profiles calculated using the spectral information from the different channels were compared with corresponding temperature profiles obtained with conventional broadband imaging. The results illustrate the benefits of the Telops MS-IR cameras for the characterization of laminar and turbulent combustion systems at a high temporal resolution.Keywords: infrared, multispectral, fire, broadband, gas temperature, IR camera
180 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work, we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of the analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. In using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space. These numerical databases contain information on the properties of the coupled dynamics. In order to extract the system dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing databases of the dynamics. It will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams
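To make the POD step above concrete, the following is a minimal Python sketch of how POD modes could be extracted from a snapshot matrix exported from a finite element run; the matrix here is synthetic random data, and the dimensions, variable names, and 99% energy cutoff are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np

# Minimal POD sketch: rows are spatial degrees of freedom, columns are time snapshots.
# The snapshot matrix here is synthetic; in practice it would be exported from the FE solver.
rng = np.random.default_rng(0)
n_dof, n_snap = 200, 60
snapshots = rng.standard_normal((n_dof, n_snap))

# Subtract the temporal mean so the modes describe fluctuations about the mean field.
mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field

# Thin SVD: columns of U are the spatial POD modes, S**2 is proportional to modal energy.
U, S, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = S**2 / np.sum(S**2)

# Keep the modes that capture 99% of the fluctuation energy.
n_keep = int(np.searchsorted(np.cumsum(energy), 0.99) + 1)
modes = U[:, :n_keep]                        # spatial structures
coeffs = np.diag(S[:n_keep]) @ Vt[:n_keep]   # temporal coefficients
print(n_keep, "modes retain", np.cumsum(energy)[n_keep - 1])
```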
179 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation
Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar
Abstract:
The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has given more detailed results compared to experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid–liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multiphase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluation of the erosion wear rate for velocities varying from 2-4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.
Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε model
178 Computational Fluid Dynamics (CFD) Simulation Approach for Developing New Powder Dispensing Device
Authors: Revanth Rallapalli
Abstract:
Manually dispensing solids and powders can be difficult, as it requires gradually pouring and checking the amount on the scale to be dispensed. Current systems are manual and non-continuous in nature, are user-dependent, and make powder dispensation difficult to control. Recurrent dosing of powdered medicines in precise amounts quickly and accurately has been an all-time challenge. Various new powder dispensing mechanisms are being designed to overcome these challenges. A battery-operated screw conveyor mechanism is being developed to overcome the above problems. These inventions are numerically evaluated at the concept development level by employing Computational Fluid Dynamics (CFD) of gas-solids multiphase flow systems. CFD has been very helpful in the development of such devices, saving time and money by reducing the number of prototypes and tests. Furthermore, this paper describes a simulation of powder dispensation from the trocar's end in which the powder is treated as a secondary phase in air, using the technique called the Dense Discrete Phase Model incorporated with the Kinetic Theory of Granular Flow (DDPM-KTGF). Considering a powder volume fraction of 50%, the transportation of powder from the inlet side to the trocar's end is driven by the rotation of the screw conveyor. The performance is calculated for a 1 s time frame in an unsteady computation. This methodology will help designers develop design concepts to improve dispensation at the effective area within a quick turnaround time frame.
Keywords: DDPM-KTGF, gas-solids multiphase flow, screw conveyor, unsteady
177 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, the performance has not been satisfying. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keyboard. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was identified as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and could be applied comprehensively to touch devices and input interfaces that people interact with.
Keywords: input performance, mobile device, slim keyboard, tactile feedback
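As an illustration of the Taguchi signal-to-noise step mentioned above, a minimal sketch for a "larger is better" response is given below; the response values, trial counts, and force levels are invented for illustration and do not come from the study.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a 'larger is better' response (e.g., typing accuracy)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Illustrative accuracy scores for three repeated trials at each keyboard force level.
responses = {"35g": [78, 81, 80], "60g": [88, 90, 87], "85g": [84, 82, 85]}
for level, y in responses.items():
    print(level, round(sn_larger_is_better(y), 2), "dB")
# The level with the highest S/N ratio would be taken as the optimal setting for that factor.
```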
176 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission
Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder
Abstract:
Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and emission. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, it was reported by our research group that, using a 1H time-domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, these findings may provide information on the micro/nano-particles that are formed in the delivery and injection system of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.
Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR
175 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. This work deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. Then all these channels are encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. The contours extracted from the recovered color images can be obtained with an accepted level of distortion using the single-step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the analyzed algorithm has extremely high security against attacks such as salt-and-pepper noise and JPEG compression. This proves that color images can be protected with a higher security level. The presented method has easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression, salt-and-pepper attacks, bit-plane decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
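A minimal sketch of the scrambling and encryption operations described above (Arnold transform of a channel followed by a pixel-wise XOR with a key image) is shown below; the channel size, iteration count, and random key are illustrative assumptions rather than the authors' exact scheme, which also involves modulo operations and 3-level DWT compression.

```python
import numpy as np

def arnold_scramble(channel, iterations=1):
    """Scramble a square image channel with the Arnold cat map: (x, y) -> (x + y, x + 2y) mod N."""
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "Arnold transform needs a square image"
    out = channel.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def xor_encrypt(channel, key_image):
    """Pixel-wise XOR with a key image of the same size (applying it again decrypts)."""
    return np.bitwise_xor(channel, key_image)

# Illustrative 8-bit red channel and key image.
rng = np.random.default_rng(1)
red = rng.integers(0, 256, (64, 64), dtype=np.uint8)
key = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cipher = xor_encrypt(arnold_scramble(red, iterations=3), key)
```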
174 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet
Authors: Amir Moslemi, Amir movafeghi, Shahab Moradi
Abstract:
One of the most important challenging factors in medical images is noise. Image denoising refers to the improvement of a digital medical image that has been corrupted by additive white Gaussian noise (AWGN). A digital medical image or video can be affected by different types of noise: impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the absorbed dose to patients, in such a way that an increase in absorbed radiation, and consequently in the absorbed dose to patients (ADP), enhances CT image quality. In this manner, noise reduction techniques aimed at image quality enhancement without exposing patients to excess radiation are one of the challenging problems in CT image processing. In this work, noise reduction in CT images was performed using two different directional two-dimensional (2D) transformations, Curvelet and Contourlet, and discrete wavelet transform (DWT) thresholding methods (BayesShrink and AdaptShrink), compared to each other. We also propose a new threshold in the wavelet domain for not only noise reduction but also edge retention; consequently, the proposed method retains the significant modified coefficients, which results in good visual quality. Data evaluations were accomplished using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Keywords: computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP)
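The following is a minimal sketch of BayesShrink-style wavelet thresholding and the PSNR criterion described above, written with the PyWavelets package; the wavelet choice, decomposition level, and the exact per-subband threshold rule are standard textbook assumptions and not necessarily the new threshold proposed in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def bayes_shrink_denoise(image, wavelet="db4", level=2):
    """BayesShrink-style soft thresholding sketch: per-subband threshold sigma_n^2 / sigma_x."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise standard deviation estimated from the finest diagonal subband (robust median estimator).
    sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    new_coeffs = [coeffs[0]]
    for detail in coeffs[1:]:
        thresholded = []
        for sub in detail:
            sigma_x = np.sqrt(max(np.var(sub) - sigma_n**2, 1e-12))
            t = sigma_n**2 / sigma_x
            thresholded.append(pywt.threshold(sub, t, mode="soft"))
        new_coeffs.append(tuple(thresholded))
    return pywt.waverec2(new_coeffs, wavelet)

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

# Tiny usage example with a synthetic image standing in for a CT slice.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0.0, 20.0, clean.shape)
denoised = bayes_shrink_denoise(noisy)[:clean.shape[0], :clean.shape[1]]  # crop possible 1-px padding
print(round(psnr(clean, noisy), 2), "->", round(psnr(clean, denoised), 2))
```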
173 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) developed recently by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, compared to the time-consuming discrete FTIR stepping reference branch. Such change allows real-time tracking of a moving object and capturing of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator and the other split beam is reflected by a reflection mirror; finally both the modulated reference beam and reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for 3 Frequency Banded (FB) signals: 1) DC Low-Frequency Banded (LFB) signal; 2) Fundamental Frequency Banded (FFB) signal; and 3) Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based combinations of 2 or all of these 3 FB signals with efficient algorithms such as Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulation has also been performed in Matlab with simulated THz-MIDR interferometric signal of various Signal to Noise Ratio (SNR) to verify the algorithms.Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar
172 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics have served as an alternative for laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from the given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from measured data. However, this approach is not always possible and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, the adaptive control Ensemble Kalman Filter is implemented to combine the optimality of observation data in order to obtain the accurate estimation of the topography. The main features of this method are on one hand, the ability to solve for different complex geometries with no need for any rearrangements in the original model to rewrite it in an explicit form. On the other hand, its achievement of strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and location of observations. The obtained results demonstrate high reliability and accuracy of the proposed techniques.Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
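As a sketch of the second-stage Ensemble Kalman Filter update described above, a generic stochastic EnKF analysis step is given below; the function signature, the perturbed-observation formulation, and the generic observation operator are standard EnKF assumptions and not the authors' specific adaptive implementation.

```python
import numpy as np

def enkf_analysis(ensemble, observations, obs_operator, obs_cov, rng):
    """One stochastic Ensemble Kalman Filter analysis step.

    ensemble:      (n_state, n_members) matrix of bed-topography samples
    observations:  (n_obs,) vector of free-surface measurements
    obs_operator:  function mapping a state vector to predicted observations
    obs_cov:       (n_obs, n_obs) observation-error covariance
    """
    n_state, n_members = ensemble.shape
    predicted = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_members)])
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    Y = predicted - predicted.mean(axis=1, keepdims=True)
    # Cross- and innovation covariances estimated from the ensemble.
    Pxy = X @ Y.T / (n_members - 1)
    Pyy = Y @ Y.T / (n_members - 1) + obs_cov
    K = Pxy @ np.linalg.inv(Pyy)
    # Perturbed observations keep the analysis spread statistically consistent.
    perturbed = observations[:, None] + rng.multivariate_normal(
        np.zeros(len(observations)), obs_cov, size=n_members).T
    return ensemble + K @ (perturbed - predicted)
```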
171 Electronic Spectral Function of Double Quantum Dots–Superconductors Nanoscopic Junction
Authors: Rajendra Kumar
Abstract:
We study the electronic spectral density of two coupled quantum dots sandwiched between superconducting leads, where one of the quantum dots (QD1) is connected to the left superconducting lead and also to the right superconducting lead, and (QD1) and (QD2) are coupled to each other. The electronic spectral density is studied through quantum dots between superconducting leads having s-wave symmetry of the superconducting order parameter. Such a junction is called a superconductor–quantum dot–superconductor (S-QD-S) junction. For this purpose, we have considered a renormalized Anderson model that includes the coupling of the superconducting leads with the quantum dot levels and an attractive BCS-type effective interaction in the superconducting leads. We employed the Green’s function technique to obtain the superconducting order parameter within the BCS framework and the Ambegaokar-Baratoff formalism to analyze the electronic spectral density through such an (S-QD-S) junction. It has been pointed out that the electronic spectral density through such a junction is dominated, in an essential way, by the attractive pairing interaction in the leads, the energy of the level on the dot with respect to the Fermi energy, and also by the coupling parameter of the two. On the basis of numerical analysis, we have compared the theoretical results for the electronic spectral density with recent existing theoretical transport analyses. A key property of QDs is the charging energy, which may give rise to effects based on the interplay of Coulomb repulsion and superconducting correlations. It is, therefore, an interesting question to ask how the discrete level spectrum and the charging energy affect the DC and AC Josephson transport between two superconductors coupled via a QD. In the absence of a bias voltage, a finite DC current can be sustained in such an S-QD-S junction by the DC Josephson effect.
Keywords: quantum dots, S-QD-S junction, BCS superconductors, Anderson model
170 Covalently Conjugated Gold–Porphyrin Nanostructures
Authors: L. Spitaleri, C. M. A. Gangemi, R. Purrello, G. Nicotra, G. Trusso Sfrazzetto, G. Casella, M. Casarin, A. Gulino
Abstract:
Hybrid molecular–nanoparticle materials, obtained with a bottom-up approach, are suitable for the fabrication of functional nanostructures showing structural control and well-defined properties, i.e., optical, electronic or catalytic properties, in the perspective of applications in different fields of nanotechnology. Gold nanoparticles (Au NPs) exhibit important chemical, electronic and optical properties due to their size, shape and electronic structures. In fact, Au NPs containing no more than 30-40 atoms are only luminescent because they can be considered as large molecules with discrete energy levels, while nano-sized Au NPs only show the surface plasmon resonance. Hence, it appears that gold nanoparticles can alternatively be luminescent or plasmonic, and this represents a severe constraint for their use as an optical material. The aim of this work was the fabrication of nanoscale assembly of Au NPs covalently anchored to each other by means of novel bi-functional porphyrin molecules that work as bridges between different gold nanoparticles. This functional architecture shows a strong surface plasmon due to the Au nanoparticles and a strong luminescence signal coming from porphyrin molecules, thus, behaving like an artificial organized plasmonic and fluorescent network. The self-assembly geometry of this porphyrin on the Au NPs was studied by investigation of the conformational properties of the porphyrin derivative at the DFT level. The morphology, electronic structure and optical properties of the conjugated Au NPs – porphyrin system were investigated by TEM, XPS, UV–vis and Luminescence. The present nanostructures can be used for plasmon-enhanced fluorescence, photocatalysis, nonlinear optics, etc., under atmospheric conditions since our system is not reactive to air nor water and does not need to be stored in a vacuum or inert gas.Keywords: gold nanoparticle, porphyrin, surface plasmon resonance, luminescence, nanostructures
169 Lifetime Improvement of a Clamp Structure by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, the process of removing unnecessary parts and qualifying the quality of parts before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts in the testing process. Basically, testing by trial and error consumes a long time to achieve improvement. Consequently, simulation was brought in to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp life expectancy over all possible designs, of which there are up to 27, excluding repeated designs. The probabilities were calculated following the full factorial rules of the six sigma methodology. The six sigma methodology is a well-structured method for improving the quality level by detecting and reducing the variability of the process; therefore, defects decrease while process capability increases. This research focuses on the methodology of stress and fatigue reduction while the compressive force still remains in the acceptable range that has been set by the company. In the simulation, ANSYS simulates the 3D CAD model under the same conditions as the experiment. Then the force at each displacement, from 0.01 to 0.1 mm, is recorded. The setting in ANSYS was verified by a mesh convergence methodology, and the percentage error was compared with the experimental result; the error must not exceed the acceptable range. The improved design therefore focuses on the angle, radius, and length that will reduce stress while the force remains within the acceptable values. Fatigue analysis is then brought in as the next process in order to guarantee that the lifetime will be extended, by simulating through the ANSYS simulation program. The simulation is not only performed but also confirmed against the actual clamp in order to observe the difference in fatigue between both designs. This brings a lifetime improvement of up to 57% compared with the actual clamp in manufacturing. This study provides a setting precise and trustworthy enough to be used as a reference methodology for future designs. Because the combination and adaptation of the six sigma method, finite element analysis, fatigue analysis, and linear regression analysis lead to accurate calculation, this project is able to save up to 60 million dollars annually.
Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability
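As a small illustration of the 27 candidate designs mentioned above (three factors at three levels each), the combinations could be enumerated as follows; the factor names and level values are placeholders, since the actual geometry levels would come from the company's CAD study.

```python
from itertools import product

# A full factorial of three levels per factor yields 3**3 = 27 clamp design candidates.
angles = [30, 45, 60]        # degrees (placeholder levels)
radii = [1.0, 1.5, 2.0]      # mm (placeholder levels)
lengths = [10, 12, 14]       # mm (placeholder levels)
designs = list(product(angles, radii, lengths))
print(len(designs))          # 27 combinations to evaluate in ANSYS
```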
168 Interaction of Metals with Non-Conventional Solvents
Authors: Evgeny E. Tereshatov, C. M. Folden
Abstract:
Ionic liquids and deep eutectic mixtures represent so-called non-conventional solvents. The former, composed of discrete ions, is a salt with a melting temperature below 100°С. The latter, consisting of hydrogen bond donors and acceptors, is a mixture of at least two compounds, resulting in a melting temperature depression in comparison with that of the individual moiety. These systems also can be water-immiscible, which makes them applicable for metal extraction. This work will cover interactions of In, Tl, Ir, and Rh in hydrochloric acid media with eutectic mixtures and Er, Ir, and At in a gas phase with chemically modified α-detectors. The purpose is to study chemical systems based on non-conventional solvents in terms of their interaction with metals. Once promising systems are found, the next step is to modify the surface of α-detectors used in the online element production at cyclotrons to get the detector chemical selectivity. Initially, the metal interactions are studied by means of the liquid-liquid extraction technique. Then appropriate molecules are chemisorbed on the surrogate surface first to understand the coating quality. Finally, a detector is covered with the same molecule, and the metal sorption on such detectors is studied in the online regime. It was found that chemical treatment of the surface can result in 99% coverage with a monolayer formation. This surface is chemically active and can adsorb metals from hydrochloric acid solutions. Similarly, a detector surface was modified and tested during cyclotron-based experiments. Thus, a procedure of detectors functionalization has been developed, and this opens an interesting opportunity of studying chemisorption of elements which do not have stable isotopes.Keywords: mechanism, radioisotopes, solvent extraction, gas phase sorption
167 Effects of Hypolipidemic Agents in Aminoglycoside-Induced Experimental Nephrotoxicity in Rats: Biochemical and Histopathological Evidence
Authors: Balakumar Pitchai, Xiang Llan Ang, Sunil Prajapati, Varatharajan Rajavel, Sundram Karupiah, Mohd Baidi Bahari
Abstract:
The study examined the pretreatment and post-treatment effects of low-doses of fenofibrate and rosuvastatin in gentamicin-induced acute nephrotoxicity in rats. Gentamicin (100 mg/kg/day, i.p.) was administered to rats for 8 days. In the pretreatment protocol, low-dose fenofibrate (30 mg/kg/day, p.o.) or low-dose rosuvastatin (2 mg/kg/day, p.o.) treatments were started a day before the administration of gentamicin and continued for 8 days. In the post-treatment protocol, rats administered gentamicin were treated with low-dose fenofibrate (30 mg/kg/day, p.o.) or low-dose rosuvastatin (2 mg/kg/day, p.o.) for 6 days after the completion of 8 days protocol of gentamicin administration. Gentamicin-associated acute nephrotoxicity in rats was assessed in terms of biochemical analysis and renal histopathological studies. Gentamicin-administered rats showed marked renal functional changes as assessed in terms of a significant increase in serum creatinine and urea levels as compared to normal rats. The renal dysfunction noted in gentamicin administered rats was accompanied with elevated serum uric acid level as compared to normal rats while there was no significant change in lipid profile. Low-dose fenofibrate pretreatment in gentamicin-administered rats afforded a significant renal functional improvements and renoprotection while its post-treatment showed no significant renoprotection. On the other hand, pretreatment with low-dose rosuvastatin partially reduced gentamicin-induced increase in serum creatinine level, but its post-treatment did not afford renal functional improvements in gentamicin-administered rats. However, all pre and post-treatments with low-doses of fenofibrate or rosuvastatin significantly reduced the elevated serum uric acid concentration in gentamicin-administered rats. Renal histopathological analysis showed a discernible incidence of acute tubular necrosis in gentamicin-administered rats which were markedly reduced by low-dose fenofibrate or low-dose rosuvastatin pretreatments; but, not by their post-treatments. In conclusion, low-dose fenofibrate pretreatment considerably prevented gentamicin-induced acute tubular necrosis and renal functional abnormalities in rats while its post-treatment resulted in no significant renoprotective action. In spite of effective prevention of gentamicin-induced acute tubular necrosis, the pretreatment with low-dose rosuvastatin had only a partial and fractional protection on renal functional abnormalities. The post-treatment with low-dose rosuvastatin was ineffective in affording a renoprotection in gentamicin-administered rats.Keywords: gentamicin-nephrotoxicity, low-dose fenofibrate, low-dose rosuvastatin, renoprotection
166 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Given that technological advances focus on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims to study the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub with its rotation axis pointing into the air flow that passes through the rotor. Using the calculus of variations and the finite difference method, the blade is simulated by a discrete number of nodes and the aerodynamic forces are evaluated. The study presented here was written in Matlab and performs a numerical simulation of a simplified model of a windmill containing a hub and three blades modeled as Euler-Bernoulli beams under small strains and a constant and uniform wind. The mathematical approach is based on Hamilton’s extended principle, with the aerodynamic loads applied at the nodes considering the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Due to the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, which allowed obtaining equations for Cl and Cd as functions of the angle of attack, based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results show the dynamic response of the system in terms of displacement and rotational speed as the turbine reaches its final speed. Although the results were not compared to real windmills or more complete models, the resulting values were consistent with the size of the system and the wind speed.
Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
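A minimal Python sketch of the nodal aerodynamic load evaluation described above is given below (the study itself was implemented in Matlab); the lift and drag coefficient curves are simple placeholder functions standing in for the NREL S809 fits, and the rotor speed, wind speed, and chord values are illustrative assumptions.

```python
import numpy as np

def nodal_aero_loads(radii, omega, wind_speed, chord, cl_func, cd_func, rho=1.225):
    """Per-node lift and drag (per unit span) on a rotating blade from the local relative wind."""
    tangential = omega * radii                      # blade-motion component at each node
    w_rel = np.sqrt(wind_speed**2 + tangential**2)  # local relative wind speed
    phi = np.arctan2(wind_speed, tangential)        # inflow angle (twist/pitch neglected here)
    alpha = phi                                     # angle of attack for an untwisted blade
    q = 0.5 * rho * w_rel**2 * chord                # dynamic pressure times chord
    return q * cl_func(alpha), q * cd_func(alpha)

# Placeholder coefficient curves standing in for the S809 fits (alpha in radians).
cl = lambda a: 2 * np.pi * a * 0.8
cd = lambda a: 0.01 + 0.02 * a**2
L, D = nodal_aero_loads(np.linspace(1.0, 20.0, 10), omega=2.0, wind_speed=10.0,
                        chord=1.2, cl_func=cl, cd_func=cd)
```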
165 Effect of Bi-Dispersity on Particle Clustering in Sedimentation
Authors: Ali Abbas Zaidi
Abstract:
In free settling or sedimentation, particles form clusters at high Reynolds number and dilute suspensions. It is due to the entrapment of particles in the wakes of upstream particles. In this paper, the effect of bi-dispersity of settling particles on particle clustering is investigated using particle-resolved direct numerical simulation. Immersed boundary method is used for particle fluid interactions and discrete element method is used for particle-particle interactions. The solid volume fraction used in the simulation is 1% and the Reynolds number based on Sauter mean diameter is 350. Both solid volume fraction and Reynolds number lie in the clustering regime of sedimentation. In simulations, the particle diameter ratio (i.e. diameter of larger particle to smaller particle (d₁/d₂)) is varied from 2:1, 3:1 and 4:1. For each case of particle diameter ratio, solid volume fraction for each particle size (φ₁/φ₂) is varied from 1:1, 1:2 and 2:1. For comparison, simulations are also performed for monodisperse particles. For studying particles clustering, radial distribution function and instantaneous location of particles in the computational domain are studied. It is observed that the degree of particle clustering decreases with the increase in the bi-dispersity of settling particles. The smallest degree of particle clustering or dispersion of particles is observed for particles with d₁/d₂ equal to 4:1 and φ₁/φ₂ equal to 1:2. Simulations showed that the reduction in particle clustering by increasing bi-dispersity is due to the difference in settling velocity of particles. Particles with larger size settle faster and knockout the smaller particles from clustered regions of particles in the computational domain.Keywords: dispersion in bi-disperse settling particles, particle microstructures in bi-disperse suspensions, particle resolved direct numerical simulations, settling of bi-disperse particles
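To illustrate the radial distribution function used above to quantify clustering, a minimal sketch for particle centres in a periodic cubic box is given; the box size, bin width, and the uniformly random positions are illustrative assumptions rather than the simulation's actual output.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.05, r_max=5.0):
    """Radial distribution function g(r) for particle centres in a periodic cubic box (sketch)."""
    n = len(positions)
    rho = n / box**3
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    # Factor 2 because each pair was counted once; normalise by the ideal-gas expectation.
    g = 2.0 * counts / (n * rho * shell_vol)
    return 0.5 * (edges[1:] + edges[:-1]), g

rng = np.random.default_rng(2)
r_centres, g_r = radial_distribution(rng.uniform(0.0, 10.0, (500, 3)), box=10.0)
```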
164 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants
Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey
Abstract:
Next-generation power systems are equipped with abundantly available free renewable energy resources (RES). During their low-cost operation, the price of electricity drops significantly, and sometimes it becomes negative. Therefore, it is recommended not to operate traditional power plants (e.g., coal power plants) during such periods in order to reduce losses. In fact, this is not always a cost-effective solution, because these power plants exhibit shutdown and startup costs. Moreover, they require a certain time for shutdown and also need a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and to increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for a power plant by exploiting different flexibility features. These flexibility features include improving the ramp rate of the power plant, reducing the startup time, and lowering the minimum load. The control strategy is solved as a mixed integer linear program (MILP), ensuring an optimal solution to the profit maximization problem. Extensive comparisons are made considering pre- and post-retrofit coal power plants having the same efficiencies under different electricity price scenarios. It is concluded that if the power plant must remain in the market (providing services), more flexibility translates into a direct economic advantage for the plant operator.
Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model
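A minimal sketch of the kind of MILP profit-maximization problem described above is given below using the PuLP modelling package; the price profile, cost figures, power limits, and constraint set are simplified assumptions and omit the ramp-rate, minimum up/down-time, and retrofit details treated in the paper.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

# Illustrative price profile (EUR/MWh) with a negative-price hour, and plant parameters.
prices = [42.0, 35.0, -8.0, 15.0, 55.0]
p_min, p_max = 150.0, 400.0      # MW limits when the unit is on
fuel_cost = 25.0                 # EUR/MWh
startup_cost, shutdown_cost = 5000.0, 2000.0

prob = LpProblem("unit_commitment_profit", LpMaximize)
T = range(len(prices))
u = [LpVariable(f"on_{t}", cat=LpBinary) for t in T]          # unit on/off
p = [LpVariable(f"power_{t}", lowBound=0) for t in T]         # dispatched power
su = [LpVariable(f"startup_{t}", cat=LpBinary) for t in T]
sd = [LpVariable(f"shutdown_{t}", cat=LpBinary) for t in T]

# Profit = market revenue - fuel cost - startup/shutdown costs.
prob += lpSum((prices[t] - fuel_cost) * p[t] - startup_cost * su[t] - shutdown_cost * sd[t] for t in T)

for t in T:
    prob += p[t] <= p_max * u[t]
    prob += p[t] >= p_min * u[t]
    u_prev = u[t - 1] if t > 0 else 0      # assume the unit starts offline
    prob += su[t] >= u[t] - u_prev
    prob += sd[t] >= u_prev - u[t]

prob.solve()
print([int(v.value()) for v in u], [round(v.value(), 1) for v in p])
```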
163 Correlative Look at Relationship between Emotional Intelligence and Effective Crisis Management in Context of Covid-19 in France and Canada
Authors: Brittany Duboz-Quinville
Abstract:
Emotional Intelligence (EI) is a growing field, and many studies are examining how it pertains to the workplace. In the context of crisis management several studies have postulated that EI could play a role in individuals’ ability to execute crisis plans. However, research evaluating the EI of leaders who have actually managed a crisis is still lacking. The COVID-19 pandemic forced many businesses into a crisis situation beginning in March and April of 2020. This study sought to measure both EI and effective crisis management (CM) during the COVID-19 pandemic to determine if they were positively correlated. A quantitative survey was distributed via the internet that comprised of 15 EI statements, and 15 CM statements with Likert scale responses, and 6 demographic questions with discrete responses. The hypothesis of the study was: it is believed that EI correlates positively with effective crisis management. The results of the study did not support the studies hypothesis as the correlation between EI and CM was not statistically significant. An additional correlation was tested, comparing employees’ perception of their superiors’ EI (Perception) to employees’ opinion of how their superiors managed the crisis (Opinion). This Opinion and Perception correlation was statistically significant. Furthermore, by examining this correlation through demographic divisions there are additional significant results, notably that French speaking employees have a stronger Opinion/Perception correlation than English speaking employees. Implications for cultural differences in EI and CM are discussed as well as possible differences across job sectors. Finally, it is hoped that this study will serve to convince more companies, particularly in France, to embrace EI training for staff and especially managers.Keywords: crisis management, emotional intelligence, empathy, management training
162 A Stochastic Diffusion Process Based on the Two-Parameter Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. Therefore, the purpose of stochastic modeling is to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might happen under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, as the explicit expression of the process, its trends, and its distribution by transforming the diffusion process in a Wiener process as shown in the Ricciaardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in its application to real data with the use of the convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. According to the data that is available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trends functions, bi-parameters weibull density function
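As an illustration of the maximum likelihood step described above, a minimal sketch for fitting the two-parameter Weibull density to simulated data is given; the sample size, true parameters, and the Nelder-Mead optimizer are illustrative choices, not the authors' estimation procedure for the diffusion process itself.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_neg_loglik(params, x):
    """Negative log-likelihood of the two-parameter Weibull density (shape k, scale lam)."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = x / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z**k)

# Simulate lifetimes from a known Weibull, then recover the parameters by maximum likelihood.
rng = np.random.default_rng(3)
true_k, true_lam = 1.8, 5.0
sample = true_lam * rng.weibull(true_k, size=2000)
fit = minimize(weibull_neg_loglik, x0=[1.0, np.mean(sample)], args=(sample,), method="Nelder-Mead")
print(fit.x)  # should land near (1.8, 5.0)
```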
161 An Investigation into the Impacts of High-Frequency Electromagnetic Fields Utilized in the 5G Technology on Insects
Authors: Veriko Jeladze, Besarion Partsvania, Levan Shoshiashvili
Abstract:
This paper addresses a very topical issue. The frequency range 2.5-100 GHz contains frequencies that have already been used or will be used in modern 5G technologies. The wavelengths used in 5G systems will be close to the body dimensions of small biological objects, particularly insects. Because the dimensions of insect bodies and body parts are comparable with the wavelength at these frequencies, high absorption of EMF energy in the body tissues can occur (body resonance) and can therefore cause harmful effects, possibly the extinction of some species. An investigation into the impact of the radio-frequency non-ionizing electromagnetic fields (EMF) utilized in future 5G on insects is of great importance, as a very high number of 5G network components will increase the total EMF exposure in the environment. All ecosystems of the earth are interconnected. If one component of an ecosystem is disrupted, the whole system will be affected (which could cause cascading effects). The study of these problems is an important challenge for scientists today because the existing studies are incomplete and insufficient. Consequently, the purpose of this proposed research is to investigate the possible hazardous impact of RF-EMFs (including 5G EMFs) on insects. The project will study the effects of these EMFs on various insects with different body sizes through computer modeling at frequencies from 2.5 to 100 GHz. The selected insects are the honey bee, wasp, and ladybug. For this purpose, detailed 3D discrete models of the insects are created for EM and thermal modeling through FDTD, and whole-body Specific Absorption Rates (SAR) will be evaluated at selected frequencies. All these studies represent a novelty. The proposed study will promote new investigations into the bio-effects of 5G EMFs and will contribute to the harmonization of safe exposure levels and frequencies of 5G EMFs.
Keywords: electromagnetic field, insect, FDTD, specific absorption rate (SAR)
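For reference, the standard local and whole-body specific absorption rate definitions underlying the FDTD evaluation mentioned above can be written as follows, where σ is the tissue conductivity, E the RMS electric field from the FDTD solution, ρ the tissue mass density, m the body mass, and V the body volume; this is the conventional textbook form, not a formula taken from the project itself.

```latex
\mathrm{SAR}(\mathbf{r}) \;=\; \frac{\sigma(\mathbf{r})\,\lvert\mathbf{E}(\mathbf{r})\rvert^{2}}{\rho(\mathbf{r})},
\qquad
\mathrm{SAR}_{\mathrm{whole\ body}} \;=\; \frac{1}{m}\int_{V}\sigma(\mathbf{r})\,\lvert\mathbf{E}(\mathbf{r})\rvert^{2}\,\mathrm{d}V
```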
160 Infrared Lightbox and iPhone App for Improving Detection Limit of Phosphate Detecting Dip Strips
Authors: H. Heidari-Bafroui, B. Ribeiro, A. Charbaji, C. Anagnostopoulos, M. Faghri
Abstract:
In this paper, we report the development of a portable and inexpensive infrared lightbox for improving the detection limits of paper-based phosphate devices. Commercial paper-based devices utilize the molybdenum blue protocol to detect phosphate in the environment. Although these devices are easy to use and have a long shelf life, their main deficiency is their low sensitivity based on the qualitative results obtained via a color chart. To improve the results, we constructed a compact infrared lightbox that communicates wirelessly with a smartphone. The system measures the absorbance of radiation for the molybdenum blue reaction in the infrared region of the spectrum. It consists of a lightbox illuminated by four infrared light-emitting diodes, an infrared digital camera, a Raspberry Pi microcontroller, a mini-router, and an iPhone to control the microcontroller. An iPhone application was also developed to analyze images captured by the infrared camera in order to quantify phosphate concentrations. Additionally, the app connects to an online data center to present a highly scalable worldwide system for tracking and analyzing field measurements. In this study, the detection limits for two popular commercial devices were improved by a factor of 4 for the Quantofix devices (from 1.3 ppm using visible light to 300 ppb using infrared illumination) and a factor of 6 for the Indigo units (from 9.2 ppm to 1.4 ppm) with repeatability of less than or equal to 1.2% relative standard deviation (RSD). The system also provides more granular concentration information compared to the discrete color chart used by commercial devices and it can be easily adapted for use in other applications.Keywords: infrared lightbox, paper-based device, phosphate detection, smartphone colorimetric analyzer
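A minimal sketch of how absorbance could be computed from the infrared images captured by the lightbox is given below; the region-of-interest intensities are simulated numbers, and the simple Beer-Lambert-style ratio, together with the mention of a calibration curve, is an assumption about the analysis rather than the app's exact algorithm.

```python
import numpy as np

def absorbance_from_roi(sample_img, blank_img):
    """Absorbance A = -log10(I / I0) from mean infrared pixel intensities of a detection zone."""
    i_sample = float(np.mean(sample_img))
    i_blank = float(np.mean(blank_img))
    return -np.log10(i_sample / i_blank)

# Illustrative 8-bit intensities of the dip-strip detection zone under IR illumination.
rng = np.random.default_rng(5)
blank = rng.normal(210.0, 2.0, (40, 40))    # strip with no phosphate (reference)
sample = rng.normal(160.0, 2.0, (40, 40))   # molybdenum blue absorbing the IR light
print(round(absorbance_from_roi(sample, blank), 3))
# A calibration curve of absorbance vs. known phosphate standards would convert A to concentration.
```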
159 Sharing Tacit Knowledge: The Essence of Knowledge Management
Authors: Ayesha Khatun
Abstract:
In 21st century where markets are unstable, technologies rapidly proliferate, competitors multiply, products and services become obsolete almost overnight and customers demand low cost high value product, leveraging and harnessing knowledge is not just a potential source of competitive advantage rather a necessity in technology based and information intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing the same among the individuals in the organization so that the employees can make best use of it towards achieving the organizational goals. Knowledge is not a discrete object. It is embedded in people and so difficult to transfer outside the immediate context that it becomes a major competitive advantage. However, internal transfer of knowledge among the employees is essential to maximize the use of knowledge available in the organization in an unstructured manner. But as knowledge is the source of competitive advantage for the organization it is also the source of competitive advantage for the individuals. People think that knowledge is power and sharing the same may lead to lose the competitive position. Moreover, the very nature of tacit knowledge poses many difficulties in sharing the same. But sharing tacit knowledge is the vital part of knowledge management process because it is the tacit knowledge which is inimitable. Knowledge management has been made synonymous with the use of software and technology leading to the management of explicit knowledge only ignoring personal interaction and forming of informal networks which are considered as the most successful means of sharing tacit knowledge. Factors responsible for effective sharing of tacit knowledge are grouped into –individual, organizational and technological factors. Different factors under each category have been identified. Creating a positive organizational culture, encouraging personal interaction, practicing reward system are some of the strategies that can help to overcome many of the barriers to effective sharing of tacit knowledge. Methodology applied here is completely secondary. Extensive review of relevant literature has been undertaken for the purpose.Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing
158 Multiple Negative-Differential Resistance Regions Based on AlN/GaN Resonant Tunneling Structures by the Vertical Growth of Molecular Beam Epitaxy
Authors: Yao Jiajia, Wu Guanlin, LIU Fang, Xue Junshuai, Zhang Jincheng, Hao Yue
Abstract:
Resonant tunneling diodes (RTDs) based on GaN have been extensively studied. However, no results on multiple logic states achieved by RTDs have been reported using epitaxial growth methods in GaN materials. In this paper, multiple negative-differential resistance regions obtained by combining two discrete double-barrier RTDs in series are demonstrated for the first time. Plasma-assisted molecular beam epitaxy (PA-MBE) was used to grow structures consisting of two vertical RTDs. The substrate was a GaN-on-sapphire template. Each resonant tunneling structure was composed of a double barrier of AlN and a single well of GaN with undoped 4-nm spacer layers of GaN on each side. The AlN barriers were 1.5 nm thick, and the GaN well was 2 nm thick. The resonant tunneling structures were separated from each other by 30-nm thick n+ GaN layers. The bottom and top layers of the structures, grown adjacent to the spacer layers, consist of 200-nm-thick n+ GaN. These devices with two tunneling structures exhibited uniform peak and valley currents and also had two negative differential resistance (NDR) regions equally spaced in bias voltage. The current-voltage (I-V) characteristics of resonant tunneling structures with diameters of 1 and 2 μm were analyzed in this study. These structures exhibit three stable operating points, which are investigated in detail. This research demonstrates that using molecular beam epitaxy (MBE) to vertically grow multiple resonant tunneling structures is a promising method for achieving multiple negative differential resistance regions and stable logic states. These findings have significant implications for the development of digital circuits capable of multi-value logic, which can be achieved with a small number of devices.
Keywords: GaN, AlN, RTDs, MBE, logic state
157 A Parking Demand Forecasting Method for Making Parking Policy in the Center of Kabul City
Authors: Roien Qiam, Shoshi Mizokami
Abstract:
Parking demand in the Central Business District (CBD) has grown with the increase in the number of private vehicles due to rapid economic growth and the lack of an efficient public transport and traffic management system. This has resulted in low mobility, poor accessibility, serious congestion, high rates of traffic accident fatalities and injuries, and air pollution, mainly because people have to drive around slowly to find a vacant spot. With a parking pricing and enforcement policy, considerable improvement could be achieved, and on-street parking spaces could be managed efficiently and effectively. To evaluate parking demand and make parking policy, it is required to understand the current parking conditions and drivers’ behavior: how drivers choose their parking type and location, as well as their behavior toward finding a vacant parking spot under parking charges and search times. This study presents the results from observational, revealed preference, and stated preference surveys and an experiment. The obtained data show that there is a gap between supply and demand in parking and that it has reached a maximum. For the modeling of the parking decision, a choice model was constructed based on discrete choice modeling theory, and a multinomial logit model was estimated using SP survey data; the model represents the choice of an alternative among different alternatives, namely priced on-street, off-street, and illegal parking. Individuals choose a parking type based on their preferences concerning parking charges, search times, access times, and waiting times. The parking assignment model was obtained directly from the behavioral model and is used in the parking simulation. The study concludes with an evaluation of parking policy.
Keywords: CBD, parking demand forecast, parking policy, parking choice model
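As an illustration of the multinomial logit choice model described above, a minimal sketch of the choice probabilities over the three alternatives (priced on-street, off-street, illegal) is given; the utility coefficients, alternative-specific constants, and attribute values are invented for illustration and were not estimated from the Kabul survey data.

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities from systematic utilities (one row per individual)."""
    v = np.asarray(utilities, dtype=float)
    v = v - v.max(axis=1, keepdims=True)       # numerical stability
    expv = np.exp(v)
    return expv / expv.sum(axis=1, keepdims=True)

# Illustrative utilities for [priced on-street, off-street, illegal] parking,
# built from assumed coefficients on parking charge and search time.
beta_cost, beta_search = -0.08, -0.05
charge = np.array([[2.0, 3.5, 0.0]])           # currency units per hour
search = np.array([[10.0, 3.0, 15.0]])         # minutes
asc = np.array([[0.0, -0.4, -1.5]])            # alternative-specific constants (illegal penalised)
P = mnl_probabilities(asc + beta_cost * charge + beta_search * search)
print(P.round(3))
```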
156 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media
Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding
Abstract:
A mechanical wave or vibration propagating through granular media exhibits a specific signature in time. A coherent pulse or wavefront arrives first with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent i.e. it depends only on the bulk properties of the disordered granular sample, the sound wave velocity of the granular sample and hence bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. The pulse attenuation and broadening effects are affected by disorder (polydispersity; contrast in size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The sizes of particles have been selected randomly from a Gaussian distribution, where the standard deviation of this distribution is the relevant parameter that quantifies the effect of disorder on the coherent wavefront. Since, the coherent wavefront is system configuration independent, ensemble averaging has been used for improving the signal quality of the coherent pulse and removing the multiply scattered waves. The results concerning the width of the coherent wavefront have been formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation
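A minimal sketch of the Hertzian-contact granular chain described above is given below; the contact stiffness, grain density, time step, and initial kick are illustrative assumptions, and dissipation, ensemble averaging, and the coherent-front analysis are omitted.

```python
import numpy as np

def hertz_forces(x, radii, k_n):
    """Pairwise Hertzian contact forces F = k_n * overlap**1.5 along a 1D granular chain."""
    f = np.zeros(len(x))
    for i in range(len(x) - 1):
        overlap = radii[i] + radii[i + 1] - (x[i + 1] - x[i])
        if overlap > 0.0:
            fc = k_n * overlap**1.5
            f[i] -= fc          # pushes particle i to the left
            f[i + 1] += fc      # pushes particle i+1 to the right
    return f

# Polydisperse chain: radii drawn from a Gaussian, masses ~ radius**3, leftmost grain given a kick.
rng = np.random.default_rng(4)
n, k_n, dt = 50, 5.0e6, 1.0e-6
radii = rng.normal(1.0e-3, 5.0e-5, n)
x = np.cumsum(2 * radii) - radii             # grains initially just touching
v = np.zeros(n)
v[0] = 0.1                                   # impart the pulse
m = 2500.0 * 4.0 / 3.0 * np.pi * radii**3
a = hertz_forces(x, radii, k_n) / m
for _ in range(2000):                        # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt**2
    a_new = hertz_forces(x, radii, k_n) / m
    v += 0.5 * (a + a_new) * dt
    a = a_new
```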
155 Reservoir Potential, Net Pay Zone and 3D Modeling of Cretaceous Clastic Reservoir in Eastern Sulieman Belt Pakistan
Authors: Hadayat Ullah, Pervez Khalid, Saad Ahmed Mashwani, Zaheer Abbasi, Mubashir Mehmood, Muhammad Jahangir, Ehsan ul Haq
Abstract:
The aim of the study is to explore subsurface structures through data that is acquired from the seismic survey to delineate the characteristics of the reservoir through petrophysical analysis. Ghazij Shale of Eocene age is regional seal rock in this field. In this research work, 3D property models of subsurface were prepared by applying Petrel software to identify various lithologies and reservoir fluids distribution throughout the field. The 3D static modeling shows a better distribution of the discrete and continuous properties in the field. This model helped to understand the reservoir properties and enhance production by selecting the best location for future drilling. A complete workflow is proposed for formation evaluation, electrofacies modeling, and structural interpretation of the subsurface geology. Based on the wireline logs, it is interpreted that the thickness of the Pab Sandstone varies from 250 m to 350 m in the entire study area. The sandstone is massive with high porosity and intercalated layers of shales. Faulted anticlinal structures are present in the study area, which are favorable for the accumulation of hydrocarbon. 3D structural models and various seismic attribute models were prepared to analyze the reservoir character of this clastic reservoir. Based on wireline logs and seismic data, clean sand, shaly sand, and shale are marked as dominant facies in the study area. However, clean sand facies are more favorable to act as a potential net pay zone.Keywords: cretaceous, pab sandstone, petrophysics, electrofacies, hydrocarbon
154 Finite Element Model to Evaluate Gas Coning Phenomenon in Naturally Fractured Oil Reservoirs
Authors: Reda Abdel Azim
Abstract:
The gas coning phenomenon is considered one of the prevalent issues in oil field applications, as it significantly affects the amount of produced oil, increases the cost of production operations, and has a direct effect on oil reservoir recovery efficiency as well. Therefore, evaluating this phenomenon and studying the reservoir mechanisms that may strongly affect gas invading the producing formation is crucial. Gas coning is a result of an imbalance between the two major forces controlling oil production, gravitational and viscous forces, especially in naturally fractured reservoirs where the capillary pressure forces are negligible. Once gas invades the producing formation near the wellbore due to a large oil production rate, the oil-gas contact will change, and such reservoirs are prone to gas coning. Moreover, the oil volume expected to be produced requires the use of a long horizontal perforated well. This work presents a numerical simulation study to predict and propose solutions to gas coning in naturally fractured oil reservoirs. The simulation work is based on discrete fracture and permeability tensor approaches. The governing equations are discretized using a finite element approach, and the Galerkin least-squares (GLS) technique is employed to stabilize the equation solutions. The developed simulator is validated against Eclipse-100 using horizontal fractures. The matrix and fracture properties are modelled. The critical rate, breakthrough time, and GOR are determined to be used in the investigation of the effect of matrix and fracture properties on gas coning. Results show that the fracture distribution, in terms of diverse dip and azimuth, has a great effect on the occurrence of coning, as do the fracture porosity, anisotropy ratio, and fracture aperture.
Keywords: gas coning, finite element, fractured reservoirs, multiphase
153 The Effectiveness of Exercise Therapy on Decreasing Pain in Women with Temporomandibular Disorders and How Their Brains Respond: A Pilot Randomized Controlled Trial
Authors: Zenah Gheblawi, Susan Armijo-Olivo, Elisa B. Pelai, Vaishali Sharma, Musa Tashfeen, Angela Fung, Francisca Claveria
Abstract:
Due to physiological differences between men and women, pain is experienced differently between the two sexes. Chronic pain disorders, notably temporomandibular disorders (TMDs), disproportionately affect women in diagnosis, and pain severity in opposition of their male counterparts. TMDs are a type of musculoskeletal disorder that target the masticatory muscles, temporalis muscle, and temporomandibular joints, causing considerable orofacial pain which can usually be referred to the neck and back. Therapeutic methods are scarce, and are not TMD-centered, with the latest research suggesting that subjects with chronic musculoskeletal pain disorders have abnormal alterations in the grey matter of their brains which can be remedied with exercise, and thus, decreasing the pain experienced. The aim of the study is to investigate the effects of exercise therapy in TMD female patients experiencing chronic jaw pain and to assess the consequential effects on brain activity. In a randomized controlled trial, the effectiveness of an exercise program to improve brain alterations and clinical outcomes in women with TMD pain will be tested. Women with chronic TMD pain will be randomized to either an intervention arm or a placebo control group. Women in the intervention arm will receive 8 weeks of progressive exercise of motor control training using visual feedback (MCTF) of the cervical muscles, twice per week. Women in the placebo arm will receive innocuous transcutaneous electrical nerve stimulation during 8 weeks as well. The primary outcomes will be changes in 1) pain, measured with the Visual Analogue Scale, 2) brain structure and networks, measured by fractional anisotropy (brain structure) and the blood-oxygen level dependent signal (brain networks). Outcomes will be measured at baseline, after 8 weeks of treatment, and 4 months after treatment ends and will determine effectiveness of MCTF in managing TMD, through improved clinical outcomes. Results will directly inform and guide clinicians in prescribing more effective interventions for women with TMD. This study is underway, and no results are available at this point. The results of this study will have substantial implications on the advancement in understanding the scope of plasticity the brain has in regards with pain, and how it can be used to improve the treatment and pain of women with TMD, and more generally, other musculoskeletal disorders.Keywords: exercise therapy, musculoskeletal disorders, physical therapy, rehabilitation, tempomandibular disorders
152 Stereo Motion Tracking
Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy to combine the two techniques of two-dimensional motion tracking using a Kalman filter and depth detection of objects using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. However, for stereo motion tracking, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, a calculation of the depth of the object from the plane containing the cameras is made. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. At discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as the depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find applications ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons, or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: Kalman filter, stereo vision, motion tracking, Matlab, object tracking, camera calibration, computer vision system toolbox
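To make the combination described above concrete, a minimal Python sketch of a constant-velocity Kalman filter for image-plane tracking together with depth from stereo disparity is given (the study itself was built around Matlab and its Computer Vision System Toolbox); the noise covariances, frame rate, focal length, and baseline are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Perpendicular distance of a point from the camera plane for a rectified stereo pair."""
    disparity = x_left - x_right                  # pixels
    return focal_px * baseline_m / disparity

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter tracking image-plane position (sketch)."""
    def __init__(self, dt, process_var=1.0, meas_var=4.0):
        self.x = np.zeros(4)                      # [px, py, vx, vy]
        self.P = np.eye(4) * 500.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def step(self, z):
        # Predict, then correct with the measured image-plane position z = [px, py].
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

kf = ConstantVelocityKalman(dt=1 / 30)
state = kf.step([320.0, 240.0])
z_m = depth_from_disparity(x_left=352.0, x_right=340.0, focal_px=700.0, baseline_m=0.12)
```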