Search results for: noise estimation
1143 Optimization of Proton Exchange Membrane Fuel Cell Parameters Based on Modified Particle Swarm Algorithms
Authors: M. Dezvarei, S. Morovati
Abstract:
In recent years, the increasing use of electrical energy has opened a wide field for investigating new methods of producing clean electricity with high reliability and cost management. Fuel cells are a new clean generation technology that produces electricity and thermal energy together, with high performance and no environmental pollution. Given the expanding use of fuel cells in different industrial networks, the identification and optimization of their parameters is highly significant. This paper presents the optimization of proton exchange membrane fuel cell (PEMFC) parameters based on particle swarm optimization modified with real-valued mutation (RVM) and clonal algorithms. The mathematical equations of this type of fuel cell serve as the main model structure in the optimization process. Parameters optimized by the clonal and RVM algorithms are compared with the desired values in the presence and absence of measurement noise. This paper shows that these methods can improve the performance of traditional optimization methods. Simulation results are employed to analyze and compare the performance of these methodologies in optimizing the proton exchange membrane fuel cell parameters.
Keywords: clonal algorithm, proton exchange membrane fuel cell (PEMFC), particle swarm optimization (PSO), real-valued mutation (RVM)
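As a rough illustration of the kind of optimizer this abstract describes, the sketch below implements a plain particle swarm with a Gaussian real-valued mutation step applied to the global best. The two-parameter quadratic fitness, the target values, and all bounds are stand-ins for illustration, not the PEMFC model from the paper.

```python
import math, random

def pso_rvm(fitness, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    """Particle swarm minimization with a real-valued (Gaussian) mutation
    of the global best each iteration. bounds: list of (lo, hi) per dimension."""
    dim = len(bounds)
    random.seed(0)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp each coordinate back into its bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
        # real-valued mutation: Gaussian perturbation of the global best
        cand = [min(max(x + random.gauss(0, pm * (hi - lo)), lo), hi)
                for x, (lo, hi) in zip(gbest, bounds)]
        fc = fitness(cand)
        if fc < gbest_f:
            gbest, gbest_f = cand, fc
    return gbest, gbest_f

# hypothetical two-parameter identification: recover target parameters
# from a sum-of-squared-errors fitness (stand-in for the PEMFC model)
target = [0.8, 1.2]
fit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, best_f = pso_rvm(fit, [(0.0, 2.0), (0.0, 2.0)])
```

In a real parameter-identification setting, `fit` would compare the PEMFC polarization model's predicted voltages against measured data rather than known targets.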
Procedia PDF Downloads 351
1142 Effect of White Roofing on Refrigerated Buildings
Authors: Samuel Matylewicz, K. W. Goossen
Abstract:
The deployment of white or cool (high albedo) roofing is a common energy savings recommendation for a variety of buildings all over the world. Here, the effect of a white roof on the energy savings of an ice rink facility in the northeastern US is determined by measuring the effect of solar irradiance on the consumption of the rink's ice refrigeration system. The consumption of the refrigeration system was logged over a year, along with multiple weather vectors, and a statistical model was applied. The experimental model indicates that the expected savings of replacing the existing grey roof with a white roof on the consumption of the refrigeration system is only 4.7%. This overall result of the statistical model is confirmed with isolated instances of otherwise similar weather days, but cloudy vs. sunny, where there was no measurable difference in refrigeration consumption up to the noise in the local data, which was a few percent. This compares with a simple theoretical calculation that indicates 30% savings. The difference is attributed to a lack of convective cooling of the roof in the theoretical model. The best experimental model shows a relative effect of the weather vectors dry bulb temperature, solar irradiance, wind speed, and relative humidity on refrigeration consumption of 1, 0.026, 0.163, and -0.056, respectively. This result can inform decisions to apply white roofing to refrigerated buildings in general.
Keywords: cool roofs, solar cooling load, refrigerated buildings, energy-efficient building envelopes
Procedia PDF Downloads 129
1141 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries
Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis
Abstract:
In the present work, a numerical method for the estimation of the appropriate gradient magnetic fields for optimal driving of particles into a desired area inside the human body is presented. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM), and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that intends to eliminate the deviation of the nanoparticles from a desired path. Hence, the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that particle diameter is a crucial parameter for efficient navigation; increasing the particle diameter decreases the deviation from the desired path. Moreover, the navigation method can drive nanoparticles into the desired areas with an efficiency of approximately 99%.
Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles
Procedia PDF Downloads 142
1140 Statically Fused Unbiased Converted Measurements Kalman Filter
Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou
Abstract:
The statically fused converted position and Doppler measurements Kalman filter (SF-CMKF) with additive debiased measurement conversion has been previously presented to combine the resulting states of the converted position measurements Kalman filter (CPMKF) and the converted Doppler measurement Kalman filter (CDMKF) to yield the final state estimates under the minimum mean squared error (MMSE) criterion. However, the exact compensation for the bias in the polar-to-Cartesian and spherical-to-Cartesian conversions is multiplicative and depends on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large-angle-error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversions for 2D (polar-to-Cartesian) tracking are derived, and the SF-CMKF is improved to use those conversions. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).
Keywords: measurement conversion, Doppler, Kalman filter, estimation, tracking
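The multiplicative unbiased polar-to-Cartesian conversion this abstract builds on can be sketched as follows, assuming Gaussian angle noise so that E[cos(angle error)] = exp(-σ²/2). The range, angle, and noise level below are illustrative values only, and the Monte Carlo check simply confirms that the converted mean approaches the true position even with a large angle error.

```python
import math, random

def unbiased_polar_to_cartesian(r, theta, sigma_theta):
    """Multiplicative unbiased conversion of a polar measurement (r, theta)
    to Cartesian: divide by the bias factor
    lambda = E[cos(theta_err)] = exp(-sigma_theta**2 / 2) for Gaussian noise."""
    lam = math.exp(-sigma_theta ** 2 / 2.0)
    return r * math.cos(theta) / lam, r * math.sin(theta) / lam

# Monte Carlo check with a deliberately large angle-noise standard deviation
random.seed(1)
true_r, true_th, sig = 1000.0, 0.6, 0.2   # range, bearing (rad), bearing noise (rad)
xs, ys = [], []
for _ in range(200000):
    th = true_th + random.gauss(0, sig)   # noisy bearing measurement
    x, y = unbiased_polar_to_cartesian(true_r, th, sig)
    xs.append(x)
    ys.append(y)
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
```

The additive debiasing mentioned in the abstract instead subtracts an estimated bias term; the multiplicative form above avoids the consistency issues that arise when the angle error is large.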
Procedia PDF Downloads 208
1139 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique
Authors: Reda Abdel Azim, Tariq Shehab
Abstract:
The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish inter-relationships between different data sources, such as conventional well logs, borehole images (FMI), core descriptions, and seismic attributes, and then to characterise fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the system; the fractal dimension is likewise used to characterize self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for the limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of application.
Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension
Procedia PDF Downloads 254
1138 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process in finding a suitable region in medical images such as MRI, PET, CT, etc. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB™ for 2D MRI images. In this program, there are two different interfaces covering data pre-processing and image clustering or segmentation. The data pre-processing section offers a median filter, average filter, unsharp mask filter, Wiener filter, and a custom filter (a filter designed by the user in MATLAB). As for image clustering, there are seven different segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (biogeography-based optimization). To find the suitable cluster number in 2D MRI, we have designed a histogram-based cluster estimation method and then supplied these numbers to the segmentation algorithms to cluster an image automatically. Also, we have selected the best hybrid method for each 2D MR image thanks to this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
Procedia PDF Downloads 377
1137 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by wire electrical discharge machining (EDM) with S7 heat treated steel material. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high precision in dimension and the desired surface roughness. This paper demonstrates that the Taguchi parameter design methodology is able to optimize both dimensional accuracy and surface roughness successfully by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on L18 Taguchi orthogonal arrays are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high precision parts with an average dimension of 0.6601 inches, while the desired dimension is 0.6600 inches; and (2) a surface roughness of 1.7322 microns, which is significantly improved from 2.8160 microns.
Keywords: Taguchi Parameter Design, surface roughness, Wire EDM, dimensional accuracy
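The Taguchi signal-to-noise ratios relevant to the two responses in this abstract (smaller-the-better for surface roughness, nominal-the-best for a target dimension) can be sketched as below. The replicate measurements are hypothetical illustrations, not data from the study.

```python
import math

def sn_smaller_the_better(y):
    """S/N = -10 log10(mean(y^2)); higher is better, used when the
    response (e.g. surface roughness) should be as small as possible."""
    return -10.0 * math.log10(sum(v * v for v in y) / len(y))

def sn_nominal_the_best(y):
    """S/N = 10 log10(mean^2 / variance); used when the response
    (e.g. a machined dimension) should hit a target with low scatter."""
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / (n - 1)
    return 10.0 * math.log10(mean * mean / var)

# hypothetical replicate measurements for one experimental run
roughness = [1.74, 1.71, 1.75]          # Ra in microns (smaller-the-better)
dimension = [0.6601, 0.6600, 0.6602]    # inches (nominal-the-best)

sn_ra = sn_smaller_the_better(roughness)
sn_dim = sn_nominal_the_best(dimension)
```

In a Taguchi study, these S/N values would be computed for every row of the L18 array and then averaged per factor level to pick the optimal parameter combination.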
Procedia PDF Downloads 371
1136 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia
Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes
Abstract:
Nonparametric path analysis is a statistical method that does not rely on the assumption that the curve is known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3-knot points and to determine the significance of estimating the best nonparametric truncated spline path function in the model of the effect of population migration and agricultural economic growth on rural poverty through the variable unemployment rate using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results showed that the best model of nonparametric truncated spline path analysis is quadratic polynomial degree with 3-knot points. In addition, the significance of the best-truncated spline nonparametric path function estimation using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling
Procedia PDF Downloads 46
1135 A Sequential Approach for Random-Effects Meta-Analysis
Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya
Abstract:
The objective of meta-analysis is to combine results from several independent studies in order to produce generalizable conclusions and provide an evidence base for decision making. But recent studies show that the magnitude of effect size estimates reported in many areas of research changes with publication year, and this can impair the results and conclusions of meta-analysis. A number of sequential methods have been proposed for monitoring the effect size estimates in meta-analysis. However, they are based on statistical theory applicable to the fixed-effect model (FEM). For the random-effects model (REM), the analysis incorporates the heterogeneity variance, tau-squared, whose estimation creates complications. This paper proposes the use of the Gombay and Serbian (2005) truncated CUSUM-type test with asymptotically valid critical values for sequential monitoring of the REM. Simulation results show that the test does not control the Type I error well, and it is not recommended. Further work is required to derive an appropriate test in this important area of application.
Keywords: meta-analysis, random-effects model, sequential test, temporal changes in effect sizes
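A heavily simplified CUSUM-style monitor of effect sizes, in the spirit of (but not identical to) the truncated test discussed in this abstract, might look like the sketch below. The effect-size sequences, variances, and threshold are illustrative assumptions; a proper test would use the asymptotically valid critical values and account for the heterogeneity variance.

```python
import math

def cusum_monitor(effects, variances, threshold=2.24):
    """Illustrative CUSUM-type monitoring of a sequence of effect sizes.
    Standardizes each new estimate against the first (baseline) estimate,
    accumulates the standardized deviations, and flags the first index
    where |cumsum| / sqrt(k) exceeds the threshold. Returns (flag, stats)."""
    flagged = None
    s, stats = 0.0, []
    baseline = effects[0]
    for k in range(1, len(effects)):
        z = (effects[k] - baseline) / math.sqrt(variances[k])
        s += z
        stat = abs(s) / math.sqrt(k)
        stats.append(stat)
        if flagged is None and stat > threshold:
            flagged = k
    return flagged, stats

# hypothetical sequences of study-level effect estimates with variance 0.01
stable = [0.5, 0.52, 0.48, 0.51, 0.49, 0.50]   # no temporal change
drift  = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5]        # effect size drifting upward
var    = [0.01] * 6

flag_stable, _ = cusum_monitor(stable, var)
flag_drift, _ = cusum_monitor(drift, var)
```

The stable sequence never exceeds the threshold, while the drifting one is flagged early, which is the behavior a sequential monitor should exhibit.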
Procedia PDF Downloads 467
1134 Influence of Physical Properties on Estimation of Mechanical Strength of Limestone
Authors: Khaled Benyounes
Abstract:
Determination of rock mechanical properties such as the unconfined compressive strength UCS, Young's modulus E, and the tensile strength from the Brazilian test Rtb is considered the most important component in drilling and mining engineering projects. Research on establishing correlations between the strength and physical parameters of rocks has always been of interest to mining and reservoir engineering. To this end, many rock blocks of limestone were collected from the quarry located in Meftah (Algeria), and cores were prepared in the laboratory using a core drill. This work examines the relationships between the mechanical properties and some physical properties of limestone. Many empirical equations are established between UCS and the physical properties of limestone (such as dry bulk density, P-wave velocity, dynamic Young's modulus, alteration index, and total porosity). Other correlations, UCS versus tensile strength and dynamic versus static Young's modulus, have also been found. Based on the Mohr-Coulomb failure criterion, we were able to establish mathematical relationships for estimating the cohesion and internal friction angle from the UCS and the indirect tensile strength. Results from this study can be useful to the mining industry for resolving a range of geomechanical problems such as slope stability.
Keywords: limestone, mechanical strength, Young's modulus, porosity
Procedia PDF Downloads 454
1133 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication
Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti
Abstract:
In this current era of competitive machinery production, industries are designed to place more emphasis on product quality and cost reduction whilst abiding by pollution-prevention policies. In attempting to address these concerns, industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. As such, this research targets a plausible solution to the issue in grinding titanium alloy (Ti-6Al-4V) by using nanolubrication as an alternative to flood grinding. The aim of this research is to evaluate the optimum conditions of grinding force and surface roughness using an MQL lubricating system to deliver nano-oil at different levels of weight concentration of silicon dioxide (SiO2) mixed with normal mineral oil. The Taguchi design of experiments (DoE) method is carried out using a standard Taguchi orthogonal array of L16 (4^3) to find the optimized combination of SiO2 weight concentration, nozzle orientation, and MQL pressure. Surface roughness and grinding force are also analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. Consequently, the best combination of parameters is tested over a period of time and the results are compared with the conventional grinding methods of dry and flood conditions. The results show a positive performance of MQL nanolubrication.
Keywords: grinding, MQL, precision grinding, Taguchi optimization, titanium alloy
Procedia PDF Downloads 276
1132 Parametric Investigation of Wire-Cut Electric Discharge Machining on Steel ST-37
Authors: Mearg Berhe Gebregziabher
Abstract:
Wire-cut electric discharge machining (WEDM) is one of the advanced machining processes. Despite the development of the current manufacturing sector, no research work has previously been done on the optimization of the process parameters based on the availability of the Steel St-37 workpiece material in Ethiopia. The material removal rate (MRR) is considered as the experimental response of WEDM. The main objective of this work is to investigate and optimize the process parameters for machining quality that gives a high MRR during machining of Steel St-37. Throughout the investigation, pulse on time (TON), pulse off time (TOFF), and wire feed velocity (WR) are used as variable parameters at three different levels, while wire tension, flow rate, type of dielectric fluid, type of workpiece and wire material, and dielectric flow rate are kept constant for each experiment. The Taguchi methodology, as per Taguchi's standard L9 (3^3) orthogonal array (OA), has been carried out to investigate their effects and to predict the optimal combination of process parameters for MRR. The signal-to-noise (S/N) ratio and analysis of variance (ANOVA) were used to analyze the effect of the parameters and to identify the optimum cutting parameters for MRR. MRR was measured by using the Electronic Balance Model SI-32. The results indicated that the most significant factors for MRR are TOFF, TON, and lastly WR. The Taguchi analysis shows that the optimal process parameter combination is A2B2C2, i.e., TON 6 μs, TOFF 29 μs, and WR 2 m/min. At this level, an MRR of 0.414 gram/min has been achieved.
Keywords: ANOVA, MRR, parameter, Taguchi method
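The S/N main-effects analysis over an L9 (3^3) array can be sketched as below, using the larger-the-better ratio appropriate for MRR. The nine MRR values are hypothetical, chosen only so that the analysis lands on the A2B2C2 combination reported in the abstract (levels are 0-indexed in the code, so A2B2C2 corresponds to [1, 1, 1]).

```python
import math

# Standard L9 (3^3) orthogonal array: each row assigns a level (0, 1, 2)
# to factors A=TON, B=TOFF, C=WR.
L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]

# hypothetical MRR results (gram/min) for the nine runs
mrr = [0.21, 0.30, 0.25, 0.33, 0.41, 0.28, 0.27, 0.31, 0.29]

def sn_larger_the_better(y):
    """S/N = -10 log10(mean(1/y^2)); higher is better for responses
    like MRR that should be maximized."""
    return -10.0 * math.log10(sum(1.0 / (v * v) for v in y) / len(y))

# one S/N value per run (single replicate here)
sn = [sn_larger_the_better([v]) for v in mrr]

# main effects: average S/N at each level of each factor, then pick
# the level with the highest mean S/N
best_levels = []
for f in range(3):
    level_means = []
    for lvl in range(3):
        vals = [sn[i] for i, row in enumerate(L9) if row[f] == lvl]
        level_means.append(sum(vals) / len(vals))
    best_levels.append(max(range(3), key=lambda l: level_means[l]))
```

The orthogonality of the L9 array is what lets each factor's effect be averaged out independently from only nine runs instead of the full 27-run factorial.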
Procedia PDF Downloads 43
1131 The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm
Authors: Laura Jenkins, Tim Eschle, Joanne Ciafone, Colin Hamilton
Abstract:
An original working memory model suggested the separation of visual and verbal systems in the working memory architecture, in which only visual working memory components were used during visual working memory tasks. It was later suggested that the visuospatial sketchpad was the only memory component in use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed in which an executive resource incorporates both visual and verbal representations in visual working memory paradigms. This was supported by research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual-task method will be used, with three secondary tasks designed to tap specific components within the working memory architecture: Dynamic Visual Noise (visual components), Visual Attention (spatial components), and Verbal Attention (verbal components). A comparison of the visual working memory tasks will be made to discover whether verbal representations are in use, as the previous literature suggested. This direct comparison has not been made so far in the literature. Consideration will be given to whether a domain-specific approach should be employed when discussing visual working memory tasks, or whether a more domain-general approach could be used instead.
Keywords: semantic organisation, visual memory, change detection
Procedia PDF Downloads 595
1130 On the Effects of the Frequency and Amplitude of Sinusoidal External Cross-Flow Excitation Forces on the Vortex-Induced-Vibrations of an Oscillating Cylinder
Authors: Abouzar Kaboudian, Ravi Chaithanya Mysa, Boo Cheong Khoo, Rajeev Kumar Jaiman
Abstract:
Vortex-induced vibrations can significantly affect the effectiveness of structures in the aerospace as well as offshore marine industries. The oscillatory nature of the forces resulting from the vortex shedding around bluff bodies can result in undesirable effects such as increased loading, stresses, deflections, vibrations and noise in the structures, and also reduced fatigue life of the structures. To date, most studies concentrate on either the free oscillations or the prescribed motion of the bluff bodies. However, structures in operation are usually subject to external oscillatory forces (e.g. due to platform motions in the offshore industries). Periodic forces can be considered as combinations of sinusoids. In this work, we present the effects of sinusoidal external cross-flow forces on the vortex-induced vibrations of an oscillating cylinder. The effects of the amplitude as well as the frequency of these sinusoidal external forces on the fluid forces on the oscillating cylinder are carefully studied and presented. Moreover, we present the transition from a response dominated by the vortex-induced vibrations to the range where it is mostly dictated by the external oscillatory forces. Furthermore, we discuss how the external forces can affect the flow structures around a cylinder. All results are compared against free oscillations of the cylinder.
Keywords: circular cylinder, external force, vortex-shedding, VIV
Procedia PDF Downloads 369
1129 A Self Organized Map Method to Classify Auditory-Color Synesthesia from Frontal Lobe Brain Blood Volume
Authors: Takashi Kaburagi, Takamasa Komura, Yosuke Kurihara
Abstract:
Absolute pitch is the ability to identify a musical note without a reference tone. Training for absolute pitch often occurs in preschool education. It is necessary to clarify how well the trainee can make use of synesthesia in order to evaluate the effect of the training. To the best of our knowledge, there are no existing methods for objectively confirming whether the subject is using synesthesia. Therefore, in this study, we present a method to distinguish the use of color-auditory synesthesia from the separate use of color and audition during absolute pitch training. The method measures blood volume in the prefrontal cortex using functional near-infrared spectroscopy (fNIRS) and assumes that the cognitive step has two parts, a non-linear step and a linear step. For the linear step, we assume a second-order ordinary differential equation. For the non-linear part, it is extremely difficult, if not impossible, to create an inverse filter of a system as complex as the brain. Therefore, we apply a method based on a self-organizing map (SOM), guided by the available data. The presented method was tested on 15 subjects, and the estimation accuracy is reported.
Keywords: absolute pitch, functional near-infrared spectroscopy, prefrontal cortex, synesthesia
Procedia PDF Downloads 263
1128 Experimental and Analytical Dose Assessment of Patient's Family Members Treated with I-131
Authors: Marzieh Ebrahimi, Vahid Changizi, Mohammad Reza Kardan, Seyed Mahdi Hosseini Pooya, Parham Geramifar
Abstract:
Radiation exposure of the patient's family members is one of the major concerns during thyroid cancer radionuclide therapy. The aim of this study was to measure the total effective dose of family members by means of thermoluminescent personal dosimeters and compare it with values calculated by analytical methods. Eighty-five adult family members of fifty-one patients volunteered to participate in this research study. Considering the range of dose rates from 15 µSv/h to 120 µSv/h at the patients' release time, the calculated mean and median dose values of family members were 0.45 mSv and 0.28 mSv, respectively. Moreover, almost all family members' doses were measured to be less than the dose constraint of 5 mSv recommended by the Basic Safety Standards. Considering influencing parameters such as patient dose rate and administered activity, the total effective doses of family members were calculated by the TEDE and NRC formulas and compared with the experimental results. The results indicated that it is fruitful to use quantitative calculations when releasing patients treated with I-131 and for correct estimation of the doses to patients' families.
Keywords: effective dose, thermoluminescence, I-131, thyroid cancer
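A NUREG-1556-style point estimate of the external dose to a family member from a released patient can be sketched as below. The exposure rate constant, released activity, occupancy factor, and distance used here are assumed illustrative values, not clinical guidance, and the formula neglects patient-specific biokinetics (it uses the physical half-life only).

```python
import math

def external_dose_rem(gamma, q0_mci, t_half_days, occupancy, r_cm, t_days=None):
    """NUREG-1556-style estimate of the integrated external dose (rem) to a
    person at distance r_cm from a released patient.
    gamma: exposure rate constant (R*cm^2 / (mCi*h));
    34.6 = 24 h/day * 1.44 (half-life to effective-integration factor).
    t_days=None integrates to total decay (infinite time)."""
    decay = 1.0 if t_days is None else 1.0 - math.exp(-0.693 * t_days / t_half_days)
    return 34.6 * gamma * q0_mci * t_half_days * occupancy * decay / (r_cm ** 2)

# assumed values: I-131 gamma ~2.2 R*cm^2/(mCi*h), 30 mCi retained at release,
# 8.04 d physical half-life, occupancy factor 0.25 at a distance of 1 m
dose = external_dose_rem(2.2, 30.0, 8.04, 0.25, 100.0)
```

With these assumptions the estimate lands just under 0.5 rem (about 4.6 mSv), i.e. close to the 5 mSv constraint the abstract cites, which illustrates why occupancy and distance assumptions dominate such release calculations.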
Procedia PDF Downloads 399
1127 Shoreline Change Estimation from Survey Image Coordinates and Neural Network Approximation
Authors: Tienfuan Kerh, Hsienchang Lu, Rob Saunders
Abstract:
Shoreline erosion problems caused by global warming and sea level rise may result in the loss of land areas, so shorelines should be examined regularly to reduce possible negative impacts. Initially in this study, three sets of survey images, obtained in the years 1990, 2001, and 2010, respectively, are digitized using graphical software to establish the spatial coordinates of six major beaches around the island of Taiwan. Then, by overlaying the known multi-period images, the change of shoreline can be observed from the distribution of coordinates. In addition, a neural network approximation is used to develop a model for predicting shoreline variation in the years 2015 and 2020. The comparison results show that there is no significant change in total sandy area for any of the beaches in the three different periods. However, the prediction results show that two beaches may exhibit an increase in total sandy area under a statistical 95% confidence interval. The proposed method adopted in this study may be applicable to other shorelines of interest around the world.
Keywords: digitized shoreline coordinates, survey image overlaying, neural network approximation, total beach sandy areas
Procedia PDF Downloads 272
1126 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery
Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi
Abstract:
We present a method for the early detection of forest fires from a thermal infrared satellite image, using the matrix of probabilities of belonging. The principle of the method is to compare a theoretical mathematical model with an experimental model. We treated each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divided these lines into small stationary and ergodic intervals in order to characterize the image with an adequate mathematical model. A standard deviation was chosen for generating the random variables so that each interval behaves naturally like white Gaussian noise. The latter was selected as the mathematical model representing the large majority of pixels, which can be considered the image background. Before modeling the image, we applied a few pre-processing steps; the parameters of the theoretical Gaussian model were then extracted from the modeled image, and these parameters were used to calculate the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as elements foreign to the model, so they will have a low probability, while pixels that belong to the image background will have a high probability. Finally, we present the inverse of the matrix of interval probabilities for better fire detection.
Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical Gaussian model, thermal infrared matrix image
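The per-interval probability of belonging to the background Gaussian model can be sketched as below, with synthetic background and hotspot intervals standing in for real satellite data; the background mean, standard deviation, and the +40 hotspot offset are illustrative assumptions.

```python
import math, random

def gaussian_belonging(interval, mu, sigma):
    """Mean Gaussian likelihood (up to a constant) of the pixels in one
    stationary interval under the background model N(mu, sigma).
    Low values flag intervals containing candidate fire pixels."""
    return sum(math.exp(-((v - mu) ** 2) / (2.0 * sigma ** 2))
               for v in interval) / len(interval)

random.seed(2)
mu, sigma = 120.0, 5.0   # background model parameters (digital counts)

# synthetic intervals: pure background vs. a high-intensity hotspot
background = [random.gauss(mu, sigma) for _ in range(64)]
hotspot = [random.gauss(mu + 40.0, sigma) for _ in range(64)]

p_bg = gaussian_belonging(background, mu, sigma)
p_hot = gaussian_belonging(hotspot, mu, sigma)
```

The background interval scores near the theoretical mean of exp(-z²/2) for standard normal z (about 0.71), while the hotspot scores essentially zero; inverting the probability map, as the abstract describes, then makes the fire candidates the bright pixels.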
Procedia PDF Downloads 142
1125 Freeform Lens System for Collimating SERS Radiation Produced by Biolayers Deposited on a High-Quality Resonant System
Authors: Iuliia Riabenko, Konstantin Beloshenko, Sergey Shulga, Valeriy Shulga
Abstract:
An optical system has been developed consisting of a TIR lens and an aspherical surface designed to collect the Stokes radiation from biomolecules. The freeform material is SYLGARD-184, which provides a low level of noise associated with the luminescence of the substrate. The refractive index of SYLGARD-184 is 1.4028 at a wavelength of 632 nm and the Abbe number is 72; these material parameters make it possible to design the desired shape for the wavelength range of 640-700 nm. The system consists of a TIR lens, inside which is placed a high-quality resonant system consisting of a biomolecule and a metal colloid. This system can be described using the coupled-oscillator model. The laser excitation radiation was fed through the base of the TIR lens. The sample was mounted inside the TIR lens at a distance of 8 mm from the base. As a result of Raman scattering of the laser radiation, a Stokes band appeared from the biolayer. The task of this work was to collect this radiation, which is emitted into a solid angle of 4π steradians. For this, an internal aspherical surface was used, which made it possible to defocus the beam emanating from the biolayer and direct its radiation to the borders of the TIR lens at the Brewster angle. The collimated beam of Stokes radiation contains 97% of the energy scattered by the biolayer. Thus, a simple scheme was proposed for collecting and collimating the Stokes radiation of biomolecules.
Keywords: TIR lens, freeform material, Raman scattering, biolayer, Brewster angle
Procedia PDF Downloads 138
1124 Short Arc Technique for Baselines Determinations
Authors: Gamal F.Attia
Abstract:
The baselines are the distances and lengths of the chords between the projections of the positions of the laser stations on the reference ellipsoid. For satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are to be carried out. It is clear that for the dynamical methods, long arcs (one month or more) are to be used. With these, modeling errors for different physical forces, such as the Earth's gravitational field, air drag, solar radiation pressure, and others, may influence the accuracy of the estimation of the satellite positions; at the same time, the measurement errors can be almost completely excluded, and high stability in the determination of the relative coordinate system can be achieved. It is possible to diminish the influence of the modeling errors by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can differ from each other by a quantity larger than statistical zero. Under the semidynamical 'short arc' method, one or several passes of the satellite within simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. The comparison of the same baselines calculated with the long- and short-arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained, and three baselines are determined using the 'short arc' method.
Keywords: baselines, short arc, dynamical, gravitational field
Procedia PDF Downloads 463
1123 Case Study of High-Resolution Marine Seismic Survey in Shallow Water, Arabian Gulf, Saudi Arabia
Authors: Almalki M., Alajmi M., Qadrouh Y., Alzahrani E., Sulaiman A., Aleid M., Albaiji A., Alfaifi H., Alhadadi A., Almotairy H., Alrasheed R., Alhafedh Y.
Abstract:
High-resolution marine seismic surveying is a well-established technique that is commonly used to characterize near-surface sediments and geological structures in shallow water. We conducted a single-channel seismic survey to provide high-quality seismic images of near-surface sediments up to 100 m depth in the Jubal coastal area, Arabian Gulf. An eight-hydrophone streamer was used to collect stacked seismic traces along a 5 km seismic line. To reach the required depth, we used a spark source that discharges energies above 5000 J, with an expected frequency output spanning the range from 200 to 2000 Hz. A suitable processing flow was implemented to enhance the signal-to-noise ratio of the seismic profile. We found that the shallow sedimentary layers at the study site have a complex pattern of reflectivity, which decays significantly depending on the amount of source energy used as well as the multiples associated with the seafloor. In fact, the results reveal that single-channel marine seismic surveying in shallow water is a cost-effective technique that can easily be repeated to observe any possible changes in the physical properties of the near-surface layers.
Keywords: shallow marine single-channel data, high resolution, frequency filtering, shallow water
Procedia PDF Downloads 721122 Home Range and Spatial Interaction Modelling of Black Bears
Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury
Abstract:
Interaction between individuals within the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interactions using the 95% and 50% home ranges, as well as to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or no interaction at all, to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that black bears of one type in our data set tend to cluster around those of another type. Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests
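One of the listed home range estimators, the minimum convex polygon (MCP), is simple enough to sketch: the home range is the area of the convex hull of the GPS fixes (a hypothetical illustration; the paper's 95% and 50% ranges additionally discard outlying fixes, which is omitted here):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(fixes):
    """Shoelace area of the minimum convex polygon around (x, y) GPS fixes."""
    h = convex_hull(fixes)
    n = len(h)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % n][1] - h[(i + 1) % n][0] * h[i][1]
                         for i in range(n)))
```

Static interaction between two bears could then be quantified as the overlap of their MCPs.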
Procedia PDF Downloads 811121 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
With extensive application, unimodal biometric systems face a diversity of problems, such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve the above-stated problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for presence in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features will be concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm will be applied to the Indian Institute of Technology Delhi (IITD) database, and its performance will be compared with various iris recognition algorithms found in the literature. Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, the Scale Invariant Feature Transform (SIFT)
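The continuous PSO core that such a feature selector builds on might look like the following sketch (the inertia 0.7 and acceleration coefficients 1.5 are generic textbook choices, not the paper's settings; the paper applies PSO to feature selection, which would wrap this core with a binarization and classifier-fitness step):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization of f over R^dim.
    Returns the best position found and its objective value."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

For feature selection, f would score a feature subset (e.g., classifier error plus a sparsity penalty) rather than a continuous test function.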
Procedia PDF Downloads 2351120 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test
Authors: Yuri V. Kim
Abstract:
This article presents a new approach to the functional testing of space systems (SS). It can be considered a generic test, usable for the wide class of SS that, from the point of view of system dynamics and control, may be described by ordinary differential equations. The suggested methodology is based on a semi-natural experiment: a laboratory stand that does not require complicated, precise, and expensive technological control-verification equipment. Nevertheless, it allows testing the system as a whole, as a fully assembled unit, during Assembling, Integration and Testing (AIT) activities, involving the system hardware (HW) and software (SW). The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their outputs in real time. The data are then transferred to a laboratory PC, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics to be estimated experimentally, in the form of estimated system differential equations, and compared with the expected mathematical model previously verified by simulation during the design process. Keywords: system dynamics, space system ground tests and space qualification, system dynamics identification, satellite attitude control, assembling, integration and testing
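The identification step can be illustrated on the simplest case: fitting a first-order discrete model y[k+1] = a·y[k] + b·u[k] to recorded input/output data by least squares (a toy stand-in for the MATLAB Identification Toolbox; the model structure and data are assumptions, not the paper's):

```python
def identify_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k] from recorded I/O sequences
    u and y of equal length, solving the 2x2 normal equations directly."""
    s_yy = sum(yi * yi for yi in y[:-1])
    s_uu = sum(ui * ui for ui in u[:-1])
    s_yu = sum(yi * ui for yi, ui in zip(y[:-1], u[:-1]))
    s_y1y = sum(y1 * y0 for y1, y0 in zip(y[1:], y[:-1]))
    s_y1u = sum(y1 * u0 for y1, u0 in zip(y[1:], u[:-1]))
    det = s_yy * s_uu - s_yu * s_yu  # nonzero when the input is sufficiently exciting
    a = (s_y1y * s_uu - s_y1u * s_yu) / det
    b = (s_y1u * s_yy - s_y1y * s_yu) / det
    return a, b
```

On noiseless data generated by a known (a, b), the fit recovers the parameters exactly; with sensor noise it returns the least-squares estimate, which is what would then be compared against the design model.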
Procedia PDF Downloads 1631119 A Critical Study of the Performance of Self Compacting Concrete (SCC) Using Locally Supplied Materials in Bahrain
Abstract:
Development of new types of concrete with improved performance is a very important issue for the whole building industry. The development is based on the optimization of the concrete mix design, with an emphasis not only on the workability and mechanical properties but also on the durability and the reliability of the concrete structure in general. Self-compacting concrete (SCC) is a high-performance material designed to flow into formwork under its own weight and without the aid of mechanical vibration. At the same time, it is cohesive enough to fill spaces of almost any size and shape without segregation or bleeding. Construction time is shorter, and production of SCC is environmentally friendly (no noise, no vibration). Furthermore, SCC produces a good surface finish. Despite these advantages, SCC has not gained much local acceptance, though it has been promoted in the Middle East for the last ten to twelve years. The reluctance to utilize the advantages of SCC in Bahrain may be due to a lack of research or published data pertaining to locally produced SCC. Therefore, there is a need to conduct studies on SCC using locally available material supplies. From the literature, it has been observed that the use of viscosity modifying admixtures (VMA), micro silica, and glass fibers has proved very effective in stabilizing the rheological properties and the strength of the fresh and hardened self-compacting concrete (SCC). Therefore, in the present study, it is proposed to carry out investigations of SCC with combinations of various dosages of VMAs, with and without micro silica and glass fibers, and to study their influence on the properties of fresh and hardened concrete. Keywords: self-compacting concrete, viscosity modifying admixture, micro silica, glass fibers
Procedia PDF Downloads 6481118 Dynamic Fault Diagnosis for Semi-Batch Reactor Under Closed-Loop Control via Independent RBFNN
Authors: Abdelkarim M. Ertiame, D. W. Yu, D. L. Yu, J. B. Gomm
Abstract:
In this paper, a new robust fault detection and isolation (FDI) scheme is developed to monitor a multivariable nonlinear chemical process, the Chylla-Haase polymerization reactor, when it is under cascade PI control. The scheme employs a radial basis function neural network (RBFNN) in an independent mode to model the process dynamics, using the weighted sum-squared prediction error as the residual. The recursive orthogonal least squares (ROLS) algorithm is employed to train the model, to overcome the training difficulty of the independent mode of the network. Another RBFNN is then used as a fault classifier to isolate faults from the different features involved in the residual vector. Several actuator and sensor faults are simulated in a nonlinear simulation of the reactor in Simulink, and the scheme is used to detect and isolate the faults on-line. The simulation results illustrate the effectiveness and robustness of the proposed scheme, even when the process is subjected to disturbances and uncertainties, including significant changes in the monomer feed rate, fouling factor, impurity factor, ambient temperature, and measurement noise. Keywords: Robust fault detection, cascade control, independent RBF model, RBF neural networks, Chylla-Haase reactor, FDI under closed-loop control
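The two building blocks of the scheme, an RBF network prediction and a weighted sum-squared prediction error residual, might be sketched as follows (Gaussian basis functions and the weighting q are assumptions; the paper's trained centers, widths, and weights are not given in the abstract):

```python
import math

def rbf_predict(x, centers, widths, weights):
    """Single-output RBF network: a weighted sum of Gaussian basis functions,
    each centered at one of `centers` with spread `widths`."""
    return sum(w * math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                            / (2.0 * s * s))
               for w, c, s in zip(weights, centers, widths))

def residual(y_meas, y_pred, q):
    """Weighted sum-squared prediction error used as the fault residual."""
    return sum(qi * (ym - yp) ** 2 for qi, ym, yp in zip(q, y_meas, y_pred))
```

In an FDI loop, `residual` would be compared against a threshold tuned on fault-free data; crossings flag a fault, and the residual vector is then passed to the classifier network for isolation.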
Procedia PDF Downloads 4981117 Friction Calculation and Simulation of Column Electric Power Steering System
Authors: Seyed Hamid Mirmohammad Sadeghi, Raffaella Sesana, Daniela Maffiodo
Abstract:
This study presents a procedure for calculating the friction in a column electric power steering (C-EPS) system, which affects handling and comfort in driving. The friction losses are estimated from experimental tests and mathematical calculation. The parts of the C-EPS mainly involved in friction losses are the bearings and the worm gear. In the theoretical approach, the gear geometry and Hertz's law were employed to determine the normal load, the sliding velocity, and the contact areas from the worm gear's driving conditions. The viscous friction generated in the worm gear was obtained with a theoretical approach, and the result was applied to model the friction in the steering system. Finally, using the viscous and Coulomb friction coefficients, the friction values in the worm gear were calculated. From the bearing manufacturer's data and the characteristics of each bearing, the friction torques due to load and due to speed were calculated. A MATLAB Simulink model for calculating the friction in the bearings and worm gear of the C-EPS was built, and the total friction value was estimated. Keywords: friction, worm gear, column electric power steering system, simulink, bearing, EPS
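For a single contact, a friction model with Coulomb and viscous coefficients, as used above, reduces to a constant term opposing the direction of motion plus a term proportional to speed; a minimal sketch (the coefficient names are hypothetical, not the paper's notation):

```python
def friction_torque(omega, t_coulomb, b_viscous):
    """Combined friction torque: Coulomb term t_coulomb opposing the sign of the
    angular speed omega, plus a viscous term b_viscous * omega."""
    sign = (omega > 0) - (omega < 0)  # -1, 0, or +1
    return t_coulomb * sign + b_viscous * omega
```

Summing such terms over the worm gear and each bearing (with per-bearing load- and speed-dependent coefficients) gives the total friction estimated by the Simulink model.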
Procedia PDF Downloads 3581116 Bayes Estimation of Parameters of Binomial Type Rayleigh Class Software Reliability Growth Model using Non-informative Priors
Authors: Rajesh Singh, Kailash Kale
Abstract:
In this paper, a Binomial-process-type occurrence of software failures is considered, and the failure intensity is characterized by a one-parameter Rayleigh-class Software Reliability Growth Model (SRGM). The proposed SRGM is a mathematical function of two parameters, namely the total number of failures, η0, and the scale parameter, η1. It is assumed that very little or no information is available about both of these parameters; considering non-informative priors for both, the Bayes estimators of η0 and η1 have been obtained under the squared error loss function. The proposed Bayes estimators are compared with their corresponding maximum likelihood estimators on the basis of risk efficiencies obtained by the Monte Carlo simulation technique. It is concluded that both proposed Bayes estimators, of the total number of failures and of the scale parameter, perform well for a proper choice of execution time. Keywords: binomial process, non-informative prior, maximum likelihood estimator (MLE), rayleigh class, software reliability growth model (SRGM)
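A Monte Carlo risk comparison of this kind can be illustrated on a Rayleigh scale parameter alone (a simplified stand-in, not the paper's two-parameter SRGM: here θ = σ² with a Jeffreys-type prior ∝ 1/θ, for which the Bayes estimator under squared error loss is T/(n−1) against the MLE T/n, with T = Σxᵢ²/2):

```python
import math
import random

def rayleigh_sample(sigma, n, rng):
    """Draw n Rayleigh(sigma) variates via inverse transform sampling."""
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

def risk_efficiency(sigma2_true=4.0, n=30, trials=2000, seed=1):
    """Monte Carlo risks of the MLE T/n and the Bayes-type estimator T/(n-1)
    for theta = sigma^2 under squared-error loss; returns risk(MLE)/risk(Bayes).
    A ratio above 1 would favour the Bayes-type estimator."""
    rng = random.Random(seed)
    mse_mle = mse_bayes = 0.0
    for _ in range(trials):
        x = rayleigh_sample(math.sqrt(sigma2_true), n, rng)
        t = sum(xi * xi for xi in x) / 2.0
        mse_mle += (t / n - sigma2_true) ** 2
        mse_bayes += (t / (n - 1) - sigma2_true) ** 2
    return mse_mle / mse_bayes
```

The same pattern, simulating many data sets, computing each estimator, and averaging squared errors, extends to the two-parameter SRGM setting of the paper.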
Procedia PDF Downloads 3891115 A Posteriori Trading-Inspired Model-Free Time Series Segmentation
Authors: Plessen Mogens Graf
Abstract:
Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free, in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys. Keywords: time series segmentation, model-free, trading-inspired, multivariate data
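The a posteriori optimal trading step can be sketched as a two-state dynamic program over the wealths of 'holding cash' vs. 'holding stock', with a linear transaction cost charged on each switch (a minimal long/flat sketch under these assumptions; the paper's normalization, reversed-pass signals, and channel consensus are omitted):

```python
def optimal_positions(prices, cost):
    """A posteriori optimal long/flat trading of one price channel with linear
    transaction cost `cost` per trade. Dynamic programming with backtracking;
    returns the per-step position labels (1 = in stock, 0 = in cash) and the
    final wealth, ending flat."""
    n = len(prices)
    cash = [0.0] * n                   # best wealth if flat at time t
    stock = [0.0] * n                  # best wealth if invested at time t
    stock[0] = -prices[0] - cost
    prev_cash = [0] * n                # state we came from: 0 = cash, 1 = stock
    prev_stock = [0] * n
    for t in range(1, n):
        sell = stock[t - 1] + prices[t] - cost
        cash[t], prev_cash[t] = (cash[t - 1], 0) if cash[t - 1] >= sell else (sell, 1)
        buy = cash[t - 1] - prices[t] - cost
        stock[t], prev_stock[t] = (stock[t - 1], 1) if stock[t - 1] >= buy else (buy, 0)
    pos, state = [0] * n, 0            # backtrack from the flat state at the end
    for t in range(n - 1, -1, -1):
        pos[t] = state
        if t > 0:
            state = prev_cash[t] if state == 0 else prev_stock[t]
    return pos, cash[-1]
```

The switch instants of the resulting position labels are exactly the candidate segmentation time instants; raising `cost` suppresses switches caused by small fluctuations, which is the noise-filtering role of the transaction-cost hyperparameter.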
Procedia PDF Downloads 1361114 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different parameters of end milling to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimizing the machining parameters is a challenging job. In the case of mating components, the surface roughness becomes even more important, because these quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects (i.e., by the process environment). In this work, the effects of selected process parameters on surface roughness, and the subsequent setting of the parameters and their levels, have been accomplished by Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of the different process parameters suggested by the L9 orthogonal array. The end milling of AISI D2 steel with a carbide tool was investigated experimentally by varying the feed, speed, and depth of cut, and the surface roughness was measured using a surface roughness tester. Analyses of variance have been performed for the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process. Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology
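The smaller-the-better signal-to-noise ratio used in such Taguchi analyses, and the selection of the best level per factor from the orthogonal-array runs, can be sketched as follows (the run layout in the sketch is hypothetical, not the paper's L9 data):

```python
import math

def sn_smaller_the_better(responses):
    """Taguchi smaller-the-better S/N ratio in dB: -10 * log10(mean(y^2)).
    Lower roughness readings give a higher (better) ratio."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

def best_levels(runs):
    """Given orthogonal-array runs as (factor-level tuple, response list) pairs,
    return for each factor the level with the highest mean S/N ratio."""
    n_factors = len(runs[0][0])
    best = []
    for f in range(n_factors):
        by_level = {}
        for levels, responses in runs:
            by_level.setdefault(levels[f], []).append(sn_smaller_the_better(responses))
        best.append(max(by_level, key=lambda lv: sum(by_level[lv]) / len(by_level[lv])))
    return best
```

For an L9 array, `runs` would hold nine rows of (speed level, feed level, depth-of-cut level) against the measured roughness replicates, and `best_levels` reads off the recommended parameter setting.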
Procedia PDF Downloads 544