Search results for: velocity estimation

87 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame

Authors: Seyed Saeid Tabaee, Omid Bahar

Abstract:

Energy dissipation devices are now commonly used in structures. Their benefit is a high rate of energy absorption during earthquakes, which reduces damage to structural elements, particularly columns. The hysteretic damping capacity of energy dissipation devices is the key property, but it can complicate the analysis and design process. This effect is generally represented by Equivalent Viscous Damping (EVD). The equivalent viscous damping may be obtained from the expected hysteretic behavior at the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel Moment Resisting Frame (MRF) whose performance is enhanced by a Buckling Restrained Brace (BRB) system is evaluated. Knowing in advance how damping is apportioned between the BRB and the MRF is essential for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculating the damping fraction for such systems by carrying out dynamic nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two MRF structures, one equipped with a BRB and the other without, are studied in parallel. Extensive analysis shows that the damping fraction of each subsystem may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors, and their overall contribution to structural performance, may be clearly recognized in advance.

Keywords: Buckling restrained brace, Direct displacement based design, Dual systems, Hysteretic damping, Moment resisting frames.
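
The EVD relation the abstract relies on can be made concrete. The following is a minimal sketch, not the authors' code: it evaluates the standard definition xi_eq = E_D / (4*pi*E_So) on one steady-state force-displacement loop, using a synthetic elliptic loop from a linear viscous system (exact answer 0.15) as a sanity check.

```python
# Minimal sketch: equivalent viscous damping from one hysteresis loop.
import numpy as np

def equivalent_viscous_damping(u, F):
    """u, F: displacement and force samples over one closed loading cycle."""
    # Dissipated energy E_D = area enclosed by the loop (shoelace formula).
    E_D = 0.5 * abs(np.sum(u * np.roll(F, -1) - F * np.roll(u, -1)))
    # Elastic strain energy at peak displacement (secant-stiffness definition).
    i = np.argmax(np.abs(u))
    E_So = 0.5 * abs(u[i] * F[i])
    return E_D / (4.0 * np.pi * E_So)

# Sanity check on a linear viscous system (exact answer: 0.15).
t = np.linspace(0.0, 2.0 * np.pi, 2000)
u = np.sin(t)                        # harmonic displacement at resonance
F = np.sin(t) + 0.3 * np.cos(t)      # in-phase stiffness force + damping force
print(f"xi_eq = {equivalent_viscous_damping(u, F):.3f}")
```

In the paper's setting, such per-component loops would come from the NTHA under resonant harmonic loading, weighted by each subsystem's share of the story shear.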

86 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the two-parameter (bi-)Weibull distribution: its trend is proportional to the bi-Weibull probability density function. In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we begin by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. We then develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyze with simulated data the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.
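
The abstract does not spell out the stochastic differential equation, so the form below is an assumption for illustration only: a proportional diffusion whose drift trend follows the two-parameter Weibull density, simulated by the Euler-Maruyama scheme. All parameter values are hypothetical.

```python
# Minimal sketch (assumed SDE form, not the paper's exact model):
#   dX_t = X_t * w(t; k, lam) dt + sigma * X_t dW_t,
# where w is the two-parameter Weibull probability density function.
import numpy as np

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

rng = np.random.default_rng(0)
k, lam, sigma = 1.8, 5.0, 0.1            # hypothetical shape, scale, volatility
dt, n_steps, x0 = 0.01, 1000, 1.0
t = dt * np.arange(1, n_steps + 1)

x = np.empty(n_steps)
x[0] = x0
for i in range(1, n_steps):
    drift = x[i - 1] * weibull_pdf(t[i - 1], k, lam)          # trend term
    diffusion = sigma * x[i - 1] * np.sqrt(dt) * rng.standard_normal()
    x[i] = x[i - 1] + drift * dt + diffusion
print(f"X at t = {t[-1]:.1f}: {x[-1]:.4f}")
```

Maximum likelihood estimation of (k, lam, sigma), as in the paper, would then be run on discretely sampled paths like this one.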

85 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction using point-of-sales (POS) data are proposed, drawing on a recent econophysics study of a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A method for optimal stock determination that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows that the methods effectively reduce food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss achieves substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss halves disposal at a proportionality constant of 0.12, the actual value for the processed food items used in this research. The methods provide practical and effective solutions for reducing waste while keeping a high profit, especially for large sales numbers.

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.
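
Taylor's law here is the fluctuation-scaling relation var = A * mean**b across items. The sketch below is illustrative only: it fits the exponent by ordinary least squares in log-log space on synthetic per-item daily sales, which stand in for aggregated POS records.

```python
# Minimal sketch: fitting Taylor's law, var = A * mean**b, across items.
import numpy as np

rng = np.random.default_rng(1)
means = np.logspace(0, 3, 40)                          # hypothetical mean daily sales
samples = rng.poisson(means[:, None], size=(40, 365))  # one year of daily sales

m = samples.mean(axis=1)
v = samples.var(axis=1)
b, logA = np.polyfit(np.log(m), np.log(v), 1)          # log v = b*log m + log A
print(f"Taylor exponent b = {b:.2f}, prefactor A = {np.exp(logA):.2f}")
# Pure Poisson demand gives b ~= 1; the study's POS data show anomalous
# scaling for large sales numbers, which the particle filter must respect.
```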

84 Field Study on Thermal Performance of a Green Office in Bangkok, Thailand: A Possibility of Increasing Temperature Set-Points

Authors: T. Sikram, M. Ichinose, R. Sasaki

Abstract:

In the tropics, the indoor thermal environment is usually maintained in cooling mode to provide comfort all year round. Indoor thermal performance sometimes differs from the standard or from the original design because of operation, maintenance, and utilization. Field studies of thermal environments in green buildings are still limited in this region, even as the number of green buildings continues to increase. This study aims to clarify thermal performance and subjective perception in a green building by testing different temperature set-points. A Thai green office was investigated twice, in October 2018 and in May 2019. Indoor environmental variables (temperature, relative humidity, and air velocity) were collected continuously. The temperature set-point, normally 23 °C, was changed to 24 °C and 25 °C. The study found that this range of set-points produced average room temperatures from 22.7 to 24.6 °C and average relative humidity from 55% to 62%. The thermal environment shifted slightly out of the ASHRAE comfort zone when the set-point was increased. Based on the thermal sensation votes, the proportion of feeling-colder votes decreased by 30% and 18% for the +1 °C and +2 °C changes, respectively. The predicted mean vote (PMV) analysis showed that most of the calculated median values were negative; they approached the optimal neutral value (0) when the set-point was 25 °C. The neutral temperature decreased slightly at warmer set-points. Reports of building-related symptoms declined continuously as the temperature became warmer, and symptoms associated with the cooler condition received more votes than those associated with the warmer conditions. In sum, for this green office, the temperature set-point could be raised by 1 °C (to 24 °C) to reduce cold sensitivity, discomfort, and symptoms. These results could support a policy of operating this office at a warmer temperature, making it "a better green building".

Keywords: Thermal environment, green office, temperature set-point, comfort.
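
The PMV values quoted above come from Fanger's ISO 7730 model. As a minimal sketch only, the snippet below evaluates PMV/PPD for the three tested set-points using the third-party pythermalcomfort package (an assumption; any ISO 7730 implementation would do). The metabolic rate and clothing values are hypothetical office assumptions, not figures from the paper.

```python
# Minimal sketch: PMV/PPD at the three tested set-points.
# Assumes the pythermalcomfort package is installed (pip install pythermalcomfort).
from pythermalcomfort.models import pmv_ppd

for tdb in (23.0, 24.0, 25.0):                 # tested set-points, degC
    result = pmv_ppd(tdb=tdb, tr=tdb,          # assume mean radiant temp = air temp
                     vr=0.1, rh=60,            # measured RH was ~55-62%
                     met=1.1, clo=0.5)         # hypothetical office activity/clothing
    print(tdb, result)
```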

83 Reinforcing Effects of Natural Micro-Particles on the Dynamic Impact Behaviour of Hybrid Bio-Composites Made of Short Kevlar Fibers Reinforced Thermoplastic Composite Armor

Authors: Edison E. Haro, Akindele G. Odeshi, Jerzy A. Szpunar

Abstract:

Hybrid bio-composites are developed for use in protective armor through positive hybridization, offered by reinforcing high-density polyethylene (HDPE) with short Kevlar fibers and palm wood micro-fillers. The manufacturing process combined extrusion and compression molding techniques. The mechanical behavior of Kevlar fiber reinforced HDPE with and without palm wood filler additions is compared, and the effect of the weight fraction of the added palm wood micro-fillers is determined. The Young's modulus was found to increase with the weight fraction of organic micro-particles. However, the flexural strength decreased with increasing weight fraction of added micro-fillers. The interfacial interactions between the components were investigated using scanning electron microscopy, and the influence of the size, random alignment, and distribution of the natural micro-particles was evaluated. Ballistic impact and dynamic shock loading tests were performed to determine the optimum proportion of short Kevlar fibers and organic micro-fillers needed to improve the impact strength of the HDPE. The results indicate a positive hybridization, by deposition of organic micro-fillers on the surface of the short Kevlar fibers reinforcing the thermoplastic matrix, leading to enhanced mechanical strength and dynamic impact behavior. These hybrid bio-composites can therefore be promising materials for applications involving high-velocity impacts.

Keywords: Hybrid bio-composites, organic nano-fillers, dynamic shock loading, ballistic impacts, energy absorption.

82 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for accurate simulation of joint performance in many applications, such as pre-operative planning, human gait analysis, and assessment of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location with functional methods is soft tissue artefact, caused by relative motion between the markers and the bone. Assessing soft tissue artefact is thus one of the main objectives in human movement analysis, since the accuracy of functional methods depends on it. Various studies have measured soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, or Roentgen photogrammetry. The goal of this study is to present a non-invasive method for assessing the displacement of markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction of the hip joint (all with the knee extended). Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can serve as a basis for a correction procedure for hip joint kinematics.

Keywords: Hip joint centre, motion capture, soft tissue artefact, ultrasound depth measurement.

81 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the backpropagation algorithm, is used to identify the parameters of the network. For this study, water level data were available for the 2002 hydrological year at a sampling interval of 1 hour. The number of antecedent water levels to include in the input variables was determined by two statistical methods, the autocorrelation function and the partial autocorrelation function between the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level predictions 1 hour ahead, achieving a MAPE of 1.15% and a correlation of 0.98. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for the design of flood early warning systems, in which the magnitude and the timing of a potential extreme flood are indicated.

Keywords: Neural Network, Fuzzy, River, Forecasting
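
The lag-selection step described above can be illustrated compactly. This is a minimal sketch on a synthetic hourly series, assuming statsmodels is available: significant partial-autocorrelation lags are taken as candidate inputs, and MAPE (the error measure quoted in the abstract) scores a naive persistence baseline against which the ANFIS model would be compared.

```python
# Minimal sketch: PACF-based input-lag selection and MAPE scoring.
import numpy as np
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(2)
levels = np.cumsum(rng.normal(0.0, 0.02, 8760)) + 50.0   # hourly stand-in series

n_lags = 24
band = 1.96 / np.sqrt(len(levels))                       # approx. 95% bounds
p = pacf(levels, nlags=n_lags)[1:]                       # lags 1..24
print("candidate input lags:", (np.where(np.abs(p) > band)[0] + 1)[:6])

def mape(actual, predicted):
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# 1-hour-ahead persistence baseline:
print(f"persistence MAPE = {mape(levels[1:], levels[:-1]):.2f}%")
```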

80 Effect of Environmental Changes in Working Heart Rate among Industrial Workers: An Ergonomic Interpretation

Authors: P. Mukhopadhyay, N. C. Dey

Abstract:

Occupational health hazards are very common in every emerging country. Along with the unorganized sector, most organized sectors, including government industries, suffer from this affliction. In addition to workload, seasonal changes also have some impact on the working environment. With this focus in mind, one hundred male industrial workers directly involved in Periodic Overhauling (POH) tasks in a fabricating workshop in the public domain were selected for this research work. They were studied during work periods throughout the different seasons of a year. For each season, the participants' working heart rate (WHR) was measured and compared with the standards given by nationally and internationally recognized agencies, i.e., the World Health Organization (WHO) and the American Conference of Governmental Industrial Hygienists (ACGIH). The environmental parameters, i.e., dry bulb temperature (DBT), wet bulb temperature (WBT), globe temperature (GT), natural wet bulb temperature (NWB), relative humidity (RH), wet bulb globe temperature (WBGT), air velocity (AV), and effective temperature (ET), were recorded throughout the seasons to critically observe the effect of seasonal changes on the WHR of the workers. This effect proved striking: the percentages of workers falling into the 'very heavy' workload category were 83.33%, 66.66% and 16.66% in the summer, rainy and winter seasons, respectively. Continued work under such a job profile pushes the worker towards occupational disorders and the resulting absenteeism; this lowers production rates, while costs due to medical claims also weaken the industry's economic condition. In these circumstances, the authors propose remedial measures from an ergonomic angle: a new work/rest regimen and the introduction of engineering controls along with management controls, which may help the workers and, consequently, the management as well.

Keywords: Environmental changes, industrial worker, working heart rate, workload, occupational health hazard.
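
Two of the quantities above are simple to compute from the logged parameters. The sketch below uses the standard ISO 7243 WBGT formulas; the working-heart-rate band limits are one commonly used classification and are an assumption here, not the exact WHO/ACGIH criteria applied in the paper.

```python
# Minimal sketch: WBGT from logged readings and a simple WHR workload banding.
def wbgt(nwb, gt, dbt=None):
    """ISO 7243: indoor/no-solar form by default; outdoor form if DBT given."""
    if dbt is None:
        return 0.7 * nwb + 0.3 * gt
    return 0.7 * nwb + 0.2 * gt + 0.1 * dbt

def workload_category(whr_bpm):
    # Assumed band limits for illustration, in beats per minute.
    for limit, label in [(100, "light"), (125, "moderate"),
                         (150, "heavy"), (175, "very heavy")]:
        if whr_bpm < limit:
            return label
    return "extremely heavy"

print(f"WBGT = {wbgt(nwb=27.0, gt=34.0, dbt=36.0):.1f} degC")  # hypothetical summer readings
print(workload_category(158))                                   # -> very heavy
```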

79 A Case Study on Performance of Isolated Bridges under Near-Fault Ground Motion

Authors: Daniele Losanno, H. A. Hadad, Giorgio Serino

Abstract:

This paper presents a numerical investigation of the seismic performance of a benchmark bridge with different optimal isolation systems under near-fault ground motion. Very large displacements often make seismic isolation an unfeasible solution due to boundary conditions, especially for existing bridges or in high-risk seismic regions. Near-fault ground motions are most likely to affect either structures with a long natural period, like isolated structures, or structures sensitive to velocity content, such as viscously damped structures. The work analyzes the seismic performance of a three-span continuous bridge designed with different isolation systems having different levels of damping. The case study was analyzed in different configurations: (a) simply supported, (b) isolated with lead rubber bearings (LRBs), (c) isolated with rubber isolators and 10% classical damping (HDLRBs), and (d) isolated with rubber isolators and a 70% supplemental damping ratio. Case (d) represents an alternative control strategy that combines seismic isolation with additional supplemental damping, aiming to take advantage of both solutions. The bridge is modeled in SAP2000 and solved by direct-integration time history analyses under a set of six recorded near-fault ground motions. In addition, a set of analyses under the seismic action provided by the Italian code is conducted, in order to evaluate the effectiveness of the suggested optimal control strategies under far-field seismic action. The results demonstrate that an isolated bridge equipped with HDLRBs and a total equivalent damping ratio of 70% is a very effective design solution, both for mitigating displacement demand at the isolation level and for reducing base shear in the piers, also in the case of near-fault ground motion.

Keywords: Isolated bridges, optimal design, near-fault motion, supplemental damping.

78 Feature Point Reduction for Video Stabilization

Authors: Theerawat Songyot, Tham Manjing, Bunyarit Uyyanonvara, Chanjira Sinthanayothin

Abstract:

Corner detection and optical flow are common techniques for feature-based video stabilization. However, these algorithms are computationally expensive and must run at a reasonable rate. This paper presents an algorithm that discards irrelevant feature points and maintains the rest for future use, so as to reduce the computational cost. The algorithm starts by initializing a maintained set. The feature points in the maintained set are examined for their accuracy in modeling; corner detection is required only when the maintained points are insufficiently accurate for further modeling. Optical flows are then computed from the maintained feature points toward the consecutive frame. After that, a motion model is estimated based on the simplified affine motion model and the least squares method, with outliers belonging to moving objects present. Studentized residuals are used to eliminate such outliers, and the model estimation and elimination processes repeat until no more outliers are identified. Finally, the entire algorithm repeats along the video sequence, with the points remaining from the previous iteration used as the maintained set. As a practical application, efficient video stabilization can be achieved by exploiting the computed motion models. Our study shows that the number of times corner detection needs to be performed is greatly reduced, significantly lowering the computational cost. Moreover, optical flow vectors are computed only for the maintained feature points, not for outliers, further reducing the cost. In addition, the reduced set of feature points is sufficient for tracking background objects, as demonstrated by a simple video stabilizer based on the proposed algorithm.

Keywords: background object tracking, feature point reduction, low cost tracking, video stabilization.
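
The estimation-elimination loop described above can be sketched directly. This is a minimal illustration, not the authors' code: a 6-parameter affine model is fit by least squares, and points with large standardized residuals (a simplification of the paper's studentized residuals) are iteratively discarded as moving-object outliers. Data and threshold are synthetic.

```python
# Minimal sketch: robust affine motion estimation with residual-based rejection.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) matched points. Returns the 2x3 affine matrix."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M ~= dst
    return M.T                                     # 2x3: [linear | translation]

def robust_affine(src, dst, z_thresh=2.5):
    keep = np.ones(len(src), bool)
    while True:
        M = fit_affine(src[keep], dst[keep])
        pred = src @ M[:, :2].T + M[:, 2]
        r = np.linalg.norm(dst - pred, axis=1)     # per-point residual norm
        z = (r - r[keep].mean()) / r[keep].std()   # standardized residuals
        new_keep = keep & (z < z_thresh)
        if new_keep.sum() == keep.sum():           # converged: no new outliers
            return M, keep
        keep = new_keep

rng = np.random.default_rng(0)
src = rng.uniform(0, 640, (100, 2))
true_M = np.array([[1.0, 0.01, 5.0], [-0.01, 1.0, -3.0]])  # slight pan/rotation
dst = src @ true_M[:, :2].T + true_M[:, 2] + rng.normal(0, 0.5, (100, 2))
dst[:10] += rng.normal(0, 20, (10, 2))                     # moving-object outliers
M, inliers = robust_affine(src, dst)
print(M.round(3), int(inliers.sum()), "inliers kept")
```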

77 Creating the Color Panoramic View using Medley of Grayscale and Color Partial Images

Authors: Dr. H. B. Kekre, Sudeep D. Thepade

Abstract:

Panoramic view generation has always offered novel and distinct challenges in the field of image processing. Panoramic view generation is the construction of a larger mosaic image from a set of partial images of the desired view. The paper presents a solution to one of the problems of panorama formation: the case where some of the partial images are color and others are grayscale. The simplest solution would be to convert all image parts into grayscale and fuse them into a grayscale panorama; but in a multihued world, a colored panorama will always be preferred. This can be achieved by picking colors from the color parts and injecting them into the grayscale parts of the panorama. So the grayscale image parts are first colored with the help of the color image parts, and these parts are then fused to construct the panorama. The problem of coloring grayscale images has no exact solution. In the proposed technique, the job of transferring color traits from a reference color image to a grayscale image is done by a palette-based method. In this technique, a color palette is prepared using pixel windows of a given size taken from the color image parts; the grayscale image part is then divided into pixel windows of the same size. For every window of the grayscale image part, the palette is searched for the equivalent color values, which are used to color the grayscale window. For palette preparation we have used the RGB color space and Kekre's LUV color space; Kekre's LUV color space gives better coloring quality. The search through the color palette is made faster than exhaustive search by using Kekre's fast search technique. After coloring the grayscale image pieces, the final job is fusing all the pieces to obtain the panoramic view. For similarity estimation between partial images, the correlation coefficient is used.

Keywords: Panoramic View, Similarity Estimate, Color Transfer, Color Palette, Kekre's Fast Search, Kekre's LUV
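
The palette idea can be reduced to its simplest case for illustration: 1x1 pixel "windows" in plain RGB with an exhaustive linear scan. This is a minimal sketch only; the paper uses larger windows, Kekre's LUV color space, and Kekre's fast search in place of the brute-force match below.

```python
# Minimal sketch: luminance -> color palette transfer with 1x1 windows.
import numpy as np

def build_palette(color_img):
    """color_img: (H, W, 3) uint8. Returns (luminance, RGB) palette arrays."""
    lum = color_img.mean(axis=2).astype(np.float32)
    return lum.ravel(), color_img.reshape(-1, 3)

def colorize(gray_img, palette_lum, palette_rgb):
    flat = gray_img.astype(np.float32).ravel()
    # Nearest palette entry by luminance (exhaustive search, for clarity).
    idx = np.abs(palette_lum[None, :] - flat[:, None]).argmin(axis=1)
    return palette_rgb[idx].reshape(*gray_img.shape, 3)

rng = np.random.default_rng(3)
color_part = rng.integers(0, 256, (8, 8, 3), dtype=np.uint8)   # colored partial image
gray_part = rng.integers(0, 256, (8, 8), dtype=np.uint8)       # grayscale partial image
lum, rgb = build_palette(color_part)
print(colorize(gray_part, lum, rgb).shape)                     # -> (8, 8, 3)
```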

76 A Three-Dimensional TLM Simulation Method for Thermal Effect in PV-Solar Cells

Authors: R. Hocine, A. Boudjemai, A. Amrani, K. Belkacemi

Abstract:

Temperature rise is a negative factor in almost all systems. It can be caused by self-heating or by ambient temperature. In photovoltaic solar cells, this temperature rise affects the behavior of the cells. The ability of a PV module to withstand the effects of periodic hot-spot heating, which occurs when cells are operated under reverse-biased conditions, is closely related to the properties of the cell's semiconductor material.

In addition, the thermal effect also influences the estimation of the maximum power point (MPP) and the electrical parameters of PV modules, such as maximum output power, maximum conversion efficiency, internal efficiency, reliability, and lifetime. The cell junction temperature is a critical parameter that significantly affects the electrical characteristics of PV modules. For practical applications, it is therefore very important to accurately estimate the junction temperature of PV modules and analyze their thermal characteristics. Once the temperature variation is taken into account, a more accurate MPP can be obtained and the maximum utilization efficiency of the PV modules can be achieved.

In this paper, the three-dimensional Transmission Line Matrix (3D-TLM) method was used to map the surface temperature distribution of solar cells in reverse-bias mode. It was observed that some cells exhibited an inhomogeneous surface temperature, resulting in localized heating (hot spots). This hot-spot heating causes irreversible destruction of the solar cell structure. Hot spots can have a deleterious impact on whole solar modules when individual solar cells are heated. The results show clearly that solar cells can self-generate considerable amounts of heat, which should be dissipated very quickly to increase the PV module's lifetime.

Keywords: Thermal effect, Conduction, Heat dissipation, Thermal conductivity, Solar cell, PV module, Nodes, 3D-TLM.

75 Quality Classification and Monitoring Using Adaptive Metric Distance and Neural Networks: Application in Pickling Process

Authors: S. Bouhouche, M. Lahreche, S. Ziani, J. Bast

Abstract:

Modern manufacturing facilities are large-scale, highly complex, and operate with a large number of variables under closed-loop control. Early and accurate fault detection and diagnosis for these plants can minimize downtime, increase the safety of plant operations, and reduce manufacturing costs. Fault detection and isolation are particularly complex in the case of faulty analog control systems, which are not equipped with a monitoring function in which the process parameters are continuously visualized. In this situation, it is very difficult to relate the importance of a fault to its consequences for product failure. In this paper we consider an approach to fault detection, and to the analysis of the fault's effect on production quality, using adaptive centring and scaling in the pickling process in cold rolling. The fault appeared in one of the power units driving a rotary machine: the machine could not track the reference speed given by another machine, so the length of the metal loop oscillated continuously, affecting product quality. Using a computerized data acquisition system, the main machine parameters were monitored, and the fault was detected and isolated by analysis of the monitored data. Normal and faulty situations were reproduced by an artificial neural network (ANN) model implemented to simulate the normal and faulty status of the rotary machine. The correlation between product quality, defined by an index, and the residual is used for quality classification.

Keywords: Modeling, fault detection and diagnosis, parameters estimation, neural networks, Fault Detection and Diagnosis (FDD), pickling process.
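
The adaptive centring and scaling step can be illustrated on a residual stream. This is a minimal sketch under assumed parameters: an exponentially weighted running mean and variance standardize the residual between measured and model-predicted speed, and a 3-sigma rule flags faults. The paper's ANN process model is replaced here by synthetic residuals.

```python
# Minimal sketch: residual monitoring with adaptive centring and scaling.
import numpy as np

def detect_faults(residuals, alpha=0.02, z_limit=3.0):
    mean, var, flags = 0.0, 1.0, []
    for r in residuals:
        z = (r - mean) / np.sqrt(var)                      # standardized residual
        flags.append(abs(z) > z_limit)                     # 3-sigma fault flag
        mean = (1 - alpha) * mean + alpha * r              # adaptive centring
        var = (1 - alpha) * var + alpha * (r - mean) ** 2  # adaptive scaling
    return np.array(flags)

rng = np.random.default_rng(4)
res = rng.normal(0, 1, 500)
res[300:] += 6.0   # onset of a speed-tracking fault (loop-length oscillation)
print(detect_faults(res)[295:310].astype(int))   # flags switch on at the onset
```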

74 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: H. Anıl, G. Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries with strong potential in geothermal energy production, thanks to its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts given in the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, machine learning, deep learning, and time series methods were used to predict the total generation of the power plants belonging to Zorlu Doğal Electricity Generation, which has a high installed geothermal capacity, for the first week and the first two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. This modeling was carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. The estimated imbalance penalty calculated for the selected organization was lower than the actual imbalance penalty, indicating a more optimal and profitable settlement.

Keywords: Machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting.
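
The forecasting-plus-penalty pipeline can be sketched in a few lines. This is a minimal illustration with synthetic generation data and scikit-learn's SVR; the penalty formula at the end (a flat surcharge on mispredicted energy at an assumed price) is a placeholder, not the actual Turkish day-ahead market settlement rule.

```python
# Minimal sketch: SVR on lagged generation plus a schematic imbalance penalty.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
hours = 24 * 60
gen = 100 + 10 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 2, hours)

lags = 24                                     # previous day as features
X = np.column_stack([gen[i:hours - lags + i] for i in range(lags)])
y = gen[lags:]
split = -24 * 7                               # hold out the final week
model = SVR(C=10.0, epsilon=0.5).fit(X[:split], y[:split])
pred = model.predict(X[split:])

price, surcharge = 50.0, 0.03                 # hypothetical price and penalty rate
penalty = np.sum(np.abs(y[split:] - pred)) * price * surcharge
print(f"weekly imbalance penalty ~ {penalty:.0f} (currency units)")
```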

73 FT-NIR Method to Determine Moisture in Gluten Free Rice Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta would allow food processors to perform online quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (wet basis). Prediction models based on partial least squares (PLS) regression were developed in the near-infrared region. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered in selecting among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of the pasta samples were treated with these mathematical pre-treatments before building models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined by traditional methods (R2 = 0.983), clearly indicating that FT-NIR can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775); the MMN pre-processing method was found the most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.

Keywords: FT-NIR, Pasta, moisture determination.
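
The same pipeline shape (min-max normalization of each spectrum, then PLS regression against moisture) can be sketched with scikit-learn. Everything below is synthetic: the pseudo absorption bands, scatter effects, and noise levels are assumptions standing in for real FT-NIR spectra.

```python
# Minimal sketch: MMN pre-processing + PLS regression on synthetic NIR spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_points = 150, 800                   # spectra over 4,000-12,000 cm-1
moisture = rng.uniform(10, 15, n_samples)        # percent, wet basis

wav = np.linspace(0, 1, n_points)
ref_band = np.exp(-((wav - 0.3) ** 2) / 0.002)   # moisture-independent band
water_band = np.exp(-((wav - 0.7) ** 2) / 0.002) # pseudo water absorption band
scatter = rng.uniform(0.8, 1.2, (n_samples, 1))  # multiplicative scatter effect
spectra = scatter * (1 + 0.5 * ref_band + 0.02 * moisture[:, None] * water_band)
spectra += rng.normal(0, 0.005, (n_samples, n_points))

# Min-max normalization (MMN) per spectrum, as in the selected pre-processing.
mn = spectra.min(axis=1, keepdims=True)
mx = spectra.max(axis=1, keepdims=True)
spectra = (spectra - mn) / (mx - mn)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP = {rmse:.3f} % moisture")
```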

72 Development of Map of Gridded Basin Flash Flood Potential Index: GBFFPI Map of QuangNam, QuangNgai, DaNang, Hue Provinces

Authors: Le Xuan Cau

Abstract:

Flash floods occur over short rainfall intervals, from 1 hour to 12 hours, in small and medium basins. They typically have two characteristics: large water flow and high flow velocity. A flash flood occurs at a hill-valley site (a strip of lowland terrain) in a catchment with a large enough drainage area, a steep basin slope, and heavy rainfall. The risk of flash floods is expressed through the Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash Flood Potential Index (FFPI) is determined from a terrain slope index, a soil erosion index, a land cover index, a land use index, and a rainfall index. To determine the GBFFPI, each cell in a map is considered the outlet of a water accumulation basin, and the GBFFPI of the cell is defined as the basin-average value of the FFPI over the corresponding water accumulation basin. Based on GIS, a tool to compute the GBFFPI was developed using the ArcObjects SDK for .NET. The GBFFPI maps are built in two variants: including the rainfall flash flood index (for real-time flash flood warning) or excluding it. The GBFFPI tool can be used to identify high flash flood potential sites in a large region as quickly as possible. The GBFFPI improves on the conventional FFPI: by taking the basin response (the interaction of cells) into account, it identifies true flash flood sites (strips of lowland terrain) more reliably, whereas the conventional FFPI considers each cell in isolation and ignores the interaction between cells. The GBFFPI map of QuangNam, QuangNgai, DaNang, and Hue has been built and exported to Google Earth. The obtained map demonstrates the scientific basis of the GBFFPI.

Keywords: ArcObjects SDK for .NET, Basin average value of FFPI, Gridded basin flash flood potential index, GBFFPI map.
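
The basin-averaging idea translates directly to code. This is a minimal sketch on toy grids, not the ArcObjects tool: given a per-cell FFPI and a D8 flow-direction grid (assumed cycle-free), each cell's FFPI is propagated down its flow path, so every cell ends up with the average FFPI of its upstream accumulation basin.

```python
# Minimal sketch: GBFFPI as the basin-average FFPI via D8 path accumulation.
import numpy as np

D8 = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1),
      4: (0, -1), 5: (-1, -1), 6: (-1, 0), 7: (-1, 1)}  # code -> (drow, dcol)

def gbffpi(ffpi, flow_dir):
    rows, cols = ffpi.shape
    acc_sum = ffpi.astype(float).copy()    # every cell drains through itself
    acc_cnt = np.ones_like(acc_sum)
    for r in range(rows):
        for c in range(cols):
            rr, cc = r, c
            while True:                    # walk downstream (assumes no cycles)
                dr, dc = D8[flow_dir[rr, cc]]
                rr, cc = rr + dr, cc + dc
                if not (0 <= rr < rows and 0 <= cc < cols):
                    break                  # left the grid: reached the outlet
                acc_sum[rr, cc] += ffpi[r, c]
                acc_cnt[rr, cc] += 1
    return acc_sum / acc_cnt               # basin-average FFPI per cell

rng = np.random.default_rng(7)
ffpi = rng.uniform(0, 1, (5, 5))           # combined slope/soil/cover/use index
flow_dir = np.full((5, 5), 2)              # toy field: everything drains south
print(gbffpi(ffpi, flow_dir).round(2))
```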

71 Optimization of Assembly and Welding of Complex 3D Structures on the Base of Modeling with Use of Finite Elements Method

Authors: M. N. Zelenin, V. S. Mikhailov, R. P. Zhivotovsky

Abstract:

It is known that residual welding deformations adversely affect the processability and operational quality of welded structures, complicating their assembly and reducing their strength. Therefore, the selection of an optimal technology ensuring minimum welding deformations is one of the main goals in developing a manufacturing technology for welded structures. Over the years, JSC SSTC has been developing a theory for estimating welding deformations and practical measures for reducing and compensating such deformations during the welding process. For a long time, a methodology based on analytic dependences was used. This methodology allowed defining the volumetric changes of metal due to welding heating and subsequent cooling. However, the dependences for determining the structural deformations arising from these volumetric changes in the weld area allowed calculations only for simple structures, such as units, flat sections, and sections with small curvature. For complex 3D structures, estimates based on analytic dependences gave significant errors. To eliminate this shortcoming, it was suggested to solve the deformation problem with the finite elements method. First, the longitudinal and transversal shortenings of the welding joints are calculated using the method of analytic dependences; from these shortenings, forces are calculated whose action is equivalent to that of the active welding stresses. A finite-elements model of the structure is then developed and the equivalent forces are applied to it. From the results of these calculations, an optimal sequence of assembly and welding is selected, and special measures to reduce and compensate welding deformations are developed and taken.

Keywords: Finite elements method, modeling, expected welding deformations, welding, assembling.

70 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats the errors that arise from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying its effectiveness. The hardware implementation uses the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.

Keywords: Autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion.

69 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including in Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue, so prevention and treatment depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies. The choice of statistical model used for relative risk estimation is important, as a good model will produce a good disease risk map. The aim of this study is therefore to estimate the relative risk of dengue, first with the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and then with one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins with a review of the SMR method, which we apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both sets of results are displayed and compared using graphs, tables, and maps. The analysis shows that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model is demonstrated to overcome the problem the SMR has in regions with no observed dengue cases. However, covariate adjustment in this model is difficult, and there is no possibility of allowing for spatial correlation between risks in adjacent areas. These drawbacks have motivated many researchers to propose other alternative methods for estimating the risk.

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.
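
The two estimators compared in the abstract are short enough to sketch side by side. With observed counts O_i ~ Poisson(E_i * theta_i) and a Gamma(a, b) prior on the relative risk theta_i, the posterior mean is (a + O_i) / (b + E_i), which stays strictly positive even where O_i = 0 (the SMR failure case noted above). The prior here is set by simple moment matching, and the counts are synthetic.

```python
# Minimal sketch: SMR vs. Poisson-gamma relative risk on synthetic area counts.
import numpy as np

rng = np.random.default_rng(8)
E = rng.uniform(0.5, 40, 30)                  # expected cases per district
O = rng.poisson(E * rng.gamma(2.0, 0.5, 30))  # observed cases; true RR ~ Gamma

smr = O / E                                   # SMR is 0 wherever O_i = 0
m, v = smr.mean(), smr.var()
a, b = m * m / v, m / v                       # moment-matched Gamma(a, b) prior
pg = (a + O) / (b + E)                        # Poisson-gamma posterior mean RR

for name, rr in (("SMR          ", smr), ("Poisson-gamma", pg)):
    print(name, rr.round(2)[:8])
```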

68 Pyrethroid Resistance and Its Mechanism in Field Populations of the Sand Termite, Psammotermes hypostoma Desneux

Authors: Mai. M. Toughan, Ahmed A. A. Sallam, Ashraf O. Abd El-Latif

Abstract:

Termites are eusocial insects found on all continents except Antarctica. They have a seriously destructive impact, damaging local dwellings and subsistence crops. The annual cost of termite damage and its control is estimated in the billions globally. In Egypt, most of this damage is due to subterranean termite species, especially the sand termite, P. hypostoma. Pyrethroids became the primary weapon for subterranean termite control after the use of chlorpyrifos as a soil termiticide was banned. Despite the important role of pyrethroids in termite control, their extensive use in pest control has led to the rise of insecticide resistance, which may render many of the pyrethroids ineffective. The ability to diagnose the precise mechanism of pyrethroid resistance in any insect species is the key component of its management for a specific population at a specified location. In the present study, detailed toxicological and biochemical studies were conducted on the mechanism of pyrethroid resistance in P. hypostoma. The susceptibility of field populations of P. hypostoma to deltamethrin, α-cypermethrin and λ-cyhalothrin was evaluated. The results revealed that workers of P. hypostoma have developed high resistance levels against the tested pyrethroids. Estimates of detoxification enzyme activity indicated that enhanced esterase and cytochrome P450 activities are probably important mechanisms of pyrethroid resistance in field populations. Elevated esterase activity, as well as an additional esterase isozyme, was observed in the pyrethroid-resistant populations compared to the susceptible populations. A strong positive correlation between cytochrome P450 activity and pyrethroid resistance was also reported. Deltamethrin could be recommended as a resistance-breaking pyrethroid that remains active against resistant populations of P. hypostoma.

Keywords: Psammotermes hypostoma, pyrethroid resistance, esterase, cytochrome P450.
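
Resistance levels in studies like this are conventionally summarized as a resistance ratio, LC50(field) / LC50(susceptible), with LC50 estimated from dose-mortality bioassays. The sketch below fits a logistic dose-response by nonlinear least squares; all doses and mortalities are hypothetical and do not reproduce the paper's bioassay data.

```python
# Minimal sketch: LC50 by logistic dose-mortality regression; resistance ratio.
import numpy as np
from scipy.optimize import curve_fit

def mortality(log_dose, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * log_dose)))

def lc50(doses, kill_fraction):
    (a, b), _ = curve_fit(mortality, np.log10(doses), kill_fraction, p0=(0.0, 1.0))
    return 10 ** (-a / b)          # dose at which predicted mortality = 0.5

doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # hypothetical doses
susceptible = np.array([0.10, 0.35, 0.62, 0.88, 0.97, 1.00])  # mortality fractions
field = np.array([0.02, 0.08, 0.22, 0.45, 0.70, 0.93])

rr = lc50(doses, field) / lc50(doses, susceptible)
print(f"resistance ratio = {rr:.1f}x")
```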

67 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to generate electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized for efficiency, so as to remove at least 90% of the fly ash particles, of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m3 of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities: nitrogen, helium, and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas. Helium, the least dense of the three gases, simulates higher temperatures, whereas air, the densest, simulates a lower temperature. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved more robust, while the larger cyclone showed a stronger correlation between its separation efficiency and the feed temperature, and can be assumed to achieve slightly higher efficiencies at elevated temperatures. Both design methods, however, led to good designs. At room temperature, the difference in efficiency between the two cyclones was almost negligible; at higher temperatures, these general tendencies are expected to be amplified, so that the difference between the two design methods becomes more obvious. Though the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.

Keywords: Cyclone, design, plasma, renewable energy, solid separation, waste processing.

66 Life Cycle Assessment of Residential Buildings: A Case Study in Canada

Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq

Abstract:

Residential buildings consume significant amounts of energy and produce large amounts of emissions and waste, yet there is substantial potential for energy savings in this sector, which needs to be evaluated over the building life cycle. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy use and associated environmental impacts of the different phases (product, construction, use, end of life, and beyond building life) of residential buildings. Four residential building alternatives in Vancouver (BC, Canada) with a 50-year lifespan were evaluated: a High Rise Apartment (HRA), a Low Rise Apartment (LRA), a Single family Attached House (SAH), and a Single family Detached House (SDH). The life cycle performance of the buildings was evaluated in terms of embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life cycle energy, and total life cycle environmental impacts. Operational energy estimation and LCA were performed using the DesignBuilder and Athena Impact Estimator software, respectively. The study results revealed that, over the life span of the buildings, energy use and environmental impacts follow the same pattern. The LRA was found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while the HRA showed the best life cycle performance in terms of minimum energy use and environmental impacts. A sensitivity analysis was also carried out to study the influence of building service lifespans of 50, 75, and 100 years on the relative significance of embodied energy within total life cycle energy. The life cycle energy requirement of the SDH was found to be the most significant among the four building types. Overall, the results disclose that the operational phase of these buildings accounts for 90% of the total life cycle energy, which far outweighs the minor differences in embodied effects between the buildings.

Keywords: Building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings.

65 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange information with each other and with the infrastructure. Although this interconnection of vehicles can be beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges accompanies the technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and of the overall system; such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect an individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, or failures in actuators, may significantly affect the safe operation of the overall vehicular network. It is therefore of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles in a connected vehicle system. In this paper, we consider a connected vehicle system under Cooperative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of information lost to the unreliable network, and a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.

Keywords: Fault diagnostics, communication network, connected vehicles, packet drop out, platoon.
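
The packet-drop handling reduces to a standard Kalman filter pattern: run the time update every step, but apply the measurement update only when a packet arrives. The scalar sketch below is illustrative only (hand-tuned noise values, synthetic data); the CACC coupling and the sliding mode observer of the paper are out of scope.

```python
# Minimal sketch: scalar Kalman filter with intermittent measurements.
import numpy as np

rng = np.random.default_rng(9)
dt, n = 0.1, 200
v_lead = 15.0                              # preceding vehicle speed, m/s
truth = v_lead * dt * np.arange(n)         # true position
meas = truth + rng.normal(0, 0.5, n)       # transmitted position, noisy
received = rng.random(n) > 0.3             # simulate 30% packet drop

x, P, Q, R = 0.0, 10.0, 0.05, 0.25         # state, covariance, noise (assumed)
est = np.empty(n)
for k in range(n):
    x, P = x + v_lead * dt, P + Q          # time update (always runs)
    if received[k]:                        # measurement update only on arrival
        K = P / (P + R)
        x, P = x + K * (meas[k] - x), (1 - K) * P
    est[k] = x

print(f"RMS position error = {np.sqrt(np.mean((est - truth) ** 2)):.3f} m")
```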

64 Morphology and Risk Factors for Blunt Aortic Trauma in Car Accidents - An Autopsy Study

Authors: Ticijana Prijon, Branko Ermenc

Abstract:

Background: Blunt aortic trauma (BAT) includes the various morphological changes that occur during deceleration, acceleration and/or body compression in traffic accidents. The form BAT takes, from limited laceration of the intima to complete transection of the aorta, depends on the force acting on the vessel wall and on the tolerance of the aorta to injury. The force depends on the change in velocity, the dynamics of the accident, and the seating position in the car. Tolerance to aortic injury depends on the anatomy, the histological structure, and pathomorphological alterations of the aortic wall due to aging or disease. An overview of the literature and medical documentation reveals that different terms are used to describe certain forms of BAT, which can lead to misinterpretation of findings or diagnoses. We therefore propose a classification that enables uniform, systematic screening of all forms of BAT. We have classified BAT into three morphological types, TYPE I (intramural), TYPE II (transmural) and TYPE III (multiple) aortic ruptures, with appropriate subtypes. Methods: All car accident casualties examined at the Institute of Forensic Medicine from 2001 to 2009 were included in this retrospective study. Autopsy reports were used to determine the occurrence of each morphological type of BAT in deceased drivers, front seat passengers, and other passengers, and to define the morphology of BAT in relation to the accident dynamics and the age of the fatalities. Results: A total of 391 car accident fatalities were included in the study. TYPE I, TYPE II and TYPE III BAT were observed in 10.9%, 55.6% and 33.5% of cases, respectively. The incidence of BAT in drivers, front seat passengers and other passengers was 36.7%, 43.1% and 28.6%, respectively. In frontal collisions the incidence of BAT was 32.7%, in lateral collisions 54.2%, and in other traffic accidents 29.3%. The average age of fatalities with BAT was 42.8 years and of those without BAT 39.1 years. Conclusion: Identification and early recognition of the risk factors of BAT following a traffic accident is crucial for successful treatment of patients with BAT. Front seat passengers over 50 years of age who have been injured in a lateral collision are the most at risk of BAT.

Keywords: Aorta, blunt trauma, car accidents, morphology, risk factors.

63 Spatial Mapping of Dengue Incidence: A Case Study in Hulu Langat District, Selangor, Malaysia

Authors: Er, A. C., Rosli, M. H., Asmahani A., Mohamad Naim M. R., Harsuzilawati M.

Abstract:

Dengue is a mosquito-borne infection whose incidence has risen to an alarming rate in recent decades. It is found in tropical and sub-tropical climates. In Malaysia, dengue has been declared a national health threat. This study aimed to map the spatial distribution of dengue cases in the Hulu Langat district of Selangor via a combination of Geographic Information System (GIS) and spatial statistics tools. Data related to dengue were gathered from the various government health agencies, and the locations of dengue cases were geocoded using a handheld Trimble Juno SB GPS. A total of 197 dengue cases occurring in 2003 were used in this study. The data were aggregated to the sub-district level and converted into GIS format. The study also used population and demographic data, as well as the boundary of Hulu Langat. To assess the spatial distribution of dengue cases, three spatial statistics (Moran's I, average nearest neighbor (ANN) analysis, and kernel density estimation) were applied together with spatial analysis in the GIS environment. These indices were used to analyze the spatial distribution and average distance of dengue incidences and to locate the hot spots of dengue cases. The results indicated that the dengue cases were clustered (p < 0.01), with a Moran's I z-score of 5.03. The ANN analysis gave an average nearest neighbor ratio of 0.518755, which is less than 1 (p < 0.0001), so the pattern of dengue cases in Hulu Langat district can be expected to be clustered; the corresponding z-score for dengue incidence within the district was -13.0525 (p < 0.0001). It was also found that significant spatial autocorrelation of dengue incidences occurs at an average distance of 380.81 meters (p < 0.0001). Several locations, especially residential areas, were identified as hot spots of dengue cases in the district.

Keywords: Dengue, geographic information system (GIS), spatial analysis, spatial statistics
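
The global Moran's I used above is I = (n / S0) * (z' W z) / (z' z), where z are mean-centered values, W is a spatial weights matrix, and S0 is the sum of all weights. The sketch below computes the bare statistic on a toy 4-region contiguity matrix; the study itself used GIS tooling and larger case data.

```python
# Minimal sketch: global Moran's I with a binary contiguity weights matrix.
import numpy as np

def morans_i(x, W):
    z = x - x.mean()                      # mean-centered attribute values
    n, s0 = len(x), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

cases = np.array([30.0, 25.0, 4.0, 2.0])  # dengue counts per toy sub-district
W = np.array([[0, 1, 0, 0],               # 1 = regions share a boundary
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
# Values above the expectation -1/(n-1) indicate positive spatial clustering.
print(f"Moran's I = {morans_i(cases, W):.3f}")
```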

62 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber

Authors: G. Zuppardi, F. Romano

Abstract:

Technology of thin film deposition is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, deposits a thin, amorphous film of C:H or Si:H on the substrate using the Expanding Thermal arc Plasma technique. In this paper, a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and the fluid-dynamic characteristics of the plasma jet, using different nozzles. For this purpose, a deposition chamber similar in shape, dimensions, and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output of the first code is the input to the second. The first code simulates the flow field in the torch, where argon is ionized according to Saha's equation, and in the nozzle. The second code simulates the flow field in the chamber; due to the high rarefaction level, this is a (commercial) Direct Simulation Monte Carlo code. The gas is a mixture of 21 chemical species, and 24 chemical reactions of the argon-acetylene plasma are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed using 2D maps and profiles of important thermo-fluid-dynamic parameters such as Mach number, velocity, and temperature. The intensity, position, and extension of the shock wave are evaluated, and the influence of the test conditions on film thickness and uniformity of distribution is also assessed.

Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), Plasma chemistry, Rarefied gas dynamics.

61 Design, Fabrication and Evaluation of MR Damper

Authors: A. Ashfak, A. Saheed, K. K. Abdul Rasheed, J. Abdul Jaleel

Abstract:

This paper presents the design, fabrication, and evaluation of a magneto-rheological (MR) damper. Semi-active control devices have received significant attention in recent years because they offer the adaptability of active control devices without requiring the associated large power sources. MR dampers are semi-active control devices that use MR fluids to produce controllable damping. They potentially offer highly reliable operation and can be viewed as fail-safe, in that they become passive dampers if the control hardware malfunctions. The advantages of MR dampers over conventional dampers are that they are simple in construction, offer a compromise between high-frequency isolation and natural-frequency isolation, provide semi-active control, use very little power, have a very quick response and few moving parts, tolerate relaxed manufacturing tolerances, and interface directly with electronics. MR fluids are controllable fluids belonging to the class of active materials that have the unique ability to change their dynamic yield stress when acted upon by an electric or magnetic field, while maintaining a relatively constant viscosity. This property can be utilized in an MR damper, where the damping force is changed by magnetically changing the rheological properties of the fluid. MR fluids have a higher dynamic yield stress than electro-rheological (ER) fluids and a broader operational temperature range. The objective of this paper was to study the application of an MR damper to vibration control: to design a vibration damper using MR fluid, and to test and evaluate its performance. The rheology and theory behind MR fluids and their use in vibration control were studied first. Then an MR vibration damper suitable for vehicle suspension was designed and fabricated using the MR fluid. The MR damper was tested using a dynamic test rig, and the results were obtained in the form of force vs. velocity and force vs. displacement plots. The results were encouraging and greatly inspire further research on the topic.

Keywords: Magneto-rheological fluid, MR damper, Semi-active controller, Electro-rheological fluid.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 5896
60 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac

Abstract:

Rice is Sierra Leone's staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in domestic cultivation. The insufficient level of the crop's cultivation in Sierra Leone stems from many problems and has led to an ever-widening gap between supply and demand for the crop within the country. Consequently, the government has been compelled to spend heavily on importing grain that could otherwise have been cultivated domestically at a lower cost. Hence, this research explores the supply response of rice in Sierra Leone over the period 1980-2010, applying the Nerlovian adjustment model to the Sierra Leone rice data set for that period. The estimated trend equations revealed that time had a significant effect on the output, productivity (yield), and area (acreage) of rice within the period, generally at the 1% level of significance. The results showed that almost the entire growth in output was attributable to an increase in the area cultivated to the crop. The time trend variable included to capture government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all the estimated elasticity values were less than one. Given these findings, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers and motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.
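For readers unfamiliar with the Nerlovian approach, the sketch below shows the reduced-form partial-adjustment regression from which the short-run and long-run price elasticities are recovered. It uses statsmodels and synthetic data standing in for the 1980-2010 series, so the software choice and the coefficients are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def nerlove_estimates(log_area, log_price):
    """Reduced-form Nerlovian partial-adjustment regression (all in logs):
        A_t = b0 + b1 * P_{t-1} + b2 * A_{t-1} + u_t
    where b1 is the short-run price elasticity, gamma = 1 - b2 is the
    speed of adjustment, and b1 / gamma is the long-run elasticity."""
    A_t   = log_area[1:]
    A_lag = log_area[:-1]
    P_lag = log_price[:-1]
    X = sm.add_constant(np.column_stack([P_lag, A_lag]))
    fit = sm.OLS(A_t, X).fit()
    b0, b1, b2 = fit.params
    gamma = 1.0 - b2
    return {"short_run": b1, "adjustment": gamma, "long_run": b1 / gamma}

# Usage with synthetic data standing in for the 31-year (1980-2010) series;
# the true short-run elasticity is 0.2 and the adjustment speed is 0.4.
rng = np.random.default_rng(0)
logP = np.cumsum(rng.normal(0.0, 0.05, 31))
logA = np.zeros(31)
for t in range(1, 31):
    logA[t] = 0.6 * logA[t - 1] + 0.2 * logP[t - 1] + rng.normal(0.0, 0.02)
print(nerlove_estimates(logA, logP))
```

An elasticity below one in both the short and the long run, as reported in the abstract, means a price rise induces a less-than-proportional expansion of cultivated area, which is why the authors point to productivity-enhancing measures rather than price incentives alone.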

Keywords: Nerlovian adjustment model, Price elasticities, Sierra Leone, Trend equations.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2749
59 Bio-Estimation of Selected Heavy Metals in Shellfish and Their Surrounding Environmental Media

Authors: Ebeed A. Saleh, Kadry M. Sadek, Safaa H. Ghorbal

Abstract:

Because determining the pollution status of fresh resources in Egyptian territorial waters is very important for public health, this study was carried out to reveal the levels of heavy metals in shellfish and their environment and their relation to the highly developed industrial activities in those areas. A total of 100 shellfish samples were randomly collected from the Rosetta, Edku, El-Maadiya, Abo-Kir, and El-Max coasts [10 crustaceans (shrimp) and 10 mollusks (oysters) from each coast]. Additionally, 10 samples each of water and sediment were collected from each coast. Each collected sample was analyzed for cadmium, chromium, copper, lead, and zinc residues using a Perkin Elmer atomic absorption spectrophotometer (AAS). The results showed that the levels of heavy metals were highest in the water and sediment from Abo-Kir, decreasing successively for the Rosetta, Edku, El-Maadiya, and El-Max coasts; the concentrations of heavy metals in shellfish, except copper and zinc, exhibited the same pattern. Among the heavy metals in shellfish tissue, zinc was highest, with concentrations decreasing successively for copper, lead, chromium, and cadmium at all coasts except Abo-Kir, where the chromium level was highest and the other metals decreased successively for zinc, copper, lead, and cadmium. In Rosetta, chromium was higher only in the mollusks, while its level was lower in the crustaceans; this trend was also observed at the Edku, El-Maadiya, and El-Max coasts. Herein, we discuss the importance of such contamination for public health and the sources of shellfish contamination with heavy metals. We suggest measures to minimize and prevent these pollutants in the aquatic environment and, furthermore, to protect humans from excessive intake.

Keywords: Atomic absorption, heavy metals, sediment, shellfish, water.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2813
58 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Risk profiles then employ a risk-factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
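As a rough illustration of the recommendation step, the scikit-learn sketch below classifies an institutional risk-factor vector into an endangerment group with naive Bayes. The factor names, ordinal codes, and training vectors are invented for illustration and are not taken from the paper's survey data.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Hypothetical risk-factor dimensions, each encoded ordinally (0 = worst,
# 2 = best) for an institutional file-format risk profile.
FACTORS = ["software_support", "community_activity", "migration_cost"]

X_train = np.array([
    [0, 0, 2],   # poor support, inactive community, costly migration
    [0, 1, 2],
    [1, 1, 1],
    [2, 2, 0],   # well supported, active community, cheap migration
    [2, 1, 0],
    [1, 2, 1],
])
y_train = ["endangered", "endangered", "at_risk", "safe", "safe", "at_risk"]

# Fit a categorical naive Bayes classifier on the expert-labelled profiles
clf = CategoricalNB().fit(X_train, y_train)

# Recommend an endangerment group for a newly selected institutional profile
profile = np.array([[1, 0, 2]])
print(clf.predict(profile)[0], clf.predict_proba(profile).round(2))
```

The naive Bayes assumption, that risk factors are conditionally independent given the endangerment group, keeps the classifier simple and interpretable, which matches the paper's goal of presenting the aggregated risk-factor vector to an expert dimension by dimension.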

Keywords: Linked open data, Information integration, Digital libraries, Data mining.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 678