Search results for: field measurement
9479 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory
Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang
Abstract:
Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually encounter several constraints, including cost, workers' willingness, and interference with work practice, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessments for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near field/far field (NF/FF) model for predicting long-term fume exposures via Monte Carlo simulation. The predicted long-term concentrations were then used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess the long-term exposure and to serve as a basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal contents of welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn and Pb exposures were found to have a high probability of exceeding the action level, indicating that preventive measures should be taken immediately to reduce welders' exposures. Although the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a possible solution for conducting long-term exposure assessment in the field.
Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume
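A minimal sketch of the NF/FF (two-zone) steady-state model driven by Monte Carlo sampling, as outlined above; the lognormal parameters and the action level used below are illustrative assumptions, not the study's measured fume generation rates or ventilation conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo iterations

# Illustrative lognormal inputs (NOT the study's measured values)
G    = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)   # fume generation rate, mg/min
Q    = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=n)  # room ventilation rate, m^3/min
beta = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)   # NF/FF interzonal airflow, m^3/min

# Steady-state two-zone (NF/FF) solution
C_FF = G / Q               # far-field concentration, mg/m^3
C_NF = C_FF + G / beta     # near-field (welder breathing zone) concentration, mg/m^3

# Summarize the predicted long-term exposure distribution, which would
# serve as the prior distribution for the Bayesian decision analysis.
gm  = np.exp(np.mean(np.log(C_NF)))   # geometric mean
gsd = np.exp(np.std(np.log(C_NF)))    # geometric standard deviation
print(f"NF exposure: GM = {gm:.2f} mg/m3, GSD = {gsd:.2f}")
print("P(C_NF > assumed action level of 2.5 mg/m3) =", np.mean(C_NF > 2.5))
```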
Procedia PDF Downloads 140
9478 Effect of Viscosity on Propagation of MHD Waves in Astrophysical Plasma
Authors: Alemayehu Mengesha, Solomon Belay
Abstract:
We determine the general dispersion relation for the propagation of magnetohydrodynamic (MHD) waves in an astrophysical plasma by considering the effect of viscosity with an anisotropic pressure tensor. Basic MHD equations have been derived and linearized by the method of perturbation to develop the general form of the dispersion relation equation. Our result indicates that an astrophysical plasma with an anisotropic pressure tensor is stable in the presence of viscosity and a strong magnetic field at considerable wavelengths. Numerical analysis of this work is currently in progress.
Keywords: astrophysical, magnetic field, instability, MHD, wavelength, viscosity
Procedia PDF Downloads 343
9477 Report of Soundings in Tappeh Shahrestan in Order to Determine Its Field and Propose Privacy, Documenting and Systematic Review of Geophysical Studies
Authors: Reza Mehrafarin, Nafiseh Mirshekari, Mahyar Mehrafarin
Abstract:
About 25 km southeast of Zabul (the center of Sistan, in the east of Iran), a large hill can be seen. This hill, which is located next to the bend of the Sistan river, is known as Tappeh Shahrestan. Tappeh Shahrestan is 1350 meters long, 360 meters wide, and 20 meters high, covering a total of about 48 hectares. According to Iranian historical texts and Sassanid Pahlavi traditions, the capital of Sistan province in the Sassanid period was Ram Shahrestan. The city was abandoned because the nearby river dried up, and another capital, called Zarang, was then built in Sistan. Because so much time has passed since the destruction of the city, its real location was forgotten, and some archaeologists have suggested different areas as the main location of Ram Shahrestan. In 2018, the first archaeological field activities took place on and around the hill in order to answer this question: was Tappeh Shahrestan the same as Ram Shahrestan, the capital of Sistan, during the Sassanid period? The field activities of the first season included the following: 1- preparation of the topography and planimetric map of the hill; 2- archaeogeophysical studies; 3- methodical archaeological survey; 4- determining the extent of the hill by soundings; 5- documentation of the hill; 6- classification, typology, and comparison of the pottery. The results of the first season of archaeological field activities at Tappeh Shahrestan showed that this ancient site was indeed the city of Ram Shahrestan, the capital of Sistan, during the Sassanid period. Settlement in this city began in the third century BC, and it was abandoned at the end of the third century AD. The most important factors in the creation of the city were the abundant water of the Sistan River and its convenient location, and the most important reason for the abandonment of the city was the Sistan River, whose water completely dried up.
Keywords: archaeological surveys, archaeological soundings, Ram Shahrestan, Sistan, Tappeh Shahrestan
Procedia PDF Downloads 110
9476 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to its concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI satellite image acquisition date. The PM10 measurement locations were defined by a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (Coastal Aerosol, Blue, Green and Red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The choice of our proposed multispectral model was based on the highest value of the correlation coefficient (R) and the lowest value of the root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI were capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
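A minimal sketch of the kind of multispectral regression described above, with synthetic stand-ins for the band reflectances and the Aerocet 531 readings; the band weights, noise level and sample size are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Synthetic stand-ins for the field data set: reflectance of the four visible
# OLI bands at each GPS-located ground point and the co-located PM10 readings (ug/m^3).
n = 40
refl = rng.uniform(0.05, 0.25, size=(n, 4))
pm10 = 80 + 300 * refl @ np.array([0.8, 0.5, 0.3, 0.2]) + rng.normal(0, 5, n)

model = LinearRegression().fit(refl, pm10)   # PM10 = a0 + a1*B1 + a2*B2 + a3*B3 + a4*B4
pred = model.predict(refl)

R = np.corrcoef(pm10, pred)[0, 1]            # correlation coefficient
rmse = np.sqrt(np.mean((pm10 - pred) ** 2))  # root-mean-square error
print("coefficients:", model.intercept_, model.coef_)
print(f"R = {R:.3f}, RMSE = {rmse:.2f} ug/m3")
```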
Procedia PDF Downloads 442
9475 Investigation of Supply and Demand Trends in Diabetes Nutrition Counseling
Authors: Maedeh Gharazi
Abstract:
Identification of entrepreneurial opportunities in the field of nutrition counseling is a central issue in employing nutrition experts and better addressing the needs of patients with chronic diseases. To this end, this study was conducted in order to investigate the supply and demand trends of diabetes nutrition counseling as a fundamental step toward recognizing entrepreneurial opportunities for nutrition advisors in Tehran, Iran. To carry out this descriptive survey study, a Likert-scale questionnaire was sent by email to 100 active experts in the field of nutrition counseling services in Tehran, of whom 52 responded to its questions. The mean values of the participants' responses were then calculated using SPSS software and compared to each other. The results obtained from the participants' responses revealed that the need for 'nutritional counseling as part of a treatment team' was essentially not met in the different age, education and income groups of diabetic patients. Accordingly, nutrition counseling as part of a treatment team can be considered a suitable field for entrepreneurial activities.
Keywords: nutrition counseling, chronic diseases, diabetes, Likert scale, SPSS software
Procedia PDF Downloads 343
9474 First Systematic Review on Aerosol Bound Water: Exploring the Existing Knowledge Domain Using the CiteSpace Software
Authors: Kamila Widziewicz-Rzonca
Abstract:
The presence of PM-bound water as an integral chemical component of suspended aerosol particles (PM) has become one of the hottest issues in recent years. The UN climate summits on climate change (COP24) indicate that PM of anthropogenic origin (released mostly from coal combustion) is directly responsible for climate change. Chemical changes at the particle-liquid (water) interface determine many phenomena occurring in the atmosphere, such as visibility, cloud formation or precipitation intensity. Since water-soluble particles such as nitrates, sulfates, or sea salt easily become cloud condensation nuclei, they affect the climate, for example, by increasing cloud droplet concentration. Aerosol water is a master component of atmospheric aerosols and a medium that enables all aqueous-phase reactions occurring in the atmosphere. Thanks to a thorough bibliometric analysis conducted using the CiteSpace software, it was possible to identify past trends and possible future directions in measuring aerosol-bound water. This work, in fact, does not aim at reviewing the existing literature on the topic but is an in-depth bibliometric analysis exploring existing gaps and new frontiers in the topic of PM-bound water. To assess the major scientific areas related to PM-bound water and clearly define which among those are the most active topics, we checked the Web of Science databases from 1996 to 2018. We answer the questions: which authors, countries, institutions and aerosol journals influenced PM-bound water research to the greatest degree? The obtained results indicate that the paper with the greatest citation burst was Tang I. N. and Munkelwitz H. R., 'Water activities, densities, and refractive indices of aqueous sulfates and sodium nitrate droplets of atmospheric importance', 1994. The largest number of articles in this specific field was published in Atmospheric Chemistry and Physics. The absolute leader in the quantity of publications among all research institutions is the National Aeronautics and Space Administration (NASA). Meteorology and atmospheric sciences is the category with the most studies in this field. A very small number of studies on PM-bound water conduct a quantitative measurement of its presence in ambient particles or of its origin. Most articles rather point to PM-bound water as an artifact in organic carbon and ion measurements, without any chemical analysis of its content. This scientometric study presents the current and most up-to-date literature regarding particulate-bound water.
Keywords: systematic review, aerosol-bound water, PM-bound water, CiteSpace, knowledge domain
Procedia PDF Downloads 123
9473 Glucose Measurement in Response to Environmental and Physiological Challenges: Towards a Non-Invasive Approach to Study Stress in Fishes
Authors: Tomas Makaras, Julija Razumienė, Vidutė Gurevičienė, Gintarė Sauliutė, Milda Stankevičiūtė
Abstract:
Stress responses represent an animal's natural reactions to various challenging conditions and could be used as a welfare indicator. Regardless of the wide use of glucose measurements in stress evaluation, there are some inconsistencies in its acceptance as a stress marker, especially when compared with non-invasive cortisol measurements in fish under challenging stress. To meet this challenge and to test the reliability and applicability of glucose measurement in practice, different environmental/anthropogenic exposure scenarios were simulated in this study to provoke chemical-induced stress in fish (14 days of exposure to landfill leachate), followed by a 14-day stress recovery period; under the cumulative effect of the leachate, fish were subsequently exposed to a pathogenic oomycete (Saprolegnia parasitica) to represent a possible infection. This oomycete is endemic to all freshwater habitats worldwide and is partly responsible for the decline of natural freshwater fish populations. Brown trout (Salmo trutta fario) and sea trout (Salmo trutta trutta) juveniles were chosen because a large amount of literature on physiological stress responses in these species is available. Glucose content in fish was analysed by applying invasive and non-invasive glucose measurement procedures in different test media, such as fish blood, gill tissue and fish-holding water. The results indicated that the quantity of glucose released into the holding water of stressed fish increased considerably (approx. 3.5- to 8-fold) and remained substantially higher (approx. 2- to 4-fold) than the control level throughout the stress recovery period, suggesting that fish did not recover from chemical-induced stress. The circulating levels of glucose in blood and gills decreased over time in fish exposed to different stressors. However, the gill glucose level in fish showed a decrease similar to the control levels measured at the same time points, which was found to be insignificant. The data analysis showed that the concentrations of β-D-glucose measured in the gills of fish treated with S. parasitica differed significantly from the control recovery group but did not differ from the leachate recovery group, showing that the presence of S. parasitica in water had no additive effects. In contrast, a positive correlation between blood and gill glucose was determined. Parallel trends in blood and water glucose changes suggest that water glucose measurement has great potential for predicting stress. This study demonstrated that measuring β-D-glucose in fish-holding water is not stressful, as it involves no handling and manipulation of the organism, and has critical technical advantages over current (invasive) methods, which mainly use blood samples or specific tissues. The quantification of glucose could be essential for studies examining stress physiology and for aquaculture studies interested in the assessment or long-term monitoring of fish health.
Keywords: brown trout, landfill leachate, sea trout, pathogenic oomycetes, β-D-glucose
Procedia PDF Downloads 173
9472 Isolated Iterating Fractal Independently Corresponds with Light and Foundational Quantum Problems
Authors: Blair D. Macdonald
Abstract:
Nearly one hundred years after its origin, foundational quantum mechanics remains one of the greatest unexplained mysteries to physicists today. Within this time, chaos theory and its geometry, the fractal, have developed. In this paper, the propagation behaviour of an iterating simple fractal, the Koch snowflake, was described and analysed. From an arbitrary observation point within the fractal set, the fractal propagates forward by oscillation (the focus of this study) and retrospectively backward by exponential growth from a point beginning. It propagates a potentially infinite exponential oscillating sinusoidal wave of discrete triangle bits sharing many characteristics of light and quantum entities. The model's wave speed is potentially constant, offering insights into the perception and direction of time where, to an observer travelling at the frontier of propagation, time may slow to a stop. In isolation, the fractal is a superposition of component bits where position and scale present a problem of location. In reality, this problem is experienced within fractal landscapes or fields where 'position' is only 'known' by the addition of information or markers. The quantum 'measurement problem', 'uncertainty principle', 'entanglement', and the classical-quantum interface are addressed; these are a problem of scale invariance associated with isolated fractality. Dual forward and retrospective perspectives of the fractal model offer the opportunity for unification between quantum mechanics and cosmological mathematics, observations, and conjectures. Quantum and cosmological problems may be different aspects of the one fractal geometry.
Keywords: measurement problem, observer, entanglement, unification
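For readers unfamiliar with the construction, a short sketch of the Koch-snowflake iteration discussed above (each segment is subdivided and an outward 'triangle bit' is added, so the number of segments grows exponentially); it illustrates the iteration only, not the authors' propagation analysis:

```python
import numpy as np

def koch_iterate(points):
    """One Koch iteration: replace every segment with four segments,
    adding an outward-pointing equilateral 'triangle bit'."""
    new_pts = []
    for p0, p1 in zip(points, points[1:]):
        v = p1 - p0
        a = p0 + v / 3
        b = p0 + 2 * v / 3
        rot = np.array([[0.5, -np.sqrt(3) / 2],      # rotate the middle third
                        [np.sqrt(3) / 2, 0.5]])      # by 60 degrees about point a
        apex = a + rot @ (b - a)
        new_pts.extend([p0, a, apex, b])
    new_pts.append(points[-1])
    return np.array(new_pts)

# start from one side of the initiating triangle and iterate
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
for n in range(5):
    pts = koch_iterate(pts)
    # total length grows by 4/3 per iteration (exponential growth from a point beginning)
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    print(f"iteration {n + 1}: segments = {len(pts) - 1}, length = {length:.4f}")
```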
Procedia PDF Downloads 90
9471 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester
Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa
Abstract:
Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips comprising 20 CGO and 20 HGO, 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer in which LabVIEW version 8.5 from National Instruments (NI) was installed, an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so as to obtain repeatable and comparable measurements. The low-noise NI 4461 card, with 24-bit resolution, a sampling rate of 204.8 kHz and 92 kHz bandwidth, was chosen to take the measurements in order to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is because of the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is better than CGO in both low and high magnetic field applications.
Keywords: flux density, electrical steel, LabVIEW, magnetization
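A brief sketch of how flux density and field strength could be recovered from the two acquired channels described above (Ampere's law for H from the shunt current, Faraday's law for B from the integrated secondary voltage); the magnetic path length and the synthetic waveforms are assumptions for illustration, while the turns counts, strip cross-section and shunt value follow the abstract:

```python
import numpy as np

fs      = 204_800          # sampling rate of the NI 4461 card, Hz
N_p     = 100              # primary turns
N_s     = 500              # secondary turns
R_shunt = 4.7              # shunt resistor, ohm
l_m     = 0.29             # assumed effective magnetic path length, m (illustrative)
A       = 30e-3 * 0.27e-3  # strip cross-section (30 mm x 0.27 mm), m^2

t = np.arange(0, 0.1, 1 / fs)                  # 5 cycles at 50 Hz
v_shunt = 0.05 * np.sin(2 * np.pi * 50 * t)    # synthetic stand-ins for the
v_sec   = 2.0 * np.cos(2 * np.pi * 50 * t)     # two acquired channels

# Magnetic field strength from the magnetising current (Ampere's law)
H = N_p * (v_shunt / R_shunt) / l_m            # A/m

# Flux density from the integrated secondary EMF (Faraday's law)
B = np.cumsum(v_sec) / fs / (N_s * A)          # T
B -= B.mean()                                  # remove the integration offset

# Specific power loss per cycle would then follow from the enclosed B-H loop area.
print(f"peak H = {H.max():.2f} A/m, peak B = {B.max():.3f} T")
```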
Procedia PDF Downloads 291
9470 Advancements in Smart Home Systems: A Comprehensive Exploration in Electronic Engineering
Authors: Chukwuka E. V., Rowling J. K., Rushdie Salman
Abstract:
The field of electronic engineering encompasses the study and application of electrical systems, circuits, and devices. Engineers in this discipline design, analyze and optimize electronic components to develop innovative solutions for various industries. This abstract provides a brief overview of the diverse areas within electronic engineering, including analog and digital electronics, signal processing, communication systems, and embedded systems. It highlights the importance of staying abreast of advancements in technology and fostering interdisciplinary collaboration to address contemporary challenges in this rapidly evolving field.
Keywords: smart home engineering, energy efficiency, user-centric design, security frameworks
Procedia PDF Downloads 87
9469 Ideation, Plans, and Attempts for Suicide among Adolescents with Disability
Authors: Nyla Anjum, Humaira Bano
Abstract:
Disability, regardless of its type and nature, limits one or more significant life activities. These limitations constitute risk factors for suicide, and the rate and intensity of the problem surge in the critical age of adolescence. Research in the field of mental health overlooks the problem of suicide among persons with disability. The aim of the study was to investigate the prevalence of and risk factors for suicide among adolescents with disability. The study used a purposive sample of 106 participants of both genders with four major categories of disability: hearing impairment, physical impairment, visual impairment and intellectual disabilities. A face-to-face interview technique was adopted for data collection. Other variables are: socio-economic status, social and family support, provision of services for persons with disability, and education and employment opportunities. For data analysis, an independent-sample t-test was applied to find significant gender differences, and a one-way analysis of variance was run to find differences among the four types of disability. Major predictors of suicide were identified with multiple regression analysis. It is concluded that ideation, plans and attempts of suicide among adolescents with disability are a multifaceted and imperative concern in the area of mental health. Urgent research recommendations include valid measurement of the suicide rate and identification of further risk factors for suicide among persons with disability. The study will also guide the prevention of this pressing problem and will bring the message of a happy and healthy life not only to persons with disability but also to their families. It will also help to reduce the suicide rate in society.
Keywords: suicide, risk factors, adolescent, disability, mental health
Procedia PDF Downloads 382
9468 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear if different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people's opinions before and after the interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together, without checking if different topics may show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants in order to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, where people starting at, for example, +8, will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, is different from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural development of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
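A minimal sketch of an update rule consistent with the observations reported above (certainty largely preserved, random fluctuations larger than social influence); the influence and noise coefficients are illustrative assumptions, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

def update(continuous_opinion, partner_opinion_sign, influence=0.5, noise_sd=1.5):
    """One interaction step on the 'continuous opinion' scale
    (opinion sign x certainty, bounded to [-10, 10]).
    influence < noise_sd, as observed in the experiment (values assumed)."""
    new = (continuous_opinion
           + influence * partner_opinion_sign   # small pull toward the shown opinion
           + rng.normal(0.0, noise_sd,
                        size=np.shape(continuous_opinion)))  # larger random fluctuation
    return np.clip(new, -10, 10)

# Simulate 200 agents paired at random for a few interaction rounds
opinions = rng.choice([-1, 1], size=200) * rng.integers(1, 11, size=200)
for _ in range(5):
    partners = rng.permutation(200)
    shown = np.sign(opinions[partners])          # only "agree"/"disagree" is shown
    opinions = update(opinions, shown)

print("mean |continuous opinion| after interactions:", np.abs(opinions).mean())
```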
Procedia PDF Downloads 109
9467 Analysis of Shallow Foundation Using Conventional and Finite Element Approach
Authors: Sultan Al Shafian, Mozaher Ul Kabir, Khondoker Istiak Ahmad, Masnun Abrar, Mahfuza Khanum, Hossain M. Shahin
Abstract:
For the structural evaluation of a shallow foundation, the modulus of subgrade reaction is one of the most widely used and accepted parameters because of its ease of calculation. To determine this parameter, one of the most common field methods is the plate load test. In this field test method, the subgrade modulus is considered for a specific location, and according to its application, it is assumed that the displacement occurring in one place does not affect other adjacent locations. Because of such assumptions, the modulus of subgrade reaction sometimes forces engineers to overdesign the underground structure, which eventually increases the cost of construction and sometimes leads to failure of the structure. In the present study, the settlement of a shallow foundation has been analyzed using both conventional and numerical analysis. Around 25 plate load tests were conducted on a sand-fill site in Bangladesh to determine the modulus of subgrade reaction of the ground, which was later used to design a shallow foundation considering different depths. After the collection of the field data, the field condition was appropriately simulated in finite element software. Finally, the results obtained from the conventional and numerical approaches were compared. A significant difference has been observed in the settlement when comparing the results. A correlation between the two methods has also been proposed at the end of this research work in order to provide the most efficient way to calculate the subgrade modulus of the ground for designing shallow foundations.
Keywords: modulus of subgrade reaction, shallow foundation, finite element analysis, settlement, plate load test
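A minimal sketch of how the modulus of subgrade reaction follows from plate load test data (applied plate pressure divided by measured settlement over the quasi-linear part of the load-settlement curve); the load-settlement values below are illustrative, not the Bangladesh test data:

```python
# Modulus of subgrade reaction, k_s = q / delta, per load step of a plate load test.
pressure_kpa  = [50, 100, 150, 200, 250]     # applied plate pressure, kPa (assumed)
settlement_mm = [0.9, 1.8, 2.8, 3.9, 5.2]    # measured plate settlement, mm (assumed)

k_s = [p / (s / 1000) for p, s in zip(pressure_kpa, settlement_mm)]  # kN/m^3
print("k_s per load step (kN/m^3):", [round(k) for k in k_s])
print("representative k_s (kN/m^3):", round(sum(k_s) / len(k_s)))
```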
Procedia PDF Downloads 181
9466 Susceptibility Assessment and Genetic Diversity of Iranian and CIMMYT Wheat Genotypes to Common Root Rot Disease Bipolaris sorokiniana
Authors: Mehdi Nasr Esfahani, Abdal-Rasool Gholamalian, Abdelfattah A. Dababat
Abstract:
Wheat, Triticum aestivum L., is one of the most important and strategic crops in the human diet, and several diseases threaten this particular crop. Common root rot disease of wheat, caused by the fungal agent Bipolaris sorokiniana, is one of the important diseases, causing considerable losses worldwide. Resistant sources are the only feasible and effective method of control for managing the disease. In this study, the responses of 33 domestic and exotic wheat genotypes, including cultivars and promising lines, were screened against B. sorokiniana under greenhouse and field conditions, based on a five-point scoring scale index of 0 to 100% severity. The screening was continued on resistant wheat genotypes and repeated several times to confirm the greenhouse and field results. Statistical and cluster analyses of the data were performed using SAS and SPSS software, respectively. The results showed that the response of wheat genotypes to the disease under greenhouse and field conditions was highly significant. The highest rate of common root rot infection by B. sorokiniana in the greenhouse and field was observed for cvs. Karkheh and Beck Cross-Roshan, with 60.83% and 59.16% disease severity respectively, and the lowest in cv. Alvand with 18.33%, followed by cv. Baharan with 19.16% disease severity, with highly significant differences. The remaining wheat genotypes fell significantly between these highest and lowest B. sorokiniana infection groups. There was a high correlation coefficient between the related statistical groups and the cluster analysis.
Keywords: wheat, rot, root, crown, fungus, genotype, resistance
Procedia PDF Downloads 134
9465 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings represents an important field of research into their properties. Polymers, as materials with numerous advantageous features (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently among the most widely used modern building materials, for example in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes methods of modifying the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are being sought to reflect the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring at the time of temperature fluctuations. For this reason, alternative methods are being developed, taking into account rapid modeling and simulation of the actual operating conditions of polymeric coating materials in real conditions. The temperature influence in the environment is typically of a long-term nature, so studies usually involve the measurement of variations in one or more physical and mechanical properties of such a coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper contains a description of stability studies of silicone coatings deposited on the surface of a ceramic brick. The brick's surface was hydrophobized with two types of inorganic polymers: a nano-polymer preparation based on dialkyl siloxanes (Series 1-5) and an aqueous silicon-based solution (Series 6-10). In order to enhance the stability of the film formed on the brick's surface and to make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the polymer liquid phase and the solid phase of nano-silica was obtained by disintegration of the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined, as these are the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measurement (at a specified number of cycles) of changes in the surface hardness (using the Vickers method) and the absorption of individual samples. On the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined.
Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
Procedia PDF Downloads 243
9464 Laboratory Calibration of Soil Pressure Transducer for a Specified Field Application
Authors: Mohammad Zahidul Islam Bhuiyan, Shanyong Wang, Scott William Sloan, Daichao Sheng
Abstract:
Nowadays, soil pressure transducers are widely used to measure soil stress states in laboratory and field experiments. The soil pressure transducers investigated here are traditional diaphragm-type earth pressure cells (DEPC) based on the strain gauge principle. It is found that the output of these sensors varies with the soil conditions as well as with the position of the sensor. Therefore, it is highly recommended to calibrate pressure sensors under conditions similar to those of their intended applications. The factory calibration coefficients of the EPCs are not reliable, since they are normally obtained by applying fluid (a special type of oil) pressure only over the load sensing zone, which does not represent actual field conditions. Thus, the calibration of these sensors is of utmost importance, and it plays a pivotal role in assessing earth pressures precisely. In the present study, a TML soil pressure sensor is used to compare its sensitivity under different calibration systems, for example, fluid calibration and static load calibration with or without soil. The results report that the sensor provides higher sensitivity (more accurate results) under the soil calibration system.
Keywords: calibration, soil pressure, earth pressure cell, sensitivity
Procedia PDF Downloads 240
9463 Ghost Frequency Noise Reduction through Displacement Deviation Analysis
Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran
Abstract:
Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise in end-of-line machine as well as vehicle-level measurements. With the application of the DDA method, along with standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.
Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality
Procedia PDF Downloads 146
9462 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based on the mean squared error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified with real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5 and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector containing 24 elements that holds the prediction of PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real 'big' data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean squared error than the currently used methods based on linear regression. What is more, the use of neural networks increased the determination coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
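A minimal sketch of the type of CNN described above, written against the current tf.keras API: hourly features enter as a tensor and a 24-element vector of hourly PM10 forecasts comes out; the layer sizes, feature count and the random training arrays are assumptions for illustration, not the evaluated models or the Airly data:

```python
import numpy as np
import tensorflow as tf

# Assumed input: 24 past hours x 6 features (PM2.5, PM10, temperature, wind,
# plus forecast temperature and wind); output: the next 24 hourly PM10 values.
n_hours, n_features = 24, 6

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_hours, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(24),                 # PM10 forecast, hour after hour
])
model.compile(optimizer="adam", loss="mse")    # mean squared error criterion

# Synthetic tensors standing in for the sensor history
x = np.random.rand(1000, n_hours, n_features).astype("float32")
y = np.random.rand(1000, 24).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```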
Procedia PDF Downloads 149
9461 The Performance Improvement of Solar Aided Power Generation System by Introducing the Second Solar Field
Authors: Junjie Wu, Hongjuan Hou, Eric Hu, Yongping Yang
Abstract:
Solar aided power generation (SAPG) technology has been proven to be an efficient way to make use of solar energy for power generation purposes. In an SAPG plant, a solar field consisting of parabolic solar collectors is normally used to supply solar heat in order to displace the high pressure/temperature extraction steam. To understand the performance of such a SAPG plant, a new simulation model was developed by the authors recently, in which the boiler was treated as a series of heat exchangers, unlike in other previous models. Through simulations using the new model, it was found that the outlet properties of the reheated steam, e.g. temperature, would decrease due to the introduction of the solar heat. These changes make the (lower stage) turbines work under off-design conditions. As a result, the whole plant's performance may not be optimal. In this paper, a second solar field is proposed to increase the inlet temperature of the steam to be reheated, in order to bring the outlet temperature of the reheated steam back to the design condition. A 600 MW SAPG plant was simulated as a case study using the new model to understand the impact of the second solar field on the plant performance. It was found in the study that the second solar field would improve the plant's performance in terms of cycle efficiency and solar-to-electricity efficiency by 1.91% and 6.01% respectively. The solar-generated electricity produced per aperture area under the design condition was 187.96 W/m², which was 26.14% higher than the previous design.
Keywords: solar-aided power generation system, off-design performance, coal-saving performance, boiler modelling, integration schemes
Procedia PDF Downloads 290
9460 Evaluating and Prioritizing the Effective Management Factors of Human Resources Empowerment and Efficiency in Manufacturing Companies: A Case Study on Fars’ Livestock and Poultry Manufacturing Companies
Authors: Mohsen Yaghmor, Sima Radmanesh
Abstract:
Rapid environmental changes have been threatening the survival of many organizations. The empowerment and productivity of human resources should be considered the most important issue in order to increase performance and ensure the survival of organizations. In this research, the management factors affecting the productivity and empowerment of human resources have been identified and briefly reviewed. Afterwards, answers were sought to the questions "What are the factors affecting the productivity and empowerment of human resources?" and "What is their priority order based on effective management of human resources in the Fars Poultry Complex?". A questionnaire was designed regarding the priorities and effectiveness of the identified factors. Six factors with the greatest effect on the organization were specified: individual characteristics, teaching, motivation, partnership management, authority or power submission, and job development. A questionnaire was then prepared for measuring the priority and effect of the specified factors; after collecting the data, a Cronbach's alpha coefficient of r = 0.792 was obtained, so the questionnaire can be said to have sufficient reliability. After analysing the six specified factors with the Friedman test, their effects on the organization were ranked, in order: individual characteristics, job development or enrichment, authority submission, partnership management, teaching, and motivation. Lastly, approaches have been introduced to increase the productivity of manpower.
Keywords: productivity, empowerment, enrichment, authority submission, partnership management, teaching, motivation
Procedia PDF Downloads 265
9459 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations
Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri
Abstract:
Distributed denial of service (DDoS) attack is one of the top-rated cyber threats at present. It runs down victim server resources, such as bandwidth and buffer size, by preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyze the attack model in the context of exhausting the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to form clusters of similar features. Finally, it applies mathematical correlation and the normal probability distribution to the clusters and analyses their behaviour to detect a DDoS attack. These systems not only interconnect small devices exchanging personal data, but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which might be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
Keywords: application attack, bandwidth, buffer correlation, DDoS distribution flooding intrusion layer, normal prevention probability size
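As one way to make the pipeline concrete, the sketch below clusters per-source traffic features (inter-arrival time and arrival rate) with a small hand-rolled self-organizing map and then applies Pearson correlation and a normal-distribution fit within each cluster; the feature choice, grid size and traffic numbers are illustrative assumptions, and the tiny NumPy SOM merely stands in for whatever SOM implementation the authors used:

```python
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(7)

# Per-source features: [mean inter-arrival time (s), arrival rate (req/s)] (illustrative)
legit  = np.column_stack([rng.normal(1.0, 0.2, 200), rng.normal(1.0, 0.2, 200)])
attack = np.column_stack([rng.normal(0.05, 0.01, 50), rng.normal(20.0, 2.0, 50)])
data = np.vstack([legit, attack])
data_n = (data - data.mean(0)) / data.std(0)          # normalised features

# A tiny 4x4 self-organizing map trained on the normalised features
grid = rng.normal(size=(4, 4, 2))
coords = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float).reshape(4, 4, 2)
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    sigma = 2.0 * (1 - epoch / 20) + 0.5
    for x in rng.permutation(data_n):
        d = np.linalg.norm(grid - x, axis=2)           # distance to every node
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
        grid += lr * h[..., None] * (x - grid)         # pull nodes toward the sample

# Assign each source to its winning node, then analyse each cluster
bmus = np.array([np.unravel_index(np.argmin(np.linalg.norm(grid - x, axis=2)), (4, 4))
                 for x in data_n])
labels = bmus[:, 0] * 4 + bmus[:, 1]
for c in np.unique(labels):
    inter_arrival, rate = data[labels == c, 0], data[labels == c, 1]
    if len(rate) > 2:
        r, _ = pearsonr(inter_arrival, rate)
        mu, sd = norm.fit(rate)
        # a cluster with tiny inter-arrival times and a very high fitted rate
        # is flagged as a potential flooding source
        print(f"cluster {c}: n={len(rate)}, r={r:+.2f}, rate ~ N({mu:.1f}, {sd:.1f})")
```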
Procedia PDF Downloads 224
9458 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System
Authors: Hao Wang, Shuguo Pan
Abstract:
The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment will be frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of the GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust cubature Kalman filter (CKF) algorithm for single-frequency GNSS/INS tightly coupled system ambiguity resolution. Firstly, the dynamic model and measurement model of a single-frequency GNSS/INS tightly coupled system were established, and the method for GNSS integer ambiguity resolution aided by INS was studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improved the CKF algorithm and realized an intelligent selection of robust strategies by judging the ill-conditioned matrix. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of separate, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
Keywords: GNSS/INS integrated navigation, ambiguity resolution, Cubature Kalman filter, Robust algorithm
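A minimal sketch of a single robust CKF measurement update in the spirit described above: the cubature-point construction is standard, but the IGG-type variance-inflation rule, the linear measurement model and all numbers are illustrative assumptions standing in for the paper's unspecified "intelligent selection of robust strategies":

```python
import numpy as np

def cubature_points(x, P):
    """The 2n cubature points of the CKF for state mean x and covariance P."""
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # n x 2n unit directions
    return x[:, None] + S @ xi                              # n x 2n points

def robust_ckf_update(x, P, z, h, r, k0=1.5, k1=3.0):
    """One CKF measurement update; measurements whose standardized residual
    exceeds the thresholds get their noise variance inflated (an IGG-type
    rule, used here only as an illustration of a robust strategy)."""
    X = cubature_points(x, P)
    Z = np.array([h(X[:, i]) for i in range(X.shape[1])]).T   # m x 2n
    z_hat = Z.mean(axis=1)
    dZ, dX = Z - z_hat[:, None], X - x[:, None]
    Pzz0 = dZ @ dZ.T / X.shape[1]
    Pxz  = dX @ dZ.T / X.shape[1]

    v = z - z_hat
    s = np.abs(v) / np.sqrt(np.diag(Pzz0) + r)               # standardized residuals
    infl = np.where(s <= k0, 1.0, np.where(s <= k1, s / k0, 1e9))
    Pzz = Pzz0 + np.diag(r * infl)                           # down-weight gross errors

    K = Pxz @ np.linalg.inv(Pzz)
    return x + K @ v, P - K @ Pzz @ K.T

# Demo with a simple linear measurement model; the last pseudo-range carries a gross error.
rng = np.random.default_rng(3)
H = rng.normal(size=(6, 4))
h = lambda s: H @ s
x, P = np.zeros(4), np.eye(4)
z = h(np.ones(4)) + np.r_[np.zeros(5), 25.0]
x_new, P_new = robust_ckf_update(x, P, z, h, r=np.full(6, 0.5))
print("state estimate:", np.round(x_new, 2))
```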
Procedia PDF Downloads 97
9457 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm
Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee
Abstract:
Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm based on ideal gas molecular movement. Ideal gas molecules disperse rapidly in different directions and cover all the space inside; this stems from the high speed of the molecules and the collisions between them and with the surrounding barriers. In the IGMM algorithm, to reach the optimal solutions, the initial population of gas molecules is randomly generated, and the governing equations related to the velocity of the gas molecules and the collisions between them are utilized. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method can perform well and competitively in the PBDD of structures.
Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification
Procedia PDF Downloads 277
9456 Using Non-Negative Matrix Factorization Based on Satellite Imagery for the Collection of Agricultural Statistics
Authors: Benyelles Zakaria, Yousfi Djaafar, Karoui Moussa Sofiane
Abstract:
Agriculture is fundamental and remains an important sector of the Algerian economy; based on traditional techniques and structures, it is generally oriented toward consumption. The collection of agricultural statistics in Algeria is done using traditional methods, which consist of investigating land use through surveys and field visits. These statistics suffer from problems such as poor data quality, the long delay between collection and final availability, and high cost compared to their limited use. The objective of this work is to develop a processing chain for a reliable inventory of agricultural land by developing and implementing a new method of information extraction. Indeed, this methodology allowed us to combine remote sensing data and field data to collect statistics on the areas of different land uses. The contribution of remote sensing to the improvement of agricultural statistics, in terms of area, has been studied in the wilaya of Sidi Bel Abbes. It is in this context that we applied a method for extracting information from satellite images. This method is called non-negative matrix factorization (NMF), which does not consider the pixel as a single entity but looks for the components within the pixel itself. The results obtained by the application of NMF were compared with field data and with the results obtained by the maximum likelihood method. We observed close agreement between the most important NMF results and the field data. We believe that this method of extracting information from satellite data leads to interesting results for different types of land use.
Keywords: blind source separation, hyper-spectral image, non-negative matrix factorization, remote sensing
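A minimal sketch of NMF-based unmixing in the spirit described above, using scikit-learn on a synthetic scene; the endmember spectra, number of components and pixel area are illustrative assumptions, not the Sidi Bel Abbes data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)

# Synthetic stand-in for a multispectral scene: each pixel is a non-negative
# mixture of a few land-cover endmember spectra (abundances x spectra).
n_pixels, n_bands, n_classes = 2000, 8, 3
endmembers = rng.uniform(0.1, 1.0, size=(n_classes, n_bands))
abundances = rng.dirichlet(np.ones(n_classes), size=n_pixels)
X = abundances @ endmembers + 0.01 * rng.random((n_pixels, n_bands))

# NMF decomposes each pixel into components rather than treating it as a single
# entity: W holds per-pixel abundances, H holds the recovered component spectra.
model = NMF(n_components=n_classes, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)
H = model.components_

# Dominant component per pixel -> proxy land-use map; areas follow from pixel counts
labels = W.argmax(axis=1)
pixel_area_ha = 0.09                  # e.g. a 30 m x 30 m pixel (assumed)
for c in range(n_classes):
    print(f"class {c}: {np.sum(labels == c) * pixel_area_ha:.1f} ha")
```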
Procedia PDF Downloads 423
9455 Effect of Plant Growth Regulator on Vegetative Growth and Yield Components of Winter Wheat under Different Levels of Irrigation
Authors: Mohammed Ahmed Alghamdi
Abstract:
Field experiments were carried out to investigate the effect of a plant growth regulator on the vegetative growth and yield components of reduced-height isogenic lines of the wheat (Triticum aestivum L.) cultivar Mercia. The field experiment compared the growth regulator response of seven isogenic lines of Mercia. The growth regulator reduced plant height significantly in all lines. It also decreased total dry matter and grain yield, with the greatest reduction generally for the control and Rht8 lines; Rht1 was the least affected. There were few significant effects of the growth regulator on gas exchange and chlorophyll fluorescence, but the trend was for greater values with the growth regulator. In the field experiment, a rate of 2.0 l ha-1 applied just before the third node detectable stage under non-water-stressed and water-stressed conditions gave slight increases in yield of up to 14%, except for line Rht10, which increased significantly under non-stressed conditions. In the second glasshouse experiment, a rate of 2.5 l ha-1 applied at the start of stem elongation under 30% FC and 100% FC gave reductions in yield of up to 16% for the growth regulator and 55% under water stress. In the field experiment, rates of 2.5 and 3.0 l ha-1 applied at the start of stem elongation gave reductions in yield of up to 20%, mainly through individual seed weight. In the final glasshouse experiment, rates of 2.5 and 3.0 l ha-1 applied at the 6-leaves-unfolded and first-node-detectable stages both reduced grain yield.
Keywords: growth regulator, irrigation, isogenic lines, yield, winter wheat
Procedia PDF Downloads 459
9454 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
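A minimal sketch of the best-performing configuration reported above (ReLU, three hidden layers of 100 neurons, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping), written against the current tf.keras API rather than TensorFlow 1.13; the random arrays merely stand in for the resized, flattened X-ray images and their labeled Cobb angles:

```python
import numpy as np
import tensorflow as tf

# Stand-ins for the 481 training images (500 x 187, flattened, scaled to [0, 1])
# and their labeled Cobb angles in degrees.
x_train = np.random.rand(481, 500 * 187).astype("float32")
y_train = np.random.uniform(10, 60, size=(481, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(500 * 187,)),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1),                     # predicted Cobb angle, degrees
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])

early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=10, epochs=100,
          validation_split=0.2, callbacks=[early_stop], verbose=0)
```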
Procedia PDF Downloads 129
9453 A Three Elements Vector Valued Structure’s Ultimate Strength-Strong Motion-Intensity Measure
Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar
Abstract:
This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than that of the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the effects of the spectral ordinates, up to the effective period (2T_1), on the intensity measure are taken into account. The methodology permits meeting the hazard-levelled target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees is developed to calculate the collapse capacities of the 8 archetype RC structures having 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field set presented by FEMA is used in a series of nonlinear analyses. 8 linear relationships are developed for the 8 structures, leading to correlation coefficients of up to 0.93. A near-field collapse capacity prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, leading to good agreement. Implementation of the proposed equation for four archetype RC structures demonstrated different collapse capacities at near-field sites compared to those of FEMA. The reasons for the differences are believed to lie in the accounting for spectral shape effects.
Keywords: collapse capacity, fragility analysis, spectral shape effects, IDA method
Procedia PDF Downloads 2399452 Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme
Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara
Abstract:
In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical systems. However, practical electric propulsion systems, e.g., ion engines, have a problem of short lifetime due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster is proposed as a long-lifetime electric thruster with no electrodes in direct contact with the plasma. In this system, both generation and acceleration of a dense plasma are executed by antennas from the outside of a discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster has two important processes. First, we generate a dense source plasma using a helicon wave, with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, respectively, applied from the outside of the discharge using a radio frequency (RF) antenna. The helicon plasma source can provide a high density (~10¹⁹ m⁻³), a high ionization ratio (up to several tens of percent), and a high particle generation efficiency. Second, in order to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz arising from the product of the induced azimuthal current jθ and the static radial magnetic field Br, i.e., fz = jθ × Br. The HEAT project has proposed several kinds of electrodeless acceleration schemes, and in our particular case, a Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to maintain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, the RMF coils are expected to generate jθ through a nonlinear effect, as follows. First, the rotating magnetic field Bω is generated by two pairs of RMF coils driven with AC currents that have a phase difference of 90 degrees between the pairs. Due to Faraday's law, an axial electric field is induced. Second, an axial current is generated through the effects of electron-ion and electron-neutral collisions via Ohm's law. Third, an azimuthal electric field is generated by the nonlinear term, together with a retarding torque produced again by the collision effects. Then, the azimuthal current jθ is generated as jθ = −nₑ e r · 2π fRMF. Finally, the axial Lorentz force fz for plasma acceleration is generated. Here, jθ is proportional to nₑ and to the RMF coil current frequency fRMF when Bω fully penetrates the plasma. Our previous study achieved a 19% increase in ion velocity using a 5 MHz, 50 A RMF coil power supply. In this presentation, we will show the improvement of the ion velocity obtained with a lower frequency and higher current supplied by the RMF power supply. In conclusion, helicon high-density plasma production and electromagnetic acceleration by the RMF scheme under an electrodeless condition have been successfully demonstrated.Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field
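To illustrate the force-generation relations quoted above, the short sketch below evaluates jθ = −nₑ e r · 2π fRMF and the axial force density fz = jθ Br for representative numbers; the radial position and radial field strength are assumptions chosen only to show the scaling, not measured values from this experiment, and the sign of fz depends on the adopted field and current directions.

```python
# Back-of-the-envelope evaluation of the RMF acceleration relations above.
# n_e matches the order of magnitude quoted in the abstract; r and B_r are
# assumed values for illustration only.
import math

e = 1.602e-19       # elementary charge [C]
n_e = 1e19          # electron density [m^-3]
r = 0.03            # radial position inside the discharge tube [m] (assumed)
f_rmf = 5e6         # RMF coil current frequency [Hz]
B_r = 5e-3          # static radial magnetic field [T] (assumed)

j_theta = -n_e * e * r * 2 * math.pi * f_rmf   # azimuthal current density [A/m^2]
f_z = j_theta * B_r                            # axial Lorentz force density [N/m^3]

print(f"j_theta ~ {j_theta:.2e} A/m^2, f_z ~ {f_z:.2e} N/m^3")
```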
Procedia PDF Downloads 2619451 A Case Study on the Condition Monitoring of a Critical Machine in a Tyre Manufacturing Plant
Authors: Ramachandra C. G., Amarnath. M., Prashanth Pai M., Nagesh S. N.
Abstract:
A machine's performance level drops over time due to the wear and tear of its components. Early detection of an emerging fault is therefore vital to maintaining uninterrupted production in a plant. Maintenance is an activity that helps keep the machine's performance at an anticipated level, thereby ensuring the availability of the machine to perform its intended function. At present, a number of modern maintenance techniques are available, such as preventive maintenance, predictive maintenance, condition-based maintenance, total productive maintenance, etc. Condition-based maintenance, or condition monitoring, is one such technique in which the machine's condition or health is checked by measuring parameters such as sound level, temperature, velocity, displacement, and vibration. It can recognize most of the factors restraining the usefulness and efficacy of the overall manufacturing unit. This research work was conducted on a batch-off mill in a tyre production unit located in the southern Karnataka region. The health of the mill was assessed using vibration amplitude as the measured parameter. Most commonly, the vibration level is assessed at various points on the machine bearings. The normal or standard level is fixed using reference materials such as manuals or catalogs supplied by the manufacturers and by referring to vibration standards. The Rio-Vibro meter was placed at different locations on the batch-off mill to record the vibration data. The collected data were analyzed to identify the malfunctioning components in the batch-off mill, and corrective measures are suggested.Keywords: availability, displacement, vibration, rio-vibro, condition monitoring
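As a simple illustration of the comparison step described above (not the plant's actual procedure, readings, or limits), the sketch below flags bearing locations whose measured vibration amplitude exceeds an assumed reference level taken from a manufacturer's manual or a vibration standard.

```python
# Illustrative condition-monitoring check: compare vibration amplitudes
# recorded at several bearing locations against an assumed reference level.
# Both the readings and the threshold are placeholders, not plant data.
reference_level = 4.5   # assumed allowable vibration amplitude [mm/s RMS]

readings = {            # hypothetical measurement points on the mill
    "drive-end bearing": 2.8,
    "non-drive-end bearing": 3.1,
    "gearbox input bearing": 6.2,
    "gearbox output bearing": 4.9,
}

for location, amplitude in readings.items():
    status = "ALERT - inspect" if amplitude > reference_level else "normal"
    print(f"{location}: {amplitude:.1f} mm/s -> {status}")
```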
Procedia PDF Downloads 919450 Behavioral and Electroantennographic Responses of the Tea Shot Hole Borer, Euwallacea fornicatus, Eichhoff (Scolytidae: Coleoptera) to Volatiles Compounds of Montanoa bipinnatifida (Compositae: Asteraceae) and Development of a Kairomone Trap
Authors: Sachin Paul James, Selvasundaram Rajagopal, Muraleedharan Nair, Babu Azariah
Abstract:
The shot hole borer (SHB), Euwallacea fornicatus (= Xyleborus fornicatus) (Scolytidae: Coleoptera), is one of the major pests of tea in southern India and Sri Lanka. The partially dried cut stem of a jungle plant, Montanoa bipinnatifida (C.Koch) (Compositae: Asteraceae), has been reported to attract shot hole borer beetles in the field. Collection, isolation, identification, and quantification of the volatiles emitted from the partially dried cut stems of M. bipinnatifida using dynamic headspace sampling and GC-MS revealed the presence of seven compounds, viz. α-pinene, β-phellandrene, β-pinene, D-limonene, trans-caryophyllene, iso-caryophyllene, and germacrene-D. Behavioural bioassays using an electroantennogram (EAG) and a wind tunnel showed that, among these identified compounds, only α-pinene, trans-caryophyllene, β-phellandrene, and germacrene-D evoked significant behavioural responses, and the maximum response was obtained with a specific blend of these four compounds at a ratio of 10:1:0.1:3. Field trapping experiments with this blend, conducted in an SHB-infested field using multiple-funnel traps, further proved the efficiency of the blend, with a mean trap catch of 176.7 ± 13.1 beetles. Mass trapping studies in the field helped to develop a kairomone trap for the management of SHB in the tea fields of southern India.Keywords: electroantennogram, kairomone trap, Montanoa bipinnatifida, tea shot hole borer
Procedia PDF Downloads 223