Search results for: field emission electric propulsion
6533 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photon Beams in a Varian Linear Accelerator
Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain
Abstract:
Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of the Varian Golden Beam Data to the measured data of 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires an extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with the measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedge fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D-scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The online available Varian Golden Beam Data were compared with the measured data to evaluate their accuracy for the commissioning of the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. In PDDs, the deviation was larger at deeper depths than at shallower depths. Profiles showed the same trend, with deviation increasing at larger field sizes and greater depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance and can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
Keywords: percent depth dose, flatness, symmetry, golden beam data
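For reference, the point-by-point percentage deviation conventionally used in such comparisons is (a standard convention assumed here, not spelled out in the abstract):

\[ \Delta(\%) = 100 \times \frac{D_{\text{measured}} - D_{\text{golden}}}{D_{\text{golden}}} \]

with D the dose at a given depth or off-axis position.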
Procedia PDF Downloads 489
6532 Assessment of Air Quality Around Western Refinery in Libya: Mobile Monitoring
Authors: A. Elmethnani, A. Jroud
Abstract:
This coastal crude oil refinery is situated north of a big city west of Tripoli; the city could therefore be highly prone to downwind refinery emissions, as the NNE wind direction prevails through most seasons of the year. Furthermore, due to the absence of an air quality monitoring network and the scarce emission data available for the neighboring community, nearby residents have serious worries about the impacts of the oil refining operations on local air quality. In response to these concerns, a short-term survey was performed over three consecutive days, in which a semi-continuous mobile monitoring approach was developed; the monitoring station (AeroQual AQM 65 compact) was mounted on a vehicle to move quickly between locations, and measurements averaging 60-second readings over 10 minutes were then taken at each fixed sampling point. The downwind ambient concentrations of CO, H₂S, NOₓ, NO₂, SO₂, PM₁, PM₂.₅, PM₁₀, and TSP were measured at carefully chosen sampling locations, ranging from 200 m near the fence line, passing through the city center, up to 4.7 km east, to attain the best spatial coverage. Results showed worrying levels of PM₂.₅, PM₁₀, and TSP at one sampling location in the city center, southeast of the refinery site, with means of 16.395 μg/m³, 33.021 μg/m³, and 42.426 μg/m³, respectively, which could be attributed to road traffic. No significant concentrations were detected for the other pollutants of interest over the study area: levels observed for CO, SO₂, H₂S, NOₓ, and NO₂ did not exceed 1.707 ppm, 0.021 ppm, 0.134 ppm, 0.4582 ppm, and 0.0018 ppm, respectively, again at the same sampling locations. Although it was not possible to compare the results with the Libyan air quality standards due to the difference in averaging time periods, the technique was adequate as a baseline air quality screening procedure. Overall, the findings primarily suggest modeling the dispersion of the refinery emissions to assess the likely impact and spatial-temporal distribution of air pollutants.
Keywords: air quality, mobile monitoring, oil refinery
Procedia PDF Downloads 96
6531 Responsibility to Protect in Practice: Libya and Syria
Authors: Guram Esakia, Giorgi Goguadze
Abstract:
The following paper overviews the concept of R2P, a new dimension in the field of International Relations. The paper contains a general description of the concept, its advantages and disadvantages. We also compare R2P with "humanitarian intervention", trying to draw a clear division between these two approaches to conflict resolution. R2P in real action is also discussed: successful in Libya and, so far, failed in Syria. The essay does not claim to be part of a scientific chain and is based on personal judgment as well as on information gathered from various scholars and UN resolutions.
Keywords: the concept of R2P, humanitarian intervention, Libya, Syria
Procedia PDF Downloads 278
6530 Effect of Cement Amount on California Bearing Ratio Values of Different Soil
Authors: Ayse Pekrioglu Balkis, Sawash Mecid
Abstract:
Due to the continued growth and rapid development of road construction worldwide, road sub-layers consist of soil layers; therefore, identification and recognition of soil type and soil behavior under different conditions help us to select soil according to specifications and engineering characteristics, and also, if necessary, to stabilize the soil and treat undesirable properties by adding materials such as bitumen, lime, cement, etc. If the soil beneath the road is not prepared according to the standards, construction will need more time. In this case, a large part of the soil should be removed, transported, and sometimes deposited. Then purchased sand and gravel are transported to the site, and the full depth is filled and compacted. Stabilization by cement or other treatments gives an opportunity to use the existing soil as a base material instead of removing it and purchasing and transporting better fill materials. Classification of soil according to the AASHTO system and USCS helps engineers to anticipate soil behavior and select the best treatment method. In this study, soil classification and the relation between soil classification and stabilization method are discussed; cement stabilization at different percentages was selected for soil treatment based on NCHRP. There are different parameters to define the strength of soil; in this study, CBR is used. Cement at 0%, 3%, 7%, and 10% was added to the soil to evaluate the effect of added cement on the CBR of the treated soil. Implementation of the stabilization process with different cement contents helps engineers select an economic cement amount for the stabilization process according to project specifications and characteristics. Stabilization at optimum moisture content (OMC), and the effect of mixing rate on soil strength, were examined in the laboratory and in field construction operations to see the improvement in strength and plasticity. Cement stabilization is quicker than a universal method such as removing and replacing field soils. Cement addition increases the CBR values of different soil types by 22-69%.
Keywords: California Bearing Ratio, cement stabilization, clayey soil, mechanical properties
Procedia PDF Downloads 397
6529 An Analysis of Eco-efficiency and GHG Emission of Olive Oil Production in Northeast of Portugal
Authors: M. Feliciano, F. Maia, A. Gonçalves
Abstract:
The olive oil production sector plays an important role in the Portuguese economy. It has grown substantially over the last decade, increasing its weight in overall national exports. International market penetration for Mediterranean traditional products is increasingly demanding, especially in the Northern European markets, where consumers are looking for more sustainable products. To support this growing demand, this study addresses olive oil production from the environmental and eco-efficiency perspectives. The analysis considers two consecutive product life cycle stages: olive tree farming and olive oil extraction in mills. Addressing olive farming, data collection covered two different organizations: a middle-size farm (~12 ha) (F1) and a large-size farm (~100 ha) (F2). Results from both farms show that olive collection activities are responsible for the largest amounts of greenhouse gas (GHG) emissions. In these activities, the estimated carbon footprint per kg of olives was higher in F2 (188 g CO2e/kg olive) than in F1 (148 g CO2e/kg olive). Considering olive oil extraction, two different mills were considered: one using a two-phase system (2P) and the other a three-phase system (3P). Results from the study of the two mills show a much higher use of water in 3P. Energy intensity (EI) is similar in both mills. When evaluating the GHG generated, two conditions are considered: a biomass-neutral condition resulting in a carbon footprint higher in 3P (184 g CO2e/L olive oil) than in 2P (92 g CO2e/L olive oil); and a non-neutral biomass condition in which 2P increases its carbon footprint to 273 g CO2e/L olive oil. When addressing the carbon footprint of possible combinations among the studied subsystems, results suggest that olive harvesting is the major source of GHG emissions.
Keywords: carbon footprint, environmental indicators, farming subsystem, industrial subsystem, olive oil
Procedia PDF Downloads 287
6528 Downscaling GRACE Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation
Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth's gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used for estimating terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface and groundwater storage (GWS) assessments. Yet, the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities. This is largely due to the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information that is needed by local-scale decision-makers while offering data for regions that lack adequate in situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended the spectral combination theory to simultaneously downscale GRACE spatially from its coarse 3° resolution to 0.25° and temporally from monthly to daily resolution. This method combines the monthly gravity field solutions of GRACE and daily hydrological model products, in the form of both low- and high-frequency signals, to produce high spatiotemporal resolution TWSA and GWSA products. The main contribution and originality of this study is to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties to form the estimator in the spectral domain. Therefore, it is expected that the downscaled products reach acceptable accuracy.
Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage
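As a schematic illustration of the spectral-domain estimator described (the weighting below is a generic inverse-variance form assumed for exposition, not the paper's exact derivation), a spectral combination merges the two data sources degree by degree:

\[ \hat{F}_n = b_n\, F_n^{\mathrm{GRACE}} + (1 - b_n)\, F_n^{\mathrm{hydro}}, \qquad b_n = \frac{\sigma_n^2(\mathrm{hydro})}{\sigma_n^2(\mathrm{GRACE}) + \sigma_n^2(\mathrm{hydro})} \]

where F_n are the degree-n spherical harmonic components of the signal and σ_n² the corresponding error degree variances, so each source dominates at the degrees where its error is smallest.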
Procedia PDF Downloads 83
6527 Quantum Dot – DNA Conjugates for Biological Applications
Authors: A. Banerjee, C. Grazon, B. Nadal, T. Pons, Y. Krishnan, B. Dubertret
Abstract:
Quantum dots (QDs) have emerged as novel fluorescent probes for biomedical applications. The photophysical properties of QDs, such as broad absorption, narrow emission spectrum, reduced blinking, and enhanced photostability, make them advantageous over organic fluorophores. However, for some biological applications, QDs need to be first targeted to specific intracellular locations. In parallel, the base-pairing properties and biocompatibility of DNA have been extensively used for biosensing, targeting, and intracellular delivery of numerous bioactive agents. The combination of the photophysical properties of QDs and the targetability of DNA has yielded fluorescent, stable, and targetable nanosensors. QD-DNA conjugates have been used in drug delivery, siRNA, intracellular pH sensing, and several other applications, and continue to be an active area of research. In this project, a novel method to synthesise QD-DNA conjugates and their applications in bioimaging are investigated. QDs are first solubilized in water using a thiol-based amphiphilic co-polymer and then conjugated to amine-functionalized DNA using a heterobifunctional linker. The conjugates are purified by size exclusion chromatography and characterized by UV-Vis absorption and fluorescence spectroscopy, electrophoresis, and microscopy. Parameters that influence the conjugation yield, such as reducing agents, salt excess, and pH, have been investigated in detail. In optimized reaction conditions, up to 12 single-stranded DNA (15-mer length) can be conjugated per QD. After conjugation, the QDs retain their colloidal stability and high quantum yield, and the DNA is available for hybridization. The reaction has also been successfully tested on QDs emitting different colors and on gold nanoparticles, and is therefore highly generalizable. After extensive characterization and robust synthesis of QD-DNA conjugates in vitro, the physical properties of these conjugates in the cellular milieu are being investigated. Modification of the QD surface with DNA appears to remarkably alter the fate of QDs inside cells and can have potential implications for therapeutic applications.
Keywords: bioimaging, cellular targeting, drug delivery, photostability
Procedia PDF Downloads 423
6526 Using Variation Theory in a Design-based Approach to Improve Learning Outcomes of Teachers' Use of Video and Live Experiments in Swedish Upper Secondary School
Authors: Andreas Johansson
Abstract:
Conceptual understanding needs to be grounded in observation of physical phenomena, experiences, or metaphors. Observation of physical phenomena using demonstration experiments has a long tradition within physics education, and students need to develop mental models to relate the observations to concepts from scientific theories. This study investigates how live and video experiments involving an acoustic trap, used to visualize particle-field interaction, field properties, and particle properties, can help develop students' mental models, and how they can be used differently to realize their potential as teaching tools. Initially, they were treated as analogs and the lesson designs were kept identical. With a design-based approach, the experimental and video designs, as well as best practices for each teaching tool, were then developed in iterations. Variation theory was used as a theoretical framework to analyze the planned and realized patterns of variation and invariance in order to explain learning outcomes, as measured by a pre-posttest consisting of conceptual multiple-choice questions inspired by the Force Concept Inventory and the Force and Motion Conceptual Evaluation. Interviews with students and teachers were used to inform the design of experiments and videos in each iteration. The lesson designs and the live and video experiments have been developed to help teachers improve student learning and make school physics more interesting by involving experimental setups that are usually out of reach, and to bridge the gap between what happens in classrooms and in science research. As students' conceptual knowledge also raises their interest in physics, the aim is to increase their chances of pursuing careers in science, technology, engineering, or mathematics.
Keywords: acoustic trap, design-based research, experiments, variation theory
Procedia PDF Downloads 83
6525 Incorporating Circular Economy into Passive Design Strategies in Tropical Nigeria
Authors: Noah G. Akhimien, Eshrar Latif
Abstract:
The natural environment is in urgent need of rescue due to dilapidation and the recession of resources. Passive design strategies have proven to be one of the effective ways to reduce CO2 emissions and to improve building performance. On the other hand, there is a huge drop in material availability due to a poor recycling culture. Consequently, building waste poses an environmental hazard due to unrecycled building materials from construction and deconstruction. Buildings are seen as material banks for a circular economy; therefore, incorporating a circular economy into passive housing will not only safeguard the climate but also improve resource efficiency. The study focuses on incorporating a circular economy into passive design strategies for an affordable, energy- and resource-efficient residential building in Nigeria. Carbon dioxide (CO2) concentration is still on the increase, as buildings are responsible for a significant amount of this emission globally. Therefore, prompt measures need to be taken to combat the effects of global warming and associated threats. Nigeria is rapidly growing in human population; resources, on the other hand, have receded greatly, and there is an abrupt need for recycling, even in the built environment. It is necessary that Nigeria responds to these challenges effectively and efficiently, considering building resources and energy. Passive design strategies were assessed using simulations to obtain qualitative and quantitative data, which were related to case studies relevant to the Nigerian climate. Building materials were analysed using the ReSOLVE model in order to explore possible recycling phases. This provided relevant information and strategies to illustrate the possibility of a circular economy in passive buildings. The study offers an alternative approach to reworking an economy along ecological lines in passive housing, by closing material loops in a circular economy.
Keywords: building, circular, efficiency, environment, sustainability
Procedia PDF Downloads 253
6524 Numerical Modeling of Turbulent Natural Convection in a Square Cavity
Authors: Mohammadreza Sedighi, Mohammad Said Saidi, Hesamoddin Salarian
Abstract:
A numerical study has been performed to investigate the effect of using different turbulence models on the natural convection flow field and temperature distributions in a partially heated square cavity, compared to a benchmark. The temperature of the right vertical wall is lower than that of the heater, while the other walls are insulated. Commercial CFD codes were used for the modeling. The standard k-ω model provided good agreement with the experimental data.
Keywords: Buoyancy, Cavity, CFD, Heat Transfer, Natural Convection, Turbulence
Procedia PDF Downloads 341
6523 Study and Analysis of Metallic Glasses for Biomedical Applications: From Soft to Bone Tissue Engineering
Authors: A. Monfared, S. Faghihi
Abstract:
Metallic glasses (MGs) are newcomers in the field of metals that show great potential for soft and bone tissue engineering due to the amorphous structure that endows them with unique properties. Up to now, various MGs based on Ti, Zr, Mg, Zn, Fe, Ca, and Sr, in the form of ribbons, bulk, thin films, and powders, have been investigated for biomedical purposes. This article reviews the compositions and biomedical properties of MGs and analyzes the results in order to guide new approaches and the future development of MGs.
Keywords: metallic glasses, biomaterials, biocompatibility, biocorrosion
Procedia PDF Downloads 214
6522 Time Domain Dielectric Relaxation Microwave Spectroscopy
Authors: A. C. Kumbharkhane
Abstract:
Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a term used to describe a technique for observing the time-dependent response of a sample after application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; therefore, the TDRMS technique covers an extensive range of dynamical processes. The corresponding frequencies range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established the TDR technique in our laboratory, providing information on dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying dielectric relaxation behaviour in liquid systems to understand the role of hydrogen bonding. The intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied, and the theoretical model used to explain the experimental results will be discussed.
Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time
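For context, the simplest single-relaxation-time description of such a complex permittivity spectrum is the Debye model (a standard textbook form, not specific to this work):

\[ \varepsilon^*(\omega) = \varepsilon_\infty + \frac{\varepsilon_s - \varepsilon_\infty}{1 + i\omega\tau} \]

where ε_s and ε_∞ are the static and high-frequency permittivities and τ is the relaxation time; hydrogen-bonded liquids often require a distribution of relaxation times (e.g., Cole-Cole or Cole-Davidson forms) rather than a single τ.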
Procedia PDF Downloads 336
6521 Economical Transformer Selection Implementing Service Lifetime Cost
Authors: Bonginkosi A. Thango, Jacobus A. Jordaan, Agha F. Nnachi
Abstract:
In this day and age, there is widespread concern among governments across the globe to shield the environment from greenhouse gases, which absorb infrared radiation. As a result, solar photovoltaic (PV) electricity has been an expeditiously growing renewable energy source and will eventually play a prominent role in global energy generation. The selection and purchasing of energy-efficient transformers that meet the operational requirements of solar photovoltaic generation plants then become part of the Independent Power Producers' (IPPs') investment plan of action. Taking these into account, this paper proposes a procedure that puts into effect the intricate financial analysis necessary to precisely evaluate the transformer service-lifetime no-load and load loss factors. This procedure correctly incorporates the transformer service-lifetime loss factors, resulting from a solar PV plant's sporadic generation profile and the related levelized cost of electricity, into the computation of the transformer's total ownership cost. The results are then critically compared with the conventional transformer total ownership cost, unaccompanied by the emission costs, and demonstrate the significance of the sporadic energy generation nature of the solar PV plant on the total ownership cost. The findings indicate that the latter plays a crucial role for developers and Independent Power Producers (IPPs) in making the purchase decision during a tender bid, where competing offers from different transformer manufacturers are evaluated. Additionally, a sensitivity analysis of the different factors involved in the transformer service-lifetime cost is carried out, examining factors including the levelized cost of electricity, the solar PV plant's generation modes, and the loading profile.
Keywords: solar photovoltaic plant, transformer, total ownership cost, loss factors
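For orientation, the conventional total ownership cost formula that such procedures extend is (the standard loss-evaluation form; the paper's contribution concerns how the loss factors are derived for a PV generation profile):

\[ \mathrm{TOC} = \mathrm{BP} + A \cdot P_0 + B \cdot P_k \]

where BP is the transformer bid price, P₀ the no-load loss (W), P_k the rated load loss (W), and A and B the capitalized costs per watt of no-load and load losses over the service lifetime; A and B depend on the energy price and, here, on the plant's sporadic loading.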
Procedia PDF Downloads 130
6520 Numerical Simulation of Convective and Transport Processes in the Nocturnal Atmospheric Surface Layer
Authors: K. R. Sreenivas, Shaurya Kaushal
Abstract:
After sunset, under calm and clear-sky nocturnal conditions, the air layer near the surface containing aerosols cools through radiative processes to the upper atmosphere. Due to this cooling, the surface air-layer temperature can fall 2-6 degrees C lower than the ground-surface temperature. This unstable convection layer is capped, on top, by a stable inversion boundary layer. Radiative divergence, along with convection within the surface layer, governs the vertical transport of heat and moisture. Microphysics in this layer has implications for the occurrence and growth of the fog layer. This particular configuration, featuring a convective mixed layer beneath a stably stratified inversion layer, exemplifies a classic case of penetrative convection. In this study, we conduct numerical simulations of the penetrative convection phenomenon within the nocturnal atmospheric surface layer and elucidate its relevance to the dynamics of fog layers. We employ field and laboratory measurements of aerosol number density to model the strength of the radiative cooling. Our analysis encompasses horizontally averaged vertical profiles of temperature, density, and heat flux. The energetic incursion of air from the mixed layer into the stable inversion layer across the interface results in entrainment and growth of the mixed layer, the modeling of which is the key focus of our investigation. In our research, we ascertain the appropriate length scale to employ in the Richardson number correlation, which allows us to estimate the entrainment rate and model the growth of the mixed layer. Our analysis of the mixed layer and the entrainment zone reveals a close alignment with previously reported laboratory experiments on penetrative convection. Additionally, we demonstrate how aerosol number density influences the growth or decay of the mixed layer. Furthermore, our study suggests that the presence of fog near the ground surface can induce extensive vertical mixing, a phenomenon observed in field experiments.
Keywords: inversion layer, penetrative convection, radiative cooling, fog occurrence
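For readers unfamiliar with the correlation mentioned, laboratory studies of penetrative convection commonly express the entrainment rate through a bulk Richardson number (generic notation assumed here, not necessarily the paper's):

\[ Ri_* = \frac{\Delta b \, h}{w_*^2}, \qquad \frac{w_e}{w_*} \propto Ri_*^{-1} \]

where Δb is the buoyancy jump across the interface, h the mixed-layer depth, w_* the convective velocity scale, and w_e = dh/dt the entrainment velocity; the choice of length scale entering Ri_* is precisely the issue the study addresses.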
Procedia PDF Downloads 72
6519 A Levinasian Perspective on the Field of Applied Ethics
Authors: Payman Tajalli, Steven Segal
Abstract:
Applied ethics is the area of ethics looked upon most favorably as the most appropriate and useful for educational purposes; after all, if ethics found no application, would any investment of time, effort, and finance by educational institutions be warranted? The current approaches to ethics in business and management often entail appealing to various types of moral theories, and to this end almost every major philosophical approach has been enlisted. In this paper, we look at ethics through the philosophy of Emmanuel Levinas to argue that since ethics is 'first philosophy', it can be neither rule-based nor rule-governed, not something that can be worked out first and then applied to a given situation; hence the overwhelming emphasis on 'applied ethics' as a field of study in business and management education is unjustified. True ethics is not applied ethics. This assertion does not mean that teaching ethical theories and philosophies should be abandoned; rather, it is the acceptance of the fact that an increase in cognitive awareness of such theories and ethical models and frameworks, or the mastering of techniques and procedures for ethical decision making, will not effect the desired ethical transformation in our students. Levinas himself argued for an ethics without a foundation, not one that requires us to go 'beyond good and evil', as Nietzsche contended, but rather an ethics which necessitates going 'before good and evil'. Such an ethics does not provide us with a set of methods or techniques or a decision tree that enables us to determine the rightness of an action and what we ought to do; rather, it is about a way of being, an ethical posture or approach one takes in the inter-subjective relationship with the other, that holds the promise of ethical conduct. Ethics in this Levinasian sense is one of infinite and unconditional responsibility for the other person in relationship, an ethics which is not subject to negotiation, calculation, or reciprocity, and as such it could neither be applied nor taught through conventional pedagogy, with its focus on knowledge transfer from teacher to student; to this end, Levinas offers a non-maieutic, non-conventional approach to pedagogy. The paper concludes that, from a Levinasian perspective on ethics and education, we may need to guide our students to move away from the clear and objective professionalism of management and applied ethics towards a murkier individual spiritualism. For Levinas, this is 'the Copernican revolution' in ethics.
Keywords: business ethics, ethics education, Levinas, maieutic teaching, ethics without foundation
Procedia PDF Downloads 323
6518 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O (for Lustre), in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node, a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
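A minimal sketch of the two startup-read strategies described above, in Python with mpi4py and NumPy (file names, dtypes, and counts are illustrative assumptions; the actual DSEM code differs):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
COUNT = 1024  # int32 values per rank; illustrative

# Strategy 1 (Lustre): collective parallel MPI I/O, every rank reads its own slice.
fh = MPI.File.Open(comm, "prefile.bin", MPI.MODE_RDONLY)
buf = np.empty(COUNT, dtype=np.int32)
fh.Read_at_all(rank * COUNT * 4, buf)  # byte offset = rank * COUNT * sizeof(int32)
fh.Close()

# Strategy 2 (GPFS): one reader per node, then non-blocking sends within the node,
# so no startup traffic crosses the cluster switches.
node = comm.Split_type(MPI.COMM_TYPE_SHARED)  # node-local communicator
nrank, nsize = node.Get_rank(), node.Get_size()
if nrank == 0:
    # Node-specific pre-file holding nsize * COUNT int32 values (name is illustrative)
    data = np.fromfile("prefile_node.bin", dtype=np.int32).reshape(nsize, COUNT)
    reqs = [node.Isend(data[r], dest=r, tag=0) for r in range(1, nsize)]
    buf = data[0].copy()
    MPI.Request.Waitall(reqs)
else:
    buf = np.empty(COUNT, dtype=np.int32)
    node.Recv(buf, source=0, tag=0)
```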
Procedia PDF Downloads 133
6517 Uncovering Geometrical Ideas in Weaving: An Ethnomathematical Approach to School Pedagogy
Authors: Jaya Bishnu Pradhan
Abstract:
Mat weaving is one of the common activities performed in different communities, generally in the rural parts of Nepal. Mat weavers practice mathematical ideas and concepts implicitly in order to perform their job. This study is intended to uncover the mathematical ideas embedded in mat weaving that can help teachers and students in the teaching and learning of school geometry. An ethnographic methodology was used to uncover and describe the beliefs, values, understanding, perceptions, and attitudes of the mat weavers towards mathematical ideas and concepts in the process of mat weaving. A total of 4 mat weavers, two mathematics teachers, and 12 students from grade levels 6-8, who regularly participate in weaving, were selected for the study. The whole process of mat weaving was observed in a natural setting. Classroom observations and in-depth interviews were conducted with the participants with the help of interview guidelines and an observation checklist. The data obtained from the field were categorized according to themes regarding the mathematical ideas embedded in the weaving activities and their possibilities in the teaching and learning of school geometry. In this study, the mathematical activities in different sectors of their lives, their ways of understanding natural phenomena, and their ethnomathematical knowledge were analyzed with the notions of pluralism. From the field data, it was found that the mat weavers exhibited sophisticated geometrical ideas in the process of constructing the frame of the mat. They used an x-test method for confirming whether the mat is rectangular. The mat also provides a good opportunity to understand space geometry: a rectangular mat may be rolled up when it is not in use, converting it to a cylindrical form, which can be used as a larder to store food grains. From observation of these situations, this cultural experience enables students to calculate the volume, curved surface area, and total surface area of the cylinder. The possibilities of incorporating these cultural activities and their pedagogical use were observed in the mathematics classroom. It is argued that it is possible to use mat weaving activities in the teaching and learning of school geometry.
Keywords: ethnography, ethnomathematics, geometry, mat weaving, school pedagogy
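For reference, the cylinder quantities that the rolled-up mat makes tangible are the standard ones:

\[ V = \pi r^2 h, \qquad \mathrm{CSA} = 2\pi r h, \qquad \mathrm{TSA} = 2\pi r (r + h) \]

where r is the radius of the rolled mat and h its height (the width of the mat).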
Procedia PDF Downloads 157
6516 Impact of Different Fuel Inlet Diameters on the NOx Emissions in a Hydrogen Combustor
Authors: Annapurna Basavaraju, Arianna Mastrodonato, Franz Heitmeir
Abstract:
The Advisory Council for Aeronautics Research in Europe (ACARE) is creating awareness for an overall reduction of NOx emissions by 80% in its Vision 2020. This prompts researchers to work on novel technologies; one such technology is the use of alternative fuels. Among these fuels, hydrogen is of interest because its one and only significant pollutant is NOx. NOx formation in hydrogen combustion depends on various parameters, such as air pressure, inlet air temperature, air-to-fuel jet momentum ratio, etc. Accordingly, this research investigates the impact of the air-to-fuel jet momentum ratio on NOx formation in a hydrogen combustion chamber for aircraft engines. The air-to-fuel jet momentum ratio is defined as the momentum of the air jet relative to the momentum of the fuel jet. The experiments were performed in an existing combustion chamber that had previously been tested for methane. Premixing of the reactants was not considered due to the high reactivity of hydrogen and the high risk of flashback. In order to create a less rich reaction zone at the burner and to decrease the emissions, a forced internal recirculation flow was achieved by integrating a plate with a honeycomb-like structure, suited to the geometry of the liner. The liner was provided with an external cooling system to avoid an increase in local temperatures and, in turn, in the rate of NOx formation. The injected air was preheated, aiming at so-called flameless combustion. The air-to-fuel jet momentum ratio was examined by changing the area of the fuel inlets while keeping the number of fuel inlets constant, in order to alter the fuel jet momentum while maintaining the homogeneity of the flow. Within this analysis, promising results for flameless combustion were achieved. For a constant number of fuel inlets, it was seen that reducing the fuel inlet diameter decreased the air-to-fuel jet momentum ratio, in turn lowering the NOx emissions.
Keywords: combustion chamber, hydrogen, jet momentum, NOx emission
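In symbols, with the usual jet-momentum definition (assumed here; the abstract defines the ratio only in words), the quantity varied is:

\[ J = \frac{\dot{m}_a v_a}{\dot{m}_f v_f} = \frac{\rho_a v_a^2 A_a}{\rho_f v_f^2 A_f} \]

so, at a fixed fuel mass flow, shrinking the total fuel inlet area A_f raises the fuel jet velocity v_f and momentum, which lowers J, consistent with the trend reported above.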
Procedia PDF Downloads 292
6515 Adsorptive Membrane for Hemodialysis: Potential, Future Prospection and Limitation of MOF as Nanofillers
Authors: Musawira Iftikhar
Abstract:
The field of membrane materials is highly dynamic due to constantly evolving requirements and the advancement of materials to address challenges such as biocompatibility, protein-bound uremic toxins, blood coagulation, auto-immune responses, oxidative stress, and poor clearance of uremic toxins. Hemodialysis is a membrane filtration process that is currently necessary for the daily living of patients with ESRD. Tens of millions of people with ESRD have benefited from hemodialysis over the past 60-70 years, both in terms of safeguarding life and a longer lifespan. Beyond challenges associated with the efficiency and separative properties of the membranes, ensuring hemocompatibility, or the safe circulation of blood outside the body for four hours every two days, remains a persistent challenge. This review explores the evolving field of Metal-Organic Frameworks (MOFs) and their applications in hemodialysis, offering a comprehensive examination of various MOFs employed to address challenges inherent in traditional hemodialysis methodologies. The review includes experimental work done with various MOFs as fillers, such as UiO-66, HKUST-1, MIL-101, and ZIF-8, which together lead to improved adsorption capacities for a range of uremic toxins and proteins. Furthermore, this review highlights how effectively MOF-based hemodialysis membranes remove a variety of uremic toxins, including p-cresol, urea, creatinine, and indoxyl sulfate, and discusses potential filler choices for the future. Future research efforts should focus on refining synthesis techniques, enhancing toxin selectivity, and investigating the long-term durability of MOF-based membranes. With these considerations, MOFs emerge as transformative materials in the quest to develop advanced and efficient hemodialysis technologies, holding the promise to significantly enhance patient outcomes and redefine the landscape of renal therapy.
Keywords: membrane, hemodialysis, metal organic frameworks, separation, protein adsorption
Procedia PDF Downloads 56
6514 Quantitative Wide-Field Swept-Source Optical Coherence Tomography Angiography and Visual Outcomes in Retinal Artery Occlusion
Authors: Yifan Lu, Ying Cui, Ying Zhu, Edward S. Lu, Rebecca Zeng, Rohan Bajaj, Raviv Katz, Rongrong Le, Jay C. Wang, John B. Miller
Abstract:
Purpose: Retinal artery occlusion (RAO) is an ophthalmic emergency that can lead to poor visual outcomes and is associated with an increased risk of cerebral stroke and cardiovascular events. Fluorescein angiography (FA) is the traditional diagnostic tool for RAO; however, wide-field swept-source optical coherence tomography angiography (WF SS-OCTA), a nascent imaging technology, is able to provide quick and non-invasive angiographic information with a wide field of view. In this study, we looked for associations between OCT-A vascular metrics and visual acuity in patients with a prior diagnosis of RAO. Methods: Patients with diagnoses of central retinal artery occlusion (CRAO) or branch retinal artery occlusion (BRAO) were included. A 6 mm x 6 mm Angio and a 15 mm x 15 mm AngioPlex Montage OCT-A image were obtained for both eyes in each patient using the Zeiss Plex Elite 9000 WF SS-OCTA device. Each 6 mm x 6 mm image was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) subfields. The average measurement of the central foveal subfield, inner ring, and outer ring was calculated for each parameter. Non-perfusion area (NPA) was manually measured using the 15 mm x 15 mm Montage images. A linear regression model was utilized to identify correlations between the imaging metrics and visual acuity. A P-value less than 0.05 was considered statistically significant. Results: Twenty-five subjects were included in the study. For RAO eyes, there was a statistically significant negative correlation between vision and retinal thickness as well as superficial capillary plexus vessel density (SCP VD). A negative correlation was found between vision and deep capillary plexus vessel density (DCP VD), without statistical significance. There was a positive correlation between vision and choroidal thickness as well as choroidal volume, without statistical significance. No statistically significant correlation was found between vision and the above metrics in contralateral eyes. For NPA measurements, no significant correlation was found between vision and NPA. Conclusions: This is, to the best of our knowledge, the first study to investigate the utility of WF SS-OCTA in RAO and to demonstrate correlations between various retinal vascular imaging metrics and visual outcomes. Further investigations should explore the associations between these imaging findings and cardiovascular risk, as RAO patients are at elevated risk of symptomatic stroke. The results of this study provide a basis for understanding the structural changes involved in visual outcomes in RAO. Furthermore, they may help guide the management of RAO and the prevention of cerebral stroke and cardiovascular accidents in patients with RAO.
Keywords: OCTA, swept-source OCT, retinal artery occlusion, Zeiss Plex Elite
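A minimal sketch of the per-metric association analysis described (variable names and values below are illustrative assumptions, not study data):

```python
from scipy.stats import linregress

# Visual acuity (e.g., as logMAR) vs. superficial capillary plexus vessel density (%)
# Illustrative values only
scp_vd = [38.2, 41.5, 35.9, 44.1, 39.7, 36.8]
logmar = [0.80, 0.30, 1.00, 0.10, 0.54, 0.90]

fit = linregress(scp_vd, logmar)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.2f}, p={fit.pvalue:.4f}")
# As in the study, p < 0.05 is treated as statistically significant.
```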
Procedia PDF Downloads 139
6513 Analyzing the Emergence of Conscious Phenomena by the Process-Based Metaphysics
Authors: Chia-Lin Tu
Abstract:
Towards the end of the 20th century, a reductive picture came to dominate the philosophy of science and philosophy of mind. Reductive physicalism claims that all entities and properties in this world are eventually able to be reduced to the physical level. This means that all phenomena in the world can be explained by the laws of physics. However, quantum physics provides another picture. It says that the world is undergoing change, and the energy of change is, in fact, the most important constituent of world phenomena. Quantum physics provides another point of view from which to reconsider the reality of the world. Throughout the history of philosophy of mind, reductive physicalism has tried to reduce conscious phenomena to physical particles as well, meaning that the reality of consciousness is composed of physical particles. However, reductive physicalism is unable to explain conscious phenomena and mind-body causation. Conscious phenomena, e.g., qualia, are not composed of physical particles. The currently popular theory of consciousness is emergentism. Emergentism is an ambiguous concept which lacks a clear idea of how conscious phenomena emerge from physical particles. In order to understand the emergence of conscious phenomena, quantum physics seems an appropriate analogy. Quantum physics claims that physical particles and processes together construct the most fundamental field of world phenomena, and thus all natural processes, i.e., wave functions, occur within it. The traditional space-time description of classical physics is overtaken by the wave-function story. If this methodology of quantum physics works well to explain world phenomena, then it is not necessary to describe the world by the idea of physical particles, as classical physics did. Conscious phenomena are one kind of world phenomena. Scientists and philosophers have tried to explain their reality, but no conclusion has been reached. Quantum physics tells us that the fundamental field of the natural world is a process metaphysics. The emergence of conscious phenomena is only possible within this process metaphysics and has clearly occurred. Within the framework of quantum physics, we are able to take emergence more seriously, and thus we can account for such emergent phenomena as consciousness. By questioning the particle-mechanistic concept of the world, the new metaphysics offers an opportunity to reconsider the reality of conscious phenomena.
Keywords: quantum physics, reduction, emergence, qualia
Procedia PDF Downloads 164
6512 Groundwater Treatment of Thailand's Mae Moh Lignite Mine
Authors: A. Laksanayothin, W. Ariyawong
Abstract:
Mae Moh Lignite Mine is the largest open-pit mine in Thailand. The mine supplies about 16 million tons of coal per year to the power plant. This amount of coal can produce electricity accounting for about 10% of the nation's electric power generation. The mining area of Mae Moh Mine is about 28 km². At present, the deepest part of the pit is about 280 m below ground level (+40 m MSL), and in the future the depth of the pit can reach 520 m below ground level (-200 m MSL). As the size of the pit is quite large, the stability of the pit is critically important. Furthermore, preliminary and extended drilling in the years 1989-1996 found a high-pressure aquifer under the pit. As a result, the pressure of the underground water has to be released in order to maintain mine pit stability. A study by consulting experts later found that 3-5 million m³ per year of the underground water needs to be dewatered for the safety of mining. However, the quality of this discharged water has to meet the standard. Therefore, a groundwater treatment facility has been implemented, aiming to reduce the naturally occurring arsenic (As) contamination in the discharged water below the standard limit of 10 ppb. The treatment system consists of coagulation and filtration processes. The main components include rapid mixing tanks, slow mixing tanks, a sedimentation tank, a thickener tank, and a sludge drying bed. The treatment process uses 40% FeCl3 as a coagulant. The FeCl3 adsorbs As(V), forming floc particles that separate from the water as precipitate. After that, the sludge is dried on the sand bed and then disposed of in a secured landfill. Since 2011, the 12,000 m³/day treatment plant has been operated efficiently. The average removal efficiency of the process is about 95%.
Keywords: arsenic, coagulant, ferric chloride, groundwater, lignite, coal mine
Procedia PDF Downloads 310
6511 Metal Layer Based Vertical Hall Device in a Complementary Metal Oxide Semiconductor Process
Authors: Se-Mi Lim, Won-Jae Jung, Jin-Sup Kim, Jun-Seok Park, Hyung-Il Chae
Abstract:
This paper presents a current-mode vertical Hall device (VHD) structure using metal layers in a CMOS process. The proposed metal layer based vertical Hall device (MLVHD) utilizes vertical connections among metal layers (from M1 to the top metal) to facilitate the Hall effect. The vertical metal structure unit carries a bias current Ibias from top to bottom, and an external magnetic field changes the current distribution by the Lorentz force. The asymmetric current distribution can be detected by two differential-mode current outputs, one on each side at the bottom (M1), each output sinking Ibias/2 ± Ihall. A single vertical metal structure generates only a small Hall signal Ihall, due to the short length from M1 to the top metal as well as the low conductivity of the metal; a series connection of thousands of vertical structure units can solve the problem by providing N x Ihall. The series connection between two units is another vertical metal structure carrying current in the opposite direction, which generates a negative Hall contribution. To mitigate the negative Hall effect from the series connection, the differential current outputs at the bottom (M1) of one unit merge at the top metal level of the other unit. The proposed MLVHD is simulated with a 3-dimensional model in COMSOL Multiphysics, with 0.35 μm CMOS process parameters. The simulated MLVHD unit size is (W) 10 μm × (L) 6 μm × (D) 10 μm. In this paper, we use an MLVHD with 10 units; the overall Hall device size is (W) 10 μm × (L) 78 μm × (D) 10 μm. The COMSOL simulation result is as follows: the maximum Hall current is approximately 2 μA with a 12 μA bias current and a 100 mT magnetic field. This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).
Keywords: CMOS, vertical hall device, current mode, COMSOL
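In equation form, the differential read-out described above is (notation follows the abstract):

\[ I_{\pm} = \frac{I_{\mathrm{bias}}}{2} \pm I_{\mathrm{hall}}, \qquad \Delta I = I_{+} - I_{-} = 2\, I_{\mathrm{hall}} \]

and with N series-connected units arranged so that their contributions add, each output swing scales to N·Ihall, i.e., a total differential signal of 2N·Ihall.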
Procedia PDF Downloads 303
6510 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which time the liver accumulation dominates (the 0.5-2.5 minute SPECT image minus the 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (the 5-10 minute SPECT image minus the liver-only image). Time subtraction of the liver was possible in both the phantom and the clinical study. The visualization of the inferior myocardium was improved. In past reports, higher accumulation in the myocardium was un-diagnosable due to the overlap of the liver. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
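A minimal numpy sketch of the subtraction arithmetic described (the per-minute frame stack, shapes, and clipping are illustrative assumptions):

```python
import numpy as np

# Ten reconstructed SPECT volumes, one per minute of list-mode data (illustrative shape)
frames = np.random.rand(10, 64, 64, 64)  # stand-in for the reconstructed images

early = frames[0:3].mean(axis=0)   # ~0.5-2.5 min image, liver accumulation dominates
late  = frames[5:10].mean(axis=0)  # 5-10 min image, liver plus myocardium

# Liver-only approximation, then remove it from the late image, as in the abstract
liver_only = np.clip(early - late, 0.0, None)   # negative voxels clipped to zero
corrected  = np.clip(late - liver_only, 0.0, None)
```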
Procedia PDF Downloads 335
6509 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents consist of large classes of contaminants, the specific targeted removal methods available in the literature are not viable solutions at the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. Bulk coagulation and electrochemical methods can wash off most of the contaminants, but some harmful chemicals may slip through; therefore, specifically designed and synthesized catalysts will be employed for the removal of targeted chemicals. In the context of Latvian dairy industries, work is presently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies, such as electrocoagulation and electro-oxidation, as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for the post-treatment of dairy wastewater, as graphene oxide has been widely applied in various fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the remaining contaminants after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
6508 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The European Framework-7 (FP7) funded research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment. This allows the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly it is expected to detect. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained on network traces offline and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system by exchanging IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
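A minimal sketch of such a one-class detector with scikit-learn's OneClassSVM (feature extraction and parameter values are illustrative assumptions; the project's actual module differs):

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

# Rows = network-trace feature vectors (e.g., packet rate, size, inter-arrival time),
# collected during known-normal SCADA operation. Illustrative random stand-in:
X_train = np.random.rand(500, 3)

scaler = StandardScaler().fit(X_train)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")  # nu ~ expected outlier share
model.fit(scaler.transform(X_train))

# At run time, each new traffic sample is scored: +1 = normal, -1 = anomaly alarm
X_live = np.random.rand(10, 3)
labels = model.predict(scaler.transform(X_live))
alarms = np.where(labels == -1)[0]
```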
Procedia PDF Downloads 552
6507 Cotton Fiber Quality Improvement by Introducing Sucrose Synthase (SuS) Gene into Gossypium hirsutum L.
Authors: Ahmad Ali Shahid, Mukhtar Ahmed
Abstract:
The demand for long-staple fiber with better strength and length is increasing with the introduction of a modern spinning and weaving industry in Pakistan. Work on gene discovery from developing cotton fibers has helped to identify dozens of genes that take part in cotton fiber development, and several genes have been characterized for their role in fiber development. Sucrose synthase (SuS) is a key enzyme in the metabolism of sucrose in a plant cell; in cotton fiber, it catalyzes a reversible reaction but preferentially converts sucrose and UDP into fructose and UDP-glucose. UDP-glucose (UDPG) is a nucleotide sugar that acts as a donor of glucose residues in many glycosylation reactions and is essential for the cytosolic formation of sucrose and involved in the synthesis of cell wall cellulose. The study focused on successful Agrobacterium-mediated stable transformation of the SuS gene in pCAMBIA 1301 into cotton under a CaMV35S promoter. Integration and expression of the gene were confirmed by PCR, GUS assay, and real-time PCR. Young leaves of SuS-overexpressing lines showed increased total soluble sugars and plant biomass compared to non-transgenic control plants. Cellulose contents of the fiber were significantly increased. SEM analysis revealed that fibers from transgenic cotton were highly spiral, and the number of fiber twists per unit length increased compared with the control. Morphological data from field plants showed that transgenic plants performed better under field conditions. The incorporation of genes related to cotton fiber length and quality can provide new avenues for fiber improvement. The utilization of this technology would provide efficient import substitution and sustained production of long-staple fiber in Pakistan to fulfill industrial requirements.
Keywords: agrobacterium-mediated transformation, cotton fiber, sucrose synthase gene, staple length
Procedia PDF Downloads 233
6506 The Role of Artificial Intelligence Algorithms in Psychiatry: Advancing Diagnosis and Treatment
Authors: Netanel Stern
Abstract:
Artificial intelligence (AI) algorithms have emerged as powerful tools in the field of psychiatry, offering new possibilities for enhancing diagnosis and treatment outcomes. This article explores the utilization of AI algorithms in psychiatry, highlighting their potential to revolutionize patient care. Various AI algorithms, including machine learning, natural language processing (NLP), reinforcement learning, clustering, and Bayesian networks, are discussed in detail. Moreover, ethical considerations and future directions for research and implementation are addressed.
Keywords: AI, software engineering, psychiatry, neuroimaging
Procedia PDF Downloads 116
6505 Knowledge, Attitude, and Practice Related to Potential Application of Artificial Intelligence in Health Supply Chain
Authors: Biniam Bahiru Tufa, Hana Delil Tesfaye, Seife Demisse Legesse, Manaye Tamire
Abstract:
The healthcare industry is witnessing a digital transformation, with artificial intelligence (AI) offering potential solutions for challenges in health supply chain management (HSCM). However, the adoption of AI in this field remains limited. This research aimed to assess the knowledge, attitude, and practice of AI among students and employees in the health supply chain sector in Ethiopia. Using an explanatory case study research design with a concurrent mixed approach, quantitative and qualitative data were collected simultaneously. The study included 153 participants, comprising students and employed health supply chain professionals working in various sectors. The majority had a pharmacy background, and one-third of the participants were male. Most respondents were under 35 years old, and around 68.6% had less than 10 years of experience. The findings revealed that 94.1% of participants had prior knowledge of AI, but only 35.3% were aware of its application in the supply chain. Moreover, the majority indicated that their training curriculum did not cover AI in health supply chain management. Participants generally held positive attitudes toward the necessity of AI for improving efficiency, effectiveness, and cost savings in the supply chain. However, many expressed concerns about its impact on job security and satisfaction, considering it a burden. Graduate students demonstrated higher knowledge of AI compared to employed staff and also exhibited a more positive attitude toward AI. The study indicated low previous utilization and low potential future utilization of AI in the health supply chain, suggesting untapped opportunities for improvement. Overall, while supply chain experts and graduate students lacked sufficient understanding of AI and its significance, they expressed favorable views regarding its implementation in the sector. The study recommends that the Ethiopian government and international organizations consider introducing AI into the undergraduate pharmacy curriculum and promote its integration into the health supply chain field.
Keywords: knowledge, attitude, practice, supply chain, artificial intelligence
Procedia PDF Downloads 91
6504 Phelipanche Ramosa (L.) Pomel Control in Field Tomato Crop
Authors: G. Disciglio, F. Lops, A. Carlucci, G. Gatta, A. Tarantino, L. Frabboni, F. Carriero, F. Cibelli, M. L. Raimondo, E. Tarantino
Abstract:
Tomato is an important crop whose cultivation in the Mediterranean basin is severely constrained by the phytoparasitic weed Phelipanche ramosa. The semiarid regions of the world are considered the main center of this parasitic weed, where heavy infestation is due to its ability to produce high numbers of seeds (up to 500,000 per plant) that remain viable for extended periods (more than 19 years). In this paper, 12 treatments for parasitic weed control, including chemical, agronomic, biological, and biotechnological methods, were carried out. In 2014, a trial was performed at Foggia (southern Italy) on processing tomato (cv Docet) grown in a field infested by Phelipanche ramosa. Tomato seedlings were transplanted on May 5, 2014, on a clay-loam soil (USDA) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5, and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 56, 78, and 92 days after transplantation, the number of parasitic shoots emerged in each plot was recorded. At harvest, on August 18, the major quantity-quality yield parameters were determined (marketable yield, mean fruit weight, dry matter, pH, soluble solids, and fruit color). All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. None of the treatments studied provided complete control of Phelipanche ramosa. However, among the 12 tested methods, Fusarium, glyphosate, the Radicon biostimulant, and the Red Setter tomato cv (an improved genotype obtained by TILLING technology) proved to mitigate the virulence of the attacks of Phelipanche ramosa. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
Keywords: control methods, Phelipanche ramosa, tomato crop, mediterranean basin
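A minimal sketch of the statistical workflow described (the study used JMP; here scipy/statsmodels stand in, with illustrative data, not the trial's measurements):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Emerged parasitic shoots per plot for three illustrative treatments, 3 replicates each
control    = [24, 28, 26]
fusarium   = [12, 15, 11]
glyphosate = [9, 13, 10]

F, p = f_oneway(control, fusarium, glyphosate)
print(f"ANOVA: F={F:.2f}, p={p:.4f}")

values = np.concatenate([control, fusarium, glyphosate])
groups = ["control"] * 3 + ["fusarium"] * 3 + ["glyphosate"] * 3
print(pairwise_tukeyhsd(values, groups))  # pairwise comparison of treatment means
```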
Procedia PDF Downloads 563