Search results for: interface soil layer
314 Sensory Interventions for Dementia: A Review
Authors: Leigh G. Hayden, Susan E. Shepley, Cristina Passarelli, William Tingo
Abstract:
Introduction: Sensory interventions are popular therapeutic and recreational approaches for people living with all stages of dementia. However, it is unknown which sensory interventions are used to achieve which outcomes across all subtypes of dementia. Methods: To address this gap, we conducted a scoping review of sensory interventions for people living with dementia. We searched the literature for any article published in English from 1 January 1990 to 1 June 2019 on any sensory or multisensory intervention targeted at people living with any kind of dementia that reported on patient health outcomes. We did not include complex interventions in which only a small aspect was related to sensory stimulation. We searched the databases Medline, CINAHL, and PsycARTICLES using our institutional discovery layer. We conducted all screening in duplicate to reduce Type 1 and Type 2 errors. The data from all included papers were extracted by one team member and audited by another to ensure consistency of extraction and completeness of data. Results: Our initial search captured 7654 articles; after removal of duplicates (n=5329), articles that did not pass title and abstract screening (n=1840), and articles that did not pass full-text screening (n=281), 174 articles were included. The countries with the most publications in this area were the United States (n=59), the United Kingdom (n=26), and Australia (n=15). The most common types of intervention were music therapy (n=36), multisensory rooms (n=27), and multisensory therapies (n=25). Seven articles were published in the 1990s, 55 in the 2000s, and the remainder (n=112) since 2010. Discussion: Multisensory rooms have been present in the literature since the early 1990s. More recently, nature/garden therapy, art therapy, and light therapy have emerged in the literature since 2008, an indication of the increasingly diverse scholarship in the area.
The least popular type of intervention is a traditional food intervention. Taste as a sensory intervention is generally avoided for safety reasons; however, it shows potential for increasing quality of life. Agitation, behavior, and mood are common outcomes for all sensory interventions, whereas light therapy commonly targets sleep. The majority (n=110) of studies have very small sample sizes (n=20 or less), an indicator of the lack of robust data in the field. Additional small-scale studies of the known sensory interventions will likely do little to advance the field. However, there is a need for multi-armed studies that directly compare sensory interventions, and for more studies that investigate layering sensory interventions (for example, adding an aromatherapy component to a lighting intervention). In addition, large-scale studies that enroll people at early stages of dementia will help us better understand the potential of sensory and multisensory interventions to slow the progression of the disease.
Keywords: sensory interventions, dementia, scoping review
Procedia PDF Downloads 134
313 Isolation and Screening of Antagonistic Bacteria against Wheat Pathogenic Fungus Tilletia indica
Authors: Sugandha Asthana, Geetika Vajpayee, Pratibha Kumari, Shanthy Sundaram
Abstract:
An economically important disease of wheat in the north-western region of India is Karnal bunt, caused by the smut fungus Tilletia indica. This fungal pathogen spreads by air-, soil- and seed-borne sporidia at the time of flowering, which ultimately leads to partial bunting of wheat kernels and imparts a fishy odor and taste to wheat flour. The disease has serious economic consequences because of the quarantine measures that must be applied to grain exports. Chemical fungicides such as mercurial compounds and propiconazole applied to control Karnal bunt have been only partially successful. Considering the harmful effects of chemical fungicides on humans and the environment, many countries are developing biological control as a superior substitute for chemical control. Repeated use of fungicides can also lead to the development of resistance in fungal pathogens against certain chemical compounds. The present investigation is based on the isolation and evaluation of the antifungal properties of some isolated (from natural manure) and commercial bacterial strains against Tilletia indica. A total of 23 bacterial isolates were obtained, and the antagonistic activity of all isolates and commercial bacterial strains (Bacillus subtilis MTCC 8601, Bacillus pumilus MTCC 8743, Pseudomonas aeruginosa) was tested against T. indica by dual culture plate assay (pour plate and streak plate). Production of antifungal volatile organic compounds (VOCs) by the antagonistic bacteria was tested by the sealed plate method. Among all isolates, s1, s3, s5, and B. subtilis showed more than 80% inhibition. Production of extracellular hydrolytic enzymes such as protease, β-1,4-glucanase, HCN, and ammonia was studied to confirm antifungal activity. Isolates s1, s3, s5, and B. subtilis were found to be the best for protease activity, and s5 and B. subtilis for β-1,4-glucanase activity. Bacillus subtilis was significantly effective for HCN production, whereas s3, s5, and Bacillus subtilis were effective for ammonia production.
Isolates were identified as Pseudomonas aeruginosa (s1) and B. licheniformis (s3, s5) by various biochemical assays and confirmed by 16S rRNA sequencing. The use of microorganisms or their secretions as biocontrol agents to prevent plant diseases is ecologically safe and may offer long-term protection to crops. The above study reports the promising effects of these strains in better pathogen-free crop production and quality maintenance, as well as in preventing the excessive use of synthetic fungicides.
Keywords: antagonistic, antifungal, biocontrol, Karnal bunt
Procedia PDF Downloads 283
312 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces
Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava
Abstract:
Preventing the spread of infectious diseases around the world is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths worldwide, but also cause many pathological complications for human health. Touch surfaces pose an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms. Further, antimicrobial resistance is the response of bacteria to the widespread overuse or inappropriate use of antibiotics. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, a high-performance, rapid, real-time detection method is needed for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can be used as direct and rapid methods for microorganism detection on different kinds of surfaces, based on fluorescence, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems to remote sensing of surfaces for microorganism detection, to prevent a global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10⁸ cells/100 µL) were detected with a hyperspectral camera using different filters to visualize bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample is optically imaged from the surface by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX Tri-light with 3 W tri-colour LEDs (red, blue, and green).
Light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting exposure and was focused for light with λ = 525 nm. The filter is a Thorlabs Kurios™ hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. The studied bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analysis using different light sources and filter wavelengths. The calculation of random and systematic errors of the analysis results proved the applicability of the method in real conditions. Validation experiments were carried out with photometry and an ATP swab test. The lower detection limit of the developed method is several orders of magnitude lower than that of both validation methods. All parameters of the experiments were the same, except for the light. The hyperspectral imaging method can separate not only bacteria and surfaces, but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method makes it possible to skip sample preparation and the use of chemicals, unlike all other microbiological methods. The analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological tests.
Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection
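The clustering-based separation described in this abstract can be sketched, in highly simplified form, as a nearest-reference classification of pixel spectra. All spectra, band counts, and class names below are invented for illustration and are not the study's measured data.

```python
import numpy as np

# Illustrative reference spectra over four wavelength bands (made-up values,
# not measured data): one reference per surface/organism class.
refs = {
    "steel": np.array([0.80, 0.78, 0.75, 0.74]),
    "e_coli": np.array([0.30, 0.45, 0.60, 0.40]),
    "b_subtilis": np.array([0.25, 0.55, 0.35, 0.50]),
}

def classify(spectrum):
    """Assign a pixel spectrum to the nearest reference by Euclidean distance."""
    return min(refs, key=lambda name: np.linalg.norm(refs[name] - spectrum))

pixel = np.array([0.28, 0.47, 0.58, 0.42])
print(classify(pixel))  # nearest to the e_coli reference
```

In practice the study used multivariate clustering rather than fixed references, but the distance-in-spectral-space idea is the same.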
Procedia PDF Downloads 223
311 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, mesoscale models of concrete have been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, at an interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be compression-tested in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. A digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified from pixel values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale considering relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated depending on the option given.
Properties of materials, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by ABAQUS by importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of the ITZ layer thickness on load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
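The 0-255 to 0-9 normalization and phase identification described in this abstract can be sketched as follows. The authors' exact grayscale thresholds are not given, so the sketch simply rescales linearly and applies the stated class bands (0 void, 1-3 boundary/ITZ, 4-6 mortar, 7-9 aggregate); it is an illustrative assumption, not the authors' code.

```python
import numpy as np

def classify_pixels(gray):
    """Rescale 8-bit grayscale CT pixels (0-255) to the 0-9 scale and label
    each pixel with its mesoscale constituent.

    Class bands follow the abstract: 0 = void, 1-3 = aggregate/mortar
    boundary (ITZ), 4-6 = mortar, 7-9 = aggregate. The linear rescaling
    itself is an illustrative assumption.
    """
    scaled = np.rint(gray / 255.0 * 9).astype(int)
    phases = np.empty(scaled.shape, dtype="<U9")
    phases[scaled == 0] = "void"
    phases[(scaled >= 1) & (scaled <= 3)] = "itz"
    phases[(scaled >= 4) & (scaled <= 6)] = "mortar"
    phases[scaled >= 7] = "aggregate"
    return scaled, phases

# Toy 2x3 slice of grayscale values standing in for one CT slice.
slice_gray = np.array([[0, 40, 130], [200, 255, 90]])
scaled, phases = classify_pixels(slice_gray)
print(scaled)
print(phases)
```

The resulting phase matrix is what would then be turned into triangular or quadrilateral finite elements, one material per phase.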
Procedia PDF Downloads 239
310 Cellulolytic and Xylanolytic Enzymes from Mycelial Fungi
Authors: T. Sadunishvili, L. Kutateladze, T. Urushadze, R. Khvedelidze, N. Zakariashvili, M. Jobava, G. Kvesitadze
Abstract:
The multiple soil-climatic zones of Georgia determine the diversity of its microorganisms. Hundreds of microscopic fungi of different genera have been isolated from different ecological niches, including some extreme environments. The biosynthetic ability of these microscopic fungi has been studied. Trichoderma reesei, a representative of the Ascomycetes, secretes cellulolytic and xylanolytic enzymes that act in synergy to hydrolyze polysaccharide polymers to glucose, xylose, and arabinose, which can be fermented to biofuels. Other mesophilic strains producing cellulases are Allescheria terrestris, Chaetomium thermophile, Fusarium oxysporum, Piptoporus betulinus, Penicillium echinulatum, P. purpurogenum, Aspergillus niger, A. wentii, A. versicolor, A. fumigatus, etc. In the majority of cases, the cellulases produced by strains of the genus Aspergillus have high β-glucosidase activity and average endoglucanase levels (with some exceptions), whereas strains of Trichoderma have high endoglucanase and low β-glucosidase activity, and hence limited efficiency in cellulose hydrolysis. Six producers of stable cellulases and xylanases from mesophilic and thermophilic fungi have been selected. By optimizing submerged cultivation conditions, high activities of cellulases and xylanases were obtained. For enzyme purification, precipitation by organic solvents such as ethyl alcohol, acetone, and isopropanol, and by ammonium sulphate in different ratios, was carried out. The best results were obtained with precipitation by ethyl alcohol (1:3.5) and by ammonium sulphate. The enzyme yields, according to cellulase activities, were 80-85% in both cases. The cellulase activity of the enzyme preparation obtained from the strain Trichoderma viride X 33 is 126 U/g, from the strain Penicillium canescens D 85 it is 185 U/g, and from the strain Sporotrichum pulverulentum T 5-0 it is 110 U/g. The cellulase activity of the enzyme preparation obtained from the strain Aspergillus sp.
Av10 is 120 U/g, while the xylanase activity of the enzyme preparation obtained from the strain Aspergillus niger A 7-5 is 1155 U/g and from the strain Aspergillus niger Aj 38 it is 1250 U/g. The optimum pH and temperature of operation, and the thermostability, of the enzyme preparations were established. The efficiency of hydrolysis of different agricultural residues by the microscopic fungi's cellulases has been studied. The glucose yield from the residues as a result of enzymatic hydrolysis is largely determined by the ratio of enzyme to substrate, pH, temperature, and the duration of the process. Hydrolysis efficiency was significantly increased as a result of pretreating the residues by different methods. Acknowledgement: The study was supported by the ISTC project G-2117, funded by Korea.
Keywords: cellulase, xylanase, microscopic fungi, enzymatic hydrolysis
Procedia PDF Downloads 392
309 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate an enormous volume of real-time, high-speed data that helps organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically, depending on traffic loads, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost. After such disconnections, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces bad-quality data. Due to this dynamic nature of IoT devices, the actual reason for abnormal data is unknown. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before putting it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how they impact data quality.
This research presents a comprehensive review of the impact of the dynamic nature of IoT devices on data quality, and proposes a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for the sensors monitoring water quality. An extensive study has been carried out on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, establishing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster position), and consistency, which is evaluated through the Coefficient of Variation (CoV) in a statistical analysis of the clusters formed by DBSCAN.
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
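A minimal sketch of the DBSCAN-based outlier handling and CoV consistency check described in this abstract might look as follows, using scikit-learn's DBSCAN. The sensor readings and the eps/min_samples parameters are invented for illustration and are not the study's data or settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumes scikit-learn is installed

# Hypothetical joint readings: [air temperature (C), water turbidity (NTU)].
# The last point mimics a faulty or disconnected sensor.
readings = np.array([
    [20.1, 5.2], [20.3, 5.0], [19.8, 5.4],
    [20.0, 5.1], [20.2, 5.3], [35.0, 40.0],
])

labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(readings)
clean = readings[labels != -1]  # DBSCAN labels noise points as -1

# Consistency per data stream: coefficient of variation = std / mean.
cov = clean.std(axis=0) / clean.mean(axis=0)
print(labels)
print(cov)
```

In the study itself the clusters relate two independent sensor streams (weather vs. water quality); the point of the sketch is only the mechanics of dropping noise points and computing the CoV on what remains.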
Procedia PDF Downloads 139
308 Evaluation of Cyclic Steam Injection in Multi-Layered Heterogeneous Reservoir
Authors: Worawanna Panyakotkaew, Falan Srisuriyachai
Abstract:
Cyclic steam injection (CSI) is a thermal recovery technique performed by periodically injecting heated steam into a heavy oil reservoir. Oil viscosity is substantially reduced by the heat transferred from the steam. Together with gas pressurization, oil recovery is greatly improved. Nevertheless, predicting the effectiveness of the process is difficult when the reservoir contains a degree of heterogeneity. Therefore, heterogeneity, together with the reservoir properties of interest, must be evaluated prior to field implementation. In this study, a thermal reservoir simulation program is utilized. The reservoir model is first constructed as multi-layered with a coarsening-upward sequence: the highest permeability is located in the top layer, with permeability values descending in the lower layers. Steam is injected from two wells located diagonally in a quarter five-spot pattern. Heavy oil is produced while adjusting operating parameters including soaking period and steam quality. After selecting the conditions for both parameters that yield the highest oil recovery, the effects of the degree of heterogeneity (represented by the Lorenz coefficient), vertical permeability, and permeability sequence are evaluated. Surprisingly, the simulation results show that reservoir heterogeneity benefits the CSI technique. Increasing reservoir heterogeneity worsens the permeability distribution. A high permeability contrast results in steam intruding into the upper layers. Once the temperature cools down during the back-flow period, condensed water percolates downward, resulting in high oil saturation in the top layers. Gas saturation appears at the top after a while, causing better propagation of steam in the following cycle due to the high compressibility of gas. A large steam chamber therefore covers most of the area in the upper zone. Oil recovery reaches approximately 60%, about 20% higher than in the homogeneous case. Vertical permeability also benefits CSI.
Expansion of the steam chamber occurs within a shorter time from the upper to the lower zone. For the fining-upward permeability sequence, where the permeability values are reversed from the previous case, steam does not override into the top layers due to their low permeability. Propagation of the steam chamber occurs in the middle of the reservoir, where the permeability is high enough. The rate of oil recovery is slower than in the coarsening-upward case due to the lower permeability at the location where the steam chamber propagates. Even though the CSI technique produces oil quite slowly in the early cycles, once a steam chamber has formed deep in the reservoir, heat is delivered to the formation quickly in the latter cycles. Since reservoir heterogeneity is unavoidable, a thorough understanding of its effect must be gained. This study shows that the CSI technique might be one of the compatible solutions for highly heterogeneous reservoirs. This competitive technique also shows a benefit in terms of heat consumption, as steam is injected only periodically.
Keywords: cyclic steam injection, heterogeneity, reservoir simulation, thermal recovery
Procedia PDF Downloads 458
307 Consensual A-Monogamous Relationships: Challenges and Ways of Coping
Authors: Tal Braverman Uriel, Tal Litvak Hirsch
Abstract:
Background and Objectives: Little or only partial emphasis has been placed on exploring the complexity of consensual non-monogamous relationships. The term "polyamory" refers to consensual non-monogamy, and it is defined as having emotional and/or sexual relations simultaneously with two or more people, with the consent and knowledge of all the partners concerned. Managing multiple romantic relationships with different people evokes more emotions, leads to more emotional conflicts arising from different interests, and demands practical strategies. An individual's transition from a monogamous lifestyle to a consensual non-monogamous lifestyle yields new challenges, accompanied by stress, uncertainty, and open questions, as do other life-changing events such as divorce or the transition to parenthood. The study examines both the process of transition and adaptation to a consensually non-monogamous relationship and the coping mechanisms involved in the daily conduct of this lifestyle. The research focuses on understanding the consequences, challenges, and coping methods from personal, marital, and familial points of view, and focuses on 40 middle-aged individuals (20 men and 20 women, aged 40-60). The research sheds light on a way of life that has not been previously studied in Israel and is still considered unacceptable. Theories of crisis (e.g., Folkman and Lazarus) were applied, and as a result, a deeper understanding of the subject was reached, all while focusing on multiple aspects of dealing with stress. The basic research question examines the consequences of entering a polyamorous life from the individual's point of view as a person, partner, and parent, and the ways of coping with these consequences. Method: The research is conducted with a narrative qualitative approach in the interpretive paradigm, including semi-structured in-depth interviews. The method of analysis is thematic.
Results: The findings indicate that in most cases, an individual's motivation to open the relationship is mainly a longing for better sexuality and for an added layer of excitement in their lives. Most of the interviewees were assisted by their spouses in the process, as well as by social networks and podcasts on the subject. Some also found therapeutic professionals from the field helpful. It also clearly emerged that even among those who experienced acute emotional crises with the primary partner or painful separations from secondary partners, all believed polyamory to be the adequate way of life for them. Finally, a key resource for managing tension and stress is the ability to share and communicate with the primary partner. Conclusions: The study points to the challenges and benefits of a non-monogamous lifestyle, as well as to the use of coping mechanisms and resources that are consistent with existing theory and research in the field in the context of life changes. The study indicates the need to expand the research canvas in the future in the context of parenting and the consequences for children.
Keywords: a-monogamy, consent, family, stress, tension
Procedia PDF Downloads 76
306 Ground Improvement Using Deep Vibro Techniques at Madhepura E-Loco Project
Authors: A. Sekhar, N. Ramakrishna Raju
Abstract:
This paper presents the results of ground improvement using deep vibro techniques with a combination of sand and stone columns, performed on a highly liquefaction-susceptible site (70 to 80% sand strata, the balance silt) with low bearing capacities due to high settlements, located at Madhepura in Bihar state in the northern part of India (earthquake zone V as per the IS code). Initially, bored cast-in-situ/precast piles and stone/sand columns were envisaged. However, after a detailed analysis to address liquefaction and to improve bearing capacities simultaneously, deep vibro techniques with a combination of sand and stone columns were found to be an excellent solution for the given site conditions, possibly for the first time in India. First, after a detailed soil investigation, a pre-treatment eCPT test was conducted to evaluate the potential depth of liquefaction, with a view to densifying the silty sandy soils and improving the factor of safety against liquefaction. Trial tests were then carried out at the site using the deep vibro compaction technique with sand and stone column combinations at different column spacings in a triangular pattern, with different timings during each lift of the vibro probe up to ground level. Different spacings and timings were tried to determine the most effective combination for achieving maximum and uniform densification of the saturated loose silty sandy soils over the complete treated area. Post-treatment eCPT tests and plate load tests were then conducted at all trial locations for the different spacings and timings of sand and stone columns, to identify the best results for obtaining the required factor of safety against liquefaction and the desired bearing capacities with reduced settlements for the construction of industrial structures. On reviewing these results, it was noticed that the ground layers were densified more than expected, with an improved factor of safety against liquefaction, and good bearing capacities were achieved for the given settlements as per IS codal provisions.
A cost-effective variant for lightly loaded single-storied structures was also worked out, using the deep vibro technique with sand columns alone, avoiding stone. The results were observed to be satisfactory for resting the lightly loaded foundations. In this technique, the most important aspect is mitigating liquefaction while simultaneously improving bearing capacities and reducing settlements to acceptable limits as per IS: 1904-1986, up to a depth of 19 m. To the best of our knowledge, this was executed for the first time in India.
Keywords: ground improvement, deep vibro techniques, liquefaction, bearing capacity, settlement
Procedia PDF Downloads 197
305 Effect of Oxygen Ion Irradiation on the Structural, Spectral and Optical Properties of L-Arginine Acetate Single Crystals
Authors: N. Renuka, R. Ramesh Babu, N. Vijayan
Abstract:
Ion beams play a significant role in tuning the properties of materials. Based on their radiation behavior, engineering materials are categorized into two types: the first comprises organic solids, which are sensitive to the energy deposited in their electronic system, and the second comprises metals, which are insensitive to it. However, exposure to swift heavy ions alters this general behavior. Depending on its mass, kinetic energy, and nuclear charge, an ion can produce modifications within a thin surface layer, or it can penetrate deeply to produce a long and narrow distorted area along its path. When a highly energetic ion beam impinges on a material, it causes two different types of changes due to the Coulombic interaction between the target atoms and the energetic ions: (i) inelastic collisions of the energetic ion with the atomic electrons of the material, and (ii) elastic scattering from the nuclei of the atoms of the material, which is primarily responsible for displacing atoms from their lattice positions. After exposure to heavy ions, the material returns to an equilibrium state, during which it undergoes surface and bulk modifications that depend on the mass of the projectile ion, the physical properties of the target material, the ion energy, and the beam dimensions. It is well established that the electronic stopping power plays a major role in the defect-creation mechanism, provided it exceeds a threshold that strongly depends on the nature of the target material. Reports are available on heavy ion irradiation, especially of crystalline materials, to tune their physical and chemical properties.
L-Arginine acetate (LAA) is a potential semi-organic nonlinear optical crystal, and its optical, mechanical, and thermal properties have already been reported. The main objective of the present work is to enhance or tune the structural and optical properties of LAA single crystals by heavy ion irradiation. In the present study, L-arginine acetate (LAA) single crystals were grown by the slow evaporation solution growth technique. The grown LAA single crystals were irradiated with oxygen ions at doses of 600 krad and 1 Mrad in order to tune the structural and optical properties. The structural properties of pristine and oxygen-ion-irradiated LAA single crystals were studied using powder X-ray diffraction and Fourier transform infrared spectral studies, which reveal the structural changes generated by irradiation. The optical behavior of pristine and oxygen-ion-irradiated crystals was studied by UV-Vis-NIR and photoluminescence analyses. From this investigation, it can be concluded that oxygen ion irradiation modifies the structural and optical properties of LAA single crystals.
Keywords: heavy ion irradiation, NLO single crystal, photoluminescence, X-ray diffractometer
Procedia PDF Downloads 254
304 Environmental Performance of Different Lab Scale Chromium Removal Processes
Authors: Chiao-Cheng Huang, Pei-Te Chiueh, Ya-Hsuan Liou
Abstract:
Chromium-contaminated wastewater from electroplating industrial activity has been a long-standing environmental issue, as it can degrade surface water quality and is harmful to soil ecosystems. The traditional method of treating chromium-contaminated wastewater has been to use chemical coagulation processes. However, this method consumes large amounts of chemicals such as sulfuric acid, sodium hydroxide, and sodium bicarbonate in order to remove chromium. However, a series of new methods for treating chromium-containing wastewater have been developed. This study aimed to compare the environmental impact of four different lab scale chromium removal processes: 1.) chemical coagulation process (the most common and traditional method), in which sodium metabisulfite was used as reductant, 2.) electrochemical process using two steel sheets as electrodes, 3.) reduction by iron-copper bimetallic powder, and 4.) photocatalysis process by TiO2. Each process was run in the lab, and was able to achieve 100% removal of chromium in solution. Then a Life Cycle Assessment (LCA) study was conducted based on the experimental data obtained from four different case studies to identify the environmentally preferable alternative to treat chromium wastewater. The model used for calculating the environmental impact was TRACi, and the system scope includes the production phase and use phase of chemicals and electricity consumed by the chromium removal processes, as well as the final disposal of chromium containing sludge. The functional unit chosen in this study was the removal of 1 mg of chromium. Solution volume of each case study was adjusted to 1 L in advance and the chemicals and energy consumed were proportionally adjusted. The emissions and resources consumed were identified and characterized into 15 categories of midpoint impacts. 
The impact assessment results show that the human ecotoxicity category accounts for 55% of the environmental impact in Case 1, which can be attributed to the sulfuric acid used for pH adjustment. In Case 2, the production of steel sheet electrodes is an energy-intensive process and thus contributed 20% of the environmental impact. In Case 3, sodium bicarbonate is used as an anti-corrosion additive, which results mainly in 1.02E-05 Comparative Toxicity Units (CTU) in the human toxicity category and 0.54E-05 CTU in acidification of air. In Case 4, the electricity consumed to power the UV lamp gives 5.25E-05 CTU in the human toxicity category and 1.15E-05 kg N eq in eutrophication. In conclusion, Case 3 and Case 4 have higher environmental impacts than Case 1 and Case 2, which can be attributed mostly to higher energy and chemical consumption, leading to high impacts in the global warming and ecotoxicity categories.

Keywords: chromium, lab scale, life cycle assessment, wastewater
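The functional-unit normalization the abstract describes (scaling each case's inventory to the removal of 1 mg of chromium) can be sketched as follows. The inventory numbers and flow names below are hypothetical placeholders for illustration, not the study's measured data.

```python
# Illustrative sketch of scaling a lab-scale inventory (per 1 L of treated
# solution) to the LCA functional unit "removal of 1 mg of chromium".
# All quantities are invented placeholders, not the study's data.

def scale_to_functional_unit(inventory_per_litre, cr_removed_mg_per_litre):
    """Divide every flow by the chromium mass removed per litre."""
    return {flow: amount / cr_removed_mg_per_litre
            for flow, amount in inventory_per_litre.items()}

# Hypothetical Case 1 inventory for 1 L of solution containing 50 mg Cr.
case1 = {"sulfuric_acid_g": 2.0, "sodium_metabisulfite_g": 1.5,
         "electricity_kWh": 0.01}
per_mg = scale_to_functional_unit(case1, cr_removed_mg_per_litre=50.0)
# per_mg["sulfuric_acid_g"] is 0.04 g of H2SO4 per mg of Cr removed
```

Characterization into midpoint categories would then multiply each normalized flow by a category-specific factor, which is what the TRACI model supplies.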
Procedia PDF Downloads 265
303 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance the information and predictive capabilities of Emergency Medical Services (EMS) systems, supporting both the operational and strategic decisions of different actors. The research methodology begins with a review of the technical-scientific literature on the indicators currently used for performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows the interconnection between the two EMS processes, the pre-hospital and the hospital one, connected by the assignment of the patient to a specific ED, to be considered. In this way, it is possible to optimize the entire patient management and to account for dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables evaluating the advantage of an ED selection decision made with visibility of the EDs' saturation status, thereby considering distance, available resources, and expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the large number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality.
The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators because the data required for their computation were unavailable. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, traditional KPIs cover mainly the initial phases related to the interconnection between the ambulance service and patient care, rather than the subsequent phases taking place in the hospital ED; this could be taken into consideration for potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level, and further levels with KPIs that bring additional and more detailed information.

Keywords: dashboard, decision support, emergency medical services, key performance indicators
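As a concrete illustration of the kind of indicator such a dashboard computes, the sketch below evaluates one classic pre-hospital KPI, the 90th-percentile ambulance response time. The response-time values are invented for illustration; the study's actual KPI set is not reproduced here.

```python
# Minimal sketch of one classic pre-hospital KPI: the 90th-percentile
# ambulance response time, computed with linear interpolation.
# The response times below are invented sample data.

def percentile(values, p):
    """Linear-interpolated percentile of `values` for 0 <= p <= 100."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

response_min = [4.2, 6.1, 7.5, 8.0, 9.3, 10.1, 12.4, 15.0, 16.2, 21.7]
kpi_p90 = percentile(response_min, 90)   # minutes
```

In a multi-layer dashboard of the kind proposed, an aggregate first level might show only this percentile, while deeper levels break it down by EMS phase.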
Procedia PDF Downloads 112
302 The Threats of Deforestation, Forest Fire and CO2 Emission toward Giam Siak Kecil Bukit Batu Biosphere Reserve in Riau, Indonesia
Authors: Siti Badriyah Rushayati, Resti Meilani, Rachmad Hermawan
Abstract:
A biosphere reserve is developed to create harmony among economic development, community development, and environmental protection through partnership between humans and nature. The Giam Siak Kecil Bukit Batu Biosphere Reserve (GSKBB BR) in Riau Province, Indonesia, is unique in that peat soil dominates the area, it contains many springs essential for human livelihood, and it has high biodiversity. Furthermore, it is the only biosphere reserve covering privately managed production forest areas. The annual occurrence of deforestation and forest fire poses a threat to this unique biosphere reserve. Forest fires produce smoke that, carried by air mass flows, reaches neighboring countries, particularly Singapore and Malaysia. In this research, we aimed at analyzing the threat of deforestation and forest fire and the potential CO2 emission at GSKBB BR. We used Landsat images and the ArcView and ERDAS IMAGINE 8.5 software packages to conduct spatial analysis of land cover and land use changes, calculated CO2 emission based on the emission potential of each land cover and land use type, and applied simple linear regression to demonstrate the relation between CO2 emission potential and deforestation. The results showed that, besides the buffer zone and transition area, deforestation also occurred in the core area. Spatial analysis of land cover and land use changes over the years 2010, 2012, and 2014 revealed changes from natural forest and industrial plantation forest to other land use types, such as garden, mixed garden, settlement, paddy fields, burnt areas, and dry agricultural land. Deforestation in the core area, particularly at the Giam Siak Kecil Wildlife Reserve and Bukit Batu Wildlife Reserve, occurred in the form of changes from natural forest into garden, mixed garden, shrubs, swamp shrubs, dry agricultural land, open area, and burnt area.
In the buffer zone and transition area, changes also occurred: what was once swamp forest changed into garden, mixed garden, open area, shrubs, swamp shrubs, and dry agricultural land. The spatial analysis indicated that the deforestation rate in the biosphere reserve from 2010 to 2014 had reached 16,119 ha/year. Besides deforestation, the biosphere reserve was also threatened by forest fire. The forest fires of 2014 burned 101,723 ha of the area, of which 9,355 ha were in the core area and 92,368 ha in the buffer zone and transition area. Deforestation and forest fire had increased CO2 emission by as much as 24,903,855 ton/year.

Keywords: biosphere reserve, CO2 emission, deforestation, forest fire
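The emission bookkeeping described above (CO2 calculated from the emission potential of each land cover type) can be sketched as a carbon-stock difference converted to CO2 mass. The per-hectare carbon stocks below are hypothetical placeholders, not the study's values.

```python
# Hedged sketch of land-use-change CO2 accounting: emission estimated from
# the carbon-stock difference between old and new land cover, converted to
# CO2 with the molar-mass ratio 44/12. Stocks are illustrative only.

CARBON_STOCK_T_PER_HA = {        # tonnes of carbon per hectare (hypothetical)
    "natural_forest": 150.0,
    "mixed_garden": 60.0,
    "dry_agriculture": 10.0,
}

def co2_emission_tonnes(area_ha, from_cover, to_cover):
    """CO2 released (tonnes) when `area_ha` changes land cover."""
    delta_c = CARBON_STOCK_T_PER_HA[from_cover] - CARBON_STOCK_T_PER_HA[to_cover]
    return area_ha * delta_c * 44.0 / 12.0

e = co2_emission_tonnes(100.0, "natural_forest", "dry_agriculture")
```

Summing such terms over every observed land-cover transition, per year, yields an annual emission figure of the kind reported in the abstract.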
Procedia PDF Downloads 487
301 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario
Authors: Dipankar Saha, J. P. Singh, C. B. Pandey
Abstract:
The Indian Thar desert, the seventh largest in the world and the main hot sand desert of India, occupies nearly 385,000 km2, about 9% of the area of the country, and harbours a flora of 682 species (63 of them introduced) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that of the Sahara desert, which is very significant for conservationists to envisage. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, an arid plant information facility, for long-term future use. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well.
As a part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae, and Bignoniaceae. Taxonomic diversity indices have also been worked out, these being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling us to expand and speed up digitization. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality-control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics (BI) and to present the ongoing effort of developing a database of the institute's existing botanical collection. This effort is expected to become part of various global initiatives toward an effective biodiversity information facility and will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface
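One common diversity index of the kind mentioned above is the Shannon index, H' = -sum(p_i ln p_i), computed here over specimen counts per family. The counts are hypothetical; the study's own index values are not reproduced.

```python
import math

# Sketch of one taxonomic diversity index, the Shannon index, over
# specimen counts per family. The counts below are invented examples.

def shannon_index(counts):
    """H' = -sum(p_i * ln(p_i)) over non-zero category counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

family_counts = [120, 80, 40, 40, 20]   # hypothetical specimens per family
H = shannon_index(family_counts)
```

For k categories, H' ranges from 0 (all specimens in one family) to ln(k) (perfectly even counts), which makes it convenient for comparing collections.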
Procedia PDF Downloads 229
300 Precursor Synthesis of Carbon Materials with Different Aggregates Morphologies
Authors: Nikolai A. Khlebnikov, Vladimir N. Krasilnikov, Evgenii V. Polyakov, Anastasia A. Maltceva
Abstract:
Carbon materials with developed surfaces are widely used both in modern industry and in environmental protection. The physical-chemical nature of these materials is determined by the morphology of the primary atomic and molecular carbon structures, which are the basis for synthesizing zero-dimensional (fullerenes), one-dimensional (fibers, tubes), two-dimensional (graphene), and three-dimensional (multi-layer graphene, graphite, foams) carbon nanostructures with unique physical-chemical and functional properties. Experience shows that the microscopic morphological level is the basis for the creation of the next, mesoscopic morphological level. A peculiarity of the latter level is the dependence of morphology on the chemical route and process prehistory (crystallization, colloid formation, liquid-crystal state, and others). These factors determine the consumer properties of carbon materials, such as specific surface area, porosity, chemical resistance in corrosive environments, and catalytic and adsorption activities. Based on the developed ideology of fine precursor synthesis, the authors discuss one approach to controlling the porosity of carbon-containing materials with a given aggregate morphology. The approach is based on low-temperature thermolysis of precursors in a gas environment of a given composition. Two compounds were selected as subjects of the research on carbothermic precursor synthesis: tungsten carbide WC:nC and zinc oxide ZnO:nC, each containing an impurity phase in the form of free carbon. In the first case, the object of synthesis was a transition metal (tungsten) that forms carbides; in the second case, zinc, which does not form carbides, was selected. Both kinds of metal compounds were synthesized by the method of precursor carbothermic synthesis from organic solutions.
ZnO:nC composites were obtained by thermolysis of zinc succinate Zn(OO(CH2)2OO), formate glycolate Zn(HCOO)(OCH2CH2O)1/2, glycerolate Zn(OCH2CHOCH2OH), and tartrate Zn(OOCCH(OH)CH(OH)COO). The WC:nC composite was synthesized from ammonium paratungstate and glycerol. In all cases, carbon structures specific to diamond-like carbon forms appeared on the surface of the WC and ZnO particles after heat treatment. Tungsten carbide and zinc oxide were removed from the composites by selective chemical dissolution, preserving the amorphous carbon phase. This work presents the results of investigating the WC:nC and ZnO:nC composites, as well as the carbon nanopowders with tubular, tape, plate, and onion aggregate morphologies separated by chemical dissolution of WC and ZnO from the composites, by the following methods: SEM, TEM, XPA, Raman spectroscopy, and BET. The connection between the carbon morphology and the synthesis conditions and chemical nature of the precursor is discussed, along with the possibility of tailoring the morphology of carbon-structured materials with specific surface areas up to 1700-2000 m2/g.

Keywords: carbon morphology, composite materials, precursor synthesis, tungsten carbide, zinc oxide
Procedia PDF Downloads 335
299 Airon Project: IoT-Based Agriculture System for the Optimization of Irrigation Water Consumption
Authors: África Vicario, Fernando J. Álvarez, Felipe Parralejo, Fernando Aranda
Abstract:
The irrigation systems of traditional agriculture, such as gravity-fed irrigation, waste a great deal of water because, generally, there is no control over the amount of water supplied in relation to the water needed. The AIRON Project tries to solve this problem by implementing an IoT-based system that instruments the irrigation plots with sensors so that the state of the crops and the amount of water used for irrigation can be known remotely. The IoT system consists of a sensor network that measures soil humidity, weather conditions (temperature, relative humidity, wind, and solar radiation), and the irrigation water flow. Communication between this network and a central gateway is conducted by means of long-range wireless technology chosen according to the characteristics of the irrigation plot. The main objective of the AIRON project is to deploy an IoT sensor network in two different plots of the irrigation community of Aranjuez, in the Spanish region of Madrid. The first plot is 2 km away from the central gateway, so LoRa has been used as the base communication technology. The challenge in this plot is the absence of mains electric power, so devices with energy-saving modes have had to be used to maximize the external batteries' use time. An ESP32 SoC board with a LoRa module is employed in this case to gather data from the sensor network and send them to a gateway consisting of a Raspberry Pi with a LoRa hat. The second plot is located 18 km away from the gateway, a range that hampers the use of LoRa technology. To establish reliable communication in this case, the Long-Term Evolution (LTE) standard is used, which makes it possible to reach much greater distances by using the cellular network. As mains electric power is available in this plot, a Raspberry Pi has been used instead of the ESP32 board to collect sensor data.
All data received from the two plots are stored on a proprietary server located at the irrigation management company's headquarters. The analysis of these data by means of machine learning algorithms, currently under development, should allow a short-term prediction of the irrigation water demand that would significantly reduce the waste of this increasingly valuable natural resource. The major finding of this work is the real possibility of deploying a remote sensing system for irrigated plots using Commercial Off-The-Shelf (COTS) devices, easily scalable and adaptable to design requirements such as the distance to the control center or the availability of mains electric power at the site.

Keywords: internet of things, irrigation water control, LoRa, LTE, smart farming
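LoRa uplinks favour very small payloads, so sensor readings of the kind listed above are typically packed into a compact binary frame before transmission. The field layout below (station id plus three float32 readings) is an assumption for illustration, not the project's actual frame format.

```python
import struct

# Sketch of a compact sensor payload for a LoRa uplink: one uint8 station
# id followed by three little-endian float32 values, 13 bytes in total,
# small enough for a single LoRa frame. The layout is hypothetical.

def pack_reading(station_id, soil_moisture_pct, temp_c, flow_l_min):
    return struct.pack("<Bfff", station_id, soil_moisture_pct, temp_c, flow_l_min)

def unpack_reading(payload):
    """Inverse of pack_reading; returns (id, moisture, temp, flow)."""
    return struct.unpack("<Bfff", payload)

frame = pack_reading(3, 41.5, 28.0, 12.25)
```

Keeping the frame this small also reduces air time, which matters both for duty-cycle limits and for the battery budget of the mains-free plot.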
Procedia PDF Downloads 84
298 An Experimental Study on Greywater Reuse for Irrigating a Green Wall System
Authors: Mishadi Herath, Amin Talei, Andreas Hermawan, Clarina Chua
Abstract:
Green walls are vegetated structures on building walls that are considered part of sustainable urban design. They have been shown to provide many micro-climate benefits, such as reduced indoor temperature, noise attenuation, and improved air quality. In parallel, several studies have been conducted on the potential reuse of greywater in urban water management. Greywater is relatively clean compared to blackwater; therefore, this study aimed to assess its potential reuse for irrigating a green wall system. The campus of Monash University Malaysia, located in Selangor state, was chosen as the study site, where a total of 48 greywater samples were collected from 7 toilet hand-wash basins and 5 pantries over a 3-month period. The samples were tested to characterize the quality of greywater in the study site and compare it with the local standard for irrigation water. The pH and the concentrations of heavy metals, nutrients, Total Suspended Solids (TSS), Biochemical Oxygen Demand (BOD), Chemical Oxygen Demand (COD), total Coliform, and E. coli were measured. Results showed that greywater could be used directly for irrigation with minimal treatment. Since the effluent of the system was to be drained to the stormwater drainage system, it needed to meet certain quality requirements. Therefore, a biofiltration system was proposed to host the green wall plants and also treat the greywater, used as irrigation water, to the required level. To assess the performance of the proposed system, an experimental setup consisting of Polyvinyl Chloride (PVC) soil columns with sand-based filter media was prepared. Two different local creeper plants were chosen considering several factors, including fast growth, low maintenance requirements, and aesthetics. Three replicates of each plant were used to ensure the validity of the findings.
The growth and survivability of the creeper plants were monitored for 6 months, while monthly sampling and testing of the effluent were conducted to evaluate its quality. An analysis was also conducted to estimate the potential costs and benefits of such a system, considering the water and energy savings it provides. Results showed that the proposed system can work efficiently over a long period of time with minimal maintenance. Moreover, the biofiltration-green wall system successfully reused greywater as irrigation water, while the effluent met all the requirements for being drained to the stormwater drainage system.

Keywords: biofiltration, green wall, greywater, sustainability
Procedia PDF Downloads 214
297 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency
Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva
Abstract:
This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration of the main pond of the Piracema Channel, part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice versa), overcoming a 120 m water drop. For that purpose, bottom data from 2002 (the channel's opening year) and 2015 were collected and analyzed, along with bed material at 12 stations, in order to identify their granulometric profiles. The Shields diagram and the Yalin and Karahan diagram for the initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the type of sediment (grain size) to be found at the bottom once equilibrium is reached. That granulometry was inferred by analyzing the coarser material (fine and medium sands) that flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft3D was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel so that the computed bed shear stresses at all stations fell simultaneously within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when sedimentological equilibrium is reached. Under that condition, 97% of the whole pond capacity will be silted, and a shallow watercourse with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick.
Thus, in the future, the main pond of the Piracema Channel will no longer serve its purpose of providing a resting place for migrating fish species, added to the fact that it may become an insurmountable barrier for medium- and large-sized specimens. Everything considered, its lifespan, from the year of its opening to the moment the sedimentological equilibrium configuration is reached, was estimated at approximately 95 years, almost half of the computed lifespan of the Itaipu Power Plant itself. It is worth mentioning, however, that the drawbacks of silting in the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium
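The critical-shear-stress criterion at the heart of the equilibrium analysis can be sketched with the Shields approach. The study reads the Shields and Yalin and Karahan diagrams directly; here, the Soulsby-Whitehouse explicit fit to the Shields curve is used instead as a convenient approximation, and the grain diameter is a placeholder within the fine-to-medium sand range mentioned in the abstract.

```python
import math

# Hedged sketch: critical bed shear stress for initiation of motion of a
# quartz sand grain in water, using the Soulsby-Whitehouse explicit fit to
# the Shields curve (an approximation; the study used the diagrams directly).

def critical_shear_stress(d_m, rho_s=2650.0, rho=1000.0, g=9.81, nu=1.0e-6):
    """Critical bed shear stress (Pa) for grain diameter d_m (metres)."""
    s = rho_s / rho
    d_star = d_m * ((s - 1.0) * g / nu**2) ** (1.0 / 3.0)  # dimensionless grain size
    theta_cr = 0.30 / (1.0 + 1.2 * d_star) + 0.055 * (1.0 - math.exp(-0.020 * d_star))
    return theta_cr * (rho_s - rho) * g * d_m

tau_cr = critical_shear_stress(0.2e-3)   # 0.2 mm fine sand (placeholder)
```

In the study's workflow, the bathymetry is iterated until the Delft3D-computed bed shear stress at each station falls below (or within the accepted band around) such a critical value for the expected grain size.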
Procedia PDF Downloads 233
296 Grassland Development on Evacuated Sites for Wildlife Conservation in Satpura Tiger Reserve, India
Authors: Anjana Rajput, Sandeep Chouksey, Bhaskar Bhandari, Shimpi Chourasia
Abstract:
Ecologically, a grassland is any plant community dominated by grasses, whether existing naturally or as a result of management practices. Most forest grasslands are anthropogenic, established plant communities planted for forage production, though some are established for soil and water conservation and wildlife habitat. In Satpura Tiger Reserve, Madhya Pradesh, India, most of the grasslands have been established on evacuated village sites. A total of 42 villages have been evacuated, and the study was carried out in 23 sites to evaluate habitat improvement. The grasslands were classified into three categories: evacuated sites, established sites, and control sites. The present study assessed the impact of various management interventions on grassland health. The grasslands were assessed for their composition, the status of palatable and non-palatable grasses, the status of herbs and legumes, the status of weed species, and their carrying capacity. The presence and abundance of wild herbivore species in the grasslands and the availability of water resources were also assessed. Grassland productivity depends mainly on the biotic and abiotic components of the area, but management interventions may also play an important role in grassland composition and productivity. Variation in the status of palatable and non-palatable grasses, legumes, and weeds was recorded and found to be affected by management intervention practices. Overall, across the studied grasslands, the most dominant grasses recorded were Themeda quadrivalvis, Dichanthium annulatum, Ischaemum indicum, Oplismenus burmanii, Setaria pumila, Cynodon dactylon, Heteropogon contortus, and Eragrostis tenella. The presence of wild herbivores, i.e., Chital, Sambar, Bison, Bluebull, Chinkara, and Barking deer, in the grassland area was recorded through camera traps, and their abundance was estimated.
The developed grasslands were assessed in terms of habitat suitability for Chital (Axis axis) and Sambar (Rusa unicolor). The parameters considered for suitability modeling are the biotic and abiotic life-requisite components existing in the area, i.e., density of grasses, density of legumes, availability of water, site elevation, and site distance from human habitation. The findings of the present study would be useful for further grassland management and animal translocation programmes.

Keywords: carrying capacity, dominant grasses, grassland, habitat suitability, management intervention, wild herbivore
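One simple way to combine the life-requisite parameters listed above into a single habitat suitability index is a weighted sum of normalized sub-scores. The weights and the 0-1 site scores below are invented for illustration; the study's actual suitability model is not reproduced here.

```python
# Hypothetical sketch: a habitat suitability index as a weighted mean of
# 0-1 sub-scores for the listed life-requisite parameters. Weights and
# scores are invented examples, not the study's model.

def suitability_index(scores, weights):
    """Weighted mean of 0-1 parameter scores; the result is also in 0-1."""
    assert set(scores) == set(weights)
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

weights = {"grass_density": 0.3, "legume_density": 0.2,
           "water_availability": 0.25, "elevation": 0.1,
           "distance_to_habitation": 0.15}
site = {"grass_density": 0.8, "legume_density": 0.6,
        "water_availability": 0.9, "elevation": 0.7,
        "distance_to_habitation": 0.5}
hsi = suitability_index(site, weights)
```

Computed per grassland and per species, such an index makes sites directly comparable when shortlisting translocation candidates.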
Procedia PDF Downloads 127
295 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator
Authors: Jaeyoung Lee
Abstract:
Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of numerous vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time of day and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSPs), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, so as to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the resulting lack of information is resolved by increasing the number of parallel branches. Second, an efficient convolution type is selected depending on the size of the activation: since the MMA size is fixed, normal convolution can be more efficient than depthwise separable convolution, depending on the memory access overhead, so the convolution type is decided according to the output stride in order to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP).
The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it uses two ASPPs to obtain high-quality contexts from the restored shape without global average pooling paths, since that layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks, on the Cityscapes validation set.

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network
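The convolution trade-off described above can be made concrete by counting multiply-accumulate (MAC) operations for the two convolution types at an assumed layer shape. On hardware with a fixed-size matrix multiplication unit, the much lower MAC count of the separable form does not always translate into lower latency, which is why the paper selects the type per output stride; the layer dimensions below are illustrative, not taken from MMANet.

```python
# Back-of-envelope MAC counts for a 3x3 normal convolution vs. a
# depthwise-separable one at the same (hypothetical) layer shape.

def macs_normal(h, w, c_in, c_out, k=3):
    """MACs of a standard k x k convolution over an h x w activation."""
    return h * w * c_in * c_out * k * k

def macs_depthwise_separable(h, w, c_in, c_out, k=3):
    """MACs of a k x k depthwise convolution plus a 1x1 pointwise one."""
    return h * w * c_in * (k * k + c_out)

n = macs_normal(60, 80, 64, 64)              # illustrative layer shape
s = macs_depthwise_separable(60, 80, 64, 64)
ratio = n / s                                # ~7.9x fewer MACs when separable
```

The roughly 8x MAC advantage of the separable form can be eaten up by the extra memory traffic of its two passes, which is the overhead the abstract refers to.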
Procedia PDF Downloads 129
294 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017
Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah
Abstract:
The lack of clean water supply in several big cities in Indonesia is a major problem in the development of urban areas. In the city of Semarang in particular, the population density and the growth of physical development are very high. Continuous extraction of large amounts of groundwater from the aquifer can result in a drastic year-by-year decline in aquifer supply, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang city. Therefore, dedicated research is needed to determine the spatial correlation between the decreasing aquifer capacity and the land subsidence phenomenon, and to confirm that land subsidence can be caused by a loss of pressure balance below the land surface. One method to observe the correlation pattern between the two phenomena is the application of remote sensing technology based on radar and optical satellites. Applying the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Area Subset (SBAS) method to SENTINEL-1A satellite images acquired in the 2014-2017 period yields a proper pattern of land subsidence. These results are then spatially correlated with the aquifer-decline pattern over the same time period. Survey results from 8 monitoring wells deeper than 100 m are used to observe the multi-temporal pattern of aquifer capacity change. In addition, the aquifer capacity pattern is validated against two groundwater maps from observations of the Ministry of Energy and Mineral Resources (ESDM) in Semarang city. The spatial correlation between the land subsidence pattern and the aquifer capacity is studied using overlay and statistical methods.
The results of this correlation show how strongly the decrease in groundwater capacity influences the distribution and intensity of land subsidence in Semarang city. In addition, the results are analyzed from a geological perspective, considering hydrogeological parameters, soil types, aquifer types, and geological structures. The outcome of this study is a correlation map of aquifer capacity against land subsidence in the city of Semarang for the period 2014-2017. It is hoped that the results can help the authorities in the future spatial planning of Semarang city.

Keywords: aquifer, differential interferometric synthetic aperture radar (DINSAR), land subsidence, small baseline area subset (SBAS)
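The statistical side of the overlay analysis can be sketched as an ordinary least-squares fit of subsidence against groundwater decline at the monitoring wells. The eight (decline, subsidence) pairs below are synthetic, standing in for the study's well data.

```python
# Plain-Python sketch of the correlation step: a simple linear fit of
# subsidence against groundwater-level decline at eight (synthetic) wells.

def linear_fit(xs, ys):
    """Return (slope, intercept, pearson_r) of an ordinary least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / (sxx * syy) ** 0.5

decline_m = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]       # synthetic, per well
subsidence_cm = [1.1, 2.0, 2.9, 4.2, 5.0, 6.1, 7.0, 8.2]   # synthetic, per well
slope, intercept, r = linear_fit(decline_m, subsidence_cm)
```

A Pearson r close to 1 at the wells, combined with the spatial overlay of the DInSAR/SBAS subsidence map and the aquifer-decline map, is what supports a causal reading of the correlation.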
Procedia PDF Downloads 182
293 Fire Safe Medical Oxygen Delivery for Aerospace Environments
Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl
Abstract:
Atmospheric pressure and oxygen (O2) concentration are critical life-support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion, which may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, raising the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, it is heated to 100°C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb the excess O2 from exhaled air. The O2 absorption is a sub-second process, and desorption is a multi-second process: while heated at 0.685°C/sec, the complex desorbs ~90% of its O2 in 110 sec.
These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will create a continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 times that in air and desorb over 90% of the O2 at 100°C. Over 12 cycles of thermogravimetry measurement, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove O2 continuously at 9 L/min (~90% desorbed at 100°C) from exhaled air. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex for reversible O2 absorption in the proposed fire-safe medical O2 delivery system.
Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption
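The figures reported above are internally consistent, which can be checked with a short back-of-envelope sketch. The starting temperature of ~25 °C and the one-swap-per-minute duty cycle are our assumptions, not values stated in the abstract:

```python
def ramp_time_s(t_start_c, t_target_c=100.0, rate_c_per_s=0.685):
    """Seconds to heat the cobalt complex from t_start_c to the
    desorption temperature at the reported 0.685 degC/s rate."""
    return (t_target_c - t_start_c) / rate_c_per_s

def o2_per_cycle_l(mass_kg, capacity_l_per_kg=20.0, desorbed_fraction=0.9):
    """Litres of O2 released per desorption cycle, using the reported
    ~20 L/kg capacity and ~90% desorption at 100 degC."""
    return mass_kg * capacity_l_per_kg * desorbed_fraction

ramp = ramp_time_s(25.0)        # heating from an assumed ~25 degC start
per_half = o2_per_cycle_l(0.5)  # O2 released by each 0.5 kg half of the swing pair
```

Heating from an assumed ~25 °C gives ≈109.5 s, in line with the reported ~110 s window, and each 0.5 kg half releases about 9 L per swing, matching the quoted 9 L/min if the absorb/desorb halves alternate roughly once per minute.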
Procedia PDF Downloads 104
292 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. However, the core of a digital twin should be its model, capable of mirroring, shadowing, and threading with the real-world entity, and this capability is still underdeveloped. For a floating solar energy system, a digital twin model can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, simulating the mooring systems used in real-world floating solar applications. The floating solar energy system’s sensor setup includes various devices to monitor environmental and operational parameters. 
An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record heave and pitch amplitudes of the floating system’s motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to developing the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin model combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system’s operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It provides useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
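The ANN-based digital model described above can be sketched in miniature. The following is a minimal single-hidden-layer network in NumPy, trained on synthetic stand-ins for the sensor streams (irradiance, panel temperature, wave height); the feature set, the target relationship, and the network size are illustrative assumptions, not the authors’ actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for sensor streams (normalised): irradiance, panel
# temperature, wave height -> panel power (a made-up linear relationship)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = (0.8 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 2])[:, None]

# Part of the data trains the digital model; the rest validates/tests it
X_tr, X_te, y_tr, y_te = X[:140], X[140:], y[:140], y[140:]

# One hidden layer of 8 tanh units, linear output, plain gradient descent
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(5000):
    H = np.tanh(X_tr @ W1 + b1)        # hidden activations
    err = H @ W2 + b2 - y_tr           # prediction error
    gW2 = H.T @ err / len(X_tr)        # backpropagated gradients
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X_tr.T @ dH / len(X_tr)
    W1 -= lr * gW1; b1 -= lr * dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * err.mean(0)

# "Real-time" prediction on held-out sensor data
test_mse = float(np.mean((np.tanh(X_te @ W1 + b1) @ W2 + b2 - y_te) ** 2))
```

In a digital twin, the same trained network would be re-evaluated continuously against the live sensor feed, with retraining as new experimental data accumulate.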
Procedia PDF Downloads 6
291 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. The craft techniques and associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product, or the mastery of the artist does not seem to be a sufficient reason to preserve this productive model. In recent years, the adoption of open source digital manufacturing technologies in small art workshops can favor their permanence by offering great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use pieces modeled by computer and made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. Models printed in PLA are limited to approximate minimum sizes of 3 cm, and the optimal layer height resolution is 0.1 mm. Due to these limitations, it is not the most suitable technology for artistic casting processes of smaller pieces. One alternative that overcomes the size limitation is selective laser sintering (SLS) printers. Another possibility, in which a laser hardens metal powder layer by layer, is called DMLS (Direct Metal Laser Sintering). However, due to its high cost, it is a technology that is difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolutions for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes), and can print models with castable resins that allow subsequent direct artistic casting in precious metals or their adaptation to processes such as electroforming. 
In this work, the design of a DLP 3D printer is detailed, using LCD screens backlit with ultraviolet light. Its development is totally open source and it is proposed as a kit made up of electronic components, based on Arduino, and mechanical components that are easy to source in the market. The CAD files of its components can be manufactured on low-cost FDM 3D printers. The result is a printer that costs less than 500 Euros, offers high resolution, and has an open design with free access that allows not only its manufacture but also its improvement. In future works, we intend to carry out different comparative analyses, which will allow us to accurately estimate the print quality, as well as the real cost of the artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
290 Time to Retire Rubber Crumb: How Soft Fall Playgrounds are Threatening Australia’s Great Barrier Reef
Authors: Michelle Blewitt, Scott P. Wilson, Heidi Tait, Juniper Riordan
Abstract:
Rubber crumb is a physical and chemical pollutant of concern for the environment and human health, warranting immediate investigation into its pathways to the environment and potential impacts. This emerging microplastic is created by shredding end-of-life tyres into ‘rubber crumb’ particles of 1-5 mm, used on synthetic turf fields and soft-fall playgrounds as a solution to intensifying tyre waste worldwide. Despite the crumb’s known toxic and carcinogenic properties, studies into its transportation pathways and movement patterns from these surfaces remain in their infancy. To address this deficit, AUSMAP, the Australian Microplastic Assessment Project, in partnership with the Tangaroa Blue Foundation, conducted a study to quantify crumb loss from soft-fall surfaces. To the best of our knowledge, this is the first study of its kind; funding for the audits was provided by the Australian Government’s Reef Trust. Sampling occurred at 12 soft-fall playgrounds within the Great Barrier Reef Catchment Area on Australia’s north-east coast, in close proximity to the United Nations World Heritage Listed Reef. Samples were collected over a 12-month period using randomized sediment cores at 0, 2 and 4 meters from the playground edge along a 20-meter transect. This approach addressed two objectives pertaining to particle movement: establishing that crumb loss is occurring and that it decreases with distance from the soft-fall surface. Rubber crumb abundance was expressed as a total value and used to determine an expected average rubber crumb loss per m2. An Analysis of Variance (ANOVA) was used to compare differences in crumb abundance at each interval from the playground. Site characteristics, including surrounding sediment type, playground age, degree of ultraviolet exposure and amount of foot traffic, were additionally recorded for comparison. 
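The ANOVA comparison across distance intervals can be illustrated with a short NumPy sketch; the crumb counts below are invented for illustration and are not the study’s data:

```python
import numpy as np

# Illustrative crumb counts (particles per core) at each distance interval
groups = {
    "0 m": np.array([120.0, 135, 110, 128]),
    "2 m": np.array([60.0, 72, 55, 66]),
    "4 m": np.array([20.0, 25, 18, 22]),
}

# One-way ANOVA: partition variance between and within distance groups
values = np.concatenate(list(groups.values()))
grand_mean = values.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_between = len(groups) - 1
df_within = len(values) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

A large F statistic relative to the F distribution’s critical value would indicate that mean crumb abundance genuinely differs with distance from the playground edge, consistent with the study’s second objective.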
Preliminary findings indicate that crumb is being lost at considerable rates from soft-fall playgrounds in the region, emphasizing an urgent need to further examine it as a potential source of aquatic pollution, soil contamination and a threat to individuals who regularly use these surfaces. Additional implications for the future of rubber crumb as a fit-for-purpose recycling initiative will be discussed with regard to industry, governments and the economic burden of surface maintenance and/or replacement.
Keywords: microplastics, toxic rubber crumb, litter pathways, marine environment
Procedia PDF Downloads 91
289 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence
Authors: Gus Calderon, Richard McCreight, Tammy Schwartz
Abstract:
Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
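As a concrete illustration of the spectral vegetation index maps mentioned above, a common per-pixel choice is NDVI, computed from the near-infrared and red bands. The abstract does not specify which index FireWatch’s proprietary algorithms use, so the index choice, the reflectance values, and the greenness threshold below are all assumptions:

```python
import numpy as np

# Toy 2x2 multispectral tiles (reflectance); values are illustrative only
nir = np.array([[0.50, 0.45], [0.30, 0.05]])
red = np.array([[0.10, 0.12], [0.20, 0.04]])

# NDVI, a standard spectral vegetation index: (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / (nir + red)

# Simple fuel-condition flag: pixels above a (hypothetical) greenness threshold
vegetated = ndvi > 0.3
```

Per-parcel statistics over such index maps, combined with structure proximity, are the kind of inputs an object-based classifier could use to prioritize defensible-space mitigation.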
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk
Procedia PDF Downloads 108
288 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model
Authors: A. Shakoor, M. Arshad
Abstract:
The utilization of groundwater resources in irrigation has significantly increased during the last two decades due to constrained canal water supplies. More than 70% of the farmers in Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, on the spatiotemporal variation in groundwater level and quality. The Processing MODFLOW for Windows (PMWIN) and MT3D (solute transport) models were used to simulate existing conditions and predict groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R2) and model efficiency (MEF) for the calibration and validation periods were calculated as 0.89 and 0.98, respectively, indicating a high level of agreement between the calculated and measured data. For the solute transport model (MT3D), values of advection and dispersion parameters were used. The model was then run for future scenarios up to 2030, assuming no major change in climate and a gradually increasing groundwater abstraction rate. The model predicted that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where canal system infrastructure is sparse. This lowering of the groundwater level might cause an increase in tubewell installation and pumping costs. 
Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, good quality water would be reduced by 21.4%, while marginal and hazardous quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater quality deteriorated with the depth of the water table, i.e., TDS increased with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (Aquifer Storage and Recovery) wells, etc., be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
Keywords: groundwater quality, groundwater management, PMWIN, MT3D model
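The calibration statistics reported above, the coefficient of determination and model efficiency, are standard goodness-of-fit measures for groundwater models. A minimal sketch, using illustrative water-table depths rather than the study’s data, might look like:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1.0 indicates a perfect fit."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def r_squared(observed, simulated):
    """Coefficient of determination as the squared Pearson correlation."""
    r = np.corrcoef(observed, simulated)[0, 1]
    return r ** 2

# Illustrative water-table depths (m): measured vs. model-calculated
measured = [5.1, 5.4, 5.9, 6.3, 6.8, 7.0]
simulated = [5.0, 5.5, 5.8, 6.4, 6.7, 7.1]
```

Values near 1.0 for both metrics, as in the calibration reported above, indicate close agreement between calculated and measured groundwater levels.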
Procedia PDF Downloads 378
287 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings
Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof
Abstract:
False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g. antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this has only been achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features into one single monomer: a zwitterionic sulfobetaine, ensuring antifouling capabilities, equipped with a clickable azide moiety that allows for further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido-monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions. 
The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to the traditional modification of sulfobetaine brushes that relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g. depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.
Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes
Procedia PDF Downloads 306
286 Fueling Efficient Reporting and Decision-Making in Public Health with Large Data Automation in Remote Areas, Neno Malawi
Authors: Wiseman Emmanuel Nkhomah, Chiyembekezo Kachimanga, Julia Huggins, Fabien Munyaneza
Abstract:
Background: Partners In Health – Malawi introduced an operational research study called the Primary Health Care (PHC) Surveys in 2020, which seeks to assess the progress of care delivery in the district. The study consists of 5 long surveys, namely: Facility Assessment, General Patient, Provider, Sick Child, and Antenatal Care (ANC), primarily conducted in 4 health facilities in Neno district. These facilities include Neno district hospital, Dambe health centre, Chifunga and Matope. Usually, these annual surveys are conducted from January, and the target is to present the final report by June. Once data is collected and analyzed, a series of reviews takes place before reaching the final report. Initially, the manual process took over 9 months to produce the final report, and initial findings showed that only about 76.9% of the data added up when cross-checked with paper-based sources. Purpose: The aim of this approach is to move away from manually pulling data, re-running analyses, and preparing reports, a process associated not only with reporting delays and inconsistencies but also with poor quality of data if not done carefully. This automation approach was meant to utilize features of new technologies to create visualizations, reports, and dashboards in Power BI that are fed directly from the data source, CommCare, and hence require only a single click of a ‘refresh’ button to populate updated information in the visualizations, reports, and dashboards at once. Methodology: We transformed paper-based questionnaires into electronic ones using the CommCare mobile application. We further connected the CommCare mobile app directly to Power BI using an Application Programming Interface (API) connection as the data pipeline. This provided the chance to create visualizations, reports, and dashboards in Power BI. 
In contrast to the process of manually collecting data in paper-based questionnaires, entering them into ordinary spreadsheets, and conducting analysis every time a report was prepared, the team utilized CommCare and Microsoft Power BI technologies. We utilized validations and logic in CommCare to capture data with fewer errors. We utilized Power BI features to host the reports online by publishing them to the cloud. We switched from sharing ordinary report files to sharing a link with potential recipients, giving them the freedom to dig deeper into extra findings within the Power BI dashboards and to export to any format of their choice. Results: This data automation approach reduced research timelines from the initial 9 months to 5. It also improved the quality of the data findings from the original 76.9% to 98.9%. This brought confidence to draw conclusions from the findings that help in decision-making and opened opportunities for further research. Conclusion: These results suggest that automating the research data process has the potential to reduce the overall amount of time spent and improve the quality of the data. On this basis, the concept of data automation should be taken into serious consideration when conducting operational research for efficiency and decision-making.
Keywords: reporting, decision-making, Power BI, CommCare, data automation, visualizations, dashboards
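The CommCare-to-Power-BI pipeline hinges on polling CommCare’s REST API for submitted forms. A minimal sketch of the request construction follows; the project name, the form XMLNS, and the exact endpoint version are assumptions for illustration, not the team’s actual configuration:

```python
import urllib.parse

# CommCare HQ exposes a form-listing REST API under /a/<project>/api/...;
# the project slug and XMLNS below are hypothetical placeholders
BASE = "https://www.commcarehq.org/a/{project}/api/v0.5"

def form_list_url(project, xmlns, limit=100):
    """Build the URL a report-refresh pipeline (e.g. Power BI's web data
    connector) would poll for submitted survey forms."""
    query = urllib.parse.urlencode({"xmlns": xmlns, "limit": limit})
    return f"{BASE.format(project=project)}/form/?{query}"

url = form_list_url("neno-phc", "http://example.org/phc-survey", limit=50)
# A real request would add API-key authentication, e.g.
# headers = {"Authorization": "ApiKey user@example.org:API_KEY"}
```

Because Power BI refreshes against this URL rather than a manually exported file, the dashboards stay in sync with whatever enumerators have submitted, which is what enables the single-click ‘refresh’ workflow described above.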
Procedia PDF Downloads 116
285 Detection of Ice Formation Processes Using Multiple High-Order Ultrasonic Guided Wave Modes
Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand
Abstract:
Icing causes significant damage to aviation and renewable energy installations. Air-conditioning, refrigeration, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types. Icing mechanisms, distributions, and patterns are still active research topics. The adhesion strength between ice and surfaces differs in different icing environments, which makes the task of anti-icing very challenging. Techniques for different icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). Most methods are oriented toward a particular sector, and adapting them to, or suggesting them for, other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure’s design. The vast majority of these methods require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic, superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming and depend on forecasting. They can be applied on small surfaces only for specific targets, and most are non-biodegradable (except for anti-freezing proteins). There is some promising information on ultrasonic ice mitigation methods that employ UGW (Ultrasonic Guided Waves). 
These methods offer low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing the ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation in real conditions (temperature range from +24 °C to -23 °C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. The selection of wave modes suitable for the detection of the ice formation phenomenon on a copper surface was performed. The interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes. It was demonstrated that the proposed ultrasonic technique can successfully detect ice layer formation on a metal surface.
Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing
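One simple way an ice layer shows up in a pitch-catch measurement is as a delayed, attenuated received mode. The following NumPy sketch estimates a time-of-flight shift by cross-correlating an "iced" signal against an ice-free baseline; the waveforms, the injected delay, and the attenuation factor are synthetic assumptions, not the experimental data:

```python
import numpy as np

fs = 50e6                         # 50 MHz sampling for a 5 MHz transducer
t = np.arange(200) / fs           # 4 us record length
f0 = 5e6

def burst(delay_s):
    """Gaussian-windowed 5 MHz tone burst arriving after delay_s."""
    tau = t - delay_s
    return np.exp(-((tau - 0.5e-6) ** 2) / (0.1e-6) ** 2) * np.sin(2 * np.pi * f0 * tau)

baseline = burst(0.0)             # ice-free reference signal
iced = 0.7 * burst(0.12e-6)       # ice layer delays and attenuates the mode (illustrative)

# Time-of-flight shift via cross-correlation against the baseline
xc = np.correlate(iced, baseline, mode="full")
lag = int(np.argmax(xc)) - (len(baseline) - 1)
shift_ns = lag / fs * 1e9
```

Tracking such delay and amplitude changes per wave mode, alongside temperature, is one plausible way the sensitivity reported above could be turned into an ice-layer detector.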
Procedia PDF Downloads 64