Search results for: unsteady non-equilibrium distribution functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7508

488 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide a necessary tool for searches for aircraft and boats lost at sea, to supply sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. A similar approach had already been taken a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts, followed by sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a single sediment classification to present all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are examined; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, generalizing over-precise data where necessary. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing and permits the release of a digital version every two years, incorporating newly produced maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. Much work remains, however, to improve some regions that are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 230
487 Role of Platelet Volume Indices in Diabetes Related Vascular Angiopathies

Authors: Mitakshara Sharma, S. K. Nema, Sanjeev Narang

Abstract:

Diabetes mellitus (DM) is a group of metabolic disorders characterized by metabolic abnormalities, chronic hyperglycaemia, and long-term macrovascular and microvascular complications. Vascular complications are due to platelet hyperactivity and dysfunction, increased inflammation, altered coagulation, and endothelial dysfunction. A large proportion of patients with Type II DM suffer from preventable vascular angiopathies, and there is a need to develop risk-factor modifications and interventions to reduce the impact of complications. These complications are attributed to platelet activation, recognised by an increase in Platelet Volume Indices (PVI), including Mean Platelet Volume (MPV) and Platelet Distribution Width (PDW). The current study is a prospective analytical study conducted over 2 years. Of 1100 individuals, 930 fulfilled the inclusion criteria and were segregated into three groups on the basis of glycosylated haemoglobin (HbA1c): (a) diabetic, (b) non-diabetic, and (c) subjects with impaired fasting glucose (IFG), with 300 individuals each in the IFG and non-diabetic groups and 330 in the diabetic group. The diabetic group was further divided into two groups on the basis of the presence or absence of known diabetes-related vascular complications. Samples for HbA1c and PVI were collected using ethylenediaminetetraacetic acid (EDTA) as anticoagulant and processed on a SYSMEX X-800i autoanalyser. The study revealed a gradual increase in PVI from non-diabetics to IFG to diabetics, with PVI markedly increased in diabetic patients. MPV and PDW of diabetics, IFG subjects, and non-diabetics were (17.60 ± 2.04) fl, (11.76 ± 0.73) fl, (9.93 ± 0.64) fl and (19.17 ± 1.48) fl, (15.49 ± 0.67) fl, (10.59 ± 0.67) fl, respectively, with a significant p value of 0.00 and a significant positive correlation (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875).
MPV and PDW of subjects with diabetes-related complications were higher than those of subjects without them, at (17.51 ± 0.39) fl and (15.14 ± 1.04) fl, and (20.09 ± 0.98) fl and (18.96 ± 0.83) fl, respectively, with a significant p value of 0.00. There was a significant positive correlation between PVI and duration of diabetes across the groups (MPV-HbA1c r = 0.951; PDW-HbA1c r = 0.875). However, a significant negative correlation was found between glycaemic levels and total platelet count (PC-HbA1c r = -0.164). This is a multi-parameter, comprehensive study with an adequately powered design. It can be concluded from our study that PVI are extremely useful and important indicators of impending vascular complications in all patients with deranged glycaemic control. The introduction of automated cell counters has made PVI available as routine parameters. PVI are a useful means of identifying larger, more active platelets, which play an important role in the development of the micro- and macroangiopathic complications of diabetes that lead to mortality and morbidity. PVI can be used as cost-effective markers to predict and prevent impending vascular events in patients with diabetes mellitus, especially in developing countries like India. If incorporated into protocols for the management of diabetes, PVI could revolutionize care and curtail the ever-increasing cost of patient management.

Keywords: diabetes, IFG, HbA1C, MPV, PDW, PVI

Procedia PDF Downloads 256
486 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running such large simulations requires a considerable amount of RAM; the DSEM code must therefore be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information for communication, must be done sequentially on a single processing core. A separate code has been written to perform these pre-processing procedures on a local machine. It extracts from the mesh file the minimum amount of information the DSEM code needs to start in parallel and stores it in text files (pre-files). Integer-type information is packed in a stream binary format so that the pre-files are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O on Lustre, so that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is best suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution at every time step. For this, the code can use either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. Weak scaling tests performed on Blue Waters showed scalable and efficient parallel performance of the code.
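The key property of the pre-files described above is that integer metadata is packed in a fixed binary layout so the files are portable between machines. The actual DSEM pre-file layout is not specified in the abstract; the sketch below is a hypothetical minimal version of the idea, packing a count followed by a list of global element IDs with an explicit byte order:

```python
import struct

# Hypothetical pre-file writer/reader: packs per-rank integer metadata
# (an element count followed by global element IDs) in a fixed
# big-endian layout, so the file reads identically on any machine.
# The real DSEM pre-file format is not documented here; this is a sketch.
def write_prefile(path, global_ids):
    with open(path, "wb") as f:
        f.write(struct.pack(">I", len(global_ids)))          # element count
        f.write(struct.pack(f">{len(global_ids)}I", *global_ids))

def read_prefile(path):
    with open(path, "rb") as f:
        (n,) = struct.unpack(">I", f.read(4))
        return list(struct.unpack(f">{n}I", f.read(4 * n)))

write_prefile("rank0.pre", [7, 42, 99])
print(read_prefile("rank0.pre"))  # → [7, 42, 99]
```

Fixing the endianness (here `>` for big-endian) is what makes the file portable; each MPI rank can then seek directly to its own fixed-size record without parsing text.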

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 131
485 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia

Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi

Abstract:

Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. It therefore contributes to movement control, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebello-propriospinal and cerebello-vestibular pathways. Hence, stimulating the cerebellum with facilitatory protocols may improve motor control and balance function. Non-invasive brain stimulation techniques, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), have recently emerged as effective neuromodulators of motor and non-motor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions of high-frequency rTMS over the cerebellum, followed by an intensive exercise training program. Sessions were given three times per week for four weeks.
The active group protocol used 10 Hz rTMS over the cerebellar vermis, with a work period of 5 s, 25 trains, and an intertrain interval of 25 s, for a total of 1250 pulses per session. Both groups of patients received an intensive exercise program, which included generalized strengthening exercises, endurance and aerobic training, trunk and abdominal exercises, generalized balance training, and task-oriented training such as boxing. The modified ICARS was used as the primary outcome measure. Static posturography was performed with patients tested with both open and closed eyes. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-meter walk test (8MWT). Results: The active group showed significant improvements in all the functional scales (modified ICARS, EDSS, and 8MWT), in addition to significant differences in static posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.

Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation

Procedia PDF Downloads 89
484 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Solar drying of food grains has therefore become an imperative choice to meet the dual challenges of the high energy demand for drying and the climate change scenario. But the performance and reliability of solar dryers depend heavily on sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome these disadvantages. For the development of such robust hybrid dryers, optimization of the process parameters becomes critical to ensure the quality and shelf life of paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a three-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model. Furthermore, optimization of the process parameters (power level, air velocity, and moisture content) was done using response surface methodology in Design-Expert software. A 3D FEM predicting moisture migration in a single kernel at every time step has been developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results.
Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, and a desirability of 0.905. These optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of product. PV-integrated hybrid solar dryers can be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
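The model validation above rests on three standard error metrics (MAE, MRE, SE) between predicted and measured moisture contents. A minimal sketch of how these are computed, using hypothetical moisture values rather than the study's data:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical predicted vs. experimental moisture contents (dry basis),
# illustrating the MAE / MRE / SE metrics the abstract reports.
predicted    = [0.240, 0.221, 0.205, 0.190, 0.178]
experimental = [0.238, 0.224, 0.203, 0.192, 0.176]

errors = [p - e for p, e in zip(predicted, experimental)]
mae = mean(abs(d) for d in errors)                          # mean absolute error
mre = mean(abs(d) / e for d, e in zip(errors, experimental))  # mean relative error
se  = stdev(errors) / sqrt(len(errors))                     # standard error of the mean error

print(mae, mre, se)
```

Small MAE and MRE values, as reported in the abstract, indicate that the FEM predictions track the experimental drying curve closely.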

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 147
483 Traditional Wisdom of Indigenous Vernacular Architecture as Tool for Climate Resilience Among PVTG Indigenous Communities in Jharkhand, India

Authors: Ankush, Harshit Sosan Lakra, Rachita Kuthial

Abstract:

Climate change poses significant challenges to vulnerable communities, particularly indigenous populations in ecologically sensitive regions. Jharkhand, located in the heart of India, is home to several indigenous communities, including the Particularly Vulnerable Tribal Groups (PVTGs). The indigenous architecture of the region functions as a significant reservoir of climate adaptation wisdom. The study explores an architectural analysis encompassing construction materials, construction techniques, design principles, climate responsiveness, cultural relevance, adaptation, and integration with the environment, along with traditional wisdom that has evolved through generations. Rooted in cultural and socioeconomic traditions, this wisdom has allowed these communities to thrive in a variety of climatic zones, including hot and dry, humid, and hilly terrains, and to withstand the test of time. Despite their historical resilience to adverse climatic conditions, PVTG communities face new and amplified challenges due to the accelerating pace of climate change. A significant research void exists in assimilating their traditional practices and local wisdom into contemporary climate resilience initiatives. Most studies emphasize technologically advanced solutions, often ignoring the invaluable indigenous local knowledge that can complement and enhance these efforts. This research gap highlights the need to bridge the disconnect between indigenous knowledge and contemporary climate adaptation strategies. The study aims to explore and leverage indigenous knowledge of vernacular architecture as a strategic tool for enhancing climatic resilience among the PVTGs of the region. The first objective is to understand the traditional wisdom of vernacular architecture by analyzing and documenting the distinct architectural practices and cultural significance of PVTG communities, emphasizing construction techniques, materials, and spatial planning.
The second objective is to develop culturally sensitive climatic resilience strategies based on the findings on vernacular architecture, employing a multidisciplinary research approach that encompasses ethnographic fieldwork and climate data assessment, considering variables such as temperature variations, precipitation patterns, extreme weather events, and climate change reports. The outcome will be a tailor-made solution integrating indigenous knowledge with modern technology and sustainable practices. By involving indigenous communities in the process, the research aims to ensure that the developed strategies are practical, culturally appropriate, and accepted. To foster long-term resilience against the global issue of climate change, traditional wisdom can bridge the gap between present needs and future aspirations, offering sustainable solutions that empower PVTG communities. Moreover, the study emphasizes the significance of preserving and reviving traditional architectural wisdom for enhancing climatic resilience. It also highlights the need for cooperative endeavors by communities, stakeholders, policymakers, and researchers to encourage the integration of traditional knowledge into modern sustainable design methods. Through these efforts, this research will contribute not only to the well-being of PVTG communities but also to the broader global effort to build a more resilient and sustainable future, so that indigenous communities like the PVTGs of Jharkhand can achieve climatic resilience while respecting and safeguarding the cultural heritage and distinctive characteristics of the native population.

Keywords: vernacular architecture, climate change, resilience, PVTGs, Jharkhand, indigenous people, India

Procedia PDF Downloads 73
482 Impact of Terrorism as an Asymmetrical Threat on the State's Conventional Security Forces

Authors: Igor Pejic

Abstract:

The main focus of this research is on analyzing the correlative links between terrorism as an asymmetrical threat and the consequences it leaves on conventional security forces. The methodology includes qualitative research methods focusing on comparative analysis of books, scientific papers, documents, and other sources in order to deduce, explore, and formulate the results of the research. With the coming of the 21st century and the rising multi-polar world order, new threats quickly emerged. The realistic approach in international relations deems that relations among nations are in a constant state of anarchy, since there are no definitive rules and the distribution of power varies widely. International relations are further characterized by an egoistic and self-oriented human nature, anarchy or the absence of a higher government, security concerns, and a lack of morality. The asymmetry of power is also reflected in countries' security capabilities and their ability to project power. With the new millennium, this asymmetry of power became an important trait of global society, which consequently brought new threats. Among various others, terrorism is probably the best-known, best-established, and most widespread asymmetric threat. In today's global political arena, terrorism is used by state and non-state actors to fulfill their political agendas. Terrorism is used as an all-inclusive tool for regime change, subversion, or revolution. Although the nature of terrorist groups is somewhat inconsistent, terrorism as a security and social phenomenon has one constant, which is reflected in its political dimension. The state's security apparatus, embodied in the form of conventional armed forces, is now becoming fragile, unable to tackle new threats, and to a certain extent outdated.
Conventional security forces were designed to defend against or engage an exterior threat that is more or less symmetric and visible. Terrorism as an asymmetrical threat, on the other hand, is part of hybrid, special, or asymmetric warfare, in which specialized units, institutions, or facilities represent the primary pillars of security. In today's global society, terrorism is probably the most acute problem, able to paralyze entire countries and their political systems. This problem, however, cannot be engaged on an open field of battle; it requires a different approach, in which conventional armed forces cannot be used traditionally and their role must be adjusted. The research tries to shed light on the phenomenon of modern-day terrorism and to establish its correlation with the state's conventional armed forces. States are obliged to adjust their security apparatus to the new reality of global society and to terrorism as an asymmetrical threat, a side-product of an unbalanced world.

Keywords: asymmetrical warfare, conventional forces, security, terrorism

Procedia PDF Downloads 260
481 Physical Model Testing of Storm-Driven Wave Impact Loads and Scour at a Beach Seawall

Authors: Sylvain Perrin, Thomas Saillour

Abstract:

The Grande-Motte port and seafront development project on the French Mediterranean coastline entailed evaluating wave impact loads (pressures and forces) on the new beach seawall and comparing the resulting scour potential at the base of the existing and new seawalls. A physical model was built at ARTELIA's hydraulics laboratory in Grenoble (France) to provide insight into the evolution of scouring over time at the front of the wall, the intensity and distribution of quasi-static and impulsive wave forces on the wall, and water and sand overtopping discharges over the wall. The beach consists of fine sand and is approximately 50 m wide above mean sea level (MSL). Seabed slopes range from 0.5% offshore to 1.5% closer to the beach. A smooth concrete structure with an elevated curved crown wall will replace the existing concrete seawall. Prior to the start of breaking (at the -7 m MSL contour), storm-driven maximum spectral significant wave heights of 2.8 m and 3.2 m were estimated for the benchmark historical storm event of 1997 and the 50-year return period storm respectively, resulting in 1 m high waves at the beach. For the wave load assessment, a tensor scale measured wave forces and moments, and five piezo/piezo-resistive pressure sensors were placed on the wall. The light-weight sediment physical model and the pressure and force measurements were performed at a scale of 1:18. The polyvinyl chloride light-weight particles used to model the prototype silty sand had a density of approximately 1,400 kg/m³ and a median diameter (d50) of 0.3 mm. Quantitative assessments of seabed evolution were made using a measuring rod and a laser scan survey. Testing demonstrated numerous impulsive wave impacts on the reflector (22%), induced not by direct wave breaking but mostly by wave run-up slamming on the top curved part of the wall. Wave forces of up to 264 kilonewtons and impulsive spikes of up to 127 kilonewtons were measured.
A maximum scour of -0.9 m was measured for the new seawall versus -0.6 m for the existing seawall, which is imputable to increased wave reflection (reflection coefficient of 25.7-30.4% vs 23.4-28.6%). This paper presents a methodology for the setup and operation of a physical model to assess the hydrodynamic and morphodynamic processes at a beach seawall during storm events. It discusses the pros and cons of this methodology versus others, notably with regard to structural peculiarities and model effects.
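Measurements on a 1:18 physical model are converted to prototype scale using standard Froude similarity relations. The sketch below applies those textbook ratios to a purely hypothetical model-scale force; it is illustrative only, and the 264 kN / 127 kN figures quoted in the abstract are already prototype-scale values:

```python
# Froude-similarity conversion for a geometric scale of 1:18
# (standard relations, assuming the same fluid density in model and
# prototype; the input force below is hypothetical, not measured data).
scale = 18.0                      # lambda = prototype length / model length

length_ratio = scale
time_ratio   = scale ** 0.5       # t_p = t_m * sqrt(lambda)
force_ratio  = scale ** 3         # F_p = F_m * lambda^3

model_force_N = 45.0              # hypothetical model-scale wave force
prototype_force_kN = model_force_N * force_ratio / 1000.0
print(round(prototype_force_kN, 1))  # 45 N * 18^3 / 1000 = 262.4 kN
```

The cubic force ratio is why even modest model-scale loads correspond to hundreds of kilonewtons at prototype scale.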

Keywords: beach, impacts, scour, seawall, waves

Procedia PDF Downloads 148
480 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss

Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin

Abstract:

Urban rail transit (URT) plays a significant role in relieving traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often cause loss of link capacity in daily operation, which seriously affects the reliability and transport service quality of the URT network. In order to measure this influence, passengers are divided into three categories in case of link capacity loss. Passengers in category 1 are little affected: their travel is reliable, since their travel quality is not significantly reduced. Passengers in category 2 are heavily affected: their travel is not reliable, since their travel quality is seriously reduced, although they can still travel on the URT network. Passengers in category 3 cannot travel on URT at all, because the passenger flow on their travel paths exceeds the available capacities; their travel is not reliable. Thus, the proportion of passengers in category 1, whose travel is reliable, is defined as the reliability indicator of the URT network. The transport service quality of the URT network is related to passengers' travel time, number of transfers, and seat availability. The generalized travel cost is a comprehensive reflection of travel time, transfer times, and travel comfort; therefore, passengers' average generalized travel cost is used as the transport service quality indicator of the URT network. The impact of link capacity loss on transport service quality is measured by passengers' relative average generalized travel cost with and without link capacity loss. The proportion of passengers affected by a link and the betweenness of links are used to determine the important links in the URT network.
A stochastic user equilibrium distribution model based on an improved logit model is used to determine passengers' categories and calculate passengers' generalized travel cost in case of link capacity loss; it is solved with the method of successive weighted averages (MSWA). The reliability and transport service quality indicators of the URT network are then calculated from the solution. Taking the Wuhan Metro as a case, the reliability and transport service quality of the network are measured with the indicators and method proposed in this paper. The results show that the proportion of passengers affected by a link can effectively identify important links, which have great influence on the reliability and transport service quality of the URT network; the important links are mostly connected to transfer stations and carry high passenger flow. As the number of failed links and the proportion of capacity loss increase, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing, and the proportion of passengers in category 2 increases at first and then decreases. When the number of failed links and the proportion of capacity loss rise to a certain level, the decline in transport service quality is weakened.

Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links

Procedia PDF Downloads 126
479 Life Cycle Assessment to Study the Acidification and Eutrophication Impacts of Sweet Cherry Production

Authors: G. Bravo, D. Lopez, A. Iriarte

Abstract:

Several organizations and governments have created a demand for information about the environmental impacts of agricultural products. Today, the export-oriented fruit sector in Chile is being challenged to quantify and reduce its environmental impacts. Chile is the largest southern-hemisphere producer and exporter of sweet cherry fruit; Chilean sweet cherry production reached a volume of 80,000 tons in 2012. The main destination market for the Chilean cherry in 2012 was Asia (including Hong Kong and China), taking 69% of the exported volume. Another important market was the United States with 16% participation, followed by Latin America (7%) and Europe (6%). Concerning geographical distribution, conventional Chilean cherry production is concentrated in the center-south area, between the regions of Maule and O'Higgins; the two regions together represent 81% of the planted surface. Life Cycle Assessment (LCA) is widely accepted as one of the major methodologies for assessing the environmental impacts of products or services. LCA identifies the material, energy, and waste flows of a product or service and their impact on the environment. There are scant studies that examine the impacts of sweet cherry cultivation, such as acidification and eutrophication. Within this context, the main objective of this study is to evaluate, using LCA, the acidification and eutrophication impacts of sweet cherry production in Chile. An additional objective is to identify the agricultural inputs that contribute significantly to the impacts of this fruit. The system under study includes all the life cycle stages from cradle to farm gate (harvested sweet cherry). The data on sweet cherry production correspond to nationwide representative practices and are based on technical-economic studies and field information obtained in several face-to-face interviews.
The study takes into account the following agricultural inputs: fertilizers, pesticides, diesel consumption for agricultural operations, machinery, and electricity for irrigation. The results indicated that mineral fertilizers are the most important contributors to the acidification and eutrophication impacts of sweet cherry cultivation. Improvement options are suggested for this hotspot in order to reduce the environmental impacts. The results can inform planning and the promotion of low-impact practices by fruit companies, policymakers, and other stakeholders. In this context, this study is one of the first assessments of the environmental impacts of sweet cherry production. New field data or evaluation of other life cycle stages could further improve knowledge of the impacts of this fruit. This study may also contribute environmental information to other countries with similar sweet cherry production.
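The impact-assessment step behind such results can be sketched as a sum of inventory flows weighted by characterization factors. This is a minimal illustration of the general LCIA calculation, not the study's own model; all substance amounts and factors below are hypothetical placeholders.

```python
# Illustrative LCIA characterization step: impact score = sum(amount_i * CF_i).
# Inventory amounts and characterization factors are hypothetical, not study data.

def characterize(inventory, factors):
    """Aggregate an impact-category score from a life cycle inventory.

    inventory: dict mapping substance -> emitted amount (kg per functional unit)
    factors:   dict mapping substance -> characterization factor
               (e.g. kg SO2-eq per kg emitted, for acidification)
    """
    return sum(amount * factors[sub]
               for sub, amount in inventory.items() if sub in factors)

# Hypothetical field emissions per tonne of harvested cherries
inventory = {"NH3": 1.2, "NOx": 0.8, "SO2": 0.3}
# Hypothetical acidification factors (kg SO2-eq / kg)
acidification_cf = {"NH3": 1.88, "NOx": 0.7, "SO2": 1.0}

score = characterize(inventory, acidification_cf)
print(round(score, 3))  # kg SO2-eq per functional unit
```

The same function, with eutrophication factors (e.g. kg PO4-eq per kg), would yield the second impact category discussed in the abstract.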

Keywords: acidification, eutrophication, life cycle assessment, sweet cherry production

Procedia PDF Downloads 266
478 CSPG4 Molecular Target in Canine Melanoma, Osteosarcoma and Mammary Tumors for Novel Therapeutic Strategies

Authors: Paola Modesto, Floriana Fruscione, Isabella Martini, Simona Perga, Federica Riccardo, Mariateresa Camerino, Davide Giacobino, Cecilia Gola, Luca Licenziato, Elisabetta Razzuoli, Katia Varello, Lorella Maniscalco, Elena Bozzetta, Angelo Ferrari

Abstract:

Canine and human melanoma, osteosarcoma (OSA), and mammary carcinomas are aggressive tumors with common characteristics that make the dog a good model for comparative oncology. Novel therapeutic strategies against these tumors could be useful to both species. In humans, chondroitin sulphate proteoglycan 4 (CSPG4) is a marker involved in tumor progression and could be a candidate target for immunotherapy. Anti-CSPG4 DNA electrovaccination has been shown to be an effective approach for canine malignant melanoma (CMM) [1]. An immunohistochemistry evaluation of CSPG4 expression in tumour tissue is generally performed prior to electrovaccination. To assess the feasibility of a rapid molecular evaluation, and to validate these spontaneous canine tumors as models for human studies, we investigated CSPG4 gene expression by RT-qPCR in CMM, OSA, and canine mammary tumors (CMT). Total RNA was extracted from RNAlater-stored tissue samples (CMM n=16; OSA n=13; CMT n=6; five paired normal tissues for CMM, five paired normal tissues for OSA, and one paired normal tissue for CMT), reverse-transcribed, and then analyzed by duplex RT-qPCR using two different TaqMan assays for the target gene CSPG4 and the internal reference gene (RG) Ribosomal Protein S19 (RPS19). RPS19 was selected from a panel of 9 candidate RGs according to NormFinder analysis, following the protocol already described [2]. Relative expression was analyzed with CFX Maestro™ Software. Student's t-test and ANOVA were performed (significance set at P<0.05). Results showed that gene expression of CSPG4 in OSA tissues is significantly increased, by 3- to 4-fold, when compared to controls. In CMT, gene expression of the target was increased by 1.5- to 19.9-fold. In melanoma, although an increasing trend was observed, no significant differences between the two groups were found.
Immunohistochemistry analysis of the two cancer types showed that the expression of CSPG4 within CMM is concentrated in islands of cells, whereas in OSA the distribution of positive cells is homogeneous. This evidence could explain the differences in the gene expression results. CSPG4 immunohistochemistry evaluation in mammary carcinoma is in progress. The evidence of CSPG4 expression in different types of canine tumors opens the way to extending CSPG4-targeted immunotherapy from CMM to OSA and CMT, and may help translate this strategy to human oncology.
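Relative-expression fold changes of the kind reported above are conventionally computed with the 2^(-ΔΔCt) method, normalizing the target gene to the reference gene in tumor and paired normal tissue. The sketch below shows that standard calculation; the Cq values are hypothetical, not the study's measurements, and the study's actual analysis was performed in CFX Maestro.

```python
# Sketch of the standard 2^(-delta-delta-Ct) relative-expression calculation
# underlying RT-qPCR fold-change estimates. All Cq values are hypothetical.

def fold_change(cq_target_tumor, cq_ref_tumor, cq_target_normal, cq_ref_normal):
    d_ct_tumor = cq_target_tumor - cq_ref_tumor     # normalize target to reference (tumor)
    d_ct_normal = cq_target_normal - cq_ref_normal  # normalize target to reference (normal)
    dd_ct = d_ct_tumor - d_ct_normal
    return 2 ** (-dd_ct)

# Hypothetical quantification cycles for a CSPG4-like target vs. an RPS19-like reference
fc = fold_change(24.0, 20.0, 26.0, 20.0)
print(fc)  # 4.0: target ~4-fold up-regulated vs. paired normal tissue
```

Lower Cq means more template, so a tumor ΔCt two cycles below the normal ΔCt corresponds to a four-fold increase.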

Keywords: canine melanoma, canine mammary carcinomas, canine osteosarcoma, CSPG4, gene expression, immunotherapy

Procedia PDF Downloads 168
477 Comparison of the Toxicity of Silver and Gold Nanoparticles in Murine Fibroblasts

Authors: Šárka Hradilová, Aleš Panáček, Radek Zbořil

Abstract:

Nanotechnology is considered one of the most promising fields with high added value, bringing new possibilities to various sectors from industry to medicine. With growing interest in nanomaterials and their applications, increasing nanoparticle production leads to increased exposure of people and the environment to human-made nanoparticles. Nanoparticles (NPs) are clusters of atoms in the size range of 1–100 nm. Metal nanoparticles represent one of the most important and frequently used types of NPs due to their unique physical, chemical, and biological properties, which differ significantly from those of the bulk material. The biological properties of metal nanoparticles, including their toxicity, are generally determined by their size, size distribution, shape, surface area, surface charge, surface chemistry, stability in the environment, and ability to release metal ions. Therefore, the biological behavior of NPs and their possible adverse effects cannot be derived from the bulk form of the material, because nanoparticles show unique properties and interactions with biological systems precisely due to their nanodimensions. Silver and gold NPs are intensively studied and used. Both can be used, for instance, in surface-enhanced Raman spectroscopy; a considerable number of applications of silver NPs is associated with antibacterial effects, while gold NPs are associated with cancer treatment and bioimaging. The antibacterial effects of silver ions have been known for centuries. Silver ions and silver-based compounds are highly toxic to microorganisms. The toxic properties of silver NPs are intensively studied, but the mechanism of cytotoxicity is not fully understood. While silver NPs are considered toxic, gold NPs are variously reported as toxic or innocuous to eukaryotic cells. Gold NPs are thus used in various biological applications as if they posed no risk of cell damage, even when the aim is to suppress the growth of cancer cells. Whether gold NPs are toxic or harmless therefore remains an open question.
Because most studies compare particles of various sizes prepared in various ways, with testing performed on different cell lines, it is very difficult to generalize. The novelty and significance of our research lie in examining the complex biological effects of silver and gold NPs prepared by the same method, with the same parameters and the same stabilizer. This allows us to compare the biological effects of the pure nanometals themselves, based on their chemical nature, without the influence of other variables. The aim of our study is therefore to compare the cytotoxic effects of two types of noble metal NPs, focusing on the mechanisms that contribute to cytotoxicity. The study was conducted on murine fibroblasts using selected commonly used tests. Each of these tests monitors a selected area related to toxicity, and together they provide a comprehensive view of the interactions between nanoparticles and living cells.

Keywords: cytotoxicity, gold nanoparticles, mechanism of cytotoxicity, silver nanoparticles

Procedia PDF Downloads 251
476 Glocalization of Journalism and Mass Communication Education: Best Practices from an International Collaboration on Curriculum Development

Authors: Bellarmine Ezumah, Michael Mawa

Abstract:

Glocalization is often defined as the practice of conducting business according to both local and global considerations – this epitomizes the curriculum co-development collaboration between a journalism and mass communications professor from a university in the United States and the Uganda Martyrs University in Uganda, where a brand new journalism and mass communications program was recently co-developed. This paper presents the experiences and research results of this initiative, which was funded through the Institute of International Education (IIE) under the umbrella of the Carnegie African Diaspora Fellowship Program (CADFP). Vital international and national concerns were addressed. On a global level, scholars have questioned and criticized the Western model ingrained in journalism and mass communication curricula and proposed a decolonization of journalism curricula. Another major criticism is the practice of Western-based educators transplanting their curriculum verbatim to other regions of the world without paying sufficient attention to local needs. To address these two global concerns, an extensive assessment of local needs was conducted prior to the conceptualization of the new program. The assessment of needs adopted a participatory action model and captured the knowledge and narratives of both internal and external stakeholders. This involved reviewing pertinent documents, including the nation’s constitution, governmental briefs, and promulgations; interviewing governmental officials, media and journalism educators, media practitioners, and students; and benchmarking the curricula of other tertiary institutions in the nation. Information gathered through this process served as a blueprint and frame of reference for all design decisions. In the area of local needs, four key factors were addressed. First, the realization that most media personnel in Uganda are both academically and professionally unqualified.
Second, the practitioners with academic training were found lacking in experience. Third, the current curricula offered at several tertiary institutions are not comprehensive and lack local relevance. The project addressed these problems as follows. First, the program was designed to cater to both traditional and non-traditional students, offering opportunities for unqualified media practitioners to get formal training through evening and weekend programs. Second, the challenge of inexperienced graduates was mitigated by designing the program around the experiential learning approach that many refer to as the ‘Teaching Hospital Model’. This entails integrating practice with theory – similar to the way medical students engage in hands-on practice under the supervision of a mentor. The university signed a Memorandum of Understanding (MoU) with reputable media houses for students and faculty to use their studios for hands-on experience and for seasoned media practitioners to guest-teach some courses. Given the converged functions of today's media industry, graduates should be trained with adequate knowledge of other disciplines; the curriculum therefore integrated cognate courses to make graduates versatile. Ultimately, this research serves as a template for African colleges and universities to follow in their quest to glocalize their curricula. While the general concept of journalism may remain Western, journalism curriculum developers in Africa, through extensive assessment of needs and a focus on those needs and other societal particularities, can adjust the Western model to fit their local needs.

Keywords: curriculum co-development, glocalization of journalism education, international journalism, needs assessment

Procedia PDF Downloads 127
475 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on Board Game Rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such datasets, morphologists can gain valuable insights into how language functions and evolves, as the data reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
On the other hand, the designers and creators produce rulebooks, in which they include their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
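A common first pass in corpus-based neologism studies is to flag corpus tokens that are absent from a reference lexicon. The sketch below illustrates that filtering step only; it is not the authors' pipeline, and a real system would add lemmatization, proper-noun filtering, and a full dictionary rather than the toy lexicon shown.

```python
# Minimal sketch of a first-pass neologism candidate filter: tokens occurring
# in a rulebook that are not in a reference lexicon. Lexicon and sample text
# are hypothetical toy data.
import re
from collections import Counter

def neologism_candidates(corpus_text, lexicon):
    """Return {token: frequency} for tokens not found in the lexicon."""
    tokens = re.findall(r"[a-z]+", corpus_text.lower())
    freq = Counter(tokens)
    return {tok: n for tok, n in freq.items() if tok not in lexicon}

lexicon = {"draw", "a", "card", "then", "gain", "one", "coin", "each", "turn"}
rules = "Draw a card, then gain one deckbuilding coin each turn."
print(neologism_candidates(rules, lexicon))  # {'deckbuilding': 1}
```

Candidates surviving this filter would then be classified manually by word-formation process (compounding, blending, clipping, and so on).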

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 93
474 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design

Authors: Sebastian Kehne, Alexander Epple, Werner Herfs

Abstract:

A new method for the optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes, and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a single-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical, and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes, and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuits, heat dissipation, and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. From the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. The model is implemented with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors.
In a next step, the choice of possible components from motor catalogues is limited by derived analytical formulas based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes, and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based, and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the ordering of the components in the optimization problem has a deep impact on the speed of the black-box optimization. An approach for efficient black-box optimization in multi-axes design is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
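The scalar objective described above – the integral of the deviation from the setpoint after a position step, subject to the 1% overshoot constraint – can be sketched as follows. The sampled step response is hypothetical, and the constraint is modeled here simply by returning an infinite cost for infeasible responses; the authors' actual implementation inside the black-box optimizer may differ.

```python
# Sketch of the step-response objective: integral of |setpoint - y(t)| after a
# unit position step, penalizing any response with > 1% overshoot. The sampled
# response below is hypothetical, not simulation output.

def step_objective(times, response, setpoint=1.0, max_overshoot=0.01):
    overshoot = max(response) - setpoint
    if overshoot > max_overshoot * setpoint:
        return float("inf")  # infeasible: controller must be re-tuned first
    # trapezoidal integration of the absolute deviation
    iae = 0.0
    for (t0, y0), (t1, y1) in zip(zip(times, response),
                                  zip(times[1:], response[1:])):
        iae += 0.5 * (abs(setpoint - y0) + abs(setpoint - y1)) * (t1 - t0)
    return iae

# Hypothetical sampled step response of one axis
t = [0.0, 0.1, 0.2, 0.3, 0.4]
y = [0.0, 0.6, 0.9, 1.0, 1.0]
print(step_objective(t, y))
```

A black-box optimizer would evaluate this objective once per candidate component set, which is why reducing the number of evaluations (e.g. via surrogate models) matters.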

Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design

Procedia PDF Downloads 282
473 The Influence of Nutritional and Immunological Status on the Prognosis of Head and Neck Cancer

Authors: Ching-Yi Yiu, Hui-Chen Hsu

Abstract:

Objectives: Head and neck cancer (HNC) is a major global health problem. Despite advances in diagnosis and treatment, the overall survival of HNC is still low. Recognition of the interaction between the host immune system and cancer cells has led to a better understanding of the processes of tumor initiation, progression, and metastasis. Many systemic inflammatory responses have been shown to play a crucial role in cancer progression. The pre- and post-treatment nutritional and immunological status of HNC patients is a reliable prognostic indicator of tumor outcomes and survival. Methods: Between July 2020 and June 2022, we enrolled 60 HNC patients, 59 males and 1 female, at Chi Mei Medical Center, Liouying, Taiwan. The age distribution was from 37 to 81 years old (y/o), with a mean age of 57.6 y/o. We evaluated the pre- and post-treatment nutritional and immunological status of these HNC patients with body weight, body weight loss, body mass index (BMI), whole blood count including hemoglobin (Hb), lymphocyte, neutrophil, and platelet counts, and biochemistry including prealbumin, albumin, and C-reactive protein (CRP), measured before treatment and at 3 and 6 months post-treatment. We calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) to assess how these biomarkers influence the outcomes of HNC patients. Results: There were 21 cases (35%) of carcinoma of the hypopharynx, 9 of the larynx, 6 each of the tonsil and tongue, 5 each of the soft palate and tongue base, 2 each of the buccal mucosa, retromolar trigone, and mouth floor, and 1 each of the hard palate and lower lip. There were 15 stage I, 13 stage II, 6 stage III, 10 stage IVA, and 16 stage IVB cases. All patients received surgery, chemoradiation therapy, or combined therapy.
There were 6 cases of wound infection, 2 cases of pharyngocutaneous (PC) fistula, 2 cases of flap necrosis, and 6 deaths. In the wound infection group, the average BMI is 20.4 kg/m2, the average Hb is 12.9 g/dL, the average albumin is 3.5 g/dL, the average NLR is 6.78, and the average PLR is 243.5. In the PC fistula and flap necrosis group, the average BMI is 21.65 kg/m2, the average Hb is 11.7 g/dL, the average albumin is 3.15 g/dL, the average NLR is 13.28, and the average PLR is 418.84. In the mortality group, the average BMI is 22.3 kg/m2, the average Hb is 13.58 g/dL, the average albumin is 3.77 g/dL, the average NLR is 6.06, and the average PLR is 275.5. Conclusion: HNC is a challenging public health problem worldwide, especially in Taiwan, where betel nut consumption is highly prevalent. Besides the established risk factors of smoking, drinking, and betel nut chewing, these biomarkers may serve as significant prognosticators of HNC outcomes. We conclude that when the average BMI is less than 22 kg/m2, the average Hb lower than 12.0 g/dL, the average albumin lower than 3.3 g/dL, the average NLR higher than 3, and the average PLR more than 170, surgical complications and mortality increase, and the prognosis of HNC patients is poor.
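The two inflammatory indices used in the study are simple ratios of absolute blood counts, as sketched below. The counts are hypothetical, for illustration only; thresholds such as NLR 3 and PLR 170 come from the abstract's conclusion.

```python
# NLR and PLR as used in the study: ratios of absolute blood counts.
# Input counts below are hypothetical illustration values.

def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    """Platelet-to-lymphocyte ratio."""
    return platelets / lymphocytes

# Hypothetical absolute counts (x 10^3 cells / microliter)
print(nlr(6.8, 1.0))    # 6.8
print(plr(250.0, 1.0))  # 250.0
```

Both ratios share the lymphocyte count as denominator, so lymphopenia raises both indices simultaneously, which is one reason they track systemic inflammation.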

Keywords: nutritional, immunological, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio

Procedia PDF Downloads 77
472 Engineers 'Write' Job Description: Development of English for Specific Purposes (ESP)-Based Instructional Materials for Engineering Students

Authors: Marjorie Miguel

Abstract:

Globalization offers better career opportunities and hence demands more competent professionals. With the transformation of world industry from competition to collaboration, coupled with the rapid development of science and technology, engineers need not only to be technically proficient but also multilingual: two characteristics that a global engineer possesses. English often serves as the global language between people from different cultures, being the medium most used in international business. Ironically, most universities worldwide adopt an engineering curriculum heavily built around the language of mathematics, not realizing that the goal of an engineer is not only to create and design but, more importantly, to promote those creations and designs to the general public through effective communication. This premise led to some developments in the teaching of English subjects at the tertiary level, including the integration of technical knowledge related to the students' area of specialization into the English subjects that they are taking. This is also known as English for Specific Purposes. This study focused on the development of English for Specific Purposes-based instructional materials for engineering students of Bulacan State University (BulSU). The materials were tailor-made: their contents and structure were designed to meet the specific needs of the students as well as the industry. The study was descriptive in nature; a needs analysis determined the needs of both the students and the industry. The major respondents included fifty engineering students and ten professional engineers from selected institutions. The needs analysis revealed the common writing difficulties of the students and the writing skills needed by engineers in the industry. The topics in the instructional materials were established after the needs analysis was conducted.
Simple statistical treatments, including frequency distributions, percentages, means, standard deviations, and weighted means, were used. The findings showed that the greatest number of respondents had an average proficiency rating in writing, and that the most needed skills among engineers are directly related to the preparation and presentation of technical reports about their projects, as well as to the different communications they transmit to their colleagues and superiors. The researcher undertook the following phases in the development of the instructional materials: a design phase, a development phase, and an evaluation phase. Evaluations given by college instructors affirmed the usefulness and significance of the instructional materials, making the study beneficial not only as a career enhancer for BulSU engineering students but also in establishing the university as one of the educational institutions ready for the new millennium.
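The descriptive statistics named above can be sketched in a few lines. The survey values below are hypothetical, included only to illustrate the calculations (note in particular the weighted mean, which weights each rating by the number of respondents who chose it).

```python
# Sketch of the simple statistics used in the study: mean, population standard
# deviation, and weighted mean. All survey data below are hypothetical.
import math

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    """Population standard deviation."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

ratings = [3, 4, 4, 5]       # hypothetical proficiency ratings (1-5 scale)
counts = [10, 25, 0, 15]     # hypothetical respondent counts per scale point
print(mean(ratings))                        # 4.0
print(weighted_mean([1, 2, 3, 4], counts))  # 2.4
```

In a needs-analysis questionnaire, the weighted mean of each item is typically mapped back onto the scale's verbal descriptors (e.g. "average proficiency") for interpretation.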

Keywords: English for specific purposes, instructional materials, needs analysis, write (right) job description

Procedia PDF Downloads 237
471 Analysis of Distance Travelled by Plastic Consumables Used in the First 24 Hours of an Intensive Care Admission: Impacts and Methods of Mitigation

Authors: Aidan N. Smallwood, Celestine R. Weegenaar, Jack N. Evans

Abstract:

The intensive care unit (ICU) is a particularly resource-heavy environment in terms of the staff, drugs, and equipment required. Whilst many areas of the hospital are attempting to cut down on plastic use and minimise their impact on the environment, this has proven challenging within the confines of intensive care. Concurrently, as globalization has progressed over recent decades, there has been a tendency towards centralised manufacturing with international distribution networks for products, often covering large distances. In this study, we have modelled the standard consumption of plastic single-use items over the course of the first 24 hours of an average individual patient’s stay in a 12-bed ICU in the United Kingdom (UK). We have identified the country of manufacture and calculated the minimum possible distance travelled by each item from factory to patient. We have assumed direct transport via the shortest possible straight line from country of origin to the UK and have not accounted for transport within either country. Assuming an intubated patient with invasive haemodynamic monitoring and central venous access, there are a total of 52 distinct, largely plastic, disposable products which would reasonably be required in the first 24 hours after admission. Each product type has only been counted once, to account for multiple items being shipped as one package. Travel distances from origin were summed to give the total distance combined for all 52 products. The minimum possible total distance travelled from country of origin to the UK for all types of product was 273,353 km, equivalent to 6.82 circumnavigations of the globe, or 71% of the way to the moon. The mean distance travelled was 5,256 km, approximately the distance from London to Mecca. With individual packaging for each item, the total weight of consumed products was 4.121 kg.
The CO2 produced shipping these items by air freight would equate to 30.1 kg; doing the same by sea would produce 0.2 kg of CO2. Extrapolating these results to the 211,932 UK annual ICU admissions (2018-2019), even with the underestimates of distance and weight in our assumptions, air freight would account for 6,586 tons of CO2 emitted annually, approximately 130 times that of sea freight. Given the drive towards cost saving within the UK health service and the decline of the local manufacturing industry, buying from intercontinental manufacturers is inevitable. However, transporting all consumables by sea where feasible would be environmentally beneficial, as well as less costly than air freight. At present, the NHS supply chain purchases from medical device companies, and there is no freely available information as to the transport mode used to deliver the product to the UK. This must be made available to purchasers in order to give a fuller picture of life cycle impact and allow for informed decision-making in this regard.
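The "minimum possible" straight-line distances described above correspond to great-circle distances, which can be computed with the haversine formula. The sketch below shows that calculation; the coordinates are illustrative (a manufacturing site near Shanghai, delivery to London), not the study's actual origin data.

```python
# Great-circle ("minimum possible straight-line") distance via the haversine
# formula. Coordinates are illustrative, not the study's dataset.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Hypothetical: item manufactured near Shanghai, delivered to London
d = haversine_km(31.2, 121.5, 51.5, -0.1)
print(round(d), "km")  # roughly 9,200 km
```

Summing such per-item distances over all 52 product types yields a total of the kind reported in the study.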

Keywords: CO2, intensive care, plastic, transport

Procedia PDF Downloads 176
470 Performance Analysis of Double Gate FinFET at Sub-10NM Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. FinFET is one of the most promising structures. The double-gate 2D Fin field effect transistor, in particular, has the benefit of suppressing short-channel effects (SCE) and functioning well at sub-14 nm technology nodes. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed at different structure parameters. The Id-Vg curve is a robust tool for understanding field-effect transistors, with significant importance in transistor modeling, circuit design, performance optimization, and quality control of electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined. Device performance is limited by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes.
For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor, so minimizing it is crucial for maximizing circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, but there is no significant effect of source and drain length on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. It is also concluded that a transconductance of 340 S/m is achieved at a fin width of 3 nm and a gate length of 10 nm, and of 2380 S/m at a source and drain extension length of 5 nm.
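The on-off ratio reported above is extracted from the simulated Id-Vg sweep as the ratio of drive current at maximum gate bias to the leakage floor. A minimal sketch of that extraction step follows; the current values are hypothetical placeholders, not MuGFET output.

```python
# Sketch of Ion/Ioff extraction from an Id-Vg transfer sweep: Ioff is taken as
# the leakage floor of the sweep, Ion as the current at the highest gate bias.
# The sweep values below are hypothetical, not simulation data.

def on_off_ratio(vg, id_a):
    pairs = sorted(zip(vg, id_a))
    i_off = min(i for _, i in pairs)  # leakage floor of the sweep
    i_on = pairs[-1][1]               # drive current at highest gate bias
    return i_on / i_off

vg = [0.0, 0.2, 0.4, 0.6, 0.8]            # gate voltage (V)
id_a = [1e-12, 1e-10, 1e-8, 1e-6, 1e-5]   # drain current (A), hypothetical
print(on_off_ratio(vg, id_a))  # ~1e7
```

Transconductance would be extracted from the same sweep as the slope dId/dVg, typically taking its maximum over the on-state region.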

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 58
469 Providing Health Promotion Information by Digital Animation to International Visitors in Japan: A Factorial Design View of Nurses

Authors: Mariko Nishikawa, Masaaki Yamanaka, Ayami Kondo

Abstract:

Background: International visitors to Japan are at risk of travel-related illnesses or injuries that could result in hospitalization in a country where the language and customs are unique. Over twelve million international visitors came to Japan in 2015, and more are expected leading up to the Tokyo Olympics. One aspect of this is the potentially greater demand on healthcare services by foreign visitors. Nurses who take care of them have anxieties and concerns about their knowledge of the Japanese health system. Objectives: An effective distribution of travel-health information is vital for facilitating care for international visitors. Our research investigates whether a four-minute digital animation (Mari Info Japan), designed and developed by the authors and applied to a survey of 513 nurses who take care of foreigners daily, could clarify travel-health procedures and reduce anxieties while making learning enjoyable. Methodology: Respondents to the survey were divided into two groups. The intervention group watched Mari Info Japan. The control group read a standard guidebook. The participants were requested to fill in a two-page questionnaire called Mari Meter-X and the STAI-Y in English, and to mark a face scale, before and after the interventions. The questions dealt with knowledge of health promotion, the Japanese healthcare system, cultural concerns, anxieties, and attitudes in Japan. Data were collected from an intervention group (n=83) and a control group (n=83) of nurses in a hospital for foreigners in Japan from February to March 2016. We analyzed the data using Text Mining Studio for open-ended questions and JMP for statistical significance. Results: We found that the intervention group displayed more confidence and less anxiety about taking care of foreign patients compared to the control group. The intervention group indicated greater comfort after watching the animation.
However, both groups were most concerned about language, the cost of medical expenses, informed consent, and choice of hospital. Conclusions: From the viewpoint of nurses, providing travel-health information to international visitors to Japan by digital animation was more effective than traditional methods, as it helped them be better prepared to treat travel-related diseases and injuries among international visitors. This study was registered under number UMIN000020867. Funding: Grant-in-Aid for Challenging Exploratory Research 2010-2012 & 2014-16, Japanese Government.

Keywords: digital animation, health promotion, international visitor, Japan, nurse

Procedia PDF Downloads 303
468 Biodegradation Ability of Polycyclic Aromatic Hydrocarbon (PAHs) Degrading Bacillus cereus Strain JMG-01 Isolated from PAHs Contaminated Soil

Authors: Momita Das, Sofia Banu, Jibon Kotoky

Abstract:

Environmental contamination of natural resources with persistent organic pollutants is of great worldwide concern. Polycyclic aromatic hydrocarbons (PAHs) are among the organic pollutants released by various anthropogenic activities. Due to their toxic, carcinogenic, and mutagenic properties, PAHs are of environmental and human concern. Bioremediation has emerged as the most promising biotechnology for the cleanup of such contaminants because it is economical and cost-effective. In the present study, the distribution of the 16 USEPA priority PAHs was determined in soil samples collected from fifteen different sites of Guwahati City, the Gateway of the North East Region of India. The total concentrations of the 16 PAHs (Σ16 PAHs) ranged from 42.7 to 742.3 µg/g. Total PAH concentrations were higher in the industrial areas than at the other sites (742.3 µg/g and 628 µg/g). Among all the PAHs, naphthalene, acenaphthylene, anthracene, fluoranthene, chrysene, and benzo(a)pyrene were the most prevalent and occurred at the highest concentrations. Since microbial activity is deemed the most influential and significant cause of PAH removal, twenty-three bacteria were further isolated from the most contaminated sites using an enrichment process. These strains were acclimatized to utilize naphthalene and anthracene, each at a concentration of 100 µg/g, as the sole carbon source. Among them, one Gram-positive strain (JMG-01) was selected, and its biodegradation ability and the initial catabolic genes of PAH degradation were investigated. Based on 16S rDNA analysis, the isolate was identified as Bacillus cereus strain JMG-01. Topographic images obtained using Scanning Electron Microscopy (SEM) and Atomic Force Microscopy (AFM) at scheduled time intervals of 7, 14, and 21 days revealed the variation in cell morphology during the degradation period.
AFM and SEM micrographs of the biomass showed highly filamentous growth leading to aggregation of cells in the form of a biofilm over the incubation period. Degradation analysis using gas chromatography-mass spectrometry (GC-MS) suggested that more than 95% of the PAHs degraded at a concentration of 500 µg/g. Naphthalene, 2-methylnaphthalene, 4-propylbenzaldehyde, 1,2-benzenedicarboxylic acid, and benzeneacetic acid were the major metabolites produced after degradation. Moreover, PCR experiments with specific primers for the catabolic genes ndoB and catA suggested that JMG-01 possesses genes for PAH degradation. Thus, the study concludes that Bacillus cereus strain JMG-01 has efficient biodegradation ability and can accelerate the clean-up of PAH-contaminated soil.

Keywords: AFM, Bacillus cereus strain JMG-01, degradation, polycyclic aromatic hydrocarbon, SEM

Procedia PDF Downloads 268
467 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aims to develop a predictive model that helps identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive (CAR) prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, by treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria risk in the area. On the other hand, enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for both species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of both species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed whenever it is computationally feasible.
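The core data structure behind the CAR prior described above is a neighborhood matrix over the districts. The sketch below builds such a matrix from an edge list, assuming a small hypothetical set of five districts and hand-picked edges; in the paper the edges themselves are estimated by the graph-based optimization algorithm.

```python
import numpy as np

# Hypothetical example: 5 districts as vertices of a graph. The edge list is
# illustrative; the study estimates it with a graph-based optimization
# algorithm rather than the border-sharing rule.
n_districts = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]

# Binary neighborhood (adjacency) matrix W used by the CAR prior.
W = np.zeros((n_districts, n_districts), dtype=int)
for i, j in edges:
    W[i, j] = W[j, i] = 1  # neighbor relations are symmetric

# A CAR prior for spatial random effects u conditions each district on the
# mean of its neighbors:
#   u_i | u_{-i} ~ Normal( sum_j W_ij * u_j / W_i+ , tau^2 / W_i+ )
row_totals = W.sum(axis=1)  # W_i+ : number of neighbors per district
print(W)
print(row_totals)
```

Swapping the border-sharing rule for an estimated edge list changes only how `edges` is produced; the CAR machinery downstream is unchanged.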

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix

Procedia PDF Downloads 72
466 Mental Balance, Emotional Balance, and Stress Management: The Role of Ancient Vedic Philosophy from India

Authors: Emily Schulz

Abstract:

The ancient Vedic culture of India had traditions that supported all aspects of health, including psychological health, and these remain relevant in the current era. These traditions have been compiled by Professor Dr. Purna, a rare Himalayan Master, into the Purna Health Management System (PHMS). The PHMS is a unique, holistic, and integrated approach to health management. It comprises four key factors: Health, Fitness, and Nutrition (HF&N); Life Balance (Stress Management) (LB-SM); Spiritual Growth and Development (SG&D); and Living in Harmony with the Natural Environment (LHWNE). The purpose of the PHMS is to give people the tools to take responsibility for managing their own holistic health and wellbeing. A cross-sectional mixed-methods study using an anonymous online survey was conducted during 2017-2018. Adult students of Professor Dr. Purna were invited to participate through announcements made at various events he held around the globe. Follow-up emails with consent language were sent to interested parties, providing them with a link to the survey. Participation in the study was completely voluntary, and no incentives were given for responding to the survey. The overall aim of the study was to investigate the effectiveness of implementing the PHMS on practitioners' emotional balance. Given the holistic nature of the PHMS, however, survey questions also inquired about participants' physical health, stress level, ability to manage stress, and wellbeing using Likert scales. The survey also included some open-ended questions to gain an understanding of the participants' experiences with the PHMS relative to their emotional balance. In total, 52 of 253 potential respondents participated in the study. Data were analyzed using the nonparametric Spearman's rho correlation coefficient (rs), since the data were not normally distributed. Statistical significance was set at p < .05.
Results of the study suggested moderate to strong statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and self-reported mental/emotional health (HF&N rs = 0.42; LB-SM rs = 0.54; SG&D rs = 0.49; LHWNE rs = 0.45). Results also demonstrated statistically significant relationships (p < .001) between participants' frequent implementation of each of the four key factors of the PHMS and their self-reported ability to manage stress (HF&N rs = 0.44; LB-SM rs = 0.55; SG&D rs = 0.39; LHWNE rs = 0.55). Additionally, those who reported better physical health also reported better mental/emotional health (rs = 0.49, p < .001) and a better ability to manage stress (rs = 0.46, p < .001). The findings of this study suggest that wisdom from the ancient Vedic culture may be useful for those working in psychology and related fields who would like to assist clients in calming their mind and emotions and managing their stress levels.
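Spearman's rho, used throughout the results above, is simply the Pearson correlation computed on ranks, which is why it is appropriate for non-normally distributed Likert data. A minimal sketch with hypothetical 5-point responses (the study's raw data are not public):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)

    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):      # tied values get their average rank
            tied = a == v
            r[tied] = r[tied].mean()
        return r

    return np.corrcoef(ranks(x), ranks(y))[0, 1]

# Hypothetical 5-point Likert responses for illustration only:
implementation = [5, 3, 4, 2, 5, 1, 4, 3]   # PHMS implementation frequency
wellbeing      = [4, 3, 5, 2, 5, 2, 4, 2]   # self-reported wellbeing
print(round(spearman_rho(implementation, wellbeing), 2))
```

In practice a library routine such as `scipy.stats.spearmanr` would also return the p-value used for the p < .05 significance threshold.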

Keywords: balanced emotions, balanced mind, stress management, Vedic philosophy

Procedia PDF Downloads 114
465 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer serve as input variables for the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by its log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 than that of the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. The hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket-size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research into appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
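The model class described above (one visible layer of binary purchases, one hidden layer, no within-layer connections) can be sketched in a few lines. The toy example below trains a binary RBM with one-step contrastive divergence (CD-1) on six hypothetical product categories; the data, sizes, and hyperparameters are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary baskets over 6 product categories, with two co-purchase patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 0, 0, 1, 1]], dtype=float)

n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def reconstruct(v):
    return sigmoid(sigmoid(v @ W + b_h) @ W.T + b_v)

err_before = np.abs(data - reconstruct(data)).mean()

lr = 0.1
for _ in range(1000):
    v0 = data
    p_h0 = sigmoid(v0 @ W + b_h)                        # hidden given data
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: data-driven statistics minus reconstruction statistics
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

err_after = np.abs(data - reconstruct(data)).mean()
print(f"mean reconstruction error: {err_before:.3f} -> {err_after:.3f}")
```

Stacking such machines, with each layer's hidden activations feeding the next layer as described in the abstract, yields the deep belief net.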

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 194
464 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)

Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte

Abstract:

Periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in the gas chemistry and in the output rate of fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the individual gas species has recently been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, a cross-comparison of different methods for the in situ measurement of the output rates of the different gas species is needed. In 2015, two field campaigns were carried out, aimed at: 1. mapping the concentrations of CO₂, H₂S, and SO₂ in the fumarolic plume at 1 m from the surface, using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.) and an active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind-speed and meteorological data, were processed to obtain the dispersion map and the output rate of each species over the whole fumarolic field; 2. mapping the concentrations of CO₂, H₂S, SO₂, and H₂O in the fumarolic plume at 0.5 m from the soil, using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable-laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim, and 1800 ppmv on the inner slopes.
As observed, the largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than those measured at the rim fumaroles. In fact, fumaroles on the inner slopes are among those emitting the largest amounts of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (described in points 1-3) produced coherent results, which encourage the use of daily, automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
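Recomputing a steam output from a CO₂ flux, as done above, amounts to scaling by the in-plume H₂O/CO₂ ratio converted from molar to mass units. A back-of-the-envelope sketch, where the molar ratio of 12.2 is an assumed illustrative value (not reported in the abstract) chosen only to show the arithmetic:

```python
# Steam output from a measured CO2 flux and an assumed in-plume molar
# H2O/CO2 ratio. Only the 400 t/d CO2 flux comes from the abstract; the
# molar ratio below is an illustrative assumption.
M_H2O, M_CO2 = 18.02, 44.01          # molar masses, g/mol

co2_flux_t_per_day = 400.0           # from the tunable-laser survey
h2o_co2_molar_ratio = 12.2           # assumed fumarolic ratio

steam_flux = co2_flux_t_per_day * h2o_co2_molar_ratio * (M_H2O / M_CO2)
print(f"steam output ~ {steam_flux:.0f} t/d")
```

With these numbers the result lands near the ~2000 t/d quoted in the abstract, which illustrates why a well-constrained H₂O/CO₂ ratio is the key input for the conversion.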

Keywords: DOAS, fumaroles, plume, tunable laser

Procedia PDF Downloads 395
463 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah’s Coastal Resources, Libya

Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson

Abstract:

For the last 5 years, the goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is significant for Libya, due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and the available response options that would provide an integrated perspective on mitigation. To investigate this, the paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats as a first step in understanding how they may be damaged by an oil spill. The paper also investigates coastal resources, providing a better understanding of the resources and the factors that affect the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of coastal resources, and presents spatial maps of coastal resources and their vulnerability to oil spills along the coast. It proposes an analysis of coastal resource condition at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Oil spill contamination analysis and its impact on coastal resources depend on (1) the oil spill sequence, (2) the oil spill location, and (3) oil spill movement near the coastal area. The resulting maps show the natural and socio-economic activity and environmental resources along the coast, and oil spill locations. Moreover, the study provides a significant geodatabase, which is required for coastal sensitivity index mapping and coastal management studies.
The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism, and the salt industry, and ecosystem components, such as sea turtle nesting areas, sabkha habitats, and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.

Keywords: coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill

Procedia PDF Downloads 360
462 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. Inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. To promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it is applied to consumer units, generators, and distribution companies. The second, IIOC, applies when the distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of contracting transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators were used such as the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis showed a decrease in contracting efficiency, a behaviour that had been occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracted values are not adequate to maintain service provision for their users. The IIOC is also relevant, in that it shows the distributors that their contracted values are overestimated.
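The two installments act as opposing penalties around the contracted amount. A hypothetical sketch of how such a check could be flagged in a billing routine; the 5% tolerance band, the unit tariff, and the exact trigger conditions are illustrative assumptions, since NR 666/2015 defines the actual verification rules in far more detail.

```python
# Hypothetical flagging of the two economic incentives for one billing
# period. Tolerance and tariff values are illustrative assumptions only.
def inefficiency_installments(verified_demand, contracted_demand,
                              tolerance=0.05, tariff=10.0):
    """Return (IIE, IIOC) charges in currency units.

    IIE  (excess): verified usage exceeds the contracted limit.
    IIOC (over-contracting): contracted demand exceeds what was needed.
    """
    limit = contracted_demand * (1.0 + tolerance)
    needed = verified_demand * (1.0 + tolerance)
    iie = max(0.0, verified_demand - limit) * tariff
    iioc = max(0.0, contracted_demand - needed) * tariff
    return iie, iioc

print(inefficiency_installments(110.0, 100.0))  # excess usage -> IIE
print(inefficiency_installments(80.0, 100.0))   # over-contracted -> IIOC
```

The point of the design is that only one of the two charges can be nonzero in a given period, steering agents toward contracting close to their real need.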

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 117
461 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major complicating factors in radiotherapy for head and neck (HN) cancers include the patient's anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN intensity-modulated radiation therapy (IMRT) using a novel comparison method: organ-of-interest gamma analysis. This method is more sensitive to changes in specific organs. Five randomly selected replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were evaluated with transit dosimetry. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria; the gamma pass-rate was then calculated within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%.
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists' rationale for replanning. This showed that registration of the 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
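The organ-of-interest idea is simply to evaluate the gamma criterion only at points inside a projected structure mask instead of over the whole field. A deliberately simplified 1-D sketch with the same 3%, 3 mm criteria; the dose profiles and "parotid" mask are synthetic stand-ins, not clinical data, and a clinical implementation would work on 2-D EPID images with interpolation.

```python
import numpy as np

# Simplified 1-D gamma analysis (3%, 3 mm) restricted to an organ mask.
def gamma_pass_rate(ref, meas, mask, spacing_mm=1.0, dd=0.03, dta_mm=3.0):
    x = np.arange(len(ref)) * spacing_mm
    norm = ref.max()                       # global dose normalization
    points = np.flatnonzero(mask)
    passed = 0
    for i in points:
        # gamma at point i: minimum combined dose-difference / distance metric
        g = np.sqrt(((meas - ref[i]) / (dd * norm)) ** 2
                    + ((x - x[i]) / dta_mm) ** 2).min()
        passed += g <= 1.0
    return 100.0 * passed / len(points)

ref = np.exp(-((np.arange(50) - 25.0) / 8.0) ** 2)   # reference profile
meas = np.roll(ref, 1)                               # measured, shifted 1 mm
parotid_mask = np.zeros(50, dtype=bool)
parotid_mask[15:35] = True                           # projected organ region

print(f"pass rate in organ: {gamma_pass_rate(ref, meas, parotid_mask):.1f}%")
```

Because the 1 mm shift is well inside the 3 mm distance-to-agreement tolerance, the masked pass-rate is 100%; a shift larger than the tolerance would drive it down only inside the affected structure, which is exactly the sensitivity the study exploits.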

Keywords: replan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 213
460 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software "Well Inventory Data Entry" (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, enhanced performance, and continuous improvement through performance tracking and mid-term forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the captured data became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other parts of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. As extensive features were added to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, and WIDE became the company's reference two-year operational plan. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that create a structured process, allowing risk factors to be flagged and helping to mitigate them. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use.
The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform for cross-functionality among all asset groups and technical support groups to update the contents of their respective planning parameters. The home-grown tool was implemented across the entire company and tailored to feed into the internal processes of several stakeholders. Furthermore, the application of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of the existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset groups and technical support groups. These techniques enable the enhancement of integrated planning data workflows and ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks establishing a set of standard goals were created to ensure constant improvement in the efficiency of the entire planning and operational structure.

Keywords: automation, integration, value, communication

Procedia PDF Downloads 142
459 Optimal Uses of Rainwater to Maintain Water Level in Gomti Nagar, Uttar Pradesh, India

Authors: Alok Saini, Rajkumar Ghosh

Abstract:

Water is nature's most important resource for the survival of all living things, but freshwater scarcity exists in some parts of the world. This study predicts that the Gomti Nagar area (49.2 sq. km) will harvest about 91110 ML of rainwater by 2051 (assuming annual rainfall remains constant at present levels). However, only 17.71 ML of rainwater was harvested from 53 buildings in the Gomti Nagar area in 2021. The water level in Gomti Nagar will rise by 13 cm from such groundwater recharge. The total annual groundwater abstraction from the Gomti Nagar area was 35332 ML (in 2021). Due to hydrogeological constraints and lower annual rainfall, groundwater recharge is less than groundwater abstraction. At present, only 0.07% of rainwater recharges the aquifer through RTRWHs in Gomti Nagar; if RTRWHs were installed in all buildings, 12.39% of rainwater could recharge the groundwater table in the area. Gomti Nagar is situated in 'Zone-A' (water distribution area), and groundwater is the primary source of the freshwater supply. In Gomti Nagar, the difference between groundwater abstraction and recharge will reach 735570 ML over 30 years. Statistically, all buildings in Gomti Nagar (new and renovated) could harvest 3037 ML of rainwater through RTRWHs annually. The most recent monsoonal recharge in Gomti Nagar was 10813 ML/yr. Harvested rainwater collected from RTRWHs can be used for rooftop irrigation and for residential kitchens and gardens (home-grown fruit and vegetables). According to the bylaws, RTRWH installation is required in both newly constructed and existing buildings with plot areas of 300 sq. m or above. Harvested rainwater is of higher quality than contaminated groundwater, and households using RTRWHs can become water self-sufficient.
Rooftop Rainwater Harvesting Systems (RTRWHs) are the least expensive, most eco-friendly, and most sustainable alternative water resource for artificial recharge. This study also predicts a water level rise of about 3.9 m in the Gomti Nagar area by 2051, but only if all buildings install RTRWHs and harvest rainwater for groundwater recharge. As a result, the current study serves as an impact assessment of RTRWH implementation for the water scarcity problem in the Gomti Nagar area (1.36 sq. km). The study suggests that common storage tanks (recharge wells) should be built for groups of at least ten (10) households so that an optimal amount of harvested rainwater can be stored annually. Artificial recharge from alternative water sources will be required to reverse the declining water level trend and balance the groundwater table in this area; continued over-exploitation of groundwater may lead to land subsidence and the development of vertical cracks.
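The harvest volumes quoted above follow from the standard estimate: volume = roof area × rainfall depth × runoff coefficient. A minimal sketch for a single plot at the 300 sq. m bylaw threshold; the runoff coefficient of 0.85 and the ~900 mm/yr rainfall are illustrative assumptions, not values taken from the study.

```python
# Back-of-the-envelope rooftop harvest estimate. Roof area equals the bylaw
# threshold plot size; rainfall and runoff coefficient are assumed values.
def annual_harvest_ml(roof_area_m2, rainfall_mm, runoff_coeff=0.85):
    litres = roof_area_m2 * rainfall_mm * runoff_coeff  # 1 mm on 1 m2 = 1 L
    return litres / 1e6                                 # megalitres (ML)

# One 300 m2 plot, ~900 mm of annual rainfall
print(f"{annual_harvest_ml(300, 900):.4f} ML/yr")
```

Summing such per-building estimates over all eligible rooftops is how an area-wide figure like the 3037 ML/yr potential can be built up.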

Keywords: aquifer, aquitard, artificial recharge, bylaws, groundwater, monsoon, rainfall, rooftop rainwater harvesting system (RTRWH), water table, water level

Procedia PDF Downloads 88