Search results for: building simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8519

89 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation

Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong

Abstract:

Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. CT images are known to be inherently prone to artefacts because of their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. It is therefore desirable to remove these nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on a deep neural network framework in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction of the same size as the input data. The reconstruction error is measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme uses residual-driven dropout, determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with back-propagation.
In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
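The stacked denoising auto-encoder described above can be sketched in a minimal single-layer form. The NumPy example below is illustrative only: the toy low-rank data (a stand-in for intrinsic image content), layer sizes, noise level, and learning rate are assumptions for the demonstration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy low-rank "patch" data in [0, 1]; stands in for intrinsic image content.
Z = rng.random((200, 4))
A = rng.random((4, 16))
X = Z @ A / 4.0

n_in, n_hid = 16, 8
W = rng.normal(0.0, 0.1, (n_hid, n_in))   # encoder weights
b = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_in, n_hid))  # decoder weights
b2 = np.zeros(n_in)

lr, losses = 0.5, []
for epoch in range(200):
    noisy = X + rng.normal(0.0, 0.1, X.shape)   # corrupt the input
    h = sigmoid(noisy @ W.T + b)                # deterministic hidden encoding
    xhat = sigmoid(h @ W2.T + b2)               # reconstruction, same size as input
    err = xhat - X                              # residual against the clean target
    losses.append(float((err ** 2).mean()))     # squared-error loss
    # Back-propagation combined with (batch) gradient descent
    d_out = err * xhat * (1.0 - xhat)
    d_hid = (d_out @ W2) * h * (1.0 - h)
    W2 -= lr * d_out.T @ h / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W -= lr * d_hid.T @ noisy / len(X)
    b -= lr * d_hid.mean(axis=0)
```

Stacking then proceeds by training a second denoising auto-encoder on the hidden codes `h` produced by the first.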

Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation

Procedia PDF Downloads 175
88 Strategies for the Optimization of Ground Resistance in Large Scale Foundations for Optimum Lightning Protection

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

In this paper, we discuss the standard improvements that can be made to reduce the earth resistance in difficult terrains for optimum lightning protection, the practical limitations involved, and how the modeling can be refined for accurate diagnostics and ground resistance minimization. Ground resistance minimization can be approached in three different ways: burying vertical electrodes connected in parallel, burying horizontal conductive plates or meshes, or modifying the terrain itself, either by replacing the terrain material in a large volume or by adding earth-enhancing compounds. The use of vertical electrodes connected in parallel poses several practical limitations. To prevent loss of effectiveness, it is necessary to keep a minimum distance between electrodes, typically around five times the electrode length. Otherwise, the overlapping of the local equipotential lines around each electrode reduces the efficiency of the configuration. The addition of parallel electrodes reduces the resistance and facilitates the measurement, but the basic parallel-resistor formula of circuit theory will always underestimate the final resistance. Numerical simulation of the equipotential lines around the electrodes overcomes this limitation. The resistance of a single electrode will always be proportional to the soil resistivity. Electrodes are usually installed with a backfilling material of high conductivity, which increases the effective diameter. However, the improvement is marginal, since the electrode diameter enters the ground resistance estimate only through a logarithmic term. Substances used for efficient chemical treatment must be environmentally friendly and must feature stability, high hygroscopicity, low corrosivity, and high electrical conductivity. A number of earth enhancement materials are commercially available; many are based on carbon or on clays such as bentonite.
These materials can also be used as backfilling materials to reduce the resistance of an electrode. Chemical treatment of soil raises environmental issues: some products contain copper sulfate or other copper-based compounds, which may not be environmentally friendly. Carbon-based compounds are relatively inexpensive and do have very low resistivities, but they also present corrosion issues; typically, the carbon can corrode and destroy a copper electrode in around five years. These compounds also raise potential environmental concerns. Some earthing enhancement materials contain cement, which, after installation, acquires properties very close to those of concrete, preventing the earthing enhancement material from leaching into the soil. After analyzing different configurations, we conclude that a buried conductive ring with vertical electrodes connected periodically should be the optimum baseline solution for the grounding of a large structure installed on high-resistivity terrain. To show this, a practical example is presented in which we simulate the ground resistance of a conductive ring buried in a terrain with a resistivity in the range of 1 kOhm·m.
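As a rough illustration of why backfilling and naive parallel combinations yield diminishing returns, the sketch below evaluates Dwight's classical approximation for a single driven rod, R = ρ/(2πL)·(ln(8L/d) − 1), together with the circuit-theory parallel estimate that, as noted above, underestimates the true value. The rod length and diameter are assumed for the example; the 1 kOhm·m resistivity matches the terrain discussed in the abstract.

```python
import math

def rod_resistance(rho, length, diameter):
    """Approximate earth resistance of a single vertical rod (Dwight's formula).

    rho in ohm-m, length and diameter in m. Note the diameter appears only
    inside the logarithm, which is why enlarging it helps marginally.
    """
    return rho / (2 * math.pi * length) * (math.log(8 * length / diameter) - 1)

rho = 1000.0   # 1 kOhm-m terrain, as in the example above
L = 3.0        # 3 m rod (assumed)
d = 0.016      # 16 mm rod diameter (assumed)

r1 = rod_resistance(rho, L, d)

# Naive circuit-theory estimate for n rods in parallel; the real resistance is
# higher because the equipotential regions of nearby rods overlap.
n = 4
r_parallel_naive = r1 / n
```

Doubling the diameter changes only ln(8L/d), a reduction of roughly ln(2)/(ln(8L/d) − 1), a few percent here, which is the "marginal improvement" the abstract refers to.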

Keywords: grounding improvements, large scale scientific instrument, lightning risk assessment, lightning standards

Procedia PDF Downloads 117
87 Earthquake Preparedness of School Community and E-PreS Project

Authors: A. Kourou, A. Ioakeimidou, S. Hadjiefthymiades, V. Abramea

Abstract:

During the last decades, the task of engaging governments, communities, and citizens to reduce the risk and vulnerability of populations has made variable progress. Experience has demonstrated that lack of awareness, education, and preparedness may result in significant material and other losses at the onset of a disaster. Schools play a vital role in the community and are important carriers of the values and culture of society. A proper school education not only teaches children but is also a key factor in promoting a safety culture in the wider community. In Greece, a School Earthquake Safety Initiative has been undertaken by the Earthquake Planning and Protection Organization (EPPO) with specific actions (seminars, lectures, guidelines, educational material, campaigns, national or EU projects, drills, etc.). The objective of this initiative is to develop disaster-resilient school communities through awareness, self-help, cooperation, and education. School preparedness requires the participation of principals, teachers, students, parents, and competent authorities. Preparation and earthquake readiness involve: a) learning what should be done before, during, and after an earthquake; b) doing or preparing to do these things now, before the next earthquake; and c) developing teachers' and students' skills to cope efficiently in case of an earthquake. In this framework, this paper presents the results of a survey aimed at identifying the level of education and preparedness of the school community in Greece. More specifically, the survey questionnaire investigates issues regarding earthquake protection actions, appropriate attitudes and behaviors during an earthquake, and the existence of contingency plans at elementary and secondary schools. The questionnaires were administered to principals and teachers from different regions of the country who attended the EPPO national training project 'Earthquake Safety at Schools'.
A closed-form questionnaire was developed for the survey, containing questions on the following: a) knowledge of self-protective actions; b) existence of emergency planning at home; and c) existence of emergency planning at school (hazard mitigation actions, evacuation plan, and performance of drills). Survey results revealed that a high percentage of teachers have taken the appropriate preparedness measures concerning non-structural hazards at schools, an emergency school plan, and yearly simulation drills. In order to improve action-planning for ongoing school disaster risk reduction, the implementation of earthquake drills, the involvement of students with disabilities, and the evaluation of school emergency plans, EPPO participates in the E-PreS project. The main objective of this project is to create smart tools that define, simulate, and evaluate all hazard emergency steps customized to the unique district and school. The project delivers a holistic methodology using real-time evaluation involving different categories of actors, districts, steps, and metrics. The project is supported by the EU Civil Protection Financial Instrument with a duration of two years. The coordinator is the Kapodistrian University of Athens, and the partners come from four countries: Greece, Italy, Romania, and Bulgaria.

Keywords: drills, earthquake, emergency plans, E-PreS project

Procedia PDF Downloads 213
86 Developing Telehealth-Focused Advanced Practice Nurse Educational Partnerships

Authors: Shelley Y. Hawkins

Abstract:

Introduction/Background: As technology has grown exponentially in healthcare, nurse educators must prepare Advanced Practice Registered Nurse (APRN) graduates with the knowledge and skills in information systems/technology needed to support and improve patient care and health care systems. APRNs are expected to lead in caring for populations who lack accessibility and availability of care, through the use of technology, specifically telehealth. The capacity to use technology effectively and efficiently in patient care delivery is clearly delineated in the American Association of Colleges of Nursing (AACN) Doctor of Nursing Practice (DNP) and Master of Science in Nursing (MSN) Essentials. However, APRNs have minimal or no exposure to formalized telehealth education and lack the technical skills needed to incorporate telehealth into their patient care. APRNs must successfully master technology, using telehealth/telemedicine, electronic health records, health information technology, and clinical decision support systems to advance health. Furthermore, APRNs must be prepared to lead the coordination and collaboration with other healthcare providers in their use and application. Aim/Goal/Purpose: The purpose of this presentation is to establish and operationalize telehealth-focused educational partnerships between one university school of nursing and two health care systems in order to enhance the preparation of APRN NP students for practice, teaching, and/or scholarly endeavors. Methods: The proposed project was initially presented by the project director to selected multidisciplinary stakeholders, including leadership, home telehealth personnel, primary care providers, and decision support systems staff within two major health care systems, to garner their support for acceptance and implementation.
Concurrently, backing was obtained from key university-affiliated colleagues, including the Director of the Simulation and Innovative Learning Lab and the Coordinator of the Health Care Informatics Program. Technology experts skilled in the design and production of web applications and electronic modules were secured from two locally based technology companies. Results: Two telehealth-focused APRN program academic/practice partnerships have been established. Students have opportunities to engage in clinically based telehealth experiences focused on: (1) providing patient care while incorporating various technologies, with a specific emphasis on telehealth; (2) conducting research and/or evidence-based practice projects to further develop the scientific foundation for incorporating telehealth into patient care; and (3) participating in the production of patient-level educational materials on specific topical areas. Conclusions: Evidence-based APRN student telehealth clinical experiences will assist in preparing graduates who can effectively incorporate telehealth into their clinical practice. Greater access for diverse populations will be available as a result of the telehealth service model, along with better care and better outcomes at lower costs. Furthermore, APRNs will provide the necessary leadership and coordination through interprofessional practice, transforming health care through new innovative care models using information systems and technology.

Keywords: academic/practice partnerships, advanced practice nursing, nursing education, telehealth

Procedia PDF Downloads 220
85 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer processes occurring in the vehicle underhood region during the thermal soak phase. The heat retained during the soak period benefits the subsequent cold start through reduced friction loss in the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, therefore providing benefits in both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay, needed to obtain accurate fluid and metal temperature cool-down trajectories and to predict the temperatures at the end of the soak period. Method development has been investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer thermal transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved in an inherently transient manner by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded in the holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital to accurately predicting the heat transfer coefficients for the heat retention modelling. The developed method has demonstrated the software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
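A reduced-order model of the kind mentioned above can be as simple as a lumped-capacitance (first-order) cool-down law, T(t) = T_amb + (T0 − T_amb)·exp(−t/τ) with τ = mc/(hA). The sketch below is a hypothetical stand-in: the initial coolant temperature, the 14°C ambient, and the time constant are all assumed rather than taken from the study.

```python
import math

def cooldown(T0, T_amb, tau_hours, t_hours):
    """Lumped-capacitance (first-order) cool-down temperature at time t.

    A minimal stand-in for a reduced-order thermal retention model:
    T(t) = T_amb + (T0 - T_amb) * exp(-t / tau), with tau = m*c / (h*A).
    """
    return T_amb + (T0 - T_amb) * math.exp(-t_hours / tau_hours)

# Assumed values: coolant starts at 90 C, ambient 14 C, time constant 3 h.
temps = [cooldown(90.0, 14.0, 3.0, t) for t in range(10)]  # hourly, 9 h soak
```

Fitting τ against the LBM/thermal co-simulation trajectories (or test data) per component is one plausible route from the detailed model to a fast-running one.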

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 133
84 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system: it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentration. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data.
Due to the specificity of CNN-type networks, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers, in which convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests carried out, using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hours of prediction, respectively.
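The convolution-pooling-dense pipeline described above can be sketched as a single forward pass in NumPy. Everything here is illustrative: the 48-hour input window, the single random (untrained) filter, and the layer sizes are assumptions for the demonstration, not the architecture used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernel):
    """Valid 1D convolution (cross-correlation) over the time axis."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def relu(z):
    return np.maximum(z, 0.0)

# Assumed input: the last 48 hourly PM10 readings from one sensor (ug/m3).
history = rng.random(48) * 100.0

kernel = rng.normal(0.0, 0.1, 5)           # one convolutional filter
W_out = rng.normal(0.0, 0.1, (24, 22))     # dense head: features -> 24 h forecast

features = relu(conv1d(history, kernel))        # (44,) feature map
pooled = features.reshape(22, 2).mean(axis=1)   # average pooling, stride 2 -> (22,)
forecast = W_out @ pooled                        # 24 hourly PM10 predictions
```

In practice the extra inputs mentioned above (PM2.5, temperature, wind, external forecasts) would enter as additional channels of the input tensor, and the weights would be fitted by minimising the mean square error.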

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 123
83 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System

Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar

Abstract:

Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing at the leading edge and suction at the trailing edge. The jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of the system. The tests aim to understand the airflow characteristics around the aerofoil and to investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°.
The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows and over-predicted surface pressures and lift coefficient for fully attached flow cases. Work must be continued in finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
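The jet momentum coefficient Cµ used above has a standard definition for blown circulation-control aerofoils: Cµ = ṁ·V_jet / (q∞·S), the jet momentum flux normalised by free-stream dynamic pressure times reference area. The sketch below evaluates it for assumed wind-tunnel-scale values; none of the numbers are from the study.

```python
def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, ref_area):
    """Cmu = (m_dot * V_jet) / (0.5 * rho * U_inf**2 * S).

    The standard blowing-momentum coefficient for circulation control:
    jet momentum flux over free-stream dynamic pressure times reference area.
    """
    return (m_dot * v_jet) / (0.5 * rho * u_inf ** 2 * ref_area)

# Assumed wind-tunnel-scale values (not data from the study):
cmu = jet_momentum_coefficient(
    m_dot=0.05,     # jet mass flow rate, kg/s
    v_jet=60.0,     # jet exit velocity, m/s
    rho=1.225,      # air density, kg/m^3
    u_inf=15.0,     # free-stream velocity, m/s
    ref_area=0.2,   # model reference area, m^2
)
```

Because Cµ scales with 1/U∞², the same blowing rate gives a much larger Cµ at low tunnel speeds, which is one reason sweeps over both Cµ and Reynolds number are reported together.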

Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel

Procedia PDF Downloads 120
82 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization and application are growing in the petrochemical and food processing fields, as well as in automotive and aerospace fuel pumping, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into an impeller passage, it moves into the side channel, comes back to the impeller, and then moves on to the next circulation, so the flow leaves the side channel pump following a helical path. The pressure fluctuation exhibited in this flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP, and 1.2QBEP. The results showed that the pressure fluctuation distribution on the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchanged flow between the impeller and side channel. Time and frequency domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations.
It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency domain spectrum at the different monitoring points depicted a gently decreasing trend, common to all operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz, continuing at integer multiples of the rotating shaft frequency. The mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows: operating conditions 0.8QBEP and 1.0QBEP depicted fewer and similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle, and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
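The monitoring-point analysis described above, fluctuation intensity plus a frequency-domain spectrum, can be sketched on a synthetic signal: a 600 Hz tone standing in for the dominant excitation reported. The signal, fluid properties, reference velocity, and the std-over-dynamic-pressure definition of intensity are all assumptions for illustration, not the authors' post-processing.

```python
import numpy as np

def pressure_fluctuation_intensity(p, rho, u_ref):
    """Standard deviation of the pressure signal about its mean, normalised
    by the dynamic pressure 0.5*rho*u_ref**2 (one common definition)."""
    return float(np.std(p - np.mean(p)) / (0.5 * rho * u_ref ** 2))

# Synthetic monitor-point signal: a 600 Hz tone (the dominant excitation
# frequency reported above), 1 s sampled at 48 kHz.
fs = 48000
t = np.arange(fs) / fs
p = 200.0 * np.sin(2 * np.pi * 600 * t)   # pressure fluctuation, Pa

intensity = pressure_fluctuation_intensity(p, rho=1000.0, u_ref=5.0)

# Frequency-domain view: for a 1 s record the rfft bins are 1 Hz apart,
# so the index of the spectral peak is the frequency in Hz.
spectrum = np.abs(np.fft.rfft(p))
dominant_hz = int(np.argmax(spectrum))
```

On real CFD monitor data the same two lines of spectral code would pick out the blade-passing tone and its harmonics at shaft-frequency multiples.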

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 118
81 Permeable Asphalt Pavement as a Measure of Urban Green Infrastructure in the Extreme Events Mitigation

Authors: Márcia Afonso, Cristina Fael, Marisa Dinis-Almeida

Abstract:

Population growth in cities has led to an increase in infrastructure construction, including buildings and roadways, which leads directly to the sealing of soils. At the same time, precipitation patterns are shifting toward higher and more frequent intensities. These two conjugated aspects decrease rainwater infiltration into soils and increase the volume of surface runoff. The practice of green and sustainable urban solutions has encouraged research in these areas. Porous asphalt pavement, as a green infrastructure, is part of a set of practical solutions to address urban challenges related to land use and adaptation to climate change. In this field, permeable pavements with porous asphalt mixtures (PA) have several advantages in terms of reducing the runoff generated by floods. The porous structure of these pavements, compared to a conventional asphalt pavement, allows rainwater infiltration into the subsoil and, consequently, an improvement in water quality. This green infrastructure solution can be applied in cities, particularly in streets or parking lots, to mitigate the effects of floods. Over the years, the pores of these pavements can become filled by sediment, reducing their capacity for rainwater infiltration. Thus, double layer porous asphalt (DLPA) was developed to mitigate the clogging effect and facilitate water infiltration into the lower layers. This study intends to deepen the knowledge of the performance of DLPA when subjected to clogging. The experimental methodology consisted of four evaluation phases of the DLPA infiltration capacity, submitted to three precipitation events (100, 200 and 300 mm/h) in each phase. The first evaluation phase determined the behavior after DLPA construction. In phases two and three, two 500 g/m² clogging cycles were performed, totaling a final load of 1000 g/m². Sand with a gradation rich in fine particles was used as the clogging material.
In the last phase, the DLPA was subjected to simple sweeping and vacuuming maintenance. A sprinkler-type precipitation simulator capable of reproducing real precipitation was developed for this purpose. The main conclusions show that the DLPA retains the capacity to drain water even after two clogging cycles. The measured infiltration flows indicate an efficient performance of the DLPA in attenuating surface runoff, since no runoff was observed in any of the evaluation phases, even at intensities of 200 and 300 mm/h, which simulate intense precipitation events. The infiltration capacity under clogging conditions decreased by about 7% on average across the three intensities relative to the initial performance, i.e., just after construction. However, this capacity was restored when the pavement was subjected to simple maintenance, recovering the DLPA's hydraulic functionality. In summary, the study proved the efficacy of a DLPA that retains coarser sediments at the surface and limits the entry of fine sediments into the remaining layers, while guaranteeing rainwater infiltration and surface runoff reduction; it is therefore a viable solution to put into practice in permeable pavements.
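The roughly 7% average reduction reported above is a simple relative change computed per rainfall intensity and averaged. The sketch below shows the arithmetic with hypothetical infiltration rates; these numbers are invented for illustration, not the study's measurements.

```python
def relative_reduction(initial, clogged):
    """Percentage drop in infiltration capacity relative to the
    post-construction baseline."""
    return 100.0 * (initial - clogged) / initial

# Hypothetical infiltration rates (mm/h) per rainfall intensity (mm/h);
# illustrative values only, not measurements from the study.
baseline = {100: 95.0, 200: 180.0, 300: 260.0}
after_clogging = {100: 88.0, 200: 168.0, 300: 242.0}

drops = {i: relative_reduction(baseline[i], after_clogging[i]) for i in baseline}
mean_drop = sum(drops.values()) / len(drops)
```

The same two-line computation against the post-maintenance rates would quantify how completely sweeping and vacuuming restore the baseline capacity.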

Keywords: clogging, double layer porous asphalt, infiltration capacity, rainfall intensity

Procedia PDF Downloads 471
80 Wellbeing Effects from Family Literacy Education: An Ecological Study

Authors: Jane Furness, Neville Robertson, Judy Hunter, Darrin Hodgetts, Linda Nikora

Abstract:

Background and significance: This paper describes the first use of community psychology theories to investigate family-focused literacy education programmes, enabling a wide range of wellbeing effects of such programmes to be identified for the first time. Evaluations of family literacy programmes usually focus on the economic advantage of gains in literacy skills. By identifying other effects on aspects of participants' lives that are important to them, and how they occur, understanding of how such programmes contribute to wellbeing and social justice is augmented. Drawn from community psychology, an ecological systems-based, culturally adaptive framework for personal, relational, and collective wellbeing illuminated outcomes of family literacy programmes that enhanced wellbeing and quality of life for adult participants, their families, and their communities. All programmes, irrespective of their institutional location, could be similarly scrutinized. Methodology: The study traced the experiences of nineteen adult participants in four family-focused literacy programmes located in geographically and culturally different communities throughout New Zealand. A critical social constructionist paradigm framed this interpretive study. Participants were mainly Māori, Pacific Island, or European New Zealanders. Seventy-nine repeated conversational interviews were conducted over 18 months with the adult participants, programme staff, and people who knew the participants well. Twelve participant observations of programme sessions were conducted, and programme documentation was reviewed. Latent theoretical thematic analysis of the data drew on broad perspectives of literacy, ecological systems theory, network theory, and holistic, integrative theories of wellbeing. Steps taken to co-construct meaning with participants included the repeated conversational interviews and participant checking of interview transcripts and section drafts.
The researcher (this paper’s first author) followed methodological guidelines developed by indigenous peoples for non-indigenous researchers. Findings: The study found that the four family literacy programmes, differing in structure, content, aims and foci, nevertheless shared common principles and practices that reflected programme staff’s overarching concern for people’s wellbeing along with their desire to enhance literacy abilities. A human-rights- and strengths-based view of people, grounded in respect for diverse culturally based values and practices, was evident in staff’s expression of their values and beliefs and in their practices. This enacted stance influenced the outcomes of programme participation for the adult participants, their families and their communities. Alongside the literacy and learning gains identified, participants experienced positive social and relational events and changes, affirmation and strengthening of their culturally based values, and affirmation and building of positive identity. Systemically, the interconnectedness of programme effects with participants’ personal histories and circumstances, the flow on of effects to other aspects of people’s lives and to their families and communities, and the personalised character of the pathways people journeyed towards enhanced wellbeing were identified. Concluding statement: This paper demonstrates the critical contribution of community psychology to a fuller understanding of family-focused educational programme outcomes than has previously been attainable, the meaning of these broader outcomes to people in their lives, and their role in wellbeing and social justice.

Keywords: community psychology, ecological theory, family literacy education, flow on effects, holistic wellbeing

Procedia PDF Downloads 230
79 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that are untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with the simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment planning. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatments, their incidence directly impacts the quality of life of cancer survivors, in particular paediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. 
The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the device is 10 x 10 x 20 cm3 (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector and the SBAM sensor are under development, and the full detector is expected to be constructed by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia) and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict with much more precision the additional dose delivered to patients, and drastically reduce the current safety margins. Preliminary measurements with charged-particle beams and FLUKA Monte Carlo simulations will be presented.
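The double-scattering reconstruction lends itself to a compact kinematic sketch. The following is a minimal illustration assuming non-relativistic n-p elastic kinematics (recoil proton energy T_p = T_n cos²θ); the function names and the 100 MeV synthetic event are hypothetical, not part of the actual MONDO reconstruction software:

```python
import math

def recoil_energy(T_n, theta_p):
    """Non-relativistic n-p elastic scattering: recoil proton kinetic
    energy as a function of the proton angle w.r.t. the neutron."""
    return T_n * math.cos(theta_p) ** 2

def reconstruct_neutron_energy(T_p1, T_p2, theta2):
    """Double-scattering idea: the neutron direction between the two
    vertices is measurable (the line joining them), so theta2 (second
    recoil proton angle w.r.t. that line) gives the intermediate
    neutron energy, and energy conservation recovers the initial one."""
    T_n2 = T_p2 / math.cos(theta2) ** 2   # neutron energy after 1st scatter
    return T_p1 + T_n2                    # energy balance at vertex 1

# Synthetic event: 100 MeV neutron undergoing two elastic scatters
T0 = 100.0                     # MeV, incident neutron (illustrative)
theta1 = math.radians(35.0)    # first recoil proton angle
T_p1 = recoil_energy(T0, theta1)
T_n1 = T0 - T_p1               # scattered neutron energy
theta2 = math.radians(50.0)    # second recoil proton angle
T_p2 = recoil_energy(T_n1, theta2)

T_rec = reconstruct_neutron_energy(T_p1, T_p2, theta2)
print(round(T_rec, 6))  # recovers the incident energy, 100.0
```

In this idealised (noise-free, non-relativistic) setting the reconstruction is exact; the real detector must contend with measurement resolution and relativistic corrections.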

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 209
78 Farm-Women in Technology Transfer to Foster the Capacity Building of Agriculture: A Forecast from a Drought-Prone Rural Setting in India

Authors: Pradipta Chandra, Titas Bhattacharjee, Bhaskar Bhowmick

Abstract:

The economy of India is founded primarily on agriculture, yet agriculture remains the most neglected sector in rural settings. Significantly, household women participate in agriculture with high levels of involvement. However, because of their lower education, women have limited access to financial decisions, land ownership and technology, even though they play a vital role at the individual family level. There are few studies on institution-wise training barriers with a focus on gender disparity. The main purpose of this paper is to identify the factors behind institution-wise training (non-formal education) barriers in technology transfer, with a focus on the participation of rural women in agriculture. For this study, primary and secondary data were collected along both qualitative and quantitative lines. Qualitative data were collected through several field visits in the areas adjacent to Seva-Bharati, Seva Bharati Krishi Vigyan Kendra, using semi-structured questionnaires. At the next stage, detailed field surveys were conducted with close-ended questionnaires scored on a seven-point Likert scale. The sample size was 162. During data collection the focus was on including women, although some bias on the part of respondents and interviewers may exist due to dissimilarities in observation, views, etc. In addition, the heterogeneity of the sample is not very high, although female participation is more than fifty percent. Data were analyzed using the Exploratory Factor Analysis (EFA) technique, yielding three significant factors of training barriers in technology adoption by farmers: (a) Failure of technology transfer training (TTT) comprehension, meaning that the technology takers, i.e., farmers, cannot understand the technology, either because of the language barrier or the way the demonstration is delivered by the experts/trainers. (b) Failure of TTT customization, meaning that training is not tailored to the individual farmer, or gender-, crop- or season-wise. 
(c) Failure of TTT generalization, meaning that the absence of common training methods for individual trainers for specific crops is more prominent at the community level. The central finding is that the technology transfer training methods cannot fulfil the needs of farmers in an economically challenged area. The impact of this study is very high for the dry lateritic, resource-crunched area of Jangalmahal in Paschim Medinipur district, West Bengal, and for areas with a similar socio-economy. At the policy level, this research may help in framing digital agriculture: implementing appropriate information technology for the farming community, effective and timely government investment with proper selection of beneficiaries, and the formation of farmers' clubs and farm science clubs. The most important research implication of this study lies in its contribution to the knowledge diffusion mechanism of the agricultural sector in India. Farmers may overcome the barriers and achieve higher productivity through the adoption of modern farm practices. Corporates may be drawn to invest in the agro-sector under corporate social responsibility (CSR). The research will also help in framing public and industry policy and land use patterns. Consequently, a huge mass of rural farm-women will be empowered and the farmer community will benefit.
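The factor-extraction step can be illustrated with a minimal principal-axis-style sketch. The synthetic two-factor Likert data, the random seed, and the Kaiser eigenvalue-greater-than-one retention rule below are illustrative assumptions standing in for the actual 162-respondent survey and EFA settings, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic Likert-style survey: 162 respondents, 6 items driven by
# 2 latent "barrier" factors (structure is illustrative only).
n = 162
f1 = rng.normal(size=n)          # e.g. a comprehension-barrier factor
f2 = rng.normal(size=n)          # e.g. a customization-barrier factor
noise = lambda: 0.4 * rng.normal(size=n)
items = np.column_stack([f1 + noise(), f1 + noise(), f1 + noise(),
                         f2 + noise(), f2 + noise(), f2 + noise()])
# Map to a 1-7 Likert scale (rounded, clipped)
likert = np.clip(np.round(3.5 + 1.2 * items), 1, 7)

# Principal-axis-style extraction from the correlation matrix
R = np.corrcoef(likert, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1.0                           # Kaiser criterion
loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
print("factors retained:", int(keep.sum()))
print("loadings shape:", loadings.shape)
```

With the planted two-factor structure, the Kaiser rule recovers two factors; the study's three-factor solution would emerge the same way from the real questionnaire data.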

Keywords: dry lateritic zone, institutional barriers, technology transfer in India, farm-women participation

Procedia PDF Downloads 355
77 Successful Public-Private Partnership Through the Impact of Environmental Education: A Case Study on Transforming Community Conflict into Harmony in the Dongpian Community

Authors: Men An Pan, Ho Hsiung Huang, Jui Chuan Lin, Tsui Hsun Wu, Hsing Yuan Yen

Abstract:

Pingtung County, located in the southernmost region of Taiwan, has the largest number of pig farms in the country. In the past, livestock operators in Dongpian Village discharged their wastewater into nearby water bodies, polluting the local rivers and filling the air with the stench of pig excrement. These practices drew many complaints from local residents. In response to the community's long-standing conflict with the livestock farms, the County Government's Environmental Protection Bureau (PTEPB) examined potential ways out beyond heavy fines for the perpetrators. By helping the livestock farms upgrade their pollution prevention equipment, promoting the reuse of biogas residue and slurry from pig excrement, and providing environmental education, the conflict was successfully resolved. The properly treated wastewater from the livestock farms has been provided free of charge to neighbouring farmlands via pipelines and tankers. Extensive cultivation of bananas, papaya, red dragon fruit, Inca nut, and cocoa has resulted in 34% resource utilization of biogas residue as fertilizer. This has encouraged farmers to reduce chemical fertilizers and, after banning herbicides, to use microbial materials such as photosynthetic bacteria, while simultaneously lowering the cost of wastewater treatment on livestock farms and alleviating environmental pollution. The livestock farms thereby fully demonstrate their determination to fulfil their corporate social responsibility (CSR). Building on this success, eight farms jointly established a social enterprise, "Dongpian Gemstone Village Co., Ltd.", to promote organic farming through a "shared farm." The company returns 5% of its total revenue to the community through caregiving services for the elderly and a fund for young local farmers. The community adopted the Satoyama Initiative in accordance with CBD COP10. 
Through the positive impact of environmental education, the community seeks to realize coexistence between society and nature, maintaining and developing socio-economic activities (including agriculture) with respect for nature and building a harmonious relationship between humans and nature. Through sustainable management of resources and the safeguarding of biodiversity, the community is transforming into a socio-ecological production landscape. Apart from nature conservation and watercourse ecology, preserving local culture is also a key focus of the environmental education. To mitigate the impact of global warming and climate change, the community and the government have worked together to develop a disaster prevention and relief system, striving to establish a low-carbon homeland and become a model for resilient communities. Through the power of environmental education, this community has turned its residents’ hearts and minds into concrete action, fulfilled its social responsibility, and moved towards realizing the UN SDGs. Even though it is not the only community to integrate government agencies, research institutions, and NGOs for environmental education, it is a prime example of a low-carbon sustainable community that advances more than 9 SDGs, including responsible consumption and production, climate action, and partnerships for the goals. The community is also leveraging environmental education to become a net-zero carbon community in line with the targets of COP26.

Keywords: environmental education, biogas residue, biogas slurry, CSR, SDGs, climate change, net-zero carbon emissions

Procedia PDF Downloads 125
76 “Laws Drifting Off While Artificial Intelligence Is Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an original medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI’s development, spurred by the confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is a marquee term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic tabular data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is a growing concern about its ethical and legal aspects. 
Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a step prior to the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 77
75 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles

Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail

Abstract:

In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel-cell-based, together with the implementation of transport policies aimed at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system. 
Nevertheless, the major drivers towards a low-carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increase of public transport and car sharing services/usage) and diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a considerable reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and the environment, underlining to what extent they can contribute to the decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 691685.
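The system-dynamics logic behind a strategic model of this kind, a stock updated by flows, with feedback loops linking subsystems, can be sketched in miniature. The single-stock diffusion model and every coefficient below are illustrative assumptions, not ASTRA equations or REFLEX parameters:

```python
# Minimal system-dynamics sketch: one stock (EV fleet share) with a
# feedback loop (adoption rate depends on the current share), driving
# an emissions indicator. All numbers are illustrative assumptions.

def simulate(years=30, p=0.01, q=0.35, ef_ice=150.0, ef_ev=40.0):
    share = 0.01          # stock: EV share of the fleet (fraction)
    trajectory = []       # share per year
    emissions = []        # fleet-average gCO2/km per year
    for _ in range(years):
        # Bass-style feedback: the imitation term q*share*(1-share)
        # grows with the stock it feeds, a reinforcing loop that
        # saturates as the fleet approaches full electrification.
        adoption = (p + q * share) * (1.0 - share)
        share = min(1.0, share + adoption)
        trajectory.append(share)
        emissions.append(ef_ice * (1 - share) + ef_ev * share)
    return trajectory, emissions

shares, ems = simulate()
print(f"EV share after 30 years: {shares[-1]:.2f}")
print(f"fleet emissions: {ems[0]:.0f} -> {ems[-1]:.0f} gCO2/km")
```

The real model couples dozens of such stocks (demography, economy, fleet, demand) so that a policy change in one module propagates to, and feeds back from, the others.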

Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy

Procedia PDF Downloads 134
74 An Innovation Decision Process View of the Adoption of Total Laboratory Automation

Authors: Chia-Jung Chen, Yu-Chi Hsu, June-Dong Lin, Kun-Chen Chan, Chieh-Tien Wang, Li-Ching Wu, Chung-Feng Liu

Abstract:

With fast advances in healthcare technology, various total laboratory automation (TLA) processes have been proposed. However, adopting TLA requires substantial funding. This study explores an early adoption experience by Taiwan’s large-scale hospital group, the Chimei Hospital Group (CMG), which owns three branch hospitals (Yongkang, Liouying and Chiali, in order of service scale), based on the five stages of Everett Rogers’ innovation-decision process. 1. Knowledge stage: Over the years, two weaknesses existed in the laboratory department of CMG: 1) only a few examination categories (e.g., sugar testing and HbA1c) could be completed and reported within a day during an outpatient clinical visit; 2) the Yongkang Hospital laboratory space was dispersed across three buildings, resulting in duplicated investment in analysis instruments and inconvenient manual specimen transportation. Thus, the senior management of the department raised a crucial question: was it time to redesign the laboratory department? 2. Persuasion stage: At the end of 2013, Yongkang Hospital’s new building and restructuring project created a great opportunity for the redesign of the laboratory department. However, not all laboratory colleagues had reached a consensus for change. Thus, the top managers arranged a series of benchmark visits to stimulate colleagues into becoming aware of and accepting TLA. Later, the director of the department submitted a formal report to the top management of CMG with the results of the benchmark visits, a preliminary feasibility analysis, potential benefits and so on. 3. Decision stage: This TLA suggestion was well supported by the top management of CMG, and they finally made a decision to carry out the project with an instrument-leasing strategy. After the announcement of a request for proposals and several vendor briefings, CMG confirmed their laboratory automation architecture and completed the contracts. 
At the same time, a cross-department project team was formed, and the laboratory department assigned a section leader to the National Taiwan University Hospital for one month of relevant training. 4. Implementation stage: During the implementation, the project team called regular meetings to review the results of the operations and to respond immediately with adjustments. The main project tasks included: 1) completion of the preparatory work for beginning the automation procedures; 2) ensuring information security and privacy protection; 3) formulating automated examination process protocols; 4) evaluating the performance of the new instruments and the instrument connectivity; 5) ensuring good integration with the hospital information system (HIS)/laboratory information system (LIS); and 6) ensuring continued compliance with ISO 15189 certification. 5. Confirmation stage: In short, the core process changes include: 1) cancellation of signature seals on the specimen tubes; 2) transfer of daily examination reports to a data warehouse; 3) incorporation of routine pre-admission blood drawing and formal inpatient morning blood drawing into an automatically prepared tube mechanism. The study summarizes the following continuous improvement orientations: (1) flexible reference-range set-up for new instruments in the LIS; (2) restructuring of the specimen categories; (3) continuous review and improvement of the examination process; (4) further evaluation of whether to install tube (specimen) delivery tracks.

Keywords: innovation decision process, total laboratory automation, health care

Procedia PDF Downloads 403
73 Sinhala Sign Language to Grammatically Correct Sentences Using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactical structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding grammatically correct translations, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. 
Addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. It also lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Efforts will also be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
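The LSTM component shared by the recognition model and the NMT encoder/decoder can be illustrated by writing out a single cell step explicitly. The toy dimensions, random weights, and the idea of feeding per-frame gesture features are assumptions for illustration, not the trained SSL models:

```python
import numpy as np

def lstm_step(x, h, c, W, b):
    """One step of a standard LSTM cell. W stacks the input/forget/
    candidate/output weights applied to [x; h]; b is the stacked bias."""
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b          # pre-activations, (4H,)
    i = 1 / (1 + np.exp(-z[0:H]))               # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))             # forget gate
    g = np.tanh(z[2*H:3*H])                     # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:4*H]))           # output gate
    c_new = f * c + i * g                       # gated memory update
    h_new = o * np.tanh(c_new)                  # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 4                                     # toy feature/hidden sizes
W = 0.1 * rng.normal(size=(4 * H, D + H))
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for t in range(5):                              # a 5-frame gesture sequence
    x = rng.normal(size=D)                      # e.g. per-frame keypoints
    h, c = lstm_step(x, h, c, W, b)
print("final hidden state:", np.round(h, 3))
```

In the full system, the final hidden state would feed a softmax classifier over sign labels (stage one) or initialise the decoder of the encoder-decoder NMT model (stage two).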

Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT

Procedia PDF Downloads 78
72 The Systematic Impact of Climatic Disasters on Maternal Health in Pakistan

Authors: Yiqi Zhu, Jean Francois Trani, Rameez Ulhassan

Abstract:

Extreme weather phenomena increased by 46% between 2007 and 2017 and have become more intense with the rise in global average temperatures. This increased intensity of climate variations often induces humanitarian crises and particularly affects vulnerable populations in low- and middle-income countries (LMICs). Expectant and lactating mothers are among the most vulnerable groups. Pakistan ranks 10th among the countries most affected by climate disasters. In 2022, monsoon floods submerged a third of the country, causing the loss of 1,500 lives. Approximately 650,000 expectant and lactating mothers faced systematic stress from climatic disasters. Our study used participatory methods to investigate the systematic impact of climatic disasters on maternal health. In March 2023, we conducted six Group Model Building (GMB) workshops with healthcare workers, fathers, and mothers separately in two of the most affected areas in Pakistan. This study was approved by the Islamic Relief Research Review Board. GMB workshops consist of three sessions. In the first session, participants discussed the factors that impact maternal health. After identifying the factors, they discussed the connections among them and explored the system structures that collectively impact maternal health. Based on the discussion, a causal loop diagram (CLD) was created. Finally, participants discussed action ideas that could improve the system to enhance maternal health. Based on our discussions and the causal loop diagram, we identified interconnected factors at the family, community, and policy levels. Mothers and children are directly impacted by three interrelated factors: food insecurity, unstable housing, and lack of income. These factors create a reinforcing cycle that negatively affects both mothers and newborns. After the flood, many mothers were unable to produce sufficient breastmilk due to their health status. 
Without breastmilk and sufficient food for complementary feeding, babies tend to get sick in damp and unhygienic environments resulting from temporary or unstable housing. When parents take care of sick children, they miss out on income-generating opportunities. At the community level, the lack of access to clean water and sanitation (WASH) and maternal healthcare further worsens the situation. Structural failures such as a lack of safety nets and programs associated with flood preparedness make families increasingly vulnerable with each disaster. Several families reported that they had not fully recovered from a flood that occurred ten years ago, and this latest disaster destroyed their lives again. Although over twenty non-profit organizations are working in these villages, few of them provide sustainable support. Therefore, participants called for systemic changes in response to the increasing frequency of climate disasters. The study reveals the systematic vulnerabilities of mothers and children after climatic disasters. The most vulnerable populations are often affected the most by climate change. Collaborative efforts are required to improve water and forest management, strengthen public infrastructure, increase access to WASH, and gradually build climate-resilient communities. Governments, non-governmental organizations, and the community should work together to develop and implement effective strategies to prevent, mitigate, and adapt to climate change and its impacts.
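The reinforcing cycle described above can be encoded as a signed directed graph, the standard formalisation of a causal loop diagram, where a loop is reinforcing when the product of its link polarities is positive. The variable names and polarities below paraphrase the loop described in the abstract rather than reproduce the workshops' actual CLD:

```python
# Causal loop diagram as a signed directed graph: +1 means the source
# increases the target, -1 means it decreases it.
links = {
    ("food_insecurity", "child_illness"): +1,
    ("unstable_housing", "child_illness"): +1,
    ("child_illness", "caregiving_time"): +1,
    ("caregiving_time", "household_income"): -1,
    ("household_income", "food_insecurity"): -1,
}

def loop_polarity(cycle):
    """Product of link signs along a closed chain of variables."""
    sign = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= links[(a, b)]
    return sign

loop = ["food_insecurity", "child_illness", "caregiving_time",
        "household_income"]
kind = "reinforcing" if loop_polarity(loop) > 0 else "balancing"
print(f"loop polarity: {loop_polarity(loop):+d} ({kind})")
```

The two negative links cancel, so the loop's overall polarity is positive: each pass around the cycle deepens food insecurity, which is exactly the reinforcing dynamic the participants identified.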

Keywords: climatic disasters, maternal health, Pakistan, systematic impact, flood, disaster relief

Procedia PDF Downloads 54
71 Molecular Migration in Polyvinyl Acetate Matrix: Impact of Compatibility, Number of Migrants and Stress on Surface and Internal Microstructure

Authors: O. Squillace, R. L. Thompson

Abstract:

Migration of small molecules to, and across the surface of polymer matrices is a little-studied problem with important industrial applications. Tackifiers in adhesives, flavors in foods and binding agents in paints all present situations where the function of a product depends on the ability of small molecules to migrate through a polymer matrix to achieve the desired properties such as softness, dispersion of fillers, and to deliver an effect that is felt (or tasted) on a surface. It’s been shown that the chemical and molecular structure, surface free energies, phase behavior, close environment and compatibility of the system, influence the migrants’ motion. When differences in behavior, such as occurrence of segregation to the surface or not, are observed it is then of crucial importance to identify and get a better understanding of the driving forces involved in the process of molecular migration. In this aim, experience is meant to be allied with theory in order to deliver a validated theoretical and computational toolkit to describe and predict these phenomena. The systems that have been chosen for this study aim to address the effect of polarity mismatch between the migrants and the polymer matrix and that of a second migrant over the first one. As a non-polar resin polymer, polyvinyl acetate is used as the material to which more or less polar migrants (sorbitol, carvone, octanoic acid (OA), triacetin) are to be added. Through contact angle measurement a surface excess is seen for sorbitol (polar) mixed with PVAc as the surface energy is lowered compare to the one of pure PVAc. This effect is increased upon the addition of carvon or triacetin (non-polars). Surface micro-structures are also evidenced by atomic force microscopy (AFM). 
Ion beam analysis (nuclear reaction analysis), supplemented by neutron reflectometry, can accurately characterize the self-organization of surfactants, oligomers, and aromatic molecules in polymer films in order to relate the macroscopic behavior to length scales that are amenable to simulation. The nuclear reaction analysis (NRA) data for 20% deuterated OA show evidence of a surface excess, which is enhanced after annealing. The addition of 10% triacetin as a second migrant results in the formation of an underlying layer enriched in triacetin below the surface excess of OA. The results show that molecules in polarity mismatch with the matrix tend to segregate to the surface, and this is favored by the addition of a second migrant of the same polarity as the matrix. As studies have so far been restricted to model supported films under static conditions, we also wish to address the more challenging conditions of materials under controlled stress or strain. To achieve this, a simple rig and PDMS cell have been designed to stretch the material to a defined strain and to probe these mechanical effects by ion beam analysis and atomic force microscopy. This will be a significant step towards exploring the influence of extensional strain on surface segregation and flavor release in cross-linked rubbers.
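The link the abstract draws between contact angles and a lowered surface energy can be made concrete with the Owens-Wendt two-liquid model. The sketch below is an illustration of that standard analysis, not the authors' own procedure; the probe-liquid constants are common literature values, and any angles fed in are hypothetical examples rather than data from the paper.

```python
import math

# (total, dispersive, polar) surface tensions of probe liquids, mN/m
# (standard literature values; illustrative only)
WATER = (72.8, 21.8, 51.0)
DIIODOMETHANE = (50.8, 50.8, 0.0)  # almost purely dispersive probe


def owens_wendt(theta_water_deg, theta_dim_deg):
    """Return (dispersive, polar, total) solid surface energy in mN/m.

    Owens-Wendt: g_l(1 + cos t) = 2[sqrt(g_s^d g_l^d) + sqrt(g_s^p g_l^p)].
    """
    g_l, g_ld, _ = DIIODOMETHANE
    # A purely dispersive probe fixes the dispersive component of the solid:
    sqrt_gsd = g_l * (1 + math.cos(math.radians(theta_dim_deg))) / (2 * math.sqrt(g_ld))
    g_l, g_ld, g_lp = WATER
    lhs = g_l * (1 + math.cos(math.radians(theta_water_deg))) / 2
    # The water angle then yields the polar component (clamped at zero):
    sqrt_gsp = max((lhs - sqrt_gsd * math.sqrt(g_ld)) / math.sqrt(g_lp), 0.0)
    gsd, gsp = sqrt_gsd ** 2, sqrt_gsp ** 2
    return gsd, gsp, gsd + gsp
```

A sorbitol-loaded PVAc film showing larger water contact angles than pure PVAc would, through this relation, come out with a lower polar component and hence a lower total surface energy, which is the inference drawn in the abstract.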

Keywords: polymers, surface segregation, thin films, molecular migration

Procedia PDF Downloads 116
70 Targeting Peptide Based Therapeutics: Integrated Computational and Experimental Studies of Autophagic Regulation in Host-Parasite Interaction

Authors: Vrushali Guhe, Shailza Singh

Abstract:

Cutaneous leishmaniasis is a neglected tropical disease, present worldwide, caused by the protozoan parasite Leishmania major. The therapeutic armamentarium for leishmaniasis shows several limitations, as drugs have toxic effects and the parasite develops increasing resistance. Thus, the identification of novel therapeutic targets is of paramount importance. Previous studies have shown that autophagy, a cellular process, can either facilitate infection or aid in the elimination of the parasite, depending on the specific parasite species and host background in leishmaniasis. In the present study, our objective was to target the essential autophagy protein ATG8, which plays a crucial role in the survival, infection dynamics, and differentiation of the Leishmania parasite. ATG8 in Leishmania major and its homologue LC3 in Homo sapiens act as autophagic markers. The present study demonstrates the crucial role of the ATG8 protein as a potential target for combating Leishmania major infection. Through bioinformatics analysis, we identified non-conserved motifs within the ATG8 protein of Leishmania major that are not present in LC3 of Homo sapiens. Against these two non-conserved motifs, we generated a library of 60 peptides on the basis of physicochemical properties. These peptides underwent a filtering process based on various parameters, including feasibility of synthesis and purification, compatibility with selective reaction monitoring (SRM)/multiple reaction monitoring (MRM), hydrophobicity, hydropathy index, average molecular weight (Mw average), monoisotopic molecular weight (Mw monoisotopic), theoretical isoelectric point (pI), and half-life. Further filtering shortlisted three peptides using molecular docking and molecular dynamics simulations. The direct interaction between ATG8 and the shortlisted peptides was confirmed through surface plasmon resonance (SPR) experiments.
Notably, these peptides exhibited the remarkable ability to penetrate the parasite membrane and exert profound effects on Leishmania major. Treatment with these peptides significantly impacted parasite survival, leading to alterations in the cell cycle and morphology. Furthermore, the peptides were found to modulate autophagosome formation, particularly under starved conditions, suggesting their involvement in disrupting the regulation of autophagy within Leishmania major. In vitro studies demonstrated that the selected peptides effectively reduced the parasite load within infected host cells. Encouragingly, these findings were corroborated by in vivo experiments, which showed a reduction in parasite burden upon peptide administration. Additionally, the peptides were observed to affect the levels of LC3-II within host cells. In conclusion, our findings highlight the efficacy of these novel peptides in targeting Leishmania major's ATG8 and disrupting parasite survival. These results provide valuable insights into the development of innovative therapeutic strategies against leishmaniasis by targeting the autophagy protein ATG8 of Leishmania major.
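The hydropathy step of a peptide-filtering pipeline like the one described can be sketched in a few lines. This is an illustrative reconstruction using the standard Kyte-Doolittle scale, not the authors' actual code, and the cutoff value is a hypothetical example rather than the study's criterion.

```python
# Kyte-Doolittle hydropathy values per amino acid (standard scale)
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}


def gravy(sequence: str) -> float:
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue."""
    return sum(KYTE_DOOLITTLE[aa] for aa in sequence) / len(sequence)


def filter_by_gravy(peptides, max_gravy=0.0):
    """Keep peptides at or below a hydropathy cutoff (hypothetical threshold)."""
    return [p for p in peptides if gravy(p) <= max_gravy]
```

In a real pipeline this would be one filter among several (pI, molecular weight, half-life, SRM/MRM compatibility), applied before the docking and MD stages.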

Keywords: ATG8, leishmaniasis, surface plasmon resonance, MD simulation, molecular docking, peptide designing, therapeutics

Procedia PDF Downloads 60
69 Media, Myth and Hero: Sacred Political Narrative in Semiotic and Anthropological Analysis

Authors: Guilherme Oliveira

Abstract:

The assimilation of images and their potential symbolism into lived experiences is inherent. It is through this exercise of recognition via imagistic records that the questioning of the origins of a constant narrative stimulated by the media arises. The construction of the "Man" archetype and the reflections of active masculine imagery in the 21st century, when conveyed through media channels, could potentially have detrimental effects. Addressing this systematic behavioral chronology of virile cisgender, permeated imagistically through these means, involves exploring potential resolutions. Thus, an investigation process is initiated into the potential representation of the 'hero' in this media emulation through idols contextualized in the political sphere, with the purpose of elucidating the processes of simulation and emulation of narratives based on mythical, historical, and sacred accounts. In this process of sharing, the narratives contained in the imagistic structuring offered by information dissemination channels seek validation through a process of public acceptance. To achieve this consensus, a visual set adorned with mythological and sacred symbolisms adapted to the intended environment is promoted, thus utilizing sociocultural characteristics in favor of political marketing. Visual recognition, therefore, becomes a direct reflection of a cultural heritage acquired through lived human experience, stimulated by continuous representations throughout history. Echoes of imagery and narratives undergo a constant process of resignification of their concepts, sharpened by their premises, and adapted to the environment in which they seek to establish themselves. Political figures analyzed in this article employ the practice of taking possession of symbolisms, mythological stories, and heroisms and adapt their visual construction through a continuous praxis of emulation. Thus, they utilize iconic mythological narratives to gain credibility through belief. 
Utilizing iconic mythological narratives for credibility through belief, the idol becomes the very act of releasing trauma, offering believers liberation from preconceived concepts and allowing for the attribution of new meanings. To address this issue and highlight the subjectivities within the intention of the image, a linguistic, semiotic, and anthropological methodology is created. The linguistic strand uses expressions like 'Blaming the Image' to create a mechanism of expressive action, questioning why a construction or visual composition is blamed, and thus seeks answers in the first act. The semiotic and anthropological strands develop an imagistic atlas of graphic analysis, seeking to make connections, comparisons, and relations between modern and sacred/mystical narratives, emphasizing the different subjective layers of embedded symbolism. Together these constitute a performative act of disarming the image. The approach creates a disenchantment of the superficial gaze under the constant reproduction of visual content stimulated by virtual networks, enabling a discussion about the acceptance of caricatures characterized by past fables.

Keywords: image, heroic narrative, media heroism, virile politics, political, myth, sacred performance, visual mythmaking, characterization dynamics

Procedia PDF Downloads 33
68 Upsouth: Digitally Empowering Rangatahi (Youth) and Whaanau (Families) to Build Skills in Critical and Creative Thinking to Achieve More Active Citizenship in Aotearoa New Zealand

Authors: Ayla Hoeta

Abstract:

In a post-colonial Aotearoa New Zealand, solutions by rangatahi (youth) for rangatahi are essential, as are civic participation and building economic agency in an increasingly tough economic climate. Upsouth was an online community crowdsourcing platform, developed by The Southern Initiative in collaboration with Itsnoon, that provided rangatahi and whānau (family) a safe space to share lived experience, thoughts and ideas about local kaupapa (issues/topics) of importance to them. The target participants were Māori (indigenous) and Pacifica groups aged 14-21 years. In the Aotearoa New Zealand context, this participant group is not likely to engage in traditional consultation processes, despite being an essential constituent in helping shape better local communities, whānau and futures. The Upsouth platform was active for two years, from 2018 to 2019, during which it completed 42 callups with 4300+ participants. The web platform collated the ideas, voices, feedback, and content of users around a callup commissioned by a sponsor, such as Auckland Council, Z Energy or Auckland Transport. A callup might concern a pressing challenge in a community, such as climate change, a new housing development, or homelessness. Each callup was funded by the sponsor, with Upsouth's main point of difference being that participants were given koha (a money donation) through digital wallets for their ideas. Depending on the quality of what participants uploaded, the koha varied between small micropayments and larger payments. This encouraged participants to develop creative and critical thinking - upskilling for future-focused jobs, enterprise and democratic skills while earning pocket money at the same time. Upsouth enables youth-led action and voice, and empowers them to be part of a reciprocal and creative economy.
Rangatahi are encouraged to express themselves culturally, creatively, freely and in a way they are free to choose - for example, spoken word, song, dance, video, drawings, and/or poems. This challenges and changes what is considered acceptable as community engagement feedback by the local government. Many traditional engagement platforms are not as consultative, do not accept diverse types of feedback, and do not incentivise this valuable expression of feedback. Upsouth is also empowering for rangatahi, since it allows them the opportunity to express their opinions directly to the government. Upsouth gained national and international recognition for the way it engages with youth: winning the Supreme Award and the Accessibility and Transparency Award at Auckland Council's 2018 Engagement Awards, and becoming a finalist in the 2018 Digital Equity and Accessibility category of the International Data Corporation's Smart City Asia and Pacific Awards. This paper will fully contextualize the challenges of rangatahi and whānau civic engagement in Aotearoa New Zealand and then present a reflective case study of the Upsouth project, with examples from some of the callups. This is intended to form part of the Divided Cities 22 conference New Ground sub-theme as a critical reflection on a design intervention, which was conceived and implemented by the lead author to overcome the post-colonial divisions of Māori, Pacifica and minority ethnic rangatahi in Aotearoa New Zealand.

Keywords: rangatahi, youth empowerment, civic engagement, enabling, relating, digital platform, participation

Procedia PDF Downloads 54
67 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM

Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi

Abstract:

FH2 (formin homology-2) domains of several proteins collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers. Chain dimerization by ring structure formation serves as a structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulation of chain A of the FH2 domain of DAAM, solvated in a water box in 50 mM NaCl, was conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and AmberTools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained via the I-TASSER web server. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described structural dynamics of denaturation. Topology and parameter information of the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins in AmberTools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated, and production runs were performed in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed insignificant changes in the total molecular average RMSD values at temperatures from 293.15 to 353.15 K.
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from a native, ordered state to a denatured, disordered state, as is well described in the literature. RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM exceeded the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and its functions in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain a disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values between native and denatured states of the FH2 domain of DAAM over this temperature range, suggests that these formins can be attributed to the group of intrinsically disordered proteins rather than to intrinsically ordered proteins such as G-actin.
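The per-region RMSD comparison underlying these conclusions can be sketched as follows. This is a simplified illustration of the metric (computed on coordinates assumed to be already superposed, as after trajectory alignment in tools such as VMD), not the study's actual analysis scripts.

```python
import math


def rmsd(reference, frame):
    """Root-mean-square deviation between two matched coordinate sets.

    Both arguments are lists of (x, y, z) tuples, assumed already superposed.
    """
    assert len(reference) == len(frame)
    sq = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(reference, frame)
    )
    return math.sqrt(sq / len(reference))


def region_rmsd(reference, frame, indices):
    """RMSD restricted to an atom subset, e.g. the lasso or post region."""
    return rmsd([reference[i] for i in indices], [frame[i] for i in indices])
```

Comparing `region_rmsd` for the lasso/post atom indices against `rmsd` over the whole chain, frame by frame and at each temperature, is how one would see a region fluctuating above the total molecular average.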

Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics

Procedia PDF Downloads 118
66 The Role of Virtual Reality in Mediating the Vulnerability of Distant Suffering: Distance, Agency, and the Hierarchies of Human Life

Authors: Z. Xu

Abstract:

Immersive virtual reality (VR) has gained momentum in humanitarian communication due to its utopian promises of co-presence, immediacy, and transcendence. These potential benefits have led the United Nations (UN) to tirelessly produce and distribute VR series to evoke global empathy and encourage policymakers, philanthropic business tycoons and citizens around the world to actually do something (i.e., give a donation). However, it is unclear whether VR can cultivate cosmopolitans with a sense of social responsibility towards the geographically, socially/culturally and morally mediated misfortune of faraway others. Drawing upon existing work on the mediation of distant suffering, this article constructs an analytical framework to articulate the issue. Applying this framework to a case study of five of the UN's VR pieces, the article identifies three paradoxes that exist between cyber-utopian and cyber-dystopian narratives. In the “paradox of distance”, VR relies on the notions of “presence” and “storyliving” to implicitly link audiences spatially and temporally to distant suffering, creating global connectivity and reducing the perceived distance between audiences and others; yet it also enables audiences to fully occupy the point of view of distant sufferers (creating too close, even absolute, proximity), which may cause them to feel naive self-righteousness or narcissism in their pleasures and desires, thereby destroying the “proper distance”. In the “paradox of agency”, VR simulates a superficially “real” encounter for visual intimacy, thereby establishing an “audience-beneficiary” relationship in humanitarian communication; yet in this case the mediated hyperreality is not an authentic reality, and its simulation does not fill the gap between reality and the virtual world.
In the “paradox of the hierarchies of human life”, VR enables an audience to experience virtually fundamental “freedom”, epitomizing an attitude of cultural relativism that informs a great deal of contemporary multiculturalism and providing vast possibilities for a more egalitarian representation of distant sufferers; yet it also takes the spectator's personal empathic feelings as the focus of intervention, rather than structural inequality and political exclusion (the economic and political power relations of viewing). Thus, the audience can potentially remain trapped within the minefield of hegemonic humanitarianism. This study is significant in two respects. First, it advances the digital turn in studies of media and morality in the polymedia milieu; it is motivated by the necessary call to move beyond traditional technological environments towards a more novel understanding of the asymmetry of power between the safety of spectators and the vulnerability of mediated sufferers. Second, it not only reminds humanitarian journalists and NGOs that they should not rely entirely on the richer news experience or powerful response-ability enabled by VR to gain a “moral bond” with distant sufferers, but also argues that when fully-fledged VR technology is developed, it can serve as a kind of alchemy and should not be underestimated merely as a “bugaboo” of an alarmist philosophical and fictional dystopia.

Keywords: audience, cosmopolitan, distant suffering, virtual reality, humanitarian communication

Procedia PDF Downloads 124
65 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor

Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez

Abstract:

Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale continuous stirred tank reactor (CSTR) biodigester during the co-digestion process is numerically analyzed using Ansys Fluent. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds depending on the microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation incorporates these rheological results and models the full-scale CSTR biodigester, coupling the continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds stress model), for a 45° pitched-blade impeller. The simulation runs for three minutes, since the aim is to study intermittent mixing with a saving in energy consumed. The results show that the absolute errors of the power number associated with the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from Nagata's analytical-experimental correlation. The generalized Reynolds number indicates that the fluid dynamics lie in a transition-to-turbulent flow regime. Concerning the Froude number, the result indicates there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5.
It is observed that the design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, where the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If 24/7 complete mixing under stirred agitation is considered, with a plant factor of 80%, 51,840 kWh/year are estimated. By contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. This makes it feasible to predict the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase. High-intensity mixing at the beginning activates the bacteria, and a further increase in mixing towards the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the biodigester bottom.
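The energy figures above follow directly from a duty-cycle calculation, sketched below under the abstract's stated assumptions (an 80% plant factor and agitation for 3 min out of every 15); the implied mixer power is inferred from the 51,840 kWh/year figure, not stated in the abstract.

```python
# Back-of-the-envelope check of the agitation energy figures.
HOURS_PER_YEAR = 24 * 365  # 8760


def annual_energy_kwh(power_kw, plant_factor=0.8, duty_cycle=1.0):
    """Annual energy = power x hours x plant factor x agitation duty cycle."""
    return power_kw * HOURS_PER_YEAR * plant_factor * duty_cycle


# Implied mixer power for the continuous case (inferred, ~7.4 kW):
power_kw = 51840 / (HOURS_PER_YEAR * 0.8)

continuous = annual_energy_kwh(power_kw)                      # 51,840 kWh/year
intermittent = annual_energy_kwh(power_kw, duty_cycle=3 / 15)  # 20% duty cycle

savings = 1 - intermittent / continuous  # 0.8, i.e. the ~80% cited
```

Running the mixer one-fifth of the time cuts agitation energy by four-fifths, which is where the "almost 80%" reduction comes from.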

Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste

Procedia PDF Downloads 91
64 Mining and Ecological Events and its Impact on the Genesis and Geo-Distribution of Ebola Outbreaks in Africa

Authors: E Tambo, O. O. Olalubi, E. C. Ugwu, J. Y. Ngogang

Abstract:

Despite the World Health Organization (WHO) declaration of an international health emergency, responses and efforts to stem the worst-recorded Ebola outbreak remain precariously inadequate in most of the affected countries in West Africa. Mining of natural resources has been shown to play a key role in both motivating and fuelling the ethnic, civil and armed conflicts that have plagued a number of African countries over the last decade. Revenues from the exploitation of natural resources are used not only to sustain the national economy but also to fund armies, personal enrichment and the building of political support. Little is documented on the impact of mining and ecological events on the emergence and geographical distribution of Ebola in Africa over time and space. We aimed to provide a better understanding of the interconnectedness among the mining of natural resources, resource management, and mining conflict and post-conflict conditions in Ebola outbreaks, and of how wealth generated from abundant natural resources could be better managed to promote research and development: strengthening environmental, socioeconomic and health systems sustainability; surveillance and response systems for Ebola and other emerging diseases; prevention and control; early warning and alert; durable peace; and sustainable development, rather than fuelling conflicts and the resurgence of emerging disease epidemics, from a community and national/regional perspective. Our results present the first systematic assessment of the impact of major mineral-conflict events, diffusing over space and time, and of mining activities on the genesis and geo-distribution of nine Ebola outbreaks in affected countries across Africa. We demonstrate how, where and when mining activities in Africa increase ecological degradation and conflicts at the local level, and then spread violence across territory and time by enhancing the financial capacities of fighting groups/ethnic factions, together with disease onset.
We also examine the process of developing minimum standards for natural resource governance; improving governmental and civil society capacity for natural resource management, including the strengthening of monitoring and enforcement mechanisms; and understanding how post-mining and post-conflict community or national reconstruction and rehabilitation programmes strengthen or develop community health systems and regulatory mechanisms. In addition, the quest for control over these resources, and illegal mining with its incursions across the forest landscape, increased environmental and ecological instability, displacement and disequilibrium, thereby affecting the intensity and duration of mining and conflicts/wars and the episodes of Ebola outbreaks over time and space. We highlight the key findings and lessons learnt in promoting country- or community-led processes for transforming natural resource wealth from a peace liability to a peace asset. Advocacy is imperative, as is facilitating intergovernmental deliberations on the critical issues and challenges affecting African communities, transforming the exploitation of natural resources from a peace liability into outbreak prevention and control. The vital role of mining in increasing government revenues and expenditures, and in the equitable distribution of wealth and health to all stakeholders, in particular local communities, requires coordination, cooperative leadership and partnership in fostering sustainable development initiatives, from the mining context to surveillance and response systems for Ebola and other infectious diseases, prevention and control, and judicious resource management.

Keywords: mining, mining conflicts, mines, ecological, Ebola, outbreak, mining companies, miners, impact

Procedia PDF Downloads 280
63 Improving Recovery Reuse and Irrigation Scheme Efficiency – North Gaza Emergency Sewage Treatment Project as Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

Part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is a semi-arid zone that relies solely on the Coastal Aquifer. The Coastal Aquifer is the only source of water, with only 5-10% of it suitable for human use, and it barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority's strategy is to develop a non-conventional water resource from treated wastewater to cover agricultural requirements and serve the population. A new wastewater treatment plant (WWTP) project is to replace the old, overloaded Beit Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP) and phase C (a recovery and reuse scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning. Nearly 23 Mm3 of partially treated wastewater have been infiltrated into the aquifer. Phases B and C witnessed many delays, and this forced a reassessment of the original RRS design. An environmental management plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the soil aquifer treatment (SAT) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS, along with the proposed locations of the 27 recovery wells that form part of the proposed RRS. The results of the monitored wells were assessed against PWA baseline data and fed into a groundwater model simulating the plume, in order to propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, the proposed locations and the operating schedules (including well groupings). The proposed simulations were examined using Visual MODFLOW v4.2. The results of the monitored wells were assessed based on their locations relative to the proposed recovery wells (200 m, 500 m and 750 m away from the IBs).
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) alongside a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating an expansion of the plume to this distance. At this rate, given the time required to construct the recovery scheme, the RRS would fail to capture the plume if the original design were kept. Based on that, many simulations were conducted, leading to three main scenarios. The scenarios manipulated the starting dates, the pumping rates and the locations of the recovery wells. Simulated plume expansions and path-lines were extracted from the model to assess how to prevent the expansion towards the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two additional recovery wells in a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring programme for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.
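Why well location dominates recovery efficiency can be illustrated with the classical single-well capture-zone solution for uniform regional flow (the Javandel-Tsang analysis). This is a screening sketch only, not the Visual MODFLOW model used in the study, and all parameter values below are hypothetical.

```python
import math


def capture_zone(pumping_rate, aquifer_thickness, darcy_flux):
    """Return (max_capture_width, stagnation_distance) in metres.

    pumping_rate      Q, m^3/day
    aquifer_thickness b, m
    darcy_flux        q, m/day (regional specific discharge)
    """
    # Far-upstream capture width W = Q / (b q); downgradient stagnation
    # point at x = Q / (2 pi b q).
    width = pumping_rate / (aquifer_thickness * darcy_flux)
    stagnation = pumping_rate / (2 * math.pi * aquifer_thickness * darcy_flux)
    return width, stagnation


# Hypothetical values: Q = 500 m3/day, b = 20 m, q = 0.05 m/day
width, stagnation = capture_zone(500.0, 20.0, 0.05)
# width = 500 m: a plume front wider than this slips past a single well,
# so well placement and spacing matter more than pumping rate alone.
```

Placing a row of such wells so their combined capture widths span the plume front, ahead of where the plume will be by the time construction finishes, is the essence of what the scenario simulations optimized.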

Keywords: soil aquifer treatment, recovery and reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 299
62 Promoting Environmental Sustainability in Rural Areas with CMUH Green Experiential Education Center

Authors: Yi-Chu Liu, Hsiu-Huei Hung, Li-Hui Yang, Ming-Jyh Chen

Abstract:

Introduction: To promote environmental sustainability, the hospital formed a corporate volunteer team in 2016 to build the Green Experiential Education Center. Our green creation center utilizes attic space to achieve sustainability objectives such as energy efficiency and carbon reduction. Beyond executing sustainability plans, the center emphasizes experiential education. We invite our community to actively participate in building a sustainable, economically viable environment. Since 2020, China Medical University Hospital (CMUH) has provided medical care to the Tgbin community in Taichung City's Heping District. The tribe is primarily composed of Atayal people; the elderly comprise 18% of the total population, and these families' per capita income is relatively low compared with Taiwanese citizens elsewhere. Purpose / Methods: Drawing on the experience of the Green Experiential Education Center, the CMUH team identified the following objectives: create an aquaponic system to supply vulnerable local households with food; create a solar renewable energy system to meet the electricity needs of vulnerable local households; and promote the purchase of green electricity certificates to reduce the hospital's carbon emissions and generate additional revenue for the local community. Materials and Methods: In March 2020 we visited the community, and the aquaponic system was installed in January 2021. In March 2021, CMUH spent NT$150,000 (approximately US$5,000) to build a 100-square-meter aquaponic system. The production of vegetables and fish determines the number of vulnerable families that can be supported. The aquaponic system is a low-energy-consumption, environmentally friendly production method that simultaneously achieves energy, water, and fertilizer savings. In September 2023, CMUH will complete a solar renewable energy system covering an area of 308 square meters at a cost of approximately NT$240,000 (approximately US$8,000).
The installation of electricity meters will enable statistical analysis of power generation and completion of the Taiwan National Renewable Energy Certificate application process. The green electricity certificates will be obtained based on the monthly power generation of the solar renewable energy system. Results: Food availability and access are crucial given the remote location and aging population. The vegetables and catch produced by the aquaponic system enable economically disadvantaged families to lower their food costs. In 2021 and 2022, the aquaponic system produced 52 kilograms of vegetables and 75 kilograms of catch, covering the daily needs of 8 disadvantaged families. Conclusions: The hospital serves as a fortress for public health and an ideal setting for corporate social responsibility. China Medical University Hospital and the Green Experiential Education Center work to strengthen ties with rural communities and offer top-notch specialty medical care. We are committed to helping people escape poverty and hunger as part of the 2030 Sustainable Development Goals.

Keywords: environmental education, sustainability, energy conservation, carbon emissions, rural area development

Procedia PDF Downloads 60
61 Assessment of Efficiency of Underwater Undulatory Swimming Strategies Using a Two-Dimensional CFD Method

Authors: Dorian Audot, Isobel Margaret Thompson, Dominic Hudson, Joseph Banks, Martin Warner

Abstract:

In competitive swimming, after dives and turns, athletes perform underwater undulatory swimming (UUS), mimicking marine mammals' method of locomotion. The body, performing this wave-like motion, accelerates the fluid downstream in its vicinity, generating propulsion with minimal resistance. Through this technique, swimmers can maintain greater speeds than in surface swimming and take advantage of the overspeed granted by the dive (or push-off). Almost all previous work has considered UUS performed at maximum effort. Critical parameters for maximizing UUS speed are frequently discussed; however, this does not apply to most races. In only 3 of the 16 individual competitive swimming events are athletes likely to perform UUS at the greatest possible speed without regard for the cost of locomotion. In the other cases, athletes will want to control the speed of their underwater swimming, attempting to maximize speed while considering the energy expenditure appropriate to the duration of the event. Hence, there is a need to understand how swimmers adapt their underwater strategies to optimize speed within the allocated energetic cost. This paper develops a consistent methodology that enables different sets of UUS kinematics to be investigated. These may have different propulsive efficiencies and force generation mechanisms (e.g., force distribution along the body and force magnitude). The methodology therefore needs to: (i) provide an understanding of the UUS propulsive mechanisms at different speeds; (ii) investigate the key performance parameters when UUS is not performed solely to maximize speed; and (iii) consistently determine the propulsive efficiency of a UUS technique. The methodology is separated into two distinct parts: kinematic data acquisition and computational fluid dynamics (CFD) analysis. 
For the kinematic acquisition, the positions of several joints along the body and their sequencing were obtained either by video digitization or by underwater motion capture (Qualisys system). During data acquisition, the swimmers were asked to perform UUS at a constant depth in a prone position (facing the bottom of the pool) at different speeds: maximum effort, 100 m pace, 200 m pace, and 400 m pace. The kinematic data were input to a CFD algorithm employing a two-dimensional Large Eddy Simulation (LES). The algorithm was developed specifically to perform quick unsteady simulations of deforming bodies and is therefore suitable for swimmers performing UUS. Despite its approximations, the algorithm is applied such that simulations are performed with the inflow velocity updated at every time step. It also enables calculation of the resistive forces (total and per body segment) and the power input of the modeled swimmer. The methodology is validated by comparing the computed data with the original measurements (e.g., sustained swimming speed). This method is applied to the different kinematic datasets and provides data on swimmers' natural responses to pacing instructions. The results show how kinematics affect the force generation mechanisms and hence how the propulsive efficiency of UUS varies for different race strategies.
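A propulsive-efficiency estimate of the kind the methodology produces can be sketched as the ratio of cycle-averaged useful (thrust) power to cycle-averaged mechanical power input. This is a minimal illustration under our own assumptions, not the paper's exact definition; the function name and sampling conventions are ours:

```python
import numpy as np

def propulsive_efficiency(thrust, velocity, power_input):
    """Cycle-averaged propulsive efficiency of a UUS trial.

    thrust (N), velocity (m/s) and power_input (W) are equally sampled
    1-D arrays covering a whole number of undulation cycles.
    Efficiency = mean useful power (thrust x velocity) / mean power input.
    """
    useful = np.mean(thrust * velocity)
    total = np.mean(power_input)
    return useful / total

# Synthetic steady-state example: 10 N of thrust at 2 m/s for 40 W input.
n = 100
eta = propulsive_efficiency(np.full(n, 10.0), np.full(n, 2.0), np.full(n, 40.0))
```

Comparing such a ratio across the maximum-effort, 100 m, 200 m and 400 m pace datasets is one way the pacing strategies could be ranked on efficiency.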

Keywords: CFD, efficiency, human swimming, hydrodynamics, underwater undulatory swimming

Procedia PDF Downloads 197
60 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This work presents a model of reactive auto-scaling methodologies built on a MAPE-K architecture. Queuing theory can compute various properties of a static service but lacks parameters describing the transition between models; our model uses queuing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration of the service's ability to handle more requests. This model is then used to horizontally auto-scale time-sensitive services composed of microservices, periodically reevaluating the model's parameters to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations for improving results according to the shape of the incoming load and the business benefits. The proposed methodology is tested in simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds for the service to compute and is discarded if it takes more than 7 seconds. 
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, preventing resource saturation. When load decreases, the instances with the lowest load are kept in a backlog, where no further requests are assigned to them. If the load grows and a backlogged instance is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at a lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology that allocates/deallocates 10 or 20 instances, outperforming the competitor in all studied metrics.
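The instance lifecycle described for the simulator (least-loaded assignment, a backlog for scaled-down instances, warm reuse on scale-up) can be sketched as follows. This is our own minimal reconstruction; the simulator's actual interfaces are not published, and request completion (decrementing an instance's load) is assumed to happen elsewhere in the event loop:

```python
class Instance:
    """A service instance; load counts its currently assigned requests."""
    def __init__(self):
        self.load = 0

class Pool:
    """Instance pool mimicking the simulator's backlog behaviour (sketch).

    Running instances receive new requests, least-loaded first. On
    scale-down an instance moves to the backlog: it gets no new requests,
    finishes its current ones, and is deallocated once idle, unless a
    scale-up pulls it back first (cheaper than a cold start).
    """
    def __init__(self, n):
        self.running = [Instance() for _ in range(n)]
        self.backlog = []

    def assign(self, n_requests=1):
        for _ in range(n_requests):
            min(self.running, key=lambda i: i.load).load += 1

    def scale_down(self):
        # Move the least loaded running instance to the backlog.
        idx = min(range(len(self.running)), key=lambda k: self.running[k].load)
        self.backlog.append(self.running.pop(idx))

    def scale_up(self):
        # Prefer reviving a warm backlogged instance over a cold start.
        self.running.append(self.backlog.pop() if self.backlog else Instance())

    def reap_idle(self):
        # Backlogged instances are deallocated once they have drained.
        self.backlog = [i for i in self.backlog if i.load > 0]
```

Under this lifecycle, a drop-then-burst load pattern (the second scenario) exercises exactly the backlog path: instances parked on the drop are revived on the burst instead of being restarted.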

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 74