Search results for: concept evaluation
1499 Estimating Evapotranspiration of Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs
Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza
Abstract:
Multispectral and thermal infrared imagery from satellite sensors coupled with climate and soil datasets were used to estimate evapotranspiration and biomass in center pivots planted to maize in Brazil during the 2016 season. The hybrid remote sensing based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information such as maximum and minimum temperature, precipitation, and relative humidity for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness, and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also inputs of the model. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State in Brazil, at approximately latitude -16.630434 and longitude -47.192876. The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields. 
Results indicate that the SETMI modeling approach using data assimilation provided reliable daily ET and crop water requirement estimates for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model and its relationships developed in Nebraska for estimating ET and water requirements in Brazil under tropical conditions.
Keywords: basal crop coefficient, irrigation, remote sensing, SETMI
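The growing degree-day interpolation between image dates can be illustrated with a short sketch. This is a hypothetical illustration of the general idea, not the SETMI implementation; the base temperature and crop coefficient values are assumptions:

```python
def gdd(tmax, tmin, tbase=10.0):
    """Daily growing degree-days from max/min air temperature (deg C)."""
    return max((tmax + tmin) / 2.0 - tbase, 0.0)

def interpolate_kcb(kcb_a, kcb_b, gdd_a, gdd_b, gdd_day):
    """Linearly interpolate the basal crop coefficient between two image
    dates on the cumulative-GDD axis rather than on calendar days."""
    frac = (gdd_day - gdd_a) / (gdd_b - gdd_a)
    return kcb_a + frac * (kcb_b - kcb_a)

# Example: two satellite overpasses at cumulative GDD 300 and 500,
# with Kcb read from reflectance as 0.6 and 1.0 (illustrative values)
kcb_today = interpolate_kcb(0.6, 1.0, 300.0, 500.0, gdd_day=400.0)
print(round(kcb_today, 2))
```

Interpolating on the cumulative-GDD axis rather than on calendar days makes the filled-in crop coefficient track thermal time, which is what drives canopy development between image dates.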
Procedia PDF Downloads 140
1498 Hands on Tools to Improve Knowledge, Confidence and Skill of Clinical Disaster Providers
Authors: Lancer Scott
Abstract:
Purpose: High-quality clinical disaster medicine requires providers working collaboratively to care for multiple patients in chaotic environments; however, many providers lack adequate training. To address this deficit, we created a competency-based, 5-hour Emergency Preparedness Training (EPT) curriculum using didactics, small-group discussion, and kinetic learning. The goal was to evaluate the effect of a short course on improving provider knowledge, confidence and skills in disaster scenarios. Methods: Diverse groups of medical university students, health care professionals, and community members were enrolled between 2011 and 2014. The course consisted of didactic lectures, small group exercises, and two live, multi-patient mass casualty incident (MCI) scenarios. The outcome measures were based on core competencies and performance objectives developed by a curriculum task force and assessed via trained facilitator observation, pre- and post-testing, and a course evaluation. Results: 708 participants were trained between November 2011 and August 2014, including 49.9% physicians, 31.9% medical students, 7.2% nurses, and 11% from various other healthcare professions. 100% of participants completed the pre-test and 71.9% completed the post-test, with average correct answers increasing from 39% to 60%. Following didactics, trainees met 73% and 96% of performance objectives for the two small group exercises and 68.5% and 61.1% of performance objectives for the two MCI scenarios. Average trainee self-assessment of both overall knowledge and skill with clinical disasters improved from 33/100 to 74/100 (overall knowledge) and 33/100 to 77/100 (overall skill). The course assessment was completed by 34.3% of participants, of whom 91.5% highly recommended the course. 
Conclusion: A relatively short, intensive EPT course can improve the ability of a diverse group of disaster care providers to respond effectively to mass casualty scenarios.
Keywords: clinical disaster medicine, training, hospital preparedness, surge capacity, education, curriculum, research, performance, students, physicians, nurses, health care providers, health care
Procedia PDF Downloads 193
1497 Resilient Design Solutions for Megathermal Climates of the Global South
Authors: Bobuchi Ken-Opurum
Abstract:
The impacts of climate change on urban settlements are growing. In the global south, communities are even more vulnerable to climate change disasters such as flooding and high temperatures. This is primarily due to high intensity rainfall, low-lying coasts, inadequate infrastructure, and limited resources. According to the Emergency Events Database, floods were the leading cause of disaster-related deaths in the global south between 2006 and 2015. This includes deaths from heat stress related health outcomes. Adapting to climate vulnerabilities is paramount in reducing the significant redevelopment costs from climate disasters. Governments and urban planners provide top-down approaches such as evacuation, and disaster and emergency communication. While these address infrastructure and public services, they are not always able to address the immediate and critical day to day needs of poor and vulnerable populations. There is growing evidence that some bottom-up strategies and grassroots initiatives of self-build housing, such as in urban informal settlements, are successful in coping and adapting to hydroclimatic impacts. However, these research findings are not consolidated, and evaluation of the resilience outcomes of the bottom-up strategies is limited. Using self-build housing as a model for sustainable and resilient urban planning, this research aimed to consolidate the flood and heat stress resilient design solutions, analyze the effectiveness of these solutions, and develop guidelines and methods for adopting these design solutions into mainstream housing in megathermal climates. The methodological approach comprised analyses of over 40 ethnographic peer-reviewed studies, white papers, and reports published between 2000 and 2019 to identify coping strategies and grassroots initiatives that have been applied by occupants and communities of the global south. 
The results of the research provide a consolidated source and prioritized list of the best bottom-up strategies for communities in megathermal climates to improve the lives of people in some of the most vulnerable places in the world.
Keywords: resilient, design, megathermal, climate change
Procedia PDF Downloads 126
1496 Family Medicine Residents in End-of-Life Care
Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal
Abstract:
Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in performing independent management of terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discuss medication effects and patient wishes after death, manage coping with pain, vomiting and constipation, and react to limited patient decision-making capacity. Half of the respondents had confidence in supporting the patient or family member when they become upset. The majority expressed confidence in many end-of-life care skills if supervision, coaching and consultation were provided. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect their participation and confidence in caring for the dying. 
Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency with sufficient supervision and role modeling may lead to knowledge and skill improvement to ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education.
Keywords: end of life care, geriatrics, palliative care, residency training skill
Procedia PDF Downloads 257
1495 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver in terms of improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only allows a pleasant and attractive visual and thermal environment, but also reduces lighting energy consumption and heating/cooling energy load through the optimization of aperture size, glazing type and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluation of daylight performance by considering the major parameters of the building envelope design becomes crucial in terms of ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation and the changing space requirements. This study aims to identify a proper daylighting design solution considering window area, glazing type and solar control strategy for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are conducted with DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption of each scenario is analyzed to determine the optimum solution, which reduces lighting energy consumption while optimizing daylight performance. 
The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control devices.
Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
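The Daylight Autonomy metric itself reduces to a simple count over an hourly illuminance series. The sketch below is a generic illustration with made-up values, not output of the DIVA simulations; the 300 lux target is an assumption:

```python
def daylight_autonomy(illuminance_lux, threshold=300.0):
    """Fraction of occupied hours in which daylight alone meets the
    target illuminance (here an assumed 300 lux)."""
    hours_met = sum(1 for e in illuminance_lux if e >= threshold)
    return hours_met / len(illuminance_lux)

# Toy hourly series for one occupied day (lux); 5 of 10 hours reach 300 lux
series = [50, 120, 310, 450, 600, 520, 340, 280, 150, 60]
print(f"DA300 = {daylight_autonomy(series):.0%}")
```

In practice the series would span all occupied hours of the year per sensor point, and the DA values would then be compared across glazing and shading scenarios.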
Procedia PDF Downloads 176
1494 Partnerships for Environmental Sustainability: An Effective Multistakeholder Governance Regime for Oil and Gas Producing Areas
Authors: Joy Debski
Abstract:
Due to the varying degrees of the problem posed by global warming, environmental sustainability dominates international discourse. International initiatives' aims and expectations have proven particularly challenging to put into practice in developing nations. To reduce human exploitation of the environment, stricter measures are urgently needed; however, putting them into practice has proven difficult. Relatively recent information from the Climate Accountability Institute and academic researchers shows that fossil fuel companies are major contributors to the climate crisis. Host communities in oil and gas-producing areas, particularly in developing nations, have grown hostile toward both oil and gas companies and government policies. It is now essential that the three main stakeholders (government, the oil and gas sector, and host communities) cooperate to achieve the shared objective of environmental sustainability. This research, therefore, advocates a governance system for Nigeria that facilitates achieving the goal of environmental sustainability. This objective is achieved by examining the main institutional framework for environmental sustainability, evaluating the strategies used by major oil companies to increase stakeholder engagement in environmental sustainability, and examining the involvement of host communities in environmental sustainability. The study reveals that while environmental sustainability is important to the identified stakeholders, it is challenging to accomplish without an informed synergy. Hence the research advocates the centralisation of CSR through a CSR commission for environmental sustainability. The commission’s mandate is to facilitate, partner with, and endorse companies. 
The commission is strongly advised to incorporate host community liaison offices into the process of negotiating contracts with oil and gas firms, as well as to play a facilitative role in helping firms adhere to both domestic and international regulations. The recommendations can benefit Nigerian policymakers in enhancing their hitherto unsuccessful efforts to pass CSR legislation. Through the research-proposed CSR department, which has competent training and stakeholder engagement strategies, oil and gas companies can enhance and centralise their goals for environmental sustainability. Finally, the CSR Commission's expertise would give host communities more leverage when negotiating their memoranda of understanding with oil and gas companies.
Keywords: environmental sustainability, corporate social responsibility, CSR, oil and gas, Nigeria
Procedia PDF Downloads 82
1493 Commissioning, Test and Characterization of Low-Tar Biomass Gasifier for Rural Applications and Small-Scale Plant
Authors: M. Mashiur Rahman, Ulrik Birk Henriksen, Jesper Ahrenfeldt, Maria Puig Arnavat
Abstract:
Using biomass gasification to make producer gas is one of the promising sustainable energy options available for small scale plants and rural applications for power and electricity. The tar content of the producer gas is the main problem if the gas is used directly as a fuel. A low-tar biomass (LTB) gasifier of approximately 30 kW capacity has been developed to solve this. The basic principle of the LTB gasifier is a moving bed with internal recirculation of pyrolysis gas. The gasifier is built around the concept of mixing the pyrolysis gases with gasifying air and burning the mixture in a separate combustion chamber. Five tests were carried out with wood pellets and wood chips separately, with moisture contents of 9-34%. The LTB gasifier offers excellent opportunities for achieving extremely low tar in the producer gas. The gasifier's producer gas had an extremely low tar content of 21.2 mg/Nm³ (avg.) and an average lower heating value (LHV) of 4.69 MJ/Nm³. Tar contents found in the different tests were in the range of 10.6-29.8 mg/Nm³. This low tar content makes the producer gas suitable for direct use in an internal combustion engine. Using mass and energy balances, the average gasifier capacity and cold gas efficiency (CGE) were 23.1 kW and 82.7% for wood chips, and 33.1 kW and 60.5% for wood pellets, respectively. Average heat loss in terms of higher heating value (HHV) was 3.2% of thermal input for wood chips and 1% for wood pellets, while heat loss in terms of enthalpy was 1% of thermal input. Thus, the LTB gasifier performs better than typical gasifiers in terms of heat loss. Equivalence ratios (ER) in the range of 0.29 to 0.41 give better performance in terms of heating value and CGE. The specific gas production yields at the above ER range were 2.1-3.2 Nm³/kg. Heating value and CGE change proportionally with the producer gas yield. 
The average gas composition (H₂ 19%, CO 19%, CO₂ 10%, CH₄ 0.7% and N₂ 51%) obtained for wood chips is richer than typical producer gas compositions. The temperature profile of the LTB gasifier also showed relatively low temperatures compared to a typical moving bed gasifier: the average partial oxidation zone temperature was 970°C for wood chips, and the use of a separate combustor in the partial oxidation zone substantially lowers the bed temperature to 750°C. During the tests, the engine was started and operated completely on the producer gas. The engine ran well on the produced gas, and no deposits were observed in the engine afterwards. Part of the producer gas flow was used for engine operation; the corresponding electrical power was 1.5 kW continuously, and a maximum power of 2.5 kW was also observed, while the maximum generator capacity is 3 kW. A thermodynamic equilibrium model is in good agreement with the experimental results and correctly predicts the equilibrium bed temperature, gas composition, LHV of the producer gas and ER, when a heat loss of 4% of the energy input is considered.
Keywords: biomass gasification, low-tar biomass gasifier, tar elimination, engine, deposits, condensate
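The balance quantities reported above reduce to simple formulas: CGE is the chemical energy of the producer gas per unit of fuel energy input, and ER is the air supplied relative to the stoichiometric requirement. The sketch below is an illustrative back-of-the-envelope calculation; the fuel LHV and stoichiometric air requirement are assumed values, not measurements from the tests:

```python
def cold_gas_efficiency(gas_yield_nm3_per_kg, lhv_gas_mj_per_nm3, lhv_fuel_mj_per_kg):
    """CGE: chemical energy in the producer gas per unit of fuel energy input."""
    return gas_yield_nm3_per_kg * lhv_gas_mj_per_nm3 / lhv_fuel_mj_per_kg

def equivalence_ratio(air_supplied_kg, fuel_kg, stoich_air_per_kg_fuel=6.0):
    """ER: supplied air relative to stoichiometric air.
    The 6 kg air per kg dry wood stoichiometry is an assumed round number."""
    return air_supplied_kg / (fuel_kg * stoich_air_per_kg_fuel)

# Illustrative numbers near the abstract's ranges; 16 MJ/kg fuel LHV is assumed
cge = cold_gas_efficiency(2.8, 4.69, 16.0)
er = equivalence_ratio(air_supplied_kg=2.1, fuel_kg=1.0)
print(f"CGE = {cge:.1%}, ER = {er:.2f}")
```

With a gas yield of 2.8 Nm³/kg and the reported average LHV of 4.69 MJ/Nm³, the sketch lands near the 82.7% CGE reported for wood chips, which is consistent with the abstract's observation that CGE scales with gas yield and heating value.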
Procedia PDF Downloads 114
1492 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of ≳ 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. 
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
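The unified approach can be caricatured in a few lines: compare observed colors against a template library via chi-square, turn the likelihoods into a probability distribution over templates, and read both the redshift estimate and its error off that same distribution, in the spirit of the Minimum Error Variance estimator. The toy below is a schematic illustration only (a tiny template set and made-up colors), not the CADIS pipeline:

```python
import numpy as np

# Tiny hypothetical template library: (class, redshift, color vector)
templates = [
    ("galaxy", 0.1, np.array([0.9, 0.4])),
    ("galaxy", 0.3, np.array([1.2, 0.6])),
    ("quasar", 1.5, np.array([0.2, 0.1])),
]

def posterior(obs, sigma=0.1):
    """Gaussian chi-square likelihood per template, normalised to probabilities."""
    w = np.array([np.exp(-0.5 * np.sum(((obs - t[2]) / sigma) ** 2)) for t in templates])
    return w / w.sum()

def mev_redshift(obs):
    """MEV-style estimate: probability-weighted mean redshift, with the error
    taken as the spread of the same probability distribution."""
    p = posterior(obs)
    z = np.array([t[1] for t in templates])
    z_hat = np.sum(p * z)
    z_err = np.sqrt(np.sum(p * (z - z_hat) ** 2))
    return z_hat, z_err

z_hat, z_err = mev_redshift(np.array([1.15, 0.58]))
print(round(z_hat, 3), round(z_err, 3))
```

Because the estimate and its error come from the same probability density, objects whose colors match several templates automatically get larger redshift errors, which is the property the abstract highlights.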
Procedia PDF Downloads 80
1491 Comparison between the Quadratic and the Cubic Linked Interpolation on the Mindlin Plate Four-Node Quadrilateral Finite Elements
Authors: Dragan Ribarić
Abstract:
We employ the so-called problem-dependent linked interpolation concept to develop two cubic 4-node quadrilateral Mindlin plate finite elements with 12 external degrees of freedom. In the problem-independent linked interpolation, the interpolation functions are independent of any problem material parameters and the rotation fields are not expressed in terms of the nodal displacement parameters. On the contrary, in the problem-dependent linked interpolation, the interpolation functions depend on the material parameters and the rotation fields are expressed in terms of the nodal displacement parameters. Two cubic 4-node quadrilateral plate elements are presented, named Q4-U3 and Q4-U3R5. The first is modelled with one displacement and two rotation degrees of freedom in each of the four element nodes, and the second element has five additional internal degrees of freedom, included to attain polynomial completeness of the cubic form, which can be statically condensed within the element. Both elements are able to pass the constant-bending patch test exactly, as well as the non-zero constant-shear patch test on the oriented regular mesh geometry in the case of cylindrical bending. For any mesh shape, the elements have the correct rank, and only the three eigenvalues corresponding to the rigid body motions are zero. There are no additional spurious zero modes responsible for instability of the finite element models. In comparison with the problem-independent cubic linked interpolation implemented in Q9-U3, the nine-node plate element, significantly fewer degrees of freedom are employed in the model while retaining the interpolation conformity between adjacent elements. The presented elements are also compared to the existing problem-independent quadratic linked-interpolation element Q4-U2 and to the other known elements that also use the quadratic or the cubic linked interpolation, by testing them on several benchmark examples. 
Simple functional upgrading from the quadratic to the cubic linked interpolation, implemented in the Q4-U3 element, showed no significant improvement compared to the quadratic linked form of the Q4-U2 element. Only when the additional bubble terms are incorporated in the displacement and rotation function fields, which complete the full cubic linked interpolation form, is a qualitative improvement achieved in the Q4-U3R5 element. Nevertheless, the locking problem exists even for both presented elements, as in all pure displacement elements applied to very thin plates modelled by coarse meshes. But good, and even slightly better, performance can be noticed for the Q4-U3R5 element when compared with elements from the literature, if the model meshes are moderately dense and the plate thickness is not extremely small. In some cases, it is comparable to or even better than the Q9-U3 element, which has as many as 12 more external degrees of freedom. A significant improvement can be noticed in particular when modeling very skew plates and models with singularities in the stress fields, as well as circular plates with distorted meshes.
Keywords: Mindlin plate theory, problem-independent linked interpolation, problem-dependent interpolation, quadrilateral displacement-based plate finite elements
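The rank check described above (exactly three zero eigenvalues, one per rigid-body mode, and no spurious zero modes) is easy to automate for any element stiffness matrix. The sketch below uses a toy rank-deficient symmetric matrix as a stand-in for a real element stiffness matrix, which would come from the element formulation:

```python
import numpy as np

def count_zero_modes(K, tol=1e-8):
    """Count (near-)zero eigenvalues of a symmetric stiffness matrix.
    A well-behaved plate element should show exactly the number of
    rigid-body modes (three for a Mindlin plate) and no spurious ones."""
    eigvals = np.linalg.eigvalsh(K)
    scale = max(np.abs(eigvals).max(), 1.0)
    return int(np.sum(np.abs(eigvals) < tol * scale))

# Toy stand-in: a 6x6 symmetric positive semi-definite matrix of rank 3,
# so it has 6 - 3 = 3 zero eigenvalues, mimicking a correct-rank element
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 6))
K = B.T @ B
print(count_zero_modes(K))
```

Running the same check on an actual element stiffness matrix over distorted mesh shapes is how the absence of spurious zero-energy modes is verified in practice.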
Procedia PDF Downloads 312
1490 Drug Delivery Nanoparticles of Amino Acid Based Biodegradable Polymers
Authors: Sophio Kobauri, Tengiz Kantaria, Temur Kantaria, David Tugushi, Nina Kulikova, Ramaz Katsarava
Abstract:
Nanosized environmentally responsive materials are of special interest for various applications, including targeted drug delivery, which holds considerable potential for the treatment of many human diseases. The important technological advantages of using nanoparticles (NPs) as drug carriers (nanocontainers) are their high stability, high carrier capacity, feasibility of encapsulation of both hydrophilic and hydrophobic substances, as well as a high variety of possible administration routes, including oral application and inhalation. NPs can also be designed to allow controlled (sustained) drug release from the matrix. These properties of NPs enable improvement of drug bioavailability and might allow drug dosage decrease. The targeted and controlled administration of drugs using NPs might also help to overcome drug resistance, which is one of the major obstacles in the control of epidemics. Various degradable and non-degradable polymers of both natural and synthetic origin have been used for NP construction. Among the most promising for the design of NPs are amino acid-based biodegradable polymers (AABBPs), which can clear from the body after the fulfillment of their function. The AABBPs are composed of naturally occurring and non-toxic building blocks such as α-amino acids, fatty diols and dicarboxylic acids. The particles designed from these polymers are expected to have an improved bioavailability along with a high biocompatibility. The present work deals with a systematic study of the preparation of NPs by the cost-effective polymer deposition/solvent displacement method using AABBPs. The influence of the nature and concentration of surfactants, the concentration of the organic phase (polymer solution), and the ratio of organic to aqueous (water) phase, as well as some other factors, on the size of the fabricated NPs has been studied. It was established that, depending on the conditions used, the NP size could be tuned within 40-330 nm. 
As the next step of this research, an evaluation of the biocompatibility and bioavailability of the synthesized NPs has been performed using two stable human cell lines, HeLa and A549. This part of the study is still in progress.
Keywords: amino acids, biodegradable polymers, nanoparticles (NPs), non-toxic building blocks
Procedia PDF Downloads 432
1489 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting
Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu
Abstract:
Large lecture theatres cannot be covered by a single camera but rather require a multicamera setup because of their size, shape, and seating arrangements, although an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, its exhaustiveness, and its susceptibility to manipulation. An automated large classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal to minimize the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between the two factors for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (a result of the low FRR) and a 7% reduction of the FRR. The average learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost. 
Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
Keywords: automatic attendance, face detection, Haar-like cascade, manual attendance
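The λ trade-off between FRR and FAR can be illustrated with a toy threshold search over detector scores. This is a schematic of the idea only; the scores, the λ value, and the exact asymmetric cost used here (λ·FRR + FAR, weighting false rejections of real faces more heavily) are assumptions, not the training procedure of the study:

```python
def pick_threshold(face_scores, nonface_scores, lam=3.0):
    """Scan candidate score thresholds and pick the one minimising
    lam * FRR + FAR, i.e. penalising false rejections lam times more."""
    candidates = sorted(set(face_scores) | set(nonface_scores))
    best = None
    for t in candidates:
        frr = sum(s < t for s in face_scores) / len(face_scores)
        far = sum(s >= t for s in nonface_scores) / len(nonface_scores)
        cost = lam * frr + far
        if best is None or cost < best[0]:
            best = (cost, t, frr, far)
    return best[1:]

# Toy detector scores for true faces and non-face windows (made up)
faces = [0.9, 0.8, 0.75, 0.6, 0.55]
nonfaces = [0.5, 0.4, 0.35, 0.2, 0.1]
t, frr, far = pick_threshold(faces, nonfaces)
print(t, frr, far)
```

Raising λ pushes the chosen threshold down, accepting more false detections in exchange for rejecting fewer real faces, which is the asymmetry the abstract describes between FRR and FAR.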
Procedia PDF Downloads 72
1488 The Evaluation of Antioxidant and Antimicrobial Activities of Essential Oil and Aqueous, Methanol, Ethanol, Ethyl Acetate and Acetone Extract of Hypericum scabrum
Authors: A. Heshmati, M. Y Alikhani, M. T. Godarzi, M. R. Sadeghimanesh
Abstract:
Herbal essential oils and extracts are a good source of natural antioxidant and antimicrobial compounds, and Hypericum is one of the potential sources of these compounds. In this study, the antioxidant and antimicrobial activity of the essential oil and the aqueous, methanol, ethanol, ethyl acetate and acetone extracts of Hypericum scabrum was assessed. Flowers of Hypericum scabrum were collected from the mountains surrounding Hamadan province and, after drying in the shade, the essential oil of the plant was extracted with a Clevenger apparatus, while the water, methanol, ethanol, ethyl acetate and acetone extracts were obtained by the maceration method. Essential oil compounds were identified using GC-MS. The Folin-Ciocalteu and aluminum chloride (AlCl₃) colorimetric methods were used to measure the amounts of phenolic acids and flavonoids, respectively. Antioxidant activity was evaluated using DPPH and FRAP. The minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC) of the essential oil and extracts were evaluated against Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Salmonella typhimurium, Aspergillus flavus and Candida albicans. The essential oil yield was 0.35%; the lowest and highest extract yields were obtained with ethyl acetate and water, respectively. The major component of the essential oil was α-pinene (46.35%). The methanol extract had the highest phenolic acid (95.65 ± 4.72 µg gallic acid equivalent/g dry plant) and flavonoid (25.39 ± 2.73 µg quercetin equivalent/g dry plant) contents. The percentage of DPPH radical inhibition showed a positive correlation with the concentration of essential oil or extract. The methanol and ethanol extracts had the highest DPPH radical inhibition. The essential oil and extracts of Hypericum had antimicrobial activity against the microorganisms studied in this research. The MIC and MBC values for the essential oil were in the range of 25-25.6 and 25-50 μg/mL, respectively. 
For the extracts, these values were 1.5625-100 and 3.125-100 μg/mL, respectively. The methanol extract had the highest antimicrobial activity. The essential oil and extracts of Hypericum scabrum, especially the methanol extract, have good antimicrobial and antioxidant activity and can be used to control oxidation and inhibit the growth of pathogenic and spoilage microorganisms. In addition, they can be used as a substitute for synthetic antioxidant and antimicrobial compounds.
Keywords: antimicrobial, antioxidant, extract, hypericum
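The DPPH radical inhibition percentage behind the correlation reported above is computed from the absorbance of a control versus a sample. A minimal sketch with assumed absorbance readings (the 517 nm wavelength mentioned in the comment is the conventional one for DPPH assays, not stated in the abstract):

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent DPPH radical scavenging: the relative drop in absorbance
    (typically read at 517 nm) of the sample versus the control."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Illustrative absorbance readings; the values are assumptions
print(f"{dpph_inhibition(0.80, 0.32):.1f}% inhibition")
```

Plotting this percentage against a series of extract concentrations gives the dose-response curve from which the positive correlation in the abstract is read.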
Procedia PDF Downloads 328
1487 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor Under Scour, and Anchor Transportation and Installation (T&I)
Authors: Vinay Kumar Vanjakula, Frank Adam
Abstract:
The generation of electricity through wind power is one of the leading renewable energy generation methods. Due to the abundant higher wind speeds far from shore, the construction of offshore wind turbines began in the last decades. However, the installation of offshore bottom-fixed (monopile) wind turbines in deep waters is often associated with technical and financial challenges. To overcome such challenges, the concept of floating wind turbines, adapted from the oil and gas industry, is being expanded. For such a floating system, stabilization in harsh conditions is a challenging task, and a robust heavy-weight gravity anchor is needed. Transporting such an anchor requires a heavy vessel, which increases the cost. To lower the cost, the gravity anchor is designed with ballast chambers that allow it to float while being towed and are filled with water when it is lowered to the planned seabed location. The presence of such a large structure may influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flows, and pressure differentials around the seabed sediment. These changes influence the installation process. Also, after installation and under operating conditions, the flow around the anchor may carry off local seabed sediment, resulting in scour (erosion), which threatens the structure's stability. In recent decades, research and knowledge on scouring around fixed structures (bridges and monopiles) in rivers and oceans have developed rapidly, but very little work exists on scouring around a bluff-shaped gravity anchor. The objective of this study involves the application of different numerical models to simulate anchor towing under waves and in calm water. Anchor lowering involves the investigation of anchor movements at certain water depths under waves and currents. 
The motions of anchor drift, heave, and pitch are of special focus. A further part of the study involves anchor scour: with the anchor installed in the seabed, the flow of the underwater current around the anchor induces vortices, mainly at the front and corners, that drive soil erosion. The study of scouring on a submerged gravity anchor is an interesting research question, since the flow not only passes around the anchor but also over the structure, forming different flow vortices. The achieved results and the numerical model will be a basis for the development of other designs and concepts for marine structures. The Computational Fluid Dynamics (CFD) numerical model will be built in OpenFOAM and other similar software.
Keywords: anchor lowering, anchor towing, gravity anchor, computational fluid dynamics, scour
Procedia PDF Downloads 169
1486 Locally Produced Solid Biofuels – Carbon Dioxide Emissions and Competitiveness with Conventional Ways of Individual Space Heating
Authors: Jiri Beranovsky, Jaroslav Knapek, Tomas Kralik, Kamila Vavrova
Abstract:
The paper deals with the results of research focused on the complex aspects of using intentionally grown biomass on agricultural land to produce solid biofuels as an alternative for individual household heating. The study primarily analyzes the CO2 emissions of the biomass logistics cycle for the production of energy pellets; growing, harvesting, transport and storage are evaluated in the pellet production cycle. The aim is also to take into account the consumption profile during the year for heating common family houses, which are the typical end-market segment for these fuels. It is assumed that in family houses, bio-pellets can substitute typical fossil fuels, such as brown coal, old wood-burning heating devices and electric boilers. One of the technologies competing with pellets is the heat pump. The results show the CO2 emissions related to the considered fuels and the technologies for their utilization. The comparative analysis covers biopellets from intentionally grown biomass, brown coal, natural gas, and electricity used in electric boilers and heat pumps, and it combines the CO2 emissions of individual fuels with the costs of their utilization. The cost of biopellets from intentionally grown biomass is derived from economic models of individual energy crop plantations. At the same time, the restrictions imposed by EU legislation through the Ecodesign requirements on fuels and combustion equipment and on NOx emissions are discussed. Preliminary results of the analyses show that, to achieve the competitiveness of pellets produced from intentionally grown biomass, it would be necessary either to significantly increase the ecological tax on coal (from about 0.3 to 3-3.5 EUR/GJ) or to multiply the agricultural subsidy per area. 
In addition to the Czech Republic, the results are also relevant for other countries, such as Bulgaria and Poland, which also have a high proportion of solid fuels in household heating.
Keywords: CO2 emissions, heating costs, energy crop, pellets, brown coal, heat pumps, economic evaluation
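The competitiveness argument above reduces to comparing the cost of a unit of useful heat with and without an ecological tax on coal. A minimal sketch, with hypothetical fuel prices and boiler efficiencies (only the 0.3 and 3-3.5 EUR/GJ tax figures come from the abstract):

```python
# Cost of 1 GJ of useful heat for a given fuel price (EUR/GJ of fuel),
# boiler efficiency, and an optional ecological tax per GJ of fuel.
def heat_cost(fuel_price_per_gj, efficiency, tax_per_gj=0.0):
    return (fuel_price_per_gj + tax_per_gj) / efficiency

# Ecological tax on coal (EUR/GJ of fuel) that equalises heat costs.
def breakeven_tax(coal_price, coal_eff, pellet_price, pellet_eff):
    return pellet_price * coal_eff / pellet_eff - coal_price

coal_price, coal_eff = 5.0, 0.70        # hypothetical values
pellet_price, pellet_eff = 11.0, 0.85   # hypothetical values
tax = breakeven_tax(coal_price, coal_eff, pellet_price, pellet_eff)
```

With these illustrative inputs, the break-even tax lands in the low single digits of EUR/GJ, the same order as the 3-3.5 EUR/GJ range the abstract reports.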
Procedia PDF Downloads 113
1485 Developing a Roadmap by Integrating Environmental Indicators with the Nitrogen Footprint in an Agriculture Region, Hualien, Taiwan
Authors: Ming-Chien Su, Yi-Zih Chen, Nien-Hsin Kao, Hideaki Shibata
Abstract:
The major component of the atmosphere is nitrogen, yet atmospheric nitrogen has limited availability for biological use. Human activities have produced different types of nitrogen-related compounds, such as nitrogen oxides from combustion, nitrogen fertilizers from farming, and nitrogen compounds from waste and wastewater, all of which have impacted the environment. Many studies have indicated that the N-footprint is dominated by food, followed by the housing, transportation, and goods and services sectors. Nitrogen-cycle research is one of the key routes to addressing the impacts arising from agricultural land. The study site is located in Hualien County, Taiwan, a major rice and food production area of Taiwan. Importantly, environmentally friendly farming has been promoted there for years, and an environmental indicator system has been established by previous authors based on the concepts of the resilience capacity index (RCI) and the environmental performance index (EPI). Nitrogen management is required for food production, as excess N causes environmental pollution. Therefore, it is very important to develop a roadmap of the nitrogen footprint and to integrate it with environmental indicators. The key focus of the study thus addresses (1) understanding the environmental impact caused by the nitrogen cycle of food products and (2) uncovering the trend of the N-footprint of agricultural products in Hualien, Taiwan. The N-footprint model was applied, covering both crops and energy consumption in the area. All data were adapted from government statistics databases and crosschecked for consistency before modeling. The actions involved in agricultural production were evaluated and analyzed for nitrogen loss to the environment, as well as for their impacts on humans and the environment. The results showed that rice makes up the largest share of agricultural production by weight, at 80%. 
The dominant meat productions are pork (52%) and poultry (40%); fish and seafood were at levels similar to pork production. The average per capita food consumption in Taiwan is 2643.38 kcal capita−1 d−1, primarily from rice (430.58 kcal), meats (184.93 kcal) and wheat (ca. 356.44 kcal). The average protein uptake is 87.34 g capita−1 d−1, 51% of which comes mainly from meat, milk, and eggs. The preliminary results showed that the nitrogen footprint of food production is 34 kg N per capita per year, congruent with the results of Shibata et al. (2014) for Japan. These results provide a better understanding of nitrogen demand and loss in the environment, support the establishment of nitrogen policy and strategy, and serve to develop a roadmap of the nitrogen cycle for an environmentally friendly farming area, illuminating the nitrogen demand and loss of such areas.
Keywords: agricultural production, energy consumption, environmental indicator, nitrogen footprint
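The relation between the reported protein intake and the food N-footprint can be sketched with the conventional protein-to-nitrogen conversion factor of 6.25 (protein is roughly 16% nitrogen by mass). This is a generic back-of-the-envelope check, not the study's model; the full N-footprint also counts "virtual" nitrogen lost during food production, which is why the reported 34 kg N capita−1 yr−1 exceeds dietary intake alone:

```python
# Conventional factor: 1 g protein contains ~1/6.25 g nitrogen.
PROTEIN_TO_N = 1 / 6.25

def annual_n_intake_kg(protein_g_per_day):
    """Dietary nitrogen intake (kg N per capita per year) from daily protein."""
    return protein_g_per_day * PROTEIN_TO_N * 365 / 1000.0

# 87.34 g protein capita-1 d-1 is the value reported in the abstract.
intake = annual_n_intake_kg(87.34)   # ~5.1 kg N capita-1 yr-1
```

The gap between this ~5 kg dietary figure and the 34 kg footprint illustrates how much nitrogen is released upstream of consumption.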
Procedia PDF Downloads 302
1484 Going Viral: Expanding a Student-Run COVID-19 Journal Club to Social Media
Authors: Joseph Dodson, Robert Roth, Alexander Hodakowski, Leah Greenfield, Melissa Porterhouse, Natalie Maltby, Rachel Sadowsky
Abstract:
Introduction: Throughout the COVID-19 pandemic, countless research publications were released regarding SARS-CoV-2 and its variants, suggested treatments, and vaccine safety and efficacy. The daily flood of publications made it overwhelming for health professionals and the general public to stay informed. To address this problem, a group of 70 students across the four colleges at Rush University created the “Rush University COVID-19 Journal Club.” To broaden the available audience, the journal club then expanded to social media. Methods: Easily accessible and understandable summaries of the research were written by students and sent to faculty sponsors for feedback. Following revision, summaries were published weekly on the Rush University COVID-19 Journal Club website for clinicians and students to use for reference. An Instagram page was then created, and the information was further condensed into succinct posts addressing COVID-19 “FAQs.” Next, a survey was distributed to followers of the Instagram page with questions meant to assess the effectiveness of the platform and gain feedback; a 5-point Likert scale was used as the primary question format. Results: The Instagram page accrued 749 followers and published 52 unique posts over a 2-year period. Preliminary results from the surveys demonstrate that over 80% of respondents strongly agree that the Instagram posts 1) are an effective platform for the public presentation of factual COVID-19-related information; 2) provide relevant and valuable information; and 3) provide information that is clear, concise, and easily understood. Conclusion: These results suggest that the Rush COVID-19 Journal Club was able to successfully create a social media presence and convey information without sacrificing scholarly integrity. 
Other academic institutions may benefit from applying this model to help students and clinicians interpret and evaluate research topics with large bodies of evidence.
Keywords: SARS-CoV-2, COVID-19, public health, social media, SARS-CoV-2 vaccine, SARS-CoV-2 variants
Procedia PDF Downloads 128
1483 The Effects of Irregular Immigration Originating from Syria on Turkey's Security Issues
Authors: Muzaffer Topgul, Hasan Atac
Abstract:
After the September 11 attacks, the fight against terrorism rose to a higher place in countries' security concepts. The subsequent reactions of some nation-states led to the formation of unstable areas in different parts of the world. Especially in Iraq and Syria, the influence of radical groups has grown with the weakening of the central governments. Turkey, with its geographical proximity to the current crisis, has become a stop for people displaced by terrorism. In the process, the policies of the Syrian regime resulted in a civil war that has been ongoing since 2011 and remains an unresolved crisis. As the problem has persisted, the foreign policies of the world powers have changed; moreover, the ongoing effects of the riots, the conflicting interests of foreign powers, and the conflicts in the region caused by the activities of radical groups have increased instability within the country. This situation continues to affect the security of Turkey, particularly through illegal immigration. More than two million Syrians have taken refuge in Turkey due to the civil war, and alongside the continuing uncertainty about the legal status of asylum seekers and the security problems of the asylum seekers themselves, there are problems in education, health and communication (language) as well. In this study, we evaluate the concept of immigration through the lens of national and international law, place disorganized and illegal immigration in the security sphere, and define the elements/components of irregular migration within the changing security concept. Ultimately, this article assesses the effects of the Syrian refugees on Turkey's short-term, mid-term, and long-term security in the light of national and international data flows, and solutions to the ongoing problem are presented. 
While explaining the security problems, the data obtained from national and international organizations are examined through human security dimensions such as the living conditions of the immigrants, the gender ratio, birth rates, the education of immigrant children, and the effects of illegal crossings on public order. In addition, the demographic change caused by the immigrants is analyzed; the changing economic conditions of the areas where immigrants are most concentrated, and their participation in public life, are examined; and the economic obstacles arising from irregular immigration are clarified. From the data gathered across the educational, cultural, social, economic and demographic dimensions, the regional factors affecting migration and the role of irregular migration in Turkey's future security are revealed with reference to current knowledge sources.
Keywords: displaced people, human security, irregular migration, refugees
Procedia PDF Downloads 308
1482 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
All cases of stillbirth are undoubtedly subject to postmortem examination, since it is necessary to find out the cause of the stillbirth, as well as to provide a forecast for future pregnancies and their outcomes. Determining the time of death, meaning the period from the time of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn. The time of fetal death is conventionally determined by assessing the severity of the processes of maceration. Our aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and the bodies of 15 newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before autopsy. The signal intensities of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) were determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and the time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. The time of intrauterine death was up to 6 hours in 13 cases, 6-12 hours in 2 cases, 12-24 hours in 4, 2-3 days in 9, about 1 week in 3, and 1.5-2 weeks in 2. In the 15 deceased newborns, signs of maceration were naturally absent. Based on the SIK, SIF and SIA data from the MR tomograms, we calculated the coefficient of MR maceration (M). The time of intrauterine death (MR-t, in hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for those who died at 22-40 gestational weeks with a time of death of not more than 1 week. 
Maceration in antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows the time of intrauterine death to be defined accurately, within one week, for stillbirths that died at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for the postmortem study of bodies, in particular those of stillborns, to determine the time of intrauterine death. Postmortem MRI allows an objective and sufficiently accurate analysis of pathological processes, with the possibility of documentation, storage, and analysis after the burial of the body.
Keywords: intrauterine death, maceration, postmortem MRI, stillborn
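The regression reported in the abstract, MR-t = 16.87 + 95.38·M² − 75.32·M, can be sketched directly. The coefficient M is derived from the T1/T2 signal intensities of kidney tissue, pleural fluid and air, but its exact definition is not given here, so M is treated as an input:

```python
# Regression from the abstract: estimated time of intrauterine death (hours)
# as a quadratic function of the MR-maceration coefficient M.
def mr_t_hours(m):
    return 16.87 + 95.38 * m**2 - 75.32 * m
```

For example, `mr_t_hours(0.0)` gives the intercept of 16.87 hours; how M itself is computed from SIK, SIF and SIA must be taken from the full paper.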
Procedia PDF Downloads 125
1481 The Ethical Imperative of Corporate Social Responsibility Practice and Disclosure by Firms in Nigeria Delta Swamplands: A Qualitative Analysis
Authors: Augustar Omoze Ehighalua, Itotenaan Henry Ogiri
Abstract:
As a mono-product economy, Nigeria relies largely on oil revenues for its foreign exchange earnings, and the exploration activities of firms operating in the Niger Delta region have left in their wake tales of environmental degradation, poverty and misery. This, no doubt, has created corporate social responsibility issues in the region. The focus of this research is the critical evaluation of the ethical response to Corporate Social Responsibility (CSR) practice by firms operating in the Nigeria Delta Swamplands. While CSR is becoming more popular in developed societies, with effective practice guidelines and reporting benchmarks, there is a relatively low level of awareness and only selective applicability of existing international guidelines to effectively support CSR practice in Nigeria. Having identified the lack of a CSR institutional framework, this study attempts to develop an ethically driven CSR transparency benchmark embedded within a regulatory framework based on international best practices. The research adopts a qualitative methodology and makes use of primary data collected through semi-structured interviews conducted across the six core states of the Niger Delta region. More importantly, the study adopts an inductive, interpretivist philosophical paradigm that reveals deep phenomenological insights into what local communities, civil society and government officials consider a good ethical benchmark for responsible CSR practice by organizations. Institutional theory provides the main theoretical foundation, complemented by stakeholder and legitimacy theories. The NVivo software was used to analyze the data collected. This study shows that ethical responsibility is lacking in CSR practice by firms in the Niger Delta region of Nigeria. Furthermore, the findings indicate that the key issues of environment, health and safety, human rights, and labour are fundamental in developing an effective CSR practice guideline for Nigeria. 
The study has implications for public policy formulation as well as for managerial practice.
Keywords: corporate social responsibility, CSR, ethics, firms, Niger-Delta Swampland, Nigeria
Procedia PDF Downloads 106
1480 Efficacy of Botulinum Toxin in Alleviating Pain Syndrome in Stroke Patients with Upper Limb Spasticity
Authors: Akulov M. A., Zaharov V. O., Jurishhev P. E., Tomskij A. A.
Abstract:
Introduction: Spasticity is a severe consequence of stroke, leading to profound disability, decreased quality of life and reduced rehabilitation efficacy [4]. Spasticity is often associated with a pain syndrome arising from joint damage in paretic limbs (postural arthropathy) or from painful spasm of paretic limb muscles. It is generally accepted that injection of botulinum toxin into a cramped muscle decreases muscle tone and improves the range of motion in the paretic limb, which is accompanied by pain alleviation. Study aim: To evaluate the change in pain syndrome intensity after injections of botulinum toxin A (Xeomin) in stroke patients with upper limb spasticity. Patients and methods: 21 patients aged 47-74 years were evaluated. Inclusion criteria were: acute stroke 4-7 months before inclusion in the study, leading to spasticity of the wrist and/or finger flexors, elbow flexors or forearm pronators, associated with severe pain syndrome. Patients received Xeomin as monotherapy, 90-300 U, according to the spasticity pattern. Efficacy was evaluated using the Ashworth scale, the Disability Assessment Scale (DAS), the caregiver burden scale and a global treatment benefit assessment at weeks 2, 4, 8 and 12. The efficacy criterion was the decrease of pain syndrome by week 4 on the PQLS and VAS. Results: The study revealed a significant improvement in the measured indices after 4 weeks of treatment, which persisted until week 12 of treatment. Xeomin is effective in reducing the muscle tone of the flexors of the wrist, fingers and elbow and of the forearm pronators. By the 4th week of treatment, we observed a significant improvement on the DAS (p < 0.05), on the Ashworth scale (1-2 points) in all patients (p < 0.05), and on the caregiver burden scale (p < 0.05). A significant decrease of pain syndrome by the 4th week of treatment on the PQLS (p < 0.05) and VAS (p < 0.05) was observed. No adverse effects were registered. Conclusion: Xeomin is an effective treatment for pain syndrome in postural upper limb spasticity after stroke. 
Xeomin treatment leads to a significant improvement on the PQLS and VAS.
Keywords: botulinum toxin, pain syndrome, spasticity, stroke
Procedia PDF Downloads 309
1479 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends almost as much time on data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project; especially in a big data project, which may involve many datasets and stakeholders, discussing and defining quality expectations and measurements takes a long time. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset, enabling quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can directly measure the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten indicators; (2) in developing the data quality dimensions based on statistical characteristics, we found that the ten indicators can be reduced to 4 dimensions. 
(3) For the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can separate datasets into 3 levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningful description of data quality within datasets. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
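The aggregation idea behind the SDQI can be sketched in a few lines: indicator scores are combined into dimension scores with weights, the weighted dimensions are averaged into the composite, and the composite is binned into the three quality levels. The weights and level thresholds below are hypothetical; the paper derives its weights from data-preparation effort and usability:

```python
# Weighted aggregation of dimension scores (each in [0, 1]) into a composite.
def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Thresholds are illustrative only; the paper's cut-offs may differ.
def quality_level(score):
    if score >= 0.8:
        return "Good Quality"
    if score >= 0.5:
        return "Acceptable Quality"
    return "Poor Quality"

dims = [0.9, 0.7, 0.8, 0.6]   # four dimensions, hypothetical scores
w = [2.0, 1.0, 1.0, 1.0]      # hypothetical weights
score = weighted_mean(dims, w)
level = quality_level(score)
```

This mirrors the paper's final step (dimensions → composite → 3 levels), while the earlier indicator-to-dimension reduction would come from factor analysis on the real data.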
Procedia PDF Downloads 140
1478 Evaluation of Indoor Radon as Air Pollutant in Schools and Control of Exposure of the Children
Authors: Kremena Ivanona, Bistra Kunovska, Jana Djunova, Desislava Djunakova, Zdenka Stojanovska
Abstract:
In recent decades, the general public has become increasingly interested in the impact of air pollution on health. Currently, numerous studies aim at identifying pollutants in the indoor environments where people carry out their daily activities. Indoor pollutants can be of both natural and artificial origin; with regard to natural pollutants, special attention is paid to natural radioactivity. In recent years, radon has been one of the most studied indoor pollutants because it makes the greatest contribution to human exposure to natural radionuclides. It is a known fact that lung cancer can be caused by radon radiation, and radon is the second risk factor for the disease after smoking. The main objective of the study under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018, is to evaluate indoor radon as an important air pollutant in school buildings in order to reduce the exposure of children. The measurements were performed in 48 schools housed in 55 buildings in one Bulgarian administrative district (Kardjaly). Nuclear track detectors (CR-39) were used for the measurements. The arithmetic and geometric means of the radon concentrations are AM = 140 Bq/m3 and GM = 117 Bq/m3, respectively. In 51 school rooms, the radon level was greater than 200 Bq/m3, and in 28 rooms, located in 17 school buildings (30% of the investigated buildings), it exceeded the national reference level of 300 Bq/m3 defined in the Bulgarian ordinance on radiation protection. The statistically significant difference in radon concentrations between municipalities (KW, p < 0.001) showed that the most likely reason for the differences between the groups is the geographical location of the buildings and the possible influence of the geological composition. The combined effect of the year of construction (technical condition of the buildings) and energy efficiency measures was considered. 
The radon concentrations in buildings where energy efficiency measures have been implemented are higher than those in buildings where they have not been performed. This result confirms the need to investigate radon levels before conducting energy efficiency measures in buildings. Corrective measures for reducing radon levels have been recommended for school buildings with high radon levels in order to decrease the children's exposure.
Keywords: air pollution, indoor radon, children exposure, schools
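The summary statistics used above (arithmetic mean, geometric mean, and the share of rooms above the 300 Bq/m3 reference level) can be sketched as follows; the sample concentrations are hypothetical, while the study's actual values are AM = 140 and GM = 117 Bq/m3:

```python
import math

# AM, GM, and fraction of rooms exceeding a reference level (default 300 Bq/m3).
def radon_summary(concentrations, reference=300.0):
    n = len(concentrations)
    am = sum(concentrations) / n
    gm = math.exp(sum(math.log(c) for c in concentrations) / n)
    above = sum(1 for c in concentrations if c > reference) / n
    return am, gm, above

am, gm, frac = radon_summary([80, 120, 150, 310, 95])  # hypothetical rooms
```

Note that GM ≤ AM always holds (AM-GM inequality), which is why the study reports GM = 117 below AM = 140; the geometric mean is the conventional summary for radon's typically log-normal distribution.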
Procedia PDF Downloads 174
1477 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling
Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha
Abstract:
The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining the CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments with varying irrigation water applications were conducted during two seasons, in 2022 and 2023, at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of correlation (R²). Both models successfully estimated the CWSI of the wheat crop, with high correlation coefficients and low statistical errors. However, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76 and MBE=-2.13-1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13 and MBE=-2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining the CWSI in wheat crops.
Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat
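The comparison statistics used in the abstract (MAE, MBE, RMSE, Nash-Sutcliffe efficiency, and Willmott's index of agreement d) have standard definitions and can be sketched for an observed vs. simulated CWSI series; the sample values below are hypothetical:

```python
# Standard model-evaluation metrics for observed (obs) vs. simulated (sim) series.
def metrics(obs, sim):
    n = len(obs)
    mo = sum(obs) / n
    err = [s - o for o, s in zip(obs, sim)]
    mae = sum(abs(e) for e in err) / n          # mean absolute error
    mbe = sum(err) / n                          # mean bias error
    rmse = (sum(e * e for e in err) / n) ** 0.5 # root mean squared error
    ss_res = sum(e * e for e in err)
    ss_tot = sum((o - mo) ** 2 for o in obs)
    nse = 1 - ss_res / ss_tot                   # Nash-Sutcliffe efficiency
    d = 1 - ss_res / sum((abs(s - mo) + abs(o - mo)) ** 2
                         for o, s in zip(obs, sim))  # index of agreement
    return {"MAE": mae, "MBE": mbe, "RMSE": rmse, "NSE": nse, "d": d}

obs = [0.10, 0.25, 0.40, 0.55, 0.70]   # hypothetical experimental CWSI
sim = [0.12, 0.22, 0.43, 0.52, 0.74]   # hypothetical model output
m = metrics(obs, sim)
```

A close fit yields NSE and d near 1 with small MAE/RMSE, which is the pattern the abstract reports for both models, with ANFIS slightly ahead.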
Procedia PDF Downloads 55
1476 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests a method for estimating the stress distribution of beam structures based on TLS (terrestrial laser scanning). The main components of the method are the creation of lattices from the raw TLS data satisfying suitable conditions, and the application of CSSI (cubic smoothing spline interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member or a whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, including the ESG (electric strain gauge) and LVDT (linear variable differential transformer), can be categorized as contact-type sensors, which must be installed on the structural members; they also have various limitations, such as the need for separate space for the network cables and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, the TLS system of LiDAR (light detection and ranging), which can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds containing many points, each with local coordinates. A point cloud is not a linear distribution but a dispersed shape; thus, interpolation is vital for analyzing point clouds. Through the formation of averaged lattices and CSSI on the raw data, a method was developed that can estimate the displacement of a simple beam. The developed method can be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. 
Through a comparison of the estimated stress and the reference stress, the validity of the method was confirmed.
Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation
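The displacement-to-stress step can be sketched with Euler-Bernoulli beam theory, an assumption consistent with, but not stated in, the abstract: bending strain at fibre distance y from the neutral axis is -y·w''(x), and stress is E times strain. The paper smooths the lattice displacements with cubic smoothing splines before differentiating; the sketch below uses plain central differences and a hypothetical displacement profile instead:

```python
# Central-difference second derivative of displacement w on a uniform lattice
# (endpoints excluded); the paper would obtain w'' from the fitted CSSI spline.
def second_derivative(w, dx):
    return [(w[i - 1] - 2 * w[i] + w[i + 1]) / dx**2 for i in range(1, len(w) - 1)]

# Euler-Bernoulli bending stress (Pa) at fibre distance y from the neutral axis.
def bending_stress(w, dx, y, e_modulus):
    return [-e_modulus * y * c for c in second_derivative(w, dx)]

# Hypothetical displacement lattice w(x) = 0.001 * x^2 (constant curvature).
dx = 0.1
w = [0.001 * (i * dx) ** 2 for i in range(11)]
stress = bending_stress(w, dx, y=0.05, e_modulus=200e9)  # steel, 50 mm fibre
```

For this constant-curvature profile the recovered stress is uniform along the span, which is the kind of sanity check a simple-beam loading test allows.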
Procedia PDF Downloads 433
1475 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, e.g. a control unit for a pump motor; furthermore, it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts in inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected as the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified; these components form the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell by shell is added, with its respective parts, until the border of the complete system (the external border) is reached. 
Last, two external shells are added to complete the system view: the environment shell and the use-case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear, graphical description and overview of a system, its main parts and its environment; however, the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
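The shell build-up of step one can be viewed as a breadth-first traversal of the system's input/output connection graph, grouping components by their link distance from the SUC. The following is a minimal, hypothetical sketch of that idea; the component names and connections are illustrative and not taken from the paper.

```python
from collections import deque

def build_shells(connections, suc):
    """Group components into shells by their I/O-link distance from the SUC."""
    # Build an undirected adjacency map from (source, target) I/O links.
    adj = {}
    for a, b in connections:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    shells = {0: {suc}}          # shell 0 is the SUC itself
    visited = {suc}
    frontier = deque([(suc, 0)])
    while frontier:
        node, depth = frontier.popleft()
        for neighbour in adj.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                shells.setdefault(depth + 1, set()).add(neighbour)
                frontier.append((neighbour, depth + 1))
    return shells

# Illustrative system: a pump-motor control unit as the SUC.
links = [
    ("sensor", "control_unit"), ("control_unit", "motor_driver"),
    ("motor_driver", "pump_motor"), ("power_supply", "motor_driver"),
]
print(build_shells(links, "control_unit"))
```

Step two of the method, pruning superfluous parts from the outside in, would correspond to keeping only components that lie on some input or output chain reaching the SUC.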
Procedia PDF Downloads 225

1474 The Mediation Impact of Demographic and Clinical Characteristics on the Relationship between Trunk Control and Quality of Life among the Sub-Acute Stroke Population: A Cross-Sectional Study
Authors: Kumar Gular, Viswanathan S., Mastour Saeed Alshahrani, Ravi Shankar Reddy, Jaya Shanker Tedla, Snehil Dixit, Ajay Prasad Gautam, Venkata Nagaraj Kakaraparthi, Devika Rani Sangadala
Abstract:
Background: Despite trunk control’s significant contribution to improving various functional activity components, the independent effect of trunk performance on quality of life is yet to be estimated in stroke survivors. Ascertaining the correlation between trunk control and self-reported quality of life, while evaluating the effect of demographic and clinical characteristics on their relationship, will guide concerned healthcare professionals in designing ideal rehabilitation protocols during the late sub-acute stage of stroke recovery. The aims of the present research were to (1) investigate the associations of trunk performance with self-rated quality of life and (2) evaluate whether age, body mass index (BMI), and clinical characteristics mediate the relationship between trunk motor performance and perceived quality of life in the sub-acute stroke population. Methods: Trunk motor functions and quality of life among the late sub-acute stroke population (mean age 57.53 ± 6.42 years) were evaluated with the Trunk Impairment Scale (TIS) and the Stroke-Specific Quality of Life (SSQOL) questionnaire, respectively. Pearson correlation coefficients and mediation analysis were performed to elucidate the relationship of trunk motor function with quality of life and to determine the mediating impact of demographic and clinical characteristics on their association. Results: The current study observed a significant correlation between trunk motor functions (TIS) and quality of life (SSQOL), with r = 0.68 (p < 0.001). Age, BMI, and type of stroke were detected as potential mediating factors in the association between trunk performance and quality of life. Conclusion: The validated associations between trunk motor functions and perceived quality of life among the late sub-acute stroke population emphasize the importance of a comprehensive evaluation of trunk control.
Rehabilitation specialists should focus on appropriate strategies to enhance trunk performance, anticipating the potential effects of age, BMI, and type of stroke, to improve health-related quality of life in stroke survivors.
Keywords: sub-acute stroke, quality of life, functional independence, trunk control
Procedia PDF Downloads 80

1473 Linking Milk Price and Production Costs with Greenhouse Gas Emissions of Luxembourgish Dairy Farms
Authors: Rocco Lioy, Tom Dusseldorf, Aline Lehnen, Romain Reding
Abstract:
A study concerning both the profitability and the ecological performance of dairy production in Luxembourg was carried out for the years 2017, 2018 and 2019. The data of 100 dairy farms, referring to the greenhouse gas emissions (ecology) and the profitability (economy) of dairy production, were evaluated, and the average was compared to the corresponding figures of 80 Luxembourgish dairy farms evaluated in the years 2014, 2015 and 2016. The ecological evaluation confirmed that farm efficiency (defined here chiefly as the lowest ratio between used feedstuff and produced milk) is the key driver for significantly reducing the level of emissions in dairy farms. In both farm groups and in both periods, the efficient farms show almost the same level of emissions per kg ECM (1.17 kg CO2-eq) as the intensive farms (1.13 kg CO2-eq), and at the same time a far lower level of emissions related to the production surface (9.9 vs. 13.9 t CO2-eq/ha). Concerning the economic performance, it could be observed that in the years 2017, 2018 and 2019, the intensive farms (we define intensity primarily in terms of milk produced per ha) reached a higher profit (incomes minus costs, subsidies considered) than the efficient farms (4.8 vs. 2.6 €-cent/kg ECM), in contradiction with the observation of the years 2014, 2015 and 2016 (1.5 vs. 3.7 €-cent/kg ECM). The most important reason for this divergent behavior was a change in the income and cost structure between the two periods. In the last period (2017, 2018 and 2019), the milk price was considerably higher than in the previous period, and the production costs were lower. This was an advantage for intensive farms, which produce the highest quantity of milk with a high amount of production means. In the period 2014, 2015 and 2016, with lower milk prices but comparable production costs, the advantage was with the efficient farms.
In conclusion, we expect that in the near future, when production costs in particular will presumably be much higher than in recent years, the profitability of dairy farming will decrease. In this case, we assume that efficient farms will deliver not only an ecologically but also an economically better performance than production-intensive farms. High milk prices and low production costs are not good incentives for carbon-smart farming.
Keywords: efficiency, intensity, dairy, emissions, prices, costs
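The two indicators the study compares are straightforward ratios: profit in euro-cents per kg of energy-corrected milk (ECM), and emissions related to the production surface in t CO2-eq per hectare. A minimal sketch, with illustrative placeholder figures rather than the study's raw farm data:

```python
def profit_cents_per_kg(income_eur, costs_eur, milk_kg_ecm):
    """Profit (incomes minus costs) expressed in euro-cents per kg ECM."""
    return (income_eur - costs_eur) / milk_kg_ecm * 100

def emissions_per_ha(emissions_t_co2eq, area_ha):
    """Total farm emissions related to the production surface (t CO2-eq/ha)."""
    return emissions_t_co2eq / area_ha

# Hypothetical farm: 625 t ECM sold, 450 k€ income, 420 k€ costs, 100 ha.
print(profit_cents_per_kg(450_000, 420_000, 625_000))
print(emissions_per_ha(1390, 100))
```

Ranking farms by both ratios at once is what separates the "efficient" group (low emissions per ha) from the "intensive" group (high profit per kg in high-price years).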
Procedia PDF Downloads 97

1472 The Effectiveness of the Recovering from Child Abuse Programme (RCAP) for the Treatment of CPTSD: A Pilot Study
Authors: Siobhan Hegarty, Michael Bloomfield, Kim Entholt, Dorothy Williams, Helen Kennerley
Abstract:
Complex Post-Traumatic Stress Disorder (CPTSD) confers a greater risk of poor outcomes than does Post-Traumatic Stress Disorder (PTSD). Despite this, the current treatment guidelines for CPTSD aim to reduce only the ‘core’ symptoms of re-experiencing, hyper-vigilance and avoidance, while not addressing the Disturbances of Self Organisation (DSO) symptoms that distinguish this novel diagnosis from PTSD. The Recovering from Child Abuse Programme (RCAP) is a group protocol based on the principles of cognitive behavioural therapy (CBT). Preliminary evidence suggests the programme is effective at reducing DSO symptoms. This pilot study is the first to investigate the potential effectiveness of the RCAP for the specific treatment of CPTSD. The study was conducted as a service evaluation in a secondary care traumatic stress service. Treatment was delivered once a week, in two-hour sessions, to ten existing female CPTSD patients of the service who had experienced sexual abuse in childhood. The programme was administered by two therapists and two additional facilitators, following the RCAP protocol manual. Symptom severity was measured before the administration of therapy and was tracked across a range of measures (International Trauma Questionnaire; Patient Health Questionnaire; Community Assessment of Psychic Experience; Work and Social Adjustment Scale) at five time points over the course of treatment. Qualitative appraisal of the programme was gathered via weekly feedback forms and from audio-taped recordings of verbal feedback given during group sessions. Preliminary results suggest the programme produces a slight reduction in CPTSD and depressive symptom severity, and preliminary qualitative analysis suggests that the RCAP is both helpful and acceptable to group members.
Final results and conclusions will follow upon completion of the thematic analysis.
Keywords: child sexual abuse, cognitive behavioural therapy, complex post-traumatic stress disorder, recovering from child abuse programme
Procedia PDF Downloads 135

1471 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer
Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu
Abstract:
In the handling and transport of food products, the products may be subjected to mechanical stresses that can lead to their deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary) or the ingredients and flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing compared to easily shattered sugary biscuits (especially those for children). The paper presents the results of an experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone at five different additional weights on the cone rod. The biscuit assortments tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in MS Office Excel, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analysed comparatively in order to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between two time points (every 5 seconds), the curves of variation of the penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all assortments of biscuits and for all five additional weights. The correlation coefficient R2 had values over 0.850 in most of the analysed cases. The values recorded for the penetration depth generally fell within 45-55 p.u. (penetrometric units) at an additional mass of 50 g, and between 155-168 p.u. at an additional mass of 177 g, for Petit Beurre biscuits.
For Sultani diet biscuits, the values of the penetration depth were within the limits of 32-35 p.u. at an additional weight of 50 g, and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing technology flow and by traders of these food products, in order to establish the most efficient parameters of the working regimes (during packaging and handling).
Keywords: biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer
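The regression described above, fitting Velon's relationship h = a·ln(t) + b and then differencing successive readings to get penetration speed, can be sketched in a few lines. The depth readings below are invented for illustration (depth in penetrometric units, time in seconds); only the model form follows the paper.

```python
import math

def fit_velon(times, depths):
    """Least-squares fit of h = a*ln(t) + b; returns (a, b, r_squared)."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_h = sum(depths) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxh = sum((x - mean_x) * (h - mean_h) for x, h in zip(xs, depths))
    a = sxh / sxx
    b = mean_h - a * mean_x
    ss_res = sum((h - (a * math.log(t) + b)) ** 2 for t, h in zip(times, depths))
    ss_tot = sum((h - mean_h) ** 2 for h in depths)
    return a, b, 1 - ss_res / ss_tot

def penetration_speed(times, depths):
    """Average speed (p.u./s) between successive readings."""
    return [(h2 - h1) / (t2 - t1)
            for (t1, h1), (t2, h2) in zip(zip(times, depths),
                                          zip(times[1:], depths[1:]))]

t = [5, 10, 15, 20, 25, 30]                 # readings every 5 seconds
h = [45.0, 48.5, 50.4, 51.9, 52.9, 53.8]    # invented depth readings
a, b, r2 = fit_velon(t, h)
print(round(a, 2), round(b, 2), round(r2, 3))
print([round(v, 2) for v in penetration_speed(t, h)])
```

Because h grows with ln(t), the fitted speed dh/dt = a/t decays hyperbolically, which is what the differenced speed curve shows.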
Procedia PDF Downloads 130

1470 The Accuracy of an In-House Developed Computer-Assisted Surgery Protocol for Mandibular Micro-Vascular Reconstruction
Authors: Christophe Spaas, Lies Pottel, Joke De Ceulaer, Johan Abeloos, Philippe Lamoral, Tom De Backer, Calix De Clercq
Abstract:
We aimed to evaluate the accuracy of an in-house developed low-cost computer-assisted surgery (CAS) protocol for osseous free flap mandibular reconstruction. All patients who underwent primary or secondary mandibular reconstruction with a free (solely or composite) osseous flap, either a fibula free flap or an iliac crest free flap, between January 2014 and December 2017 were evaluated. The low-cost protocol consisted of a virtual surgical planning, a pre-bent custom reconstruction plate and an individualized free flap positioning guide. The accuracy of the protocol was evaluated by comparing the postoperative outcome with the 3D virtual planning, based on measurement of the following parameters: intercondylar distance, mandibular angle (axial and sagittal), inner angular distance, anterior-posterior distance, length of the fibular/iliac crest segments and osteotomy angles. A statistical analysis of the obtained values was performed. Virtual 3D surgical planning and cutting guide design were performed with Proplan CMF® software (Materialise, Leuven, Belgium) and IPS Gate (KLS Martin, Tuttlingen, Germany). Segmentation of the DICOM data as well as outcome analysis were done with BrainLab iPlan® software (Brainlab AG, Feldkirchen, Germany). A cost analysis of the protocol was also carried out. Twenty-two patients (11 fibula / 11 iliac crest) were included and analyzed. Based on voxel-based registration on the cranial base, the 3D virtual planning landmark parameters did not differ significantly from those measured on the actual treatment outcome (p-values > 0.05). A cost evaluation of the in-house developed CAS protocol revealed a 1750 euro cost reduction in comparison with a standard CAS protocol with a patient-specific reconstruction plate. Our results indicate that an accurate transfer of the planning with our in-house developed low-cost CAS protocol is feasible at a significantly lower cost.
Keywords: CAD/CAM, computer-assisted surgery, low-cost, mandibular reconstruction
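The core of the accuracy evaluation is a per-parameter comparison of the postoperative measurement against the planned value across patients, summarized as mean deviation and spread. A hedged sketch of that bookkeeping follows; the parameter names echo the abstract, but every numeric value is invented for illustration (distances in mm).

```python
import statistics

def deviations(planned, measured):
    """Per-parameter deviation (measured - planned) across patients."""
    return {name: [m - p for p, m in zip(planned[name], measured[name])]
            for name in planned}

def summarize(devs):
    """Mean and sample standard deviation of each parameter's deviations."""
    return {name: (statistics.mean(d), statistics.stdev(d))
            for name, d in devs.items()}

# Invented planned vs postoperative values for three hypothetical patients.
planned = {"intercondylar_distance": [98.2, 101.5, 95.0],
           "anterior_posterior_distance": [70.1, 68.4, 72.3]}
measured = {"intercondylar_distance": [99.0, 100.9, 95.6],
            "anterior_posterior_distance": [70.8, 68.0, 73.0]}

print(summarize(deviations(planned, measured)))
```

In the study itself, the measured-vs-planned pairs were compared statistically (paired testing yielding the reported p-values > 0.05); this sketch only shows the deviation bookkeeping that precedes such a test.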
Procedia PDF Downloads 141