Search results for: work integrated learning (WIL)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21820

1810 Implementation of Chlorine Monitoring and Supply System for Drinking Water Tanks

Authors: Ugur Fidan, Naim Karasekreter

Abstract:

Healthy and clean water should not contain disease-causing micro-organisms and toxic chemicals and must contain the necessary minerals in a balanced manner. Today, water resources have a limited and strategic importance, necessitating the management of water reserves. Water tanks meet the water needs of people and should be regularly chlorinated to prevent waterborne diseases. For this purpose, automatic chlorination systems are placed in water tanks to kill bacteria. However, the regular operation of automatic chlorination systems depends on refilling the chlorine tank when it is empty. For this reason, there is a need for a stock control system in which chlorine levels are regularly monitored and supplied. It has become imperative to take urgent measures against epidemics caused by the fact that, in most of the country, it goes unnoticed when the chlorine runs out. The aim of this work is to rehabilitate existing water tanks and to provide a method for a modern water storage system in which chlorination is digitally monitored by turning newly established water tanks into a closed system. A sensor network structure using the GSM/GPRS communication infrastructure has been developed in the study. The system consists of two basic units: hardware and software. The hardware includes a chlorine level sensor, an RFID interlock system for authorized personnel entry into the water tank, a motion sensor for animals and other elements, and a camera system to ensure process safety. The data from the hardware sensors are transmitted to the host server software via the TCP/IP protocol. The main server software processes the incoming data through the security algorithm and informs the relevant responsible unit (security forces, chlorine supply unit, public health, local administrator) by e-mail and SMS. Since the software is web-based, authorized personnel are also able to monitor the drinking water tank and report data over the internet. When the findings and user feedback obtained as a result of the study are evaluated, it is shown that building closed drinking water tanks from GRP-type material and monitoring them continuously in a digital environment are vital for a sustainable, healthy water supply.
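
A minimal sketch, in Python, of the server-side alerting logic described above. The field names, the refill threshold, and the notify() stub are assumptions introduced for illustration; the actual system transmits sensor data over TCP/IP and dispatches alerts by e-mail and SMS.

# Minimal sketch of the server-side alerting logic described above.
# Field names, thresholds and the notify() stub are assumptions for
# illustration; the real system alerts responsible units by e-mail and SMS.

from dataclasses import dataclass

@dataclass
class TankReading:
    tank_id: str
    chlorine_level_pct: float   # remaining chlorine in the supply tank
    hatch_open: bool            # RFID interlock state
    motion_detected: bool       # motion sensor state

CHLORINE_REFILL_THRESHOLD = 15.0  # assumed refill threshold, in percent

def notify(unit: str, message: str) -> None:
    # Placeholder for the e-mail/SMS dispatch used in the real system.
    print(f"[ALERT -> {unit}] {message}")

def process_reading(r: TankReading) -> None:
    if r.chlorine_level_pct < CHLORINE_REFILL_THRESHOLD:
        notify("Chlorine supply unit",
               f"Tank {r.tank_id}: chlorine at {r.chlorine_level_pct:.1f}%, refill required.")
        notify("Public health", f"Tank {r.tank_id}: low chlorine stock.")
    if r.hatch_open:
        notify("Security forces", f"Tank {r.tank_id}: hatch opened (RFID interlock).")
    if r.motion_detected:
        notify("Local administrator", f"Tank {r.tank_id}: motion detected near tank.")

process_reading(TankReading("T-01", 9.5, hatch_open=False, motion_detected=True))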

Keywords: wireless sensor networks (WSN), monitoring, chlorine, water tank, security

Procedia PDF Downloads 162
1809 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, where the addition of other elements is intended to improve or modify certain properties, most of all the mechanical properties. However, in 1995 a new concept of metallic alloys was described and defined. High Entropy Alloys (HEA) contain at least five alloying elements in amounts from 5 to 20 at.%. A common feature of this type of alloy is the absence of intermetallic phases, high homogeneity of the microstructure and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures) and excellent corrosion resistance. Hence, HEA can be successfully used as substitutes for typical metallic alloys in various applications where sufficiently high properties are desirable. For fabricating HEA, a few routes are applied: 1/ from the liquid phase, i.e. casting (usually arc melting); 2/ from the solid phase, i.e. powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g. sputtering; or 4/ other deposition methods like electrodeposition from liquids. Applying different production methods creates different microstructures of HEA, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as High Entropy Films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The basic chemical composition of the coatings was CoCrFeMnNi (known as Cantor’s alloy); however, it was modified with other, selected elements like Al or Cu. The optimization of the parameters that allow an HEF composition as homogeneous and equimolar as possible to be obtained is the main result of the presented studies. In order to analyse and compare the microstructure, SEM/EBSD, TEM and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e. organic and non-organic liquids) was no less important than the above-mentioned objectives.

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 84
1808 Soft Robotic System for Mechanical Stimulation of Scaffolds During Dynamic Cell Culture

Authors: Johanna Perdomo, Riki Lamont, Edmund Pickering, Naomi C. Paxton, Maria A. Woodruff

Abstract:

Background: Tissue Engineering (TE) has combined advanced materials, such as biomaterials, to create affordable scaffolds and dynamic systems to generate stimulation of seeded cells on these scaffolds, improving and maintaining the cellular growth process in a cell culture. However, few TE skin products have been clinically translated, and more research is required to produce highly biomimetic skin substitutes that mimic the native elasticity of skin in a controlled manner. Therefore, this work is focused on the fabrication of a novel mechanical system to enhance TE treatment approaches for the repair of damaged skin tissue. Aims: To achieve this, a soft robotic device will be created to emulate different deformations of skin under stress. The design of this soft robot will allow the attachment of scaffolds, which will then be mechanically actuated. This will provide a novel and highly adaptable platform for dynamic cell culture. Methods: A novel, low-cost soft robot is fabricated via 3D printed moulds and silicone. A low-cost electro-mechanical device was constructed to actuate the soft robot through the controlled combination of positive and negative air pressure to control the different states of movement. Mechanical tests were conducted to assess the performance and calibration of each electronic component. Similarly, a pressure-displacement test was performed on scaffolds, which were attached to the soft robot, applying various mechanical loading regimes. Lastly, a digital image correlation (DIC) test was performed to obtain strain distributions over the soft robot’s surface. Results: The control system can control and stabilise positive pressure changes over many hours. Similarly, the pressure-displacement test demonstrated that scaffolds with a 5 µm diameter and a wavy geometry can be displaced by 100% at a maximum applied pressure of 1.5 PSI. Lastly, during the inflation state, the displacement of the silicone was measured using the DIC method, and this showed a displacement of 4.78 mm and a strain of 0.0652. Discussion and Conclusion: The developed soft robot system provides a novel and low-cost platform for the dynamic actuation of tissue scaffolds with a target towards dynamic cell culture.

Keywords: soft robot, tissue engineering, mechanical stimulation, dynamic cell culture, bioreactor

Procedia PDF Downloads 98
1807 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed Diesel internal combustion engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. First, in order to calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
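
For reference, the swirl ratio mentioned above is conventionally the ratio of the angular velocity of the in-cylinder charge to the angular velocity of the crankshaft; the mass-weighted element averaging written below is an assumption, since the paper's exact weighting is not reproduced here:

\omega_{charge} = \frac{\sum_i m_i \, v_{t,i} / r_i}{\sum_i m_i}, \qquad SR = \frac{\omega_{charge}}{\omega_{engine}} = \frac{\omega_{charge}}{2\pi n / 60}

where v_{t,i} is the tangential velocity of mesh element i, r_i its distance from the cylinder axis, m_i its mass, and n the crankshaft speed in rpm.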

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 200
1806 Safety Evaluation of Post-Consumer Recycled PET Materials in Chilean Industry by Overall Migration Tests

Authors: Evelyn Ilabaca, Ximena Valenzuela, Alejandra Torres, María José Galotto, Abel Guarda

Abstract:

One of the biggest problems in the food packaging industry, especially with plastic materials, is the fact that these materials are usually obtained from non-renewable resources and also remain as waste after use, causing environmental issues. This is an international concern, and particular attention is given to reduction, reuse and recycling strategies for decreasing the waste from the plastic packaging industry. In general, polyethylenes represent most plastic waste, and the recycling process of post-consumer polyethylene terephthalate (PCR-PET) has been studied. The US Food and Drug Administration (FDA), the European Food Safety Authority (EFSA) and the Southern Common Market (MERCOSUR) have generated different legislative documents to control the use of PCR-PET in the production of plastic packaging intended for direct food contact, in order to ensure the capacity of the recycling process to remove possible contaminants that can migrate into food. Consequently, it is necessary to demonstrate by challenge test that the recycling process is able to remove specific contaminants, obtaining a recycled plastic that is safe for human health. These documents establish that the concentration limit for surrogate contaminants in PET is 220 ppb (µg/kg) and that the specific migration limit is 10 ppb (µg/kg) for each contaminant, in addition to assuring that the sensorial characteristics of the food are not affected. Moreover, under Commission Regulation (EU) N°10/2011 on plastic materials and articles intended to come into contact with food, the overall migration limit is established as 10 mg of substances per 1 dm² of surface area of the plastic material. Thus, the aim of this work is to determine the safety of PCR-PET-containing food packaging materials in Chile by measuring their overall migration and comparing it with the limits established at the international level. This information will serve as a basis for a regulation to control the use of recycled plastic materials in the manufacture of plastic packaging intended to be in direct contact with food. The methodology used involves a procedure according to EN-1186:2002 with some modifications. The food simulants used were ethanol 10% (v/v) and acetic acid 3% (v/v) as aqueous food simulants, and ethanol 95% (v/v) and isooctane as substitutes for fatty food simulants. In this study, preliminary results showed that Chilean food packaging plastics with different PCR-PET percentages comply with European legislation for foods of aqueous character.

Keywords: contaminants, polyethylene terephthalate, plastic food packaging, recycling

Procedia PDF Downloads 279
1805 Behavioral Analysis of Stock Using Selective Indicators from Fundamental and Technical Analysis

Authors: Vish Putcha, Chandrasekhar Putcha, Siva Hari

Abstract:

In the current digital era of free trading and a pandemic-driven remote work culture, markets worldwide gained momentum for retail investors to trade easily from anywhere. The share of retail traders rose to 24% of the market from 15% at the pre-pandemic level. Most of them are young retail traders with a high risk tolerance compared to the previous generation of retail traders. This trend boosted the growth of subscription-based market predictors and market data vendors. Young traders are betting on these predictors, assuming one of them is correct; however, 90% of retail traders are on the losing end. This paper presents multiple indicators and attempts to derive behavioral patterns from the underlying stocks. The two major classes of indicators that traders and investors follow are technical and fundamental. The famous investor Warren Buffett adheres to the “value investing” method, which is based on a stock’s fundamental analysis. In this paper, we present multiple indicators from various methods to understand the behavioral patterns of stocks. For this research, we picked five stocks with a market capitalization of more than $200M, listed on the exchange for more than 20 years, and from different industry sectors. To study the behavioral pattern over time for these five stocks, a total of 8 indicators are chosen from fundamental, technical, and financial indicators, such as price to earnings (P/E), price to book value (P/B), debt to equity (D/E), beta, volatility, relative strength index (RSI), moving averages and dividend yield, followed by detailed mathematical analysis. This is an interdisciplinary paper spanning the disciplines of engineering, accounting, and finance. The research takes a new approach to identify clear indicators affecting stocks. Statistical analysis of the data will be performed by fitting a probability distribution and then determining the probability of the stock price going over a specific target value. The chi-square test will be used to determine the validity of the assumed distribution. Preliminary results indicate that this approach is working well. When the complete results are presented in the final paper, they will be beneficial to the community.
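
A hedged Python sketch of the statistical step described above: fit a candidate distribution to price data, check the fit with a chi-square test, and estimate the probability of the price exceeding a target value. The lognormal choice, the simulated prices, the bin count and the target are assumptions for illustration, not values from the paper.

# Fit a distribution to (simulated) prices, test the fit, estimate P(price > target).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=4.0, sigma=0.25, size=1000)   # stand-in price data

# Fit a candidate distribution (lognormal assumed here).
shape, loc, scale = stats.lognorm.fit(prices, floc=0)
dist = stats.lognorm(shape, loc=loc, scale=scale)

# Chi-square goodness-of-fit on binned counts.
bins = np.histogram_bin_edges(prices, bins=10)
observed, _ = np.histogram(prices, bins=bins)
expected = len(prices) * np.diff(dist.cdf(bins))
expected *= observed.sum() / expected.sum()               # match totals
chi2, p_value = stats.chisquare(observed, expected, ddof=2)

target = 70.0                                             # assumed target price
p_exceed = dist.sf(target)
print(f"chi-square p-value: {p_value:.3f}, P(price > {target}) = {p_exceed:.3f}")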

Keywords: stock pattern, stock market analysis, stock predictions, trading, investing, fundamental analysis, technical analysis, quantitative trading, financial analysis, behavioral analysis

Procedia PDF Downloads 87
1804 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mould to get its final shape. The DISAMATIC moulding process is a way to construct these sand moulds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mould. The sand flow is modelled with the Discrete Element Method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, the rolling friction coefficient, the sliding friction coefficient and the coefficient of restitution (COR). The Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments, and the rolling resistance is investigated by comparing with observations of how the green sand interacts with the chamber wall during experiments; the DEM simulations will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the found parameter values are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
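
A minimal Python sketch of the friction calibration loop described above: simulated sand-pile surface profiles are compared with a measured profile, and the best-fitting pair of coefficients is kept. The run_dem_pile() function is a crude analytic stand-in for the actual DEM simulation, and all numbers are assumptions for illustration.

# Grid-search calibration of rolling and sliding friction against a pile profile.
import numpy as np

def run_dem_pile(rolling_friction: float, sliding_friction: float) -> np.ndarray:
    # Placeholder: would return the simulated pile surface profile (heights)
    # for the given friction coefficients. Here, a crude analytic stand-in.
    x = np.linspace(-0.1, 0.1, 50)
    angle = 0.3 + 0.5 * rolling_friction + 0.4 * sliding_friction
    return np.maximum(0.0, 0.05 - np.abs(x) * np.tan(angle))

measured_profile = run_dem_pile(0.30, 0.50)  # stand-in for the "real" pile profile

best = None
for mu_r in np.arange(0.1, 0.6, 0.05):       # rolling friction candidates
    for mu_s in np.arange(0.3, 0.8, 0.05):   # sliding friction candidates
        error = np.sqrt(np.mean((run_dem_pile(mu_r, mu_s) - measured_profile) ** 2))
        if best is None or error < best[0]:
            best = (error, mu_r, mu_s)

print(f"best fit: rolling={best[1]:.2f}, sliding={best[2]:.2f}, RMSE={best[0]:.4f}")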

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 484
1803 Effects of the COVID-19 Pandemic in Japan on Japanese People’s and Expatriates’ Lifestyles

Authors: Noriyuki Suyama

Abstract:

This paper looked into consumer behavioral changes by analyzing data collected by ASMARKS Co., a research company in Japan. The purpose of the paper is to understand two sets of differences: before vs. after the COVID-19 pandemic, and between Japanese people and expatriates living in Japan. Subsequently, examining the analysis results helped obtain useful insights into new business models for business parties in Japan from a micro-level perspective. The paper also tried to explore future conditions of globalization by taking into consideration nations’ political and economic changes from a macro-level perspective. COVID-19 has continued to spread across the world, with more than 60 million confirmed cases in 190 countries. The pandemic, with its restrictions on behavior and mandates, has disrupted consumers’ lifestyle habits. Consumers tend to learn new ways when routine actions become difficult; for example, when the government forces people to refrain from going out, they try to telecommute from home. Even if the situation returns to normal, people may keep the changed lifestyles that fit them best. Some of the data show typical effects of COVID-19 in the before vs. after comparison: forced exposure to digitalized work-life styles, more flexible time at home, and the importance of gathering trustworthy and useful information to tell good from bad, etc. In addition, the Japanese have changed their lifestyles less than expatriates living in Japan. For example, while 94% of the expatriates have reduced their outings because of self-quarantine, only 55% of the Japanese have done so. There are more differences in both comparisons in the analysis results. The economic downturn resulting from COVID-19 is expected to be at least as devastating, if not more so, than that of the financial crisis. With unemployment levels in the US taking two weeks to reach what took 6 months in the 2008 crisis, there is no doubt of a global recession that some predict could reach 10% or more of GDP. As a result, globalization of the global supply chain of goods and services will be negatively impacted. Many governmental financial and economic policies are expected to focus on their own profits and interests, excluding other countries’ interests, as was the case with the Recovery Act just after the global financial crisis of 2007-2008. Both micro- and macro-level analyses reveal important connotations and managerial implications for business in Japan targeting Japanese consumers, as well as for global business after COVID-19.

Keywords: COVID-19, lifestyle in Japan, expatriates, consumer behavior

Procedia PDF Downloads 141
1802 Managing Construction and Demolition Wastes - A Case Study of Multi Triagem, Lda

Authors: Cláudia Moço, Maria Santos, Carlos Arsénio, Débora Mendes, Miguel Oliveira, José Paulo da Silva

Abstract:

The construction industry generates large amounts of waste all over the world. About 450 million tons of construction and demolition wastes (C&DW) are produced annually in the European Union. C&DW are highly heterogeneous materials in size and composition, which imposes strong difficulties on their management. Directive n.º 2008/98/CE of the European Parliament and of the Council of 19 November 2008 establishes that 70% of C&DW has to be recycled by 2020. To evaluate possible applications of these materials, a detailed physical, chemical and environmental characterization is necessary. Multi Triagem, Lda. is a company located in the Algarve (Portugal) that was supported by the European Regional Development Fund (grant QREN 30307 Multivalor) to quantify and characterize the received C&DW in order to evaluate their possible applications. This evaluation, performed in collaboration with the University of Algarve, involves a detailed physical, chemical and environmental characterization of the received C&DW. In this work we report on the amounts, screening procedures and properties of the C&DW received over a period of fifteen months. In this period the company received C&DW from 393 different origins. The total amount was 32,458 tons, mostly mixtures containing concrete, masonry/mortar and soil/rock. Most of the C&DW came from demolition works and excavations. The organic/inert components, namely metal, glass, wood and plastics, were screened first and account for about 3% of the received materials. The remaining materials were screened and grouped according to their origin and contents, the latter evaluated by visual inspection. Twenty-five samples were prepared and submitted to a detailed physical, chemical and environmental analysis. The C&DW aggregates show lower quality properties than natural aggregates for concrete preparation and unbound layers of road pavements. However, the chemical analyses indicated that most samples are environmentally safe. Continuous monitoring of the presence of heavy metals and organic compounds is needed in order to perform a proper screening of the C&DW. C&DW aggregates provide a good alternative to natural aggregates.

Keywords: construction and demolition wastes, waste classification, waste composition, waste screening

Procedia PDF Downloads 352
1801 Storage Assignment Strategies to Reduce Manual Picking Errors with an Emphasis on an Ageing Workforce

Authors: Heiko Diefenbach, Christoph H. Glock

Abstract:

Order picking, i.e., the order-based retrieval of items in a warehouse, is an important time- and cost-intensive process for many logistic systems. Despite the ongoing trend of automation, most order picking systems are still manual picker-to-parts systems, where human pickers walk through the warehouse to collect ordered items. Human work in warehouses is not free from errors, and order pickers may at times pick the wrong item or the incorrect number of items. Errors can cause additional costs and significant correction efforts. Moreover, age might increase a person’s likelihood of making mistakes. Hence, the negative impact of picking errors might increase for the aging workforce currently witnessed in many regions globally. A significant amount of research has focused on making order picking systems more efficient. Among other factors, storage assignment, i.e., the assignment of items to storage locations (e.g., shelves) within the warehouse, has been subject to optimization. Usually, the objective is to assign items to storage locations such that order picking times are minimized. Surprisingly, there is a lack of research concerned with picking errors and respective prevention approaches. This paper hypothesizes that the storage assignment of items can affect the probability of picking errors. For example, storing similar-looking items apart from one another might reduce confusion. Moreover, storing items that are hard to count or require a lot of counting at easy-to-access and easy-to-comprehend shelf heights might reduce the probability of picking the wrong number of items. Based on this hypothesis, the paper discusses how to incorporate error-prevention measures into mathematical models for storage assignment optimization. Various approaches with respective benefits and shortcomings are presented and mathematically modeled. To investigate the newly developed models further, they are compared to conventional storage assignment strategies in a computational study. The study specifically investigates how the importance of error prevention increases as pickers become more prone to errors, due to age, for example. The results suggest that considering error-prevention measures for storage assignment can reduce error probabilities with only minor decreases in picking efficiency. The results might be especially relevant for an aging workforce.
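
A hedged Python sketch of a storage assignment that trades off picking effort against an error-prevention penalty, in the spirit of the models discussed above. The pick frequencies, travel times and error penalties are made-up numbers; the paper's actual models are not reproduced here.

# Assign items to locations minimizing travel effort plus an error-prevention term.
import numpy as np
from scipy.optimize import linear_sum_assignment

items = ["A", "B", "C", "D"]
pick_frequency = np.array([40, 25, 20, 10])        # picks per day (assumed)
travel_time = np.array([5.0, 8.0, 12.0, 15.0])     # seconds to each location (assumed)
# Error penalty: larger for hard-to-count items placed at hard-to-comprehend
# shelf heights (assumed values), expressed as expected correction seconds.
hard_to_count = np.array([1.0, 0.2, 0.8, 0.1])
location_error_factor = np.array([0.5, 1.0, 2.0, 3.0])
error_weight = 20.0

# Cost of assigning item i to location j: travel effort + error-prevention term.
cost = (pick_frequency[:, None] * travel_time[None, :]
        + error_weight * hard_to_count[:, None] * location_error_factor[None, :])

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"item {items[i]} -> location {j} (cost {cost[i, j]:.0f})")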

Keywords: an aging workforce, error prevention, order picking, storage assignment

Procedia PDF Downloads 207
1800 Impact of Educational Intervention on Hygiene-knowledge and Practices of Sanitation Workers Globally: A Systematic Review

Authors: Alive Ntunja, Wilma ten Ham-Baloyi, June Teare, Oyedele Opeoluwa, Paula Melariri

Abstract:

Sanitation workers, also known as “garbage workers,” play a significant role in the sanitation chain. For many generations, sanitation workers’ level of knowledge regarding hygiene practices has remained low due to a lack of educational programs on hygiene. As a result, they are widely exposed to hygiene-related diseases such as cholera, skin infections and various other diseases, increasing their risk of mortality to 40%. This review aimed to explore the global impact of educational programs on the hygiene knowledge and practices of sanitation workers. A systematic literature search was conducted for studies published between 2013 and 2023 using the following databases: MEDLINE (via EBSCOhost), PubMed, and Google Scholar, to identify quantitative studies on the subject. Study quality was assessed using the Joanna Briggs Institute critical appraisal instruments. Data extracted from the included articles were presented in a summary-of-findings table and graphically through charts and tables, employing both descriptive and inferential statistical methods. A one-way repeated measures ANOVA assessed the pooled effect of the intervention on mean scores across studies. Statistical analysis was performed using Microsoft Office 365 (2019 version), with significance set at p<0.05. The PRISMA flow diagram was used to present the article selection process. The systematic review included 15 eligible studies from a total of 2,777 articles. At least 60% (n=9) of the reviewed studies found educational programs relating to hygiene to have a positive impact on sanitation workers’ hygiene knowledge and practices. The findings further showed that the stages (pre-post) of the knowledge intervention led to statistically significant differences in the mean scores obtained [F(1,7) = 22.166, p = 0.002]. Likewise, the stages of the practice intervention led to statistically significant differences in the mean scores obtained [F(1,7) = 21.857, p = 0.003]. However, most (n=7) studies indicated that the efficacy of programs on hygiene knowledge and practices is indirectly influenced by educational background, age and work experience (predictor factors). Educational programs regarding hygiene have the potential to significantly improve sanitation workers’ knowledge and practices. The findings also suggest the implementation of active and intensive intervention programs to improve sanitation workers’ hygiene knowledge and practices.
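
A hedged Python sketch of the pooled pre/post comparison described above: a one-way repeated-measures ANOVA across studies, each contributing a pre- and a post-intervention mean score. The scores below are made up; only the analysis structure mirrors the review's description, and eight studies are used so the degrees of freedom match F(1,7).

# One-way repeated-measures ANOVA on pre/post mean scores across studies.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "study": list(range(1, 9)) * 2,
    "stage": ["pre"] * 8 + ["post"] * 8,
    "score": [52, 48, 61, 55, 47, 58, 50, 63,    # pre-intervention means (assumed)
              71, 66, 78, 70, 65, 74, 69, 80],   # post-intervention means (assumed)
})

result = AnovaRM(data, depvar="score", subject="study", within=["stage"]).fit()
print(result.anova_table)   # F and p for the pre/post effect, analogous to F(1,7) above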

Keywords: educational programs, hygiene knowledge, practices, sanitation workers

Procedia PDF Downloads 24
1799 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. The median survival after diagnosis is 3-11 months without surgical treatment and 13-20 months with surgical treatment, depending on the disease stage; 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid establishment of tumor resectability is, therefore, the most important aim for patients with pancreatic cancer. The aim of this work is to evaluate the contribution and define the role of 18F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females, median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, as part of the standard preoperative staging and in addition to standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy), a hybrid 18F-FDG PET/CT was performed. All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to the TNM classification. Limitations of resectability were local advancement (T4) and the presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed, and of these patients, 72 were not indicated for a radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2%, and specificity was 90.5%. A false negative finding was seen in 12 patients and a false positive finding in 4 cases; the positive predictive value (PPV) was 97.2% and the negative predictive value (NPV) 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, NPV 51.9%. In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity was 97.8%, PPV 96.9%, NPV 87.0%. PET/CT found distant metastases in 12 patients which were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in the preoperative staging of pancreatic cancer. It improves the selection of patients who can benefit from radical surgical procedures and decreases the number of incorrectly indicated operations.
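
The diagnostic metrics reported above follow from a standard 2x2 confusion matrix; a short Python sketch of the definitions is given below. The counts for primary-lesion detection are reconstructed here for illustration from the figures reported above (153 confirmed carcinomas, 12 false negatives, 4 false positives among 195 patients); the study itself reports the percentages directly.

# Standard sensitivity, specificity, PPV and NPV from confusion-matrix counts.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
    }

# Primary-lesion detection, counts reconstructed from the reported figures.
print(diagnostic_metrics(tp=141, fp=4, tn=38, fn=12))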

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 251
1798 Evaluating the Efficacy of Tasquinimod in Covid-19

Authors: Raphael Udeh, Luis García De Guadiana Romualdo, Xenia Dolje-Gore

Abstract:

Background: Quite disturbing is the huge public health impact of COVID-19: as of today [25th March 2021], the COVID-19 global burden shows over 123 million cases and over 2.7 million deaths worldwide. Rationale: Recent evidence shows calprotectin’s potential as a therapeutic target, stating that tasquinimod, from the quinoline-3-carboxamide family, is capable of blocking the interaction between calprotectin and TLR4, hence preventing the cytokine release syndrome that heralds functional exhaustion in COVID-19. Early preclinical studies showed that tasquinimod inhibits tumor growth and prevents angiogenesis/cytokine storm. Phase I-III clinical studies in prostate cancer showed it has a good safety profile with good radiologic progression-free survival but no effect on overall survival. Rationale/hypothesis: Strategic endeavors have been amplified globally to assess new therapeutic interventions for COVID-19 management; thus the clinical and antiviral efficacy of tasquinimod in COVID-19 remains to be explored. Hence the primary objective of this trial will be to evaluate the efficacy of tasquinimod in the treatment of adult patients with severe COVID-19 infections. Therefore, I hypothesise that among adults with COVID-19 infection, tasquinimod will reduce the severe respiratory distress associated with COVID-19 compared to placebo, over a 28-day study period. Method: The setting is in Europe. Design: a randomized, placebo-controlled, phase II double-blinded trial. The trial lasts for 28 days from randomization; the tasquinimod capsule is given as 0.5 mg daily in the first fortnight, then 1 mg daily in the second fortnight. The primary outcome is assessed using a six-point ordinal scale, alongside eight secondary outcomes. 125 participants are to be enrolled, data collection occurs at baseline and subsequent data points, and safety reporting is monitored via serological profile. Significance: This work could potentially establish tasquinimod as an effective and safe therapeutic agent for COVID-19 by reducing the severe respiratory distress, the related time to recovery, and time on oxygen/in admission. It will also drive future research, as in a larger multi-centre RCT.

Keywords: calprotectin, COVID-19, phase II trial, tasquinimod

Procedia PDF Downloads 198
1797 Examining the Relationship between Job Stress and Burnout among Academic Staff during the COVID-19 Pandemic: The Importance of Emotional Intelligence

Authors: Parisa Gharibi Khoshkar

Abstract:

The global outbreak of COVID-19 forced a swift shift in the education sector, transitioning from traditional in-person settings to remote online setups in a short period. This abrupt change, coupled with health risks and other stressors such as the lack of social interaction, has had a negative impact on academic staff, leading to increased job-related stress and psychological pressures that can result in burnout. To address this, the current research aims to investigate the relationship between job stress and burnout among academic staff in Hebron, Palestine. Furthermore, this study examines the moderating role of emotional intelligence to gain a deeper understanding of its effects in reducing burnout among academic staff and teachers. This research posits that emotional intelligence plays a vital role in helping individuals manage job-related stress and anxiety, thereby preventing burnout. Using a self-administered questionnaire, the study gathered data from 185 samples comprising teachers and administrative staff from two universities in Hebron. The data were analyzed using moderated regression analysis, an ANOVA model, and interaction plots. The findings indicate that work-related stress has a direct and significant influence on burnout. Moreover, the current results highlight that emotional intelligence serves as a key determinant in managing the negative effects of the pandemic-induced stress that can lead to burnout among individuals. Given the high-demand nature of the education sector, this research strongly recommends that school authorities take proactive measures to provide much-needed support to academic staff, enabling them to better cope with job stress and fostering an environment that prioritizes individuals' wellbeing. The results of this study hold practical implications for both scholars and practitioners, as they highlight the importance of emotional intelligence in managing stress and anxiety effectively. Understanding the significance of emotional intelligence can aid in implementing targeted interventions and support systems to promote the well-being and resilience of academic staff amidst challenging circumstances.
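
A hedged Python sketch of the moderation analysis described above: burnout regressed on job stress, emotional intelligence (EI), and their interaction. The data are simulated; only the model structure (stress x EI interaction) follows the description in the abstract.

# Moderated regression: burnout ~ stress + EI + stress:EI, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 185
stress = rng.normal(3.0, 0.8, n)
ei = rng.normal(3.5, 0.7, n)
# Simulated burnout: stress raises it, EI dampens the effect of stress.
burnout = 1.0 + 0.9 * stress - 0.3 * ei - 0.25 * stress * ei + rng.normal(0, 0.5, n)

df = pd.DataFrame({"burnout": burnout, "stress": stress, "ei": ei})
model = smf.ols("burnout ~ stress * ei", data=df).fit()  # includes the interaction term
print(model.summary().tables[1])  # a significant stress:ei coefficient indicates moderation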

Keywords: job stress, burnout, employee wellbeing, emotional intelligence, industrial organizational psychology, human resource management, organizational psychology

Procedia PDF Downloads 75
1796 Evaluation of Natural Waste Materials for Ammonia Removal in Biofilters

Authors: R. F. Vieira, D. Lopes, I. Baptista, S. A. Figueiredo, V. F. Domingues, R. Jorge, C. Delerue-matos, O. M. Freitas

Abstract:

Odours are generated in municipal solid waste management plants as a result of the decomposition of organic matter, especially when anaerobic degradation occurs. Information was collected about the substances and respective concentrations in the surrounding atmosphere of some management plants. The main components associated with these unpleasant odours were identified: ammonia, hydrogen sulfide and mercaptans. The first is the most common and the one that presents the highest concentrations, reaching values of 700 mg/m³. Biofiltration, which simultaneously involves biodegradation, absorption and adsorption processes, is a sustainable technology for the treatment of these odour emissions when a natural packing material is used. The packing material should ideally be cheap and durable, and allow maximum microbiological activity and adsorption/absorption. The presence of nutrients and water is required for biodegradation processes. Adsorption and absorption are enhanced by a high specific surface area, high porosity and low density. The main purpose of this work is the exploitation of natural waste materials, locally available, as packing media: heather (Erica lusitanica), chestnut bur (from Castanea sativa), peach pits (from Prunus persica) and eucalyptus bark (from Eucalyptus globulus). Preliminary batch tests of ammonia removal were performed in order to select the most interesting materials for biofiltration, which were then characterized. The following physical and chemical parameters were evaluated: density, moisture, pH, buffer capacity and water retention capacity. The determination of equilibrium isotherms and their fitting to the Langmuir and Freundlich models were also performed. Both models can fit the experimental results. Based both on the material's performance as an adsorbent and on its physical and chemical characteristics, eucalyptus bark was considered the best material. It presents a maximum adsorption capacity of 0.78±0.45 mol/kg for ammonia. The results from its characterization are: 121 kg/m³ density, 9.8% moisture, pH equal to 5.7, buffer capacity of 0.370 mmol H+/kg of dry matter and water retention capacity of 1.4 g H2O/g of dry matter. The application of natural materials that are locally available, with little processing, in biofiltration is an economical and sustainable alternative that should be explored.
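
A hedged Python sketch of fitting the Langmuir and Freundlich isotherm models mentioned above to batch equilibrium data. The (Ce, qe) points are made up; only the model equations and the fitting procedure are standard.

# Fit Langmuir and Freundlich isotherms to (assumed) equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.2])        # equilibrium conc. (mol/L), assumed
qe = np.array([0.12, 0.21, 0.34, 0.50, 0.65, 0.72])   # uptake (mol/kg), assumed

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1.0 + KL * C)

def freundlich(C, KF, n):
    return KF * C ** (1.0 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[0.8, 2.0])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[0.6, 2.0])

print(f"Langmuir:   qmax = {qmax:.2f} mol/kg, KL = {KL:.2f} L/mol")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")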

Keywords: ammonia removal, biofiltration, natural materials, odour control

Procedia PDF Downloads 372
1795 Rhythm-Reading Success Using Conversational Solfege

Authors: Kelly Jo Hollingsworth

Abstract:

Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. The focus of this study was unit one, which covers quarter note and barred eighth note rhythms. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight, which were the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on technique work for steps six through eight, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute (p < .001) and the ten-minute group (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teach rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and the ten-minute group retained rhythm-reading skill after two weeks of no instruction. Statistical significance was not reached between groups (p = .252), suggesting five minutes is as effective as ten minutes of rhythm-reading practice using Conversational Solfege techniques. Future research includes replicating the study with other grades and units in the text.

Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction

Procedia PDF Downloads 159
1794 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account the presence of all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve and connection pipes and tubing, by thermo-hydraulic modeling of heat transfer, fluid flow and thermodynamic processes in each of the mentioned components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller in various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
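
For reference, the coefficient of performance (COP) reported by such a model follows the standard vapor-compression definition; the state numbering below (1: compressor inlet, 2: compressor outlet, 3: condenser outlet, 4: evaporator inlet) is an assumption for illustration, since the paper does not give its own notation:

\mathrm{COP} = \frac{\dot{Q}_{evap}}{\dot{W}_{comp}} = \frac{h_1 - h_4}{h_2 - h_1}

where h denotes the specific enthalpy of the refrigerant at the numbered cycle state, Q_evap the evaporator load, and W_comp the compressor power.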

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 280
1793 Observing the Observers: Journalism and the Gendered Newsroom

Authors: M. Silveirinha, P. Lobo

Abstract:

In the last few decades, many studies have documented a systematic under-representation of women in the news. Aside from being fewer than men, research has also shown that women are frequently portrayed according to traditional stereotypes that have been proven to be disadvantageous to them. When considering this problem, it has often been argued that news content will be more gender balanced when the number of female journalists increases. However, the recent so-called ‘feminization’ of media professions has shown that this assumption is too simplistic. If we want to better grasp gender biases in news content, we need to take a deeper approach to the processes of news production and to journalism culture itself, taking the study of newsmaking as a starting point and theoretical framework, with the purpose of examining the actual newsroom routines, professional values, structures and news access that eventually lead to an unbalanced media representation of women. If journalists consider themselves to be observers of everyday social and political life, then, as a vast body of research shows, the observation of women journalists’ beliefs and of their roles and practices in a gendered newsroom is of specific importance. In order to better understand the professional and organizational context of news production, and the gender power relations in decision-making processes, we conducted participant observation in two television newsrooms. Our approach involved a combination of methods, including overt observation itself, formal and informal interviews and the writing-up and analysis of our own diaries. Drawing on insights from organizational sociology, we took newsroom practices to be a result of professional routines and socialization and focused on how women and men respond to newsroom dynamics and structures. We also analyzed the gendered organization of the newsmaking process and the subtle and/or obvious glass-ceiling obstacles often reported on. In our paper we address two levels of research: first, we look at our results and establish an overview of the patterns of continuity between the gendering of organizations, working conditions and professional journalists’ beliefs. At this level, the study not only interrogates how journalists handle views on gender and the practice of the profession but also highlights the structural inequalities in journalism and the pervasiveness of family-work tensions for female journalists. Secondly, we reflect on our observation method and establish a critical assessment of the method itself.

Keywords: gender, journalism, participant observation, women

Procedia PDF Downloads 354
1792 Effect of Modification on the Properties of Blighia sapida (Ackee) Seed Starch

Authors: Olufunmilola A. Abiodun, Adegbola O. Dauda, Ayobami Ojo, Samson A. Oyeyinka

Abstract:

Blighia sapida (Ackee) seed comes from a neglected and under-utilised crop. The fruit is cultivated for the aril, which is used as a meat substitute in soup, while the seed is discarded. The seed is toxic due to the presence of hypoglycin, which causes vomiting and death. The seed is shiny black and bigger than legume seeds. It contains a high starch content, which could serve as a cheap source of starch, thereby reducing wastage of the crop during its season. Native starches have limitations in their use; therefore, modification of starch has been reported to improve the functional properties of starches. Therefore, this work determined the effect of modification on the properties of Blighia sapida seed starch. Blighia sapida seed was dehulled manually and milled, and the starch was extracted using a standard method. The starch was subjected to modification using four methods (acid, alkaline, oxidation and acetylation methods). The morphological structure, form factor, granule size, amylose content, swelling power, hypoglycin content and pasting properties of the starches were determined. Light microscopy showed that the Blighia sapida seed starch granules have oval, round, elliptical, dome-shaped and also irregular shapes. The form factors of the starch ranged from 0.32-0.64. Blighia sapida seed starches were small in granule size, ranging from 2-6 µm. Acid-modified starch had the highest amylose content (24.83%) and was significantly different (p < 0.05) from the other starches. Blighia sapida seed starches showed a progressive increase in swelling power as temperature increased for the native, acidified, alkalized, oxidized and acetylated starches, but swelling power decreased with increasing temperature for the pregelatinized starch. Hypoglycin A ranged from 3.89 to 5.74 mg/100 g, with pregelatinized starch having the lowest value and alkalized starch the highest value. Hypoglycin B ranged from 7.17 to 8.47 mg/100 g. Alkali-treated starch had a higher peak viscosity (3973 cP), which was not significantly different (p > 0.05) from the native starch. Alkali-treated starch was also significantly different (p < 0.05) from the other starches in holding strength value, while acetylated starch had a higher breakdown viscosity (1161.50 cP). Native starch was significantly different (p < 0.05) from the other starches in final and setback viscosities. The properties of the Blighia sapida modified starches showed that they could be used as a source of starch in food and other non-food industries, and the toxic compound found in the starch was very low when compared to the lethal dosage.

Keywords: Blighia sapida seed, modification, starch, hypoglycin

Procedia PDF Downloads 238
1791 Factors Influencing Infection Prevention and Control Practices in the Emergency Department of Mbarara Regional Referral Hospital in Mbarara District- Uganda

Authors: Baluku Nathan

Abstract:

Infection prevention and control (IPC) is a practical, evidence-based approach that prevents patients and emergency health workers from being harmed by avoidable infections, including those arising as a result of antimicrobial resistance; all hospital infection control programs put together various practices which, when used appropriately, restrict the spread of infection. A breach in these control practices facilitates the transmission of infections from patients to health workers, other patients and attendants. It is, therefore, important for all EMTs and patients to adhere to them strictly. It is also imperative for administrators to ensure the implementation of the infection control program in their facilities. Purpose: The purpose of this study was to investigate the factors influencing prevention practices against infection exposure among emergency medical technicians (EMTs) in the emergency department at Mbarara hospital. Methodology: This was a descriptive cross-sectional study that employed a self-reported questionnaire that was filled out by 32 EMTs in the emergency department from 12th February to 3rd March 2022. The questionnaire consisted of items concerning the defensive environment and other factors influencing infection prevention and control practices in the accident and emergency department of Mbarara hospital. Results: From the findings, the majority, 16 (50%), always used protective gear when doing clinical work, while 14 (43.8%) did not use protective gear, citing that they were only assisting those performing resuscitations; gumboots were the least used protective gear, with only 3 (9.4%) using them. Regarding disposal techniques for specific products like blood and sharps, results showed that 10 (31.3%) said blood is disposed of in red buckets, 5 (15.6%) in yellow buckets and only 5 (15.6%) in black buckets, and 12 (37.5%) did not respond. However, 28 (87.5%) said sharps were disposed of in a sharps container. The majority, 17 (53.1%), were not aware of the infection control guidelines even though they were pinned on the walls of the emergency rooms; 15 (46.9%) said they had never had quality assurance monitoring events; 14 (43.8%) said monitoring was continuous, while 15 (46.9%) said it was discrete. Conclusions: The infection control practices at the emergency department were inadequate, in view of less than 100% of the EMTs observing the five principles of infection prevention, such as the use of personal protective equipment and proper waste disposal in appropriate color-coded bins. Dysfunctional infection prevention and control committees, accompanied by inadequate supervision to ensure infection control, remained a big challenge.

Keywords: infection prevention, influencing factors, emergency medical technician (EMT), emergency unit

Procedia PDF Downloads 117
1790 Factors Influencing Infection Prevention and Control Practices in the Emergency Department of Mbarara Regional Referral Hospital in Mbarara District-Uganda

Authors: Baluku Nathan

Abstract:

Infection prevention and control (IPC) is a practical, evidence-based approach that prevents patients and emergency health workers from being harmed by avoidable infections, including those arising as a result of antimicrobial resistance; all hospital infection control programs put together various practices which, when used appropriately, restrict the spread of infection. A breach in these control practices facilitates the transmission of infections from patients to health workers, other patients, and attendants. It is therefore important for all emergency medical technicians (EMTs) and patients to strictly adhere to them. It is also imperative for administrators to ensure the implementation of the infection control programme in their facilities. Purpose: The purpose of this study was to investigate the factors influencing prevention practices against infection exposure among emergency medical technicians (EMTs) in the emergency department at Mbarara hospital. Methodology: This was a descriptive cross-sectional study that employed a self-reported questionnaire that was filled out by 32 EMTs in the emergency department from 12th February to 3rd March 2022. The questionnaire consisted of items concerning the defensive environment and other factors influencing infection prevention and control practices in the accident and emergency department of Mbarara hospital. Results: From the findings, the majority, 16 (50%), always used protective gear when doing clinical work, while 14 (43.8%) did not use protective gear, citing that they were only assisting those performing resuscitations; gumboots were the least used protective gear, with only 3 (9.4%) using them. Regarding disposal techniques for specific products like blood and sharps, results showed that 10 (31.3%) said blood is disposed of in red buckets, 5 (15.6%) in yellow buckets and only 5 (15.6%) in black buckets, and 12 (37.5%) did not respond; however, 28 (87.5%) said sharps were disposed of in a sharps container. The majority, 17 (53.1%), were not aware of the infection control guidelines even though they were pinned on the walls of the emergency rooms; 15 (46.9%) said they had never had quality assurance monitoring events; 14 (43.8%) said monitoring was continuous, while 15 (46.9%) said it was discrete. Conclusions: The infection control practices at the emergency department were inadequate, in view of less than 100% of the EMTs observing the five principles of infection prevention, such as the use of personal protective equipment and proper waste disposal in appropriate color-coded bins. Dysfunctional infection prevention and control committees, accompanied by inadequate supervision to ensure infection control, remained a big challenge.

Keywords: emergency medical technician, infection prevention, influencing factors, infection control

Procedia PDF Downloads 111
1789 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost

Authors: German Osma, Gabriel Ordonez

Abstract:

The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw materials through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. The energy efficiency analysis for these cases mostly focuses on studying each machine or production step, but it is not common to study the quality that the production process achieves from an aggregated-value viewpoint, which can be used as a quality measure for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the exergy aggregated to three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique, and therefore the use of this theory for discontinuous flows is proposed, starting from single models of the workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of the raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the first describes the operation of the components of each machine and where the exergy losses are, while the second estimates the exergy-aggregated value for the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for operation of the manufacturing process, which amounts to approximately 2 MJ. These problems are caused mainly by the technical limitations of the machines, the oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From the established information, in this case, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
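
For reference, the exergy efficiency quoted above is conventionally the ratio of the exergy aggregated to the product to the exergy supplied to the workstation; the notation below is an assumption, since the paper's exact formulation is not reproduced here:

\eta_{ex} = \frac{B_{product}}{B_{supplied}} = 1 - \frac{B_{losses} + B_{destroyed}}{B_{supplied}}

where B denotes an exergy flow. On this definition, an aggregated value of one-thirtieth of the roughly 2 MJ supplied corresponds to an overall process exergy efficiency of about 3%.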

Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling

Procedia PDF Downloads 173
1788 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. For tropical fruit juice, however, there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and a liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken by the sample to attain its peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger through an application program written in C++. The instrument was calibrated by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well: statistical analysis (ANOVA) showed no significant difference (p > 0.05) between literature standards and the averages estimated with the developed instrument for each sample investigated.
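The reduction described above follows the classical line-heat-source relation, in which the temperature rise grows as (q / 4πk)·ln(t) plus a constant, so conductivity comes from the slope of temperature against ln(time). The sketch below illustrates that fit in Python using the 16.33 W/m power input and the 4 s / 240 s sampling scheme stated in the abstract; it is not the authors' C++ firmware, and the temperature samples are synthetic.

```python
# Minimal sketch (synthetic data): line-heat-source estimate of thermal conductivity
import numpy as np

q = 16.33                      # heater power per unit length, W/m (as stated in the abstract)
t = np.arange(4, 244, 4)       # samples every 4 s for 240 s
# Hypothetical probe temperatures (°C): dT = (q / (4*pi*k)) * ln(t) + C, plus noise
k_true = 0.60
temps = 25.0 + q / (4 * np.pi * k_true) * np.log(t) + np.random.normal(0, 0.02, t.size)

slope, intercept = np.polyfit(np.log(t), temps, 1)    # slope = dT / d(ln t)
k_est = q / (4 * np.pi * slope)                       # W/(m·°C)
print(f"estimated thermal conductivity: {k_est:.3f} W/(m·°C)")
```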

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 358
1787 Investigating the Sloshing Characteristics of a Liquid by Using an Image Processing Method

Authors: Ufuk Tosun, Reza Aghazadeh, Mehmet Bülent Özer

Abstract:

This study puts forward a method for analyzing the sloshing characteristics of the liquid in a tuned sloshing absorber system using image processing tools. Tuned sloshing vibration absorbers have recently attracted researchers' attention as seismic load dampers in buildings due to their practical and logistical convenience. The absorber is a liquid that sloshes and applies a force in opposite phase to the motion of the structure. Experimental characterization of the sloshing behavior can be used to verify the results of numerical analysis and to assess the accuracy of assumptions about the motion of the liquid. There are extensive theoretical and experimental studies in the literature on the dynamic and structural behavior of tuned sloshing dampers. Most of these works attempt to estimate sloshing quantities such as the free surface motion and the total force applied by the liquid to the container wall. For these purposes, sensors such as load cells and ultrasonic sensors are prevalent in experimental work. Load cells can only measure force and require tests to be conducted both with and without liquid to obtain the pure sloshing force. Ultrasonic level sensors give point-wise measurements and hence cannot capture the whole free surface motion; furthermore, they may give incorrect data when the liquid splashes. In this work, a method for evaluating the sloshing wave height from camera recordings using image processing techniques is presented. The motion of the liquid and its container, made of a transparent material, is recorded by a high-speed camera aligned with the free surface of the liquid. The video captured by the camera is processed frame by frame using the MATLAB Image Processing Toolbox. The process starts by cropping the region of interest; by recognizing the regions containing liquid and eliminating noise and splashing, a final picture depicting the free surface of the liquid is obtained. This picture is then used to obtain the liquid height along the length of the container. The procedure is verified against ultrasonic sensors that measure the fluid height at points on the liquid surface.
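A hedged sketch of the frame-by-frame processing described above is given below, written with OpenCV/NumPy as a stand-in for the MATLAB Image Processing Toolbox workflow; the video file name, crop window, and threshold choice are hypothetical.

```python
# Sketch (hypothetical file and ROI): crop each frame, threshold the liquid,
# and read the free-surface height along the length of the container.
import cv2
import numpy as np

cap = cv2.VideoCapture("sloshing_test.mp4")   # hypothetical recording
x0, y0, w, h = 100, 50, 400, 300              # hypothetical crop of the tank region

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[y0:y0 + h, x0:x0 + w], cv2.COLOR_BGR2GRAY)
    # Liquid assumed darker than background; Otsu threshold separates the two
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Remove splash droplets with a small morphological opening
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Free-surface height per column = first liquid pixel from the top,
    # converted to pixels above the tank bottom (columns with no liquid report h)
    surface = h - np.argmax(mask > 0, axis=0)
    print("mean surface height (px):", surface.mean())

cap.release()
```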

Keywords: fluid structure interaction, image processing, sloshing, tuned liquid damper

Procedia PDF Downloads 346
1786 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow

Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi

Abstract:

Sand production can lead to particle deposition or erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available non-intrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know these limits in order to plan the cleaning of separation equipment or to assess erosion risk appropriately. These monitors detect the acoustic signature of sand particles impacting the pipe walls. The objective of this work is to determine threshold detection limits for commercially available acoustic sand monitors. The minimum sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes and pipe orientations (vertical and horizontal). The manufacturers recommend placing the monitors on a bend to maximize the number of particle impacts, so results are shown for monitors placed at the 45- and 90-degree positions of a bend. Acoustic sand monitors clamped to the outside of the pipe are passive and listen for the noise of solid particle impacts. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes large uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effect of monitor placement around elbows in vertical and horizontal pipes is also examined for the 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all flow regimes investigated.
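One plausible way to express the background-subtraction idea described above is sketched below. This is not the vendors' proprietary algorithm, and the acoustic power signals are synthetic: the flow-only noise is characterized first, and a detection threshold is set a few standard deviations above it.

```python
# Illustrative sketch (synthetic signals): flag acoustic power excursions above
# a threshold derived from the flow-only background noise.
import numpy as np

rng = np.random.default_rng(0)
background = rng.normal(2.0, 0.15, 5000)                   # flow only (arbitrary units)
with_sand = np.concatenate([background[:2500],
                            rng.normal(2.6, 0.2, 2500)])   # sand impacts raise the level

threshold = background.mean() + 3 * background.std()       # background + 3 sigma
detected = with_sand > threshold
print(f"threshold = {threshold:.2f}, samples above threshold = {detected.mean():.2%}")
```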

Keywords: acoustic monitor, sand, multiphase flow, threshold

Procedia PDF Downloads 412
1785 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis; however, predictive analytics can play a vital role in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how predictive analytics can support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where the items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently. For demonstration, a total of 13,254 metabolic syndrome training records were loaded into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, those associated with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to gain insight into variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, a form of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation; a set of rules (many estimated equations from a statistical perspective), by contrast, may indicate heterogeneity in the population (i.e., different subpopulations with unique features being aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero; thus, theory validation needs statistical methods that use a subset of the observations, such as bootstrap resampling with an appropriate sample size.
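A minimal sketch of the co-occurrence-graph idea described above is given below; it is not the platform actually used in the study, and the records are hypothetical. Items in each record are fully connected, and item pairs are ranked by "surprise", the ratio of observed to expected co-occurrence under independence.

```python
# Sketch (hypothetical records): build pairwise co-occurrence counts and rank
# item pairs by observed vs. expected frequency.
from itertools import combinations
from collections import Counter

records = [
    {"high_waist", "low_activity", "smoker"},
    {"high_waist", "low_activity"},
    {"low_activity", "vaccinated"},
    {"high_waist", "smoker"},
]

item_count = Counter(i for r in records for i in r)
pair_count = Counter(frozenset(p) for r in records for p in combinations(sorted(r), 2))
n = len(records)

for pair, observed in pair_count.most_common():
    a, b = tuple(pair)
    expected = item_count[a] * item_count[b] / n   # expected co-occurrences under independence
    print(f"{a} & {b}: observed = {observed}, surprise = {observed / expected:.2f}")
```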

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 279
1784 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes and atmospheric effects that affect people's lives and properties. Fire severity is generally quantified with the Normalized Burn Ratio (NBR) index. Conventionally, this is performed manually by comparing images obtained before and after the fire: the bitemporal difference of the preprocessed satellite images, dNBR, is calculated, and the area is classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by the USGS, which comprise seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of automating the above process in a tool named the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to provide regular burnt-area severity mapping with a medium-spatial-resolution sensor. The tool uses machine learning classification techniques to identify burnt areas from the NBR and to classify their severity automatically over the user-selected extent and period. Cloud coverage is one of the biggest concerns in fire severity mapping; in WWSAT, based on GEE, a fully automatic workflow aggregates cloud-free Sentinel-2 images for both pre-fire and post-fire compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which includes a Graphical User Interface (GUI) to make it user-friendly. The advantage of this tool is the ability to obtain burn area severity over large extents and long temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest affected by the Australian fire season of 2019-2020 is used to describe the workflow of WWSAT. At this site, more than 7809 km² of burnt area was detected using Sentinel-2 data, with an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests (Colorado, USA), affected by the Cameron Peak fire in 2020, were chosen for the second case study. Around 983 km² was found to have burnt, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These areas can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. The tool is cost-effective for calculating burnt area, since the satellite images are free and the cost of field surveys is avoided.
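The NBR/dNBR computation at the core of the tool can be sketched as follows. This is a hedged illustration using NumPy arrays in place of Earth Engine images; the reflectance values are hypothetical, and the class breaks follow commonly cited USGS dNBR thresholds rather than the exact WSA levels used by WWSAT.

```python
# Sketch (hypothetical reflectances): NBR, dNBR, and a simple severity classification
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

pre_nir, pre_swir = np.array([[0.45, 0.50]]), np.array([[0.20, 0.22]])
post_nir, post_swir = np.array([[0.25, 0.48]]), np.array([[0.30, 0.21]])

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

bins = [-0.1, 0.1, 0.27, 0.44, 0.66]   # USGS-style dNBR breaks
labels = ["enhanced regrowth", "unburnt", "low", "moderate-low", "moderate-high", "high"]
classes = np.digitize(dnbr, bins)      # index of the severity class per pixel

print(dnbr)
print(np.array(labels)[classes])
```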

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 241
1783 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTMs) for monitoring geological phenomena that cause natural disasters, such as landslides, rockfalls and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, the point clouds must be filtered of all elements that do not belong to the slope. Among these elements, vegetation stands out: it has the greatest presence and changes constantly, both seasonally and daily, as it is affected by factors such as wind. One of the best-known indices for detecting vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. Such cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher, so alternative indices based on RGB are needed. In this communication, we present the results obtained in the Georisk project (PID2019-103974RB-I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the conversion to the Hue-Saturation-Value (HSV) color space, in which the H coordinate provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point clouds obtained directly by the photogrammetric process without any previous filtering or by TLS (Terrestrial Laser Scanning). In the latter case, we have also worked with a Riegl VZ400i sensor that, as in aerial LiDAR, receives several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques at different locations, the results show that the color-based filters allow correct filtering in areas where shadows are not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, the H coordinate responds best for this filtering. Finally, the use of the multiple returns of the TLS signal allows filtering, with some limitations.
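The RGB-based filters mentioned above can be sketched as follows. This is a hedged illustration, not the project's actual pipeline; the image path and threshold values are hypothetical.

```python
# Sketch (hypothetical image and thresholds): GLI, ExG and HSV-hue vegetation masks
import cv2
import numpy as np

bgr_u8 = cv2.imread("slope_photo.jpg")           # hypothetical image path
bgr = bgr_u8.astype(np.float32)
b, g, r = cv2.split(bgr)

exg = 2 * g - r - b                               # Excess Green
gli = (2 * g - r - b) / (2 * g + r + b + 1e-6)    # Green Leaf Index
hue = cv2.cvtColor(bgr_u8, cv2.COLOR_BGR2HSV)[:, :, 0]   # OpenCV hue range: 0-179

# Binary vegetation masks (illustrative thresholds)
mask_exg = exg > 20
mask_gli = gli > 0.05
mask_hue = (hue > 35) & (hue < 85)                # roughly green hues

vegetation = mask_exg | mask_gli | mask_hue
cv2.imwrite("vegetation_mask.png", vegetation.astype(np.uint8) * 255)
```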

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 157
1782 Using Participatory Action Research with Episodic Volunteers: Learning from Urban Agriculture Initiatives

Authors: Rebecca Laycock

Abstract:

Many Urban Agriculture (UA) initiatives, including community/allotment gardens, Community Supported Agriculture, and community/social farms, depend on volunteers. However, initiatives supported or run by volunteers often face a high turnover of labour as a result of the involvement of episodic volunteers (a term describing ad hoc, one-time, and seasonal volunteers), leading to challenges in maintaining project continuity and retaining skills and knowledge within the initiative. This is a notable challenge given that food growing is a knowledge-intensive activity where the fruits of labour appear months or sometimes years after investment. Participatory Action Research (PAR) is increasingly advocated in the field of UA as a solution-oriented approach to research, providing concrete results in addition to advancing theory. PAR is a cyclical methodological approach in which researchers and stakeholders collaboratively 'identify' and 'theorise' an issue, 'plan' an action to address it, 'take action', and 'reflect' on the process. Through iterative cycles and prolonged engagement, theory is developed and actions become better tailored to the issue. The demand for PAR in UA research means that understanding how to use PAR with episodic volunteers is of critical importance. The aim of this paper is to explore (1) the challenges of doing PAR in UA initiatives with episodic volunteers, and (2) how PAR can be harnessed to advance the sustainable development of UA through theoretically informed action. A 2.5-year qualitative PAR study of three English case-study student-led food growing initiatives took place between 2014 and 2016. University UA initiatives were chosen as exemplars because most of their volunteers were episodic. Data were collected through 13 interviews, 6 workshops, and a research diary. The results were thematically analysed through eclectic coding using Computer-Assisted Qualitative Data Analysis Software (NVivo). The challenges of doing PAR with transient participants were found to be: (1) a superficial understanding of issues by volunteers because of short-term engagement, resulting in difficulties 'identifying' and 'theorising' issues to research; (2) difficulties implementing 'actions', given that those involved in the 'planning' phase had often left by the 'action' phase; (3) a lack of capacity of participants to engage in research, given the ongoing challenge of maintaining participation; and (4) the introduction of the researcher acting as an 'intervention'. The involvement of a long-term stakeholder (the researcher) changed the group dynamics, prompted critical reflections that had not previously taken place, and improved continuity. This posed challenges for genuinely understanding PAR in initiatives run by episodic volunteers, and also challenged the notion of what constitutes an 'intervention' or 'action' in PAR. It is recommended that researchers using PAR with episodic volunteers should (1) adopt a first-person approach, inquiring into the researcher's own experience, to enable depth of theoretical analysis and manage the potentially superficial understandings of short-term participants; and (2) establish safety mechanisms to address the possibility that the research imposes artificial project continuity and knowledge retention that end when the research does. Through these means, we can more effectively use PAR to conduct solution-oriented research about UA.

Keywords: community garden, continuity, first-person research, higher education, knowledge retention, project management, transience, university

Procedia PDF Downloads 252
1781 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement

Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura

Abstract:

Seasonal population fluctuation presents a challenge for tourist cities, since the number of inhabitants can double depending on the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and that also allows cooperation between the inhabitants and the local government. The model allows public managers to evaluate the impact of seasonal population fluctuation on waste generation and to improve the planning of resource utilization throughout the year. The study uses data from the company that collects garbage in Florianópolis, a Brazilian city that attracts tourists thanks to its numerous beaches and warm weather. The fluctuations are caused by the number of people who come to the city throughout the year for holidays, summer vacations or business events. Crowdsensing is accomplished through smartphones with access to a data collection app, with voluntary participation of the population; participants can access the information collected in each wave through this portal. Crowdsensing represents an innovative and participatory approach that involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role given the complexity of fostering collaboration, establishing the available sensors, and collecting and processing the data. Practical implications of the tool described in this paper concern, for example, the management of seasonal tourism in a large municipality whose public services are impacted by the floating population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area. Also, by linking crowdsourced data to databases from other public service providers - e.g., water, garbage collection, electricity, public transport, telecommunications - it is possible to estimate the floating population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, improving the quality of service as perceived by citizens and tourists.
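A minimal sketch of the kind of correlation the model relies on is given below (hypothetical figures, not Florianópolis data): a linear relation between monthly waste collected and population is fitted and then inverted to estimate the floating population from a new month's waste figure.

```python
# Sketch (hypothetical data): relate monthly waste collected to population,
# then invert the fit to estimate population from waste records.
import numpy as np

population = np.array([420, 450, 900, 1100, 600, 430], dtype=float) * 1000  # residents + visitors
waste_tonnes = np.array([11.0, 11.8, 23.5, 28.1, 15.4, 11.3]) * 1000

slope, intercept = np.polyfit(population, waste_tonnes, 1)
print(f"waste ≈ {slope:.3f} t per person + {intercept:.0f} t")

# Invert the fitted model for a new month's waste figure
new_waste = 19_000.0
est_population = (new_waste - intercept) / slope
print(f"estimated population: {est_population:,.0f}")
```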

Keywords: big data, dashboards, floating population, smart city, urban management solutions

Procedia PDF Downloads 291